Patch Detail

Supported methods on this endpoint:

GET: Show a patch.
PATCH: Update a patch.
PUT: Update a patch.

GET /api/1.2/patches/2227954/?format=api
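The JSON response below can be consumed programmatically. A minimal sketch of extracting the key fields from the patch-detail payload; the `sample` string here is an abbreviated, hand-copied subset of the response shown below, whereas a real client would issue a GET against the endpoint above:

```python
# Sketch: parse the main fields of a Patchwork patch-detail response.
# `sample` is an abbreviated copy of the JSON below; a real client would
# fetch http://patchwork.ozlabs.org/api/1.2/patches/2227954/ instead.
import json

sample = """
{
  "id": 2227954,
  "state": "new",
  "name": "[ovs-dev,2/2] northd: Enable ARP/ND responder for localnet-sourced requests.",
  "submitter": {"name": "Dumitru Ceara", "email": "dceara@redhat.com"},
  "series": [{"id": 501383, "version": 1}],
  "check": "success"
}
"""

patch = json.loads(sample)

# Build a one-line summary of the patch, e.g. for a review dashboard.
summary = (f"#{patch['id']} [{patch['state']}] {patch['name']} "
           f"<{patch['submitter']['email']}> (check: {patch['check']})")
print(summary)
```

The same field names (`id`, `state`, `name`, `submitter`, `series`, `check`) appear in the full response below; fields omitted from `sample` (headers, `diff`, `content`) are accessed the same way.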
{ "id": 2227954, "url": "http://patchwork.ozlabs.org/api/1.2/patches/2227954/?format=api", "web_url": "http://patchwork.ozlabs.org/project/ovn/patch/20260424153558.1084949-3-dceara@redhat.com/", "project": { "id": 68, "url": "http://patchwork.ozlabs.org/api/1.2/projects/68/?format=api", "name": "Open Virtual Network development", "link_name": "ovn", "list_id": "ovs-dev.openvswitch.org", "list_email": "ovs-dev@openvswitch.org", "web_url": "http://openvswitch.org/", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20260424153558.1084949-3-dceara@redhat.com>", "list_archive_url": null, "date": "2026-04-24T15:35:58", "name": "[ovs-dev,2/2] northd: Enable ARP/ND responder for localnet-sourced requests.", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "8b72a3b2495fcb72e33ee6a3a3dfa84470d3f23a", "submitter": { "id": 76591, "url": "http://patchwork.ozlabs.org/api/1.2/people/76591/?format=api", "name": "Dumitru Ceara", "email": "dceara@redhat.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/ovn/patch/20260424153558.1084949-3-dceara@redhat.com/mbox/", "series": [ { "id": 501383, "url": "http://patchwork.ozlabs.org/api/1.2/series/501383/?format=api", "web_url": "http://patchwork.ozlabs.org/project/ovn/list/?series=501383", "date": "2026-04-24T15:35:56", "name": "Enable ARP/ND responder for localnet-sourced requests", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/501383/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/2227954/comments/", "check": "success", "checks": "http://patchwork.ozlabs.org/api/patches/2227954/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<ovs-dev-bounces@openvswitch.org>", "X-Original-To": [ "incoming@patchwork.ozlabs.org", "ovs-dev@openvswitch.org" ], "Delivered-To": [ "patchwork-incoming@legolas.ozlabs.org", "ovs-dev@lists.linuxfoundation.org" ], "Authentication-Results": [ 
"legolas.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=geKVqnG2;\n\tdkim-atps=neutral", "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org\n (client-ip=140.211.166.133; helo=smtp2.osuosl.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org)", "smtp2.osuosl.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key)\n header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=geKVqnG2", "smtp1.osuosl.org; dmarc=pass (p=quarantine dis=none)\n header.from=redhat.com", "smtp1.osuosl.org;\n dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com\n header.a=rsa-sha256 header.s=mimecast20190719 header.b=geKVqnG2" ], "Received": [ "from smtp2.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g2HB04mBYz1yD5\n\tfor <incoming@patchwork.ozlabs.org>; Sat, 25 Apr 2026 01:36:24 +1000 (AEST)", "from localhost (localhost [127.0.0.1])\n\tby smtp2.osuosl.org (Postfix) with ESMTP id 9FD38423D8;\n\tFri, 24 Apr 2026 15:36:22 +0000 (UTC)", "from smtp2.osuosl.org ([127.0.0.1])\n by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id HczYYZNIIPC3; Fri, 24 Apr 2026 15:36:18 +0000 (UTC)", "from lists.linuxfoundation.org (lf-lists.osuosl.org\n [IPv6:2605:bc80:3010:104::8cd3:938])\n\tby smtp2.osuosl.org (Postfix) with ESMTPS id A252F423CE;\n\tFri, 24 Apr 2026 15:36:18 +0000 (UTC)", "from lf-lists.osuosl.org (localhost [127.0.0.1])\n\tby lists.linuxfoundation.org (Postfix) with ESMTP id 8D329C04FB;\n\tFri, 24 Apr 2026 15:36:18 +0000 (UTC)", "from smtp1.osuosl.org 
(smtp1.osuosl.org [140.211.166.138])\n by lists.linuxfoundation.org (Postfix) with ESMTP id 0247CC04FA\n for <ovs-dev@openvswitch.org>; Fri, 24 Apr 2026 15:36:18 +0000 (UTC)", "from localhost (localhost [127.0.0.1])\n by smtp1.osuosl.org (Postfix) with ESMTP id 113FF84CC6\n for <ovs-dev@openvswitch.org>; Fri, 24 Apr 2026 15:36:15 +0000 (UTC)", "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id 4kAizCVC6aSD for <ovs-dev@openvswitch.org>;\n Fri, 24 Apr 2026 15:36:13 +0000 (UTC)", "from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [170.10.129.124])\n by smtp1.osuosl.org (Postfix) with ESMTPS id E2F1884CC9\n for <ovs-dev@openvswitch.org>; Fri, 24 Apr 2026 15:36:12 +0000 (UTC)", "from mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com\n (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by\n relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,\n cipher=TLS_AES_256_GCM_SHA384) id us-mta-257-5ewDmJxCMO-W5sRJ5d1F0g-1; Fri,\n 24 Apr 2026 11:36:09 -0400", "from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com\n (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n (No client certificate requested)\n by mx-prod-mc-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS\n id 159A9180049F; Fri, 24 Apr 2026 15:36:09 +0000 (UTC)", "from cecil-rh.redhat.com (unknown [10.44.33.15])\n by mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP\n id 66E631800347; Fri, 24 Apr 2026 15:36:07 +0000 (UTC)" ], "X-Virus-Scanned": [ "amavis at osuosl.org", "amavis at osuosl.org" ], "X-Comment": "SPF check N/A for local connections -\n client-ip=2605:bc80:3010:104::8cd3:938; helo=lists.linuxfoundation.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=<UNKNOWN> ", 
"DKIM-Filter": [ "OpenDKIM Filter v2.11.0 smtp2.osuosl.org A252F423CE", "OpenDKIM Filter v2.11.0 smtp1.osuosl.org E2F1884CC9" ], "Received-SPF": "Pass (mailfrom) identity=mailfrom; client-ip=170.10.129.124;\n helo=us-smtp-delivery-124.mimecast.com; envelope-from=dceara@redhat.com;\n receiver=<UNKNOWN>", "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp1.osuosl.org E2F1884CC9", "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1777044971;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=CKetYaYAURa6mD7ThNOFN/jVyrVgwdkBpzIaURLLK6Q=;\n b=geKVqnG2T6ckL/KVm3f6ywWeWtLjjWVTjOHjuOGW0g8RUe7Dx0FFUKxlU5Qjy78NQzeFQ9\n /XetwOeaN34B+3K+4SKQCUyZr1oh+ezDbuqIq5ZFZcxgm2C/O+N02utfMn/ReoO5s5oh+o\n 37XM3HRyrd//PY0XJvbRF/EM9Zb0ZSQ=", "X-MC-Unique": "5ewDmJxCMO-W5sRJ5d1F0g-1", "X-Mimecast-MFC-AGG-ID": "5ewDmJxCMO-W5sRJ5d1F0g_1777044969", "To": "ovs-dev@openvswitch.org", "Date": "Fri, 24 Apr 2026 17:35:58 +0200", "Message-ID": "<20260424153558.1084949-3-dceara@redhat.com>", "In-Reply-To": "<20260424153558.1084949-1-dceara@redhat.com>", "References": "<20260424153558.1084949-1-dceara@redhat.com>", "MIME-Version": "1.0", "X-Scanned-By": "MIMEDefang 3.4.1 on 10.30.177.93", "X-Mimecast-Spam-Score": "0", "X-Mimecast-MFC-PROC-ID": "j9ArA-KWHtDuOjAGoyPZiwXK-DGMuPGOWYGQW8i2kmY_1777044969", "X-Mimecast-Originator": "redhat.com", "Subject": "[ovs-dev] [PATCH ovn 2/2] northd: Enable ARP/ND responder for\n localnet-sourced requests.", "X-BeenThere": "ovs-dev@openvswitch.org", "X-Mailman-Version": "2.1.30", "Precedence": "list", "List-Id": "<ovs-dev.openvswitch.org>", "List-Unsubscribe": "<https://mail.openvswitch.org/mailman/options/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=unsubscribe>", "List-Archive": 
"<http://mail.openvswitch.org/pipermail/ovs-dev/>", "List-Post": "<mailto:ovs-dev@openvswitch.org>", "List-Help": "<mailto:ovs-dev-request@openvswitch.org?subject=help>", "List-Subscribe": "<https://mail.openvswitch.org/mailman/listinfo/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=subscribe>", "From": "Dumitru Ceara via dev <ovs-dev@openvswitch.org>", "Reply-To": "Dumitru Ceara <dceara@redhat.com>", "Content-Type": "text/plain; charset=\"us-ascii\"", "Content-Transfer-Encoding": "7bit", "Errors-To": "ovs-dev-bounces@openvswitch.org", "Sender": "\"dev\" <ovs-dev-bounces@openvswitch.org>" }, "content": "The ARP/ND responder stage (ls_in_arp_rsp) unconditionally\nbypassed all traffic arriving from localnet ports via a\npriority-100 \"next;\" flow. This caused broadcast ARP/ND\nrequests from the physical network to be flooded to every\nlogical switch port instead of being handled by proxy\nARP/ND. On switches with ~200+ ports the resulting\nmulticast replication exceeded the OVS 4K resubmit limit,\ndropping the packets and breaking connectivity.\n\nReplace the bypass with a targeted mechanism:\n\n - In ls_in_lookup_fdb, set flags.localnet = 1 for\n packets arriving from localnet ports (P50 fallback;\n the existing P100 FDB-learning flow already sets this\n flag when FDB learning is enabled).\n\n - In the P50 ARP/ND reply flows, append the condition\n \"((flags.localnet == 1 && is_chassis_resident(port))\n || flags.localnet == 0)\" on switches that have\n localnet ports.\n\nThis ensures that ARP/ND requests from localnet are only\nanswered on the chassis hosting the target VIF, preventing\nboth the flood and duplicate replies from multiple\nhypervisors. 
VIF-to-VIF proxy ARP/ND is unchanged because\nflags.localnet is 0 for non-localnet-sourced traffic.\n\nFixes: f763a3273b84 (\"ovn: Avoid ARP responder for packets from localnet port\")\nReported-at: https://redhat.atlassian.net/browse/FDP-3436\nAssisted-by: Claude Opus 4.6, Claude Code\nSigned-off-by: Dumitru Ceara <dceara@redhat.com>\n---\n northd/northd.c | 44 +++++++---\n northd/ovn-northd.8.xml | 76 +++++++++++-----\n tests/ovn-northd.at | 111 ++++++++++++++++++++++-\n tests/ovn.at | 189 ++++++++++++++++++++++++++++++++++++++++\n 4 files changed, 389 insertions(+), 31 deletions(-)", "diff": "diff --git a/northd/northd.c b/northd/northd.c\nindex 02c7e7e54e..8305e0428b 100644\n--- a/northd/northd.c\n+++ b/northd/northd.c\n@@ -10402,25 +10402,43 @@ build_arp_nd_service_monitor_lflow(const char *svc_monitor_mac,\n }\n }\n \n-/* Ingress table 24: ARP/ND responder, skip requests coming from localnet\n- * ports. (priority 100); see ovn-northd.8.xml for the rationale. */\n-\n+/* Ingress table: Lookup FDB. Set flags.localnet for packets arriving from\n+ * localnet ports so that downstream stages (e.g., ARP/ND responder) can\n+ * condition their behavior on whether the packet came from localnet. 
*/\n static void\n-build_lswitch_arp_nd_responder_skip_local(struct ovn_port *op,\n- struct lflow_table *lflows,\n- struct ds *match)\n+build_lswitch_from_localnet_op(struct ovn_port *op,\n+ struct lflow_table *lflows,\n+ struct ds *match)\n {\n ovs_assert(op->nbsp);\n- if (!lsp_is_localnet(op->nbsp) || op->od->has_arp_proxy_port) {\n+ if (!lsp_is_localnet(op->nbsp)) {\n return;\n }\n ds_clear(match);\n ds_put_format(match, \"inport == %s\", op->json_key);\n- ovn_lflow_add(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP, 100, ds_cstr(match),\n- \"next;\", op->lflow_ref, WITH_IO_PORT(op->key),\n+ ovn_lflow_add(lflows, op->od, S_SWITCH_IN_LOOKUP_FDB, 50,\n+ ds_cstr(match), \"flags.localnet = 1; next;\",\n+ op->lflow_ref, WITH_IO_PORT(op->key),\n WITH_HINT(&op->nbsp->header_));\n }\n \n+/* On switches with localnet ports, restrict ARP/ND replies for\n+ * localnet-sourced requests to the chassis hosting the target VIF\n+ * (preventing duplicate replies from every hypervisor). Non-localnet\n+ * requests (VIF-to-VIF) are answered unconditionally as before. */\n+static void\n+build_lswitch_arp_nd_local_resp_match(struct ds *match,\n+ const struct ovn_port *op)\n+{\n+ if (!ls_has_localnet_port(op->od)) {\n+ return;\n+ }\n+\n+ ds_put_format(match,\n+ \" && ((flags.localnet == 1 && is_chassis_resident(%s))\"\n+ \" || flags.localnet == 0)\", op->json_key);\n+}\n+\n /* Ingress table 24: ARP/ND responder, reply for known IPs.\n * (priority 50). 
*/\n static void\n@@ -10562,6 +10580,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,\n ds_truncate(match, match_len);\n }\n ds_put_cstr(match, \" && eth.dst == ff:ff:ff:ff:ff:ff\");\n+ size_t match_arp_len = match->length;\n+ build_lswitch_arp_nd_local_resp_match(match, op);\n \n ds_clear(actions);\n ds_put_format(actions,\n@@ -10593,6 +10613,7 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,\n * address is intended to detect situations where the\n * network is not working as configured, so dropping the\n * request would frustrate that intent.) */\n+ ds_truncate(match, match_arp_len);\n ds_put_format(match, \" && inport == %s\", op->json_key);\n ovn_lflow_add(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP, 100,\n ds_cstr(match), \"next;\", op->lflow_ref,\n@@ -10632,6 +10653,8 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,\n \"nd_ns_mcast && ip6.dst == %s && nd.target == %s\",\n op->lsp_addrs[i].ipv6_addrs[j].sn_addr_s,\n op->lsp_addrs[i].ipv6_addrs[j].addr_s);\n+ size_t match_nd_len = match->length;\n+ build_lswitch_arp_nd_local_resp_match(match, op);\n \n ds_clear(actions);\n ds_put_format(actions,\n@@ -10658,6 +10681,7 @@ build_lswitch_arp_nd_responder_known_ips(struct ovn_port *op,\n \n /* Do not reply to a solicitation from the port that owns\n * the address (otherwise DAD detection will fail). 
*/\n+ ds_truncate(match, match_nd_len);\n ds_put_format(match, \" && inport == %s\", op->json_key);\n ovn_lflow_add(lflows, op->od, S_SWITCH_IN_ARP_ND_RSP, 100,\n ds_cstr(match), \"next;\", op->lflow_ref,\n@@ -19554,7 +19578,7 @@ build_lswitch_and_lrouter_iterate_by_lsp(struct ovn_port *op,\n build_mirror_lflows(op, ls_ports, lflows);\n build_lswitch_port_sec_op(op, lflows, actions, match);\n build_lswitch_learn_fdb_op(op, lflows, actions, match);\n- build_lswitch_arp_nd_responder_skip_local(op, lflows, match);\n+ build_lswitch_from_localnet_op(op, lflows, match);\n build_lswitch_arp_nd_responder_known_ips(op, lflows, ls_ports,\n meter_groups, actions, match);\n build_lswitch_dhcp_options_and_response(op, lflows, meter_groups);\ndiff --git a/northd/ovn-northd.8.xml b/northd/ovn-northd.8.xml\nindex 4d6370da6b..4ba4ab3cd1 100644\n--- a/northd/ovn-northd.8.xml\n+++ b/northd/ovn-northd.8.xml\n@@ -488,6 +488,21 @@\n </ul>\n </li>\n \n+ <li>\n+ <p>\n+ For each localnet logical port <var>p</var>, a priority-50\n+ fallback flow is added with the match\n+ <code>inport == <var>p</var></code> and action\n+ <code>flags.localnet = 1; next;</code>. This marks traffic\n+ arriving from localnet ports so that downstream stages (e.g.,\n+ ARP/ND responder) can condition their behavior. When FDB\n+ learning is enabled on the localnet port, the priority-100\n+ flow described above already sets <code>flags.localnet</code>,\n+ so this priority-50 flow only takes effect when FDB learning\n+ is not configured.\n+ </p>\n+ </li>\n+\n <li>\n One priority-0 fallback flow that matches all packets and advances to\n the next table.\n@@ -1734,12 +1749,16 @@\n </p>\n \n <p>\n- Note that ARP requests received from <code>localnet</code> logical\n- inports can either go directly to VMs, in which case the VM responds or\n- can hit an ARP responder for a logical router port if the packet is used\n- to resolve a logical router port next hop address. 
In either case,\n- logical switch ARP responder rules will not be hit. It contains these\n- logical flows:\n+ ARP/ND requests received from <code>localnet</code> logical inports\n+ do hit the ARP/ND responder, but the response is limited to the\n+ chassis that hosts the target VIF. This is achieved by adding\n+ a <code>flags.localnet</code> check to the priority-50 reply flows\n+ (see below): when the request arrives from a localnet port\n+ (<code>flags.localnet == 1</code>), only the chassis on which the\n+ target port is resident will reply. When the request arrives from\n+ a non-localnet port (<code>flags.localnet == 0</code>), the\n+ response is unconditional, preserving VIF-to-VIF proxy ARP/ND\n+ behavior. It contains these logical flows:\n </p>\n \n <ul>\n@@ -1750,18 +1769,10 @@\n router ingress pipeline.\n </li>\n <li>\n- If the logical switch has no router ports with options:arp_proxy\n- configured add a priority-100 flows to skip the ARP responder if inport\n- is of type <code>localnet</code> advances directly to the next table.\n- ARP requests sent to <code>localnet</code> ports can be received by\n- multiple hypervisors. Now, because the same mac binding rules are\n- downloaded to all hypervisors, each of the multiple hypervisors will\n- respond. This will confuse L2 learning on the source of the ARP\n- requests. ARP requests received on an inport of type\n- <code>router</code> are not expected to hit any logical switch ARP\n- responder flows. However, no skip flows are installed for these\n- packets, as there would be some additional flow cost for this and the\n- value appears limited.\n+ ARP/ND requests received on an inport of type <code>router</code> are\n+ not expected to hit any logical switch ARP responder flows. 
However,\n+ no skip flows are installed for these packets, as there would be some\n+ additional flow cost for this and the value appears limited.\n </li>\n \n <li>\n@@ -1816,6 +1827,18 @@ flags.loopback = 1;\n output;\n </pre>\n \n+ <p>\n+ On logical switches that have a localnet port, the match for\n+ these flows includes an additional condition:\n+ <code>((flags.localnet == 1 &&\n+ is_chassis_resident(<var>port</var>)) ||\n+ flags.localnet == 0)</code>.\n+ This ensures that when an ARP request arrives from a localnet\n+ port, only the chassis hosting the target VIF responds. When\n+ the request arrives from a non-localnet port, the response is\n+ unconditional, preserving VIF-to-VIF proxy ARP behavior.\n+ </p>\n+\n <p>\n These flows are omitted for logical ports (other than router ports or\n <code>localport</code> ports) that are down (unless <code>\n@@ -1877,6 +1900,19 @@ nd_na_router {\n };\n </pre>\n \n+ <p>\n+ On logical switches that have a localnet port, the match for\n+ these flows includes an additional condition:\n+ <code>((flags.localnet == 1 &&\n+ is_chassis_resident(<var>port</var>)) ||\n+ flags.localnet == 0)</code>.\n+ This ensures that when an ND solicitation arrives from a\n+ localnet port, only the chassis hosting the target VIF\n+ responds. When the solicitation arrives from a non-localnet\n+ port, the response is unconditional, preserving VIF-to-VIF\n+ proxy ND behavior.\n+ </p>\n+\n <p>\n These flows are omitted for logical ports (other than router ports or\n <code>localport</code> ports) that are down (unless <code>\n@@ -1896,8 +1932,8 @@ nd_na_router {\n \n <li>\n <p>\n- Priority-100 flows with match criteria like the ARP and ND flows\n- above, except that they only match packets from the\n+ Priority-100 flows with match criteria similar to the ARP and ND\n+ flows above, except that they only match packets from the\n <code>inport</code> that owns the IP addresses in question, with\n action <code>next;</code>. 
These flows prevent OVN from replying to,\n for example, an ARP request emitted by a VM for its own IP address.\ndiff --git a/tests/ovn-northd.at b/tests/ovn-northd.at\nindex 1d7bd6c288..df7bac1529 100644\n--- a/tests/ovn-northd.at\n+++ b/tests/ovn-northd.at\n@@ -7730,7 +7730,9 @@ AT_CHECK([grep -e \"ls_in_.*_fdb.*S1-vm1\" S1flows | ovn_strip_lflows], [0], [dnl\n ])\n \n #Verify the flows for a non-default port type (localnet port)\n-AT_CHECK([grep -e \"ls_in_.*_fdb.*S1-localnet\" S1flows], [1], [])\n+AT_CHECK([grep -e \"ls_in_.*_fdb.*S1-localnet\" S1flows | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"S1-localnet\"), action=(flags.localnet = 1; next;)\n+])\n \n OVN_CLEANUP_NORTHD\n AT_CLEANUP\n@@ -10039,6 +10041,7 @@ AT_CHECK([ovn-nbctl --wait=sb sync])\n # Check MAC learning flows with 'localnet_learn_fdb' default (false)\n AT_CHECK([ovn-sbctl dump-flows ls0 | grep -e 'ls_in_\\(put\\|lookup\\)_fdb' | ovn_strip_lflows], [0], [dnl\n table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln_port\"), action=(flags.localnet = 1; next;)\n table=??(ls_in_put_fdb ), priority=0 , match=(1), action=(next;)\n ])\n \n@@ -10047,6 +10050,7 @@ AT_CHECK([ovn-nbctl --wait=sb lsp-set-options ln_port localnet_learn_fdb=true])\n AT_CHECK([ovn-sbctl dump-flows ls0 | grep -e 'ls_in_\\(put\\|lookup\\)_fdb' | ovn_strip_lflows], [0], [dnl\n table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n table=??(ls_in_lookup_fdb ), priority=100 , match=(inport == \"ln_port\"), action=(flags.localnet = 1; reg0[[11]] = lookup_fdb(inport, eth.src); next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln_port\"), action=(flags.localnet = 1; next;)\n table=??(ls_in_put_fdb ), priority=0 , match=(1), action=(next;)\n table=??(ls_in_put_fdb ), priority=100 , match=(inport == \"ln_port\" && reg0[[11]] == 0), action=(put_fdb(inport, eth.src); 
next;)\n ])\n@@ -10055,6 +10059,7 @@ AT_CHECK([ovn-sbctl dump-flows ls0 | grep -e 'ls_in_\\(put\\|lookup\\)_fdb' | ovn_s\n AT_CHECK([ovn-nbctl --wait=sb lsp-set-options ln_port localnet_learn_fdb=false])\n AT_CHECK([ovn-sbctl dump-flows ls0 | grep -e 'ls_in_\\(put\\|lookup\\)_fdb' | ovn_strip_lflows], [0], [dnl\n table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln_port\"), action=(flags.localnet = 1; next;)\n table=??(ls_in_put_fdb ), priority=0 , match=(1), action=(next;)\n ])\n \n@@ -10404,6 +10409,110 @@ OVN_CLEANUP_NORTHD\n AT_CLEANUP\n ])\n \n+OVN_FOR_EACH_NORTHD_NO_HV([\n+AT_SETUP([ARP/ND responder for localnet-sourced requests])\n+ovn_start\n+\n+dnl Switch with localnet port.\n+check ovn-nbctl ls-add ls1\n+check ovn-nbctl lsp-add-localnet-port ls1 ln1 physnet1\n+check ovn-nbctl lsp-add ls1 vm1 \\\n+ -- lsp-set-addresses vm1 \"00:00:00:00:00:01 10.0.0.1 fd01::1\"\n+check ovn-nbctl lsp-add ls1 vm2 \\\n+ -- lsp-set-addresses vm2 \"00:00:00:00:00:02 10.0.0.2 fd01::2\"\n+\n+dnl Switch without localnet port.\n+check ovn-nbctl ls-add ls2\n+check ovn-nbctl --wait=sb lsp-add ls2 vm3 \\\n+ -- lsp-set-addresses vm3 \"00:00:00:00:00:03 10.0.0.3 fd01::3\"\n+\n+AS_BOX([FDB learning disabled])\n+\n+dnl ls1: ls_in_lookup_fdb should have priority 0 default +\n+dnl priority 50 flags.localnet.\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_lookup_fdb' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln1\"), action=(flags.localnet = 1; next;)\n+])\n+\n+dnl ls1: ls_in_arp_rsp should include flags.localnet condition for\n+dnl priority 50 ARP/ND reply flows but NOT for priority 100 self-reply\n+dnl flows (since those match on inport == VIF, flags.localnet is always 0).\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_arp_rsp' | ovn_strip_lflows], [0], [dnl\n+ 
table=??(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.1 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.1 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:01; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:01; arp.tpa = arp.spa; arp.spa = 10.0.0.1; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:02; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:02; arp.tpa = arp.spa; arp.spa = 10.0.0.2; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:01; ip6.src = fd01::1; nd.target = fd01::1; nd.tll = 00:00:00:00:00:01; outport = inport; flags.loopback = 1; output; };)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 
&& ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:02; ip6.src = fd01::2; nd.target = fd01::2; nd.tll = 00:00:00:00:00:02; outport = inport; flags.loopback = 1; output; };)\n+])\n+\n+dnl ls2: ls_in_arp_rsp should NOT include flags.localnet condition.\n+AT_CHECK([ovn-sbctl dump-flows ls2 | grep -e 'ls_in_arp_rsp' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.3 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm3\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:3 && nd.target == fd01::3 && inport == \"vm3\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.3 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:03; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:03; arp.tpa = arp.spa; arp.spa = 10.0.0.3; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:3 && nd.target == fd01::3), action=(nd_na { eth.src = 00:00:00:00:00:03; ip6.src = fd01::3; nd.target = fd01::3; nd.tll = 00:00:00:00:00:03; outport = inport; flags.loopback = 1; output; };)\n+])\n+\n+dnl ls2: ls_in_lookup_fdb should only have priority 0 default,\n+dnl no priority 50 flags.localnet.\n+AT_CHECK([ovn-sbctl dump-flows ls2 | grep -e 'ls_in_lookup_fdb' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+])\n+\n+AS_BOX([Enable FDB learning on ln1])\n+check ovn-nbctl --wait=sb lsp-set-options ln1 localnet_learn_fdb=true\n+\n+dnl ls1: ls_in_lookup_fdb should have priority 100 FDB +\n+dnl priority 50 fallback.\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_lookup_fdb' | ovn_strip_lflows], [0], 
[dnl\n+ table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_lookup_fdb ), priority=100 , match=(inport == \"ln1\"), action=(flags.localnet = 1; reg0[[11]] = lookup_fdb(inport, eth.src); next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln1\"), action=(flags.localnet = 1; next;)\n+])\n+\n+dnl ls1: ls_in_arp_rsp should be unchanged.\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_arp_rsp' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.1 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.1 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:01; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:01; arp.tpa = arp.spa; arp.spa = 10.0.0.1; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:02; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:02; arp.tpa = arp.spa; arp.spa = 10.0.0.2; outport = inport; flags.loopback = 1; 
output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:01; ip6.src = fd01::1; nd.target = fd01::1; nd.tll = 00:00:00:00:00:01; outport = inport; flags.loopback = 1; output; };)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 && ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:02; ip6.src = fd01::2; nd.target = fd01::2; nd.tll = 00:00:00:00:00:02; outport = inport; flags.loopback = 1; output; };)\n+])\n+\n+AS_BOX([Disable FDB learning])\n+check ovn-nbctl --wait=sb lsp-set-options ln1 localnet_learn_fdb=false\n+\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_lookup_fdb' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_lookup_fdb ), priority=50 , match=(inport == \"ln1\"), action=(flags.localnet = 1; next;)\n+])\n+\n+AT_CHECK([ovn-sbctl dump-flows ls1 | grep -e 'ls_in_arp_rsp' | ovn_strip_lflows], [0], [dnl\n+ table=??(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.1 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && inport == \"vm1\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=100 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 && inport == \"vm2\"), action=(next;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.1 && arp.op == 
1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:01; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:01; arp.tpa = arp.spa; arp.spa = 10.0.0.1; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(arp.tpa == 10.0.0.2 && arp.op == 1 && eth.dst == ff:ff:ff:ff:ff:ff && ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(eth.dst = eth.src; eth.src = 00:00:00:00:00:02; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = 00:00:00:00:00:02; arp.tpa = arp.spa; arp.spa = 10.0.0.2; outport = inport; flags.loopback = 1; output;)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:1 && nd.target == fd01::1 && ((flags.localnet == 1 && is_chassis_resident(\"vm1\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:01; ip6.src = fd01::1; nd.target = fd01::1; nd.tll = 00:00:00:00:00:01; outport = inport; flags.loopback = 1; output; };)\n+ table=??(ls_in_arp_rsp ), priority=50 , match=(nd_ns_mcast && ip6.dst == ff02::1:ff00:2 && nd.target == fd01::2 && ((flags.localnet == 1 && is_chassis_resident(\"vm2\")) || flags.localnet == 0)), action=(nd_na { eth.src = 00:00:00:00:00:02; ip6.src = fd01::2; nd.target = fd01::2; nd.tll = 00:00:00:00:00:02; outport = inport; flags.loopback = 1; output; };)\n+])\n+\n+OVN_CLEANUP_NORTHD\n+AT_CLEANUP\n+])\n+\n OVN_FOR_EACH_NORTHD_NO_HV([\n AT_SETUP([Address set incremental processing])\n ovn_start\ndiff --git a/tests/ovn.at b/tests/ovn.at\nindex c0ae611bc9..fbaa63d99c 100644\n--- a/tests/ovn.at\n+++ b/tests/ovn.at\n@@ -10190,6 +10190,195 @@ OVN_CLEANUP([hv1])\n AT_CLEANUP\n ])\n \n+OVN_FOR_EACH_NORTHD([\n+AT_SETUP([ARP/ND from localnet -- proxy reply on resident chassis only])\n+AT_SKIP_IF([test $HAVE_SCAPY = no])\n+ovn_start\n+\n+dnl Create logical switch with localnet 
port.\n+check ovn-nbctl ls-add ls1\n+check ovn-nbctl lsp-add-localnet-port ls1 ln1 physnet1\n+check ovn-nbctl lsp-add ls1 vm1 \\\n+ -- lsp-set-addresses vm1 \"f0:00:00:00:00:01 10.0.0.1 fd01::1\"\n+check ovn-nbctl lsp-add ls1 vm2 \\\n+ -- lsp-set-addresses vm2 \"f0:00:00:00:00:02 10.0.0.2 fd01::2\"\n+\n+dnl Two hypervisors with bridge-mappings.\n+net_add n1\n+\n+sim_add hv1\n+as hv1\n+ovs-vsctl \\\n+ -- add-br br-phys \\\n+ -- add-br br-eth0\n+ovn_attach n1 br-phys 192.168.0.1\n+check ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=physnet1:br-eth0\n+check ovs-vsctl add-port br-eth0 snoopvif1 \\\n+ -- set Interface snoopvif1 options:tx_pcap=hv1/snoopvif-tx.pcap \\\n+ options:rxq_pcap=hv1/snoopvif-rx.pcap\n+check ovs-vsctl add-port br-int vm1 \\\n+ -- set Interface vm1 external-ids:iface-id=vm1 \\\n+ options:tx_pcap=hv1/vm1-tx.pcap \\\n+ options:rxq_pcap=hv1/vm1-rx.pcap\n+\n+sim_add hv2\n+as hv2\n+ovs-vsctl \\\n+ -- add-br br-phys \\\n+ -- add-br br-eth0\n+ovn_attach n1 br-phys 192.168.0.2\n+check ovs-vsctl set Open_vSwitch . 
external-ids:ovn-bridge-mappings=physnet1:br-eth0\n+check ovs-vsctl add-port br-eth0 snoopvif2 \\\n+ -- set Interface snoopvif2 options:tx_pcap=hv2/snoopvif-tx.pcap \\\n+ options:rxq_pcap=hv2/snoopvif-rx.pcap\n+check ovs-vsctl add-port br-int vm2 \\\n+ -- set Interface vm2 external-ids:iface-id=vm2 \\\n+ options:tx_pcap=hv2/vm2-tx.pcap \\\n+ options:rxq_pcap=hv2/vm2-rx.pcap\n+\n+wait_for_ports_up vm1 vm2\n+OVN_POPULATE_ARP\n+check ovn-nbctl --wait=hv sync\n+\n+dnl Helper: construct ARP request.\n+build_arp_request() {\n+ local sha=$1 spa=$2 tpa=$3\n+ fmt_pkt \"Ether(dst='ff:ff:ff:ff:ff:ff', src='${sha}')/ \\\n+ ARP(hwsrc='${sha}', hwdst='ff:ff:ff:ff:ff:ff', \\\n+ psrc='${spa}', pdst='${tpa}')\"\n+}\n+\n+dnl Helper: construct expected ARP reply.\n+build_arp_reply() {\n+ local req_sha=$1 req_spa=$2 reply_sha=$3 reply_spa=$4\n+ fmt_pkt \"Ether(dst='${req_sha}', src='${reply_sha}')/ \\\n+ ARP(op=2, hwsrc='${reply_sha}', hwdst='${req_sha}', \\\n+ psrc='${reply_spa}', pdst='${req_spa}')\"\n+}\n+\n+dnl Helper: construct ND solicitation.\n+build_nd_ns() {\n+ local sha=$1 spa=$2 tpa=$3 sol_mcast=$4\n+ fmt_pkt \"Ether(dst='33:33:ff:00:00:0${tpa##*:}', src='${sha}')/ \\\n+ IPv6(src='${spa}', dst='${sol_mcast}')/ \\\n+ ICMPv6ND_NS(tgt='${tpa}')/ \\\n+ ICMPv6NDOptSrcLLAddr(lladdr='${sha}')\"\n+}\n+\n+dnl Helper: construct expected ND advertisement.\n+build_nd_na() {\n+ local req_sha=$1 req_spa=$2 reply_sha=$3 reply_tgt=$4\n+ fmt_pkt \"Ether(dst='${req_sha}', src='${reply_sha}')/ \\\n+ IPv6(src='${reply_tgt}', dst='${req_spa}')/ \\\n+ ICMPv6ND_NA(tgt='${reply_tgt}', R=0, S=1, O=1)/ \\\n+ ICMPv6NDOptDstLLAddr(lladdr='${reply_sha}')\"\n+}\n+\n+test_arp_nd_localnet() {\n+ AS_BOX([ARP from localnet on hv1 for vm1 - expect reply])\n+ as hv1 reset_pcap_file snoopvif1 hv1/snoopvif\n+ as hv2 reset_pcap_file snoopvif2 hv2/snoopvif\n+ as hv1 reset_pcap_file vm1 hv1/vm1\n+ as hv2 reset_pcap_file vm2 hv2/vm2\n+\n+ dnl ARP request from br-eth0 on hv1 for vm1 (10.0.0.1).\n+ dnl vm1 is 
resident on hv1, so hv1 should reply.\n+ local arp_req=$(build_arp_request \"f0:00:00:00:00:99\" \"10.0.0.99\" \"10.0.0.1\")\n+ as hv1 ovs-appctl netdev-dummy/receive snoopvif1 $arp_req\n+ local arp_rep=$(build_arp_reply \"f0:00:00:00:00:99\" \"10.0.0.99\" \\\n+ \"f0:00:00:00:00:01\" \"10.0.0.1\")\n+ echo $arp_rep > expected_arp_reply\n+ OVN_CHECK_PACKETS_CONTAIN([hv1/snoopvif-tx.pcap], [expected_arp_reply])\n+\n+ AS_BOX([ARP from localnet on hv2 for vm1 - expect no reply])\n+ as hv2 reset_pcap_file snoopvif2 hv2/snoopvif\n+\n+ dnl ARP request from br-eth0 on hv2 for vm1 (10.0.0.1).\n+ dnl vm1 is NOT resident on hv2, so hv2 should NOT reply.\n+ dnl To avoid relying on sleep, we also send an ARP request for vm2\n+ dnl (which IS resident on hv2) and wait for that reply. This proves\n+ dnl the pipeline is running and any reply for vm1 would have appeared.\n+ as hv2 ovs-appctl netdev-dummy/receive snoopvif2 $arp_req\n+\n+ local arp_req_vm2=$(build_arp_request \"f0:00:00:00:00:99\" \"10.0.0.99\" \"10.0.0.2\")\n+ as hv2 ovs-appctl netdev-dummy/receive snoopvif2 $arp_req_vm2\n+ local arp_rep_vm2=$(build_arp_reply \"f0:00:00:00:00:99\" \"10.0.0.99\" \\\n+ \"f0:00:00:00:00:02\" \"10.0.0.2\")\n+ echo $arp_rep_vm2 > expected_arp_vm2\n+ OVN_CHECK_PACKETS_CONTAIN([hv2/snoopvif-tx.pcap], [expected_arp_vm2])\n+\n+ dnl Now verify that no ARP reply for vm1 was generated on hv2.\n+ AT_CHECK([$PYTHON \"$ovs_srcdir/utilities/ovs-pcap.in\" hv2/snoopvif-tx.pcap | \\\n+ grep -c \"$arp_rep\"], [1], [dnl\n+0\n+])\n+\n+ AS_BOX([ARP from vm2 VIF for vm1 - expect proxy reply])\n+ as hv2 reset_pcap_file vm2 hv2/vm2\n+ local arp_req2=$(build_arp_request \"f0:00:00:00:00:02\" \"10.0.0.2\" \"10.0.0.1\")\n+ as hv2 ovs-appctl netdev-dummy/receive vm2 $arp_req2\n+ local arp_rep2=$(build_arp_reply \"f0:00:00:00:00:02\" \"10.0.0.2\" \\\n+ \"f0:00:00:00:00:01\" \"10.0.0.1\")\n+ echo $arp_rep2 > expected_arp_proxy\n+ OVN_CHECK_PACKETS_CONTAIN([hv2/vm2-tx.pcap], [expected_arp_proxy])\n+\n+ AS_BOX([ND 
from localnet on hv1 for vm1 - expect reply])\n+ as hv1 reset_pcap_file snoopvif1 hv1/snoopvif\n+ as hv2 reset_pcap_file snoopvif2 hv2/snoopvif\n+\n+ dnl ND solicitation from br-eth0 on hv1 for vm1 IPv6 (fd01::1).\n+ dnl vm1 is resident on hv1, so hv1 should reply.\n+ local nd_ns=$(build_nd_ns \"f0:00:00:00:00:99\" \"fd01::99\" \"fd01::1\" \"ff02::1:ff00:1\")\n+ as hv1 ovs-appctl netdev-dummy/receive snoopvif1 $nd_ns\n+ local nd_na=$(build_nd_na \"f0:00:00:00:00:99\" \"fd01::99\" \\\n+ \"f0:00:00:00:00:01\" \"fd01::1\")\n+ echo $nd_na > expected_nd_reply\n+ OVN_CHECK_PACKETS_CONTAIN([hv1/snoopvif-tx.pcap], [expected_nd_reply])\n+\n+ AS_BOX([ND from localnet on hv2 for vm1 - expect no reply])\n+ as hv2 reset_pcap_file snoopvif2 hv2/snoopvif\n+\n+ dnl ND solicitation from br-eth0 on hv2 for vm1 IPv6 (fd01::1).\n+ dnl vm1 is NOT resident on hv2, so hv2 should NOT reply.\n+ dnl Same technique: send ND for vm2 (resident) and wait for that reply.\n+ as hv2 ovs-appctl netdev-dummy/receive snoopvif2 $nd_ns\n+\n+ local nd_ns_vm2=$(build_nd_ns \"f0:00:00:00:00:99\" \"fd01::99\" \"fd01::2\" \"ff02::1:ff00:2\")\n+ as hv2 ovs-appctl netdev-dummy/receive snoopvif2 $nd_ns_vm2\n+ local nd_na_vm2=$(build_nd_na \"f0:00:00:00:00:99\" \"fd01::99\" \\\n+ \"f0:00:00:00:00:02\" \"fd01::2\")\n+ echo $nd_na_vm2 > expected_nd_vm2\n+ OVN_CHECK_PACKETS_CONTAIN([hv2/snoopvif-tx.pcap], [expected_nd_vm2])\n+\n+ dnl Now verify that no ND advertisement for vm1 was generated on hv2.\n+ AT_CHECK([$PYTHON \"$ovs_srcdir/utilities/ovs-pcap.in\" hv2/snoopvif-tx.pcap | \\\n+ grep -c \"$nd_na\"], [1], [dnl\n+0\n+])\n+\n+ AS_BOX([ND from vm2 VIF for vm1 - expect proxy reply])\n+ as hv2 reset_pcap_file vm2 hv2/vm2\n+ local nd_ns2=$(build_nd_ns \"f0:00:00:00:00:02\" \"fd01::2\" \"fd01::1\" \"ff02::1:ff00:1\")\n+ as hv2 ovs-appctl netdev-dummy/receive vm2 $nd_ns2\n+ local nd_na2=$(build_nd_na \"f0:00:00:00:00:02\" \"fd01::2\" \\\n+ \"f0:00:00:00:00:01\" \"fd01::1\")\n+ echo $nd_na2 > expected_nd_proxy\n+ 
OVN_CHECK_PACKETS_CONTAIN([hv2/vm2-tx.pcap], [expected_nd_proxy])\n+}\n+\n+AS_BOX([FDB learning disabled])\n+test_arp_nd_localnet\n+\n+AS_BOX([FDB learning enabled])\n+dnl Use 'set' instead of 'lsp-set-options' to preserve network_name.\n+check ovn-nbctl --wait=hv set Logical_Switch_Port ln1 \\\n+ options:localnet_learn_fdb=true\n+test_arp_nd_localnet\n+\n+OVN_CLEANUP([hv1],[hv2])\n+AT_CLEANUP\n+])\n+\n OVN_FOR_EACH_NORTHD([\n AT_SETUP([send reverse arp for router without ipv4 address])\n ovn_start\n", "prefixes": [ "ovs-dev", "2/2" ] }