From patchwork Mon May 7 18:36:05 2018
X-Patchwork-Submitter: Numan Siddique
X-Patchwork-Id: 909890
From: nusiddiq@redhat.com
To: dev@openvswitch.org
Date: Tue, 8 May 2018 00:06:05 +0530
Message-Id: <20180507183605.11612-1-nusiddiq@redhat.com>
Subject: [ovs-dev] [PATCH] ovn: Add a new action 'nd_na_router' to handle NS requests for router IPs

From: Numan Siddique

Presently, when a VM's IPv6 stack sends a Neighbor Solicitation request for
its router IP (mostly when the ND cache entry for the router is in STALE
state), ovn-controller responds with a Neighbor Advertisement packet (using
the action 'nd_na'), but it doesn't set 'ND_RSO_ROUTER' in the RSO flags.
Because of this, the VM deletes the default route.
The default route gets added again when the next RA is received, but it would
be deleted again as soon as the VM sends another NS request. This results in
disruption of IPv6 traffic.

This patch addresses the issue by adding a new action 'nd_na_router', which
is the same as 'nd_na' except that it sets 'ND_RSO_ROUTER' in the RSO flags.
ovn-northd now uses this action.

A new action is added instead of modifying the existing 'nd_na' action
because:
  - We cannot set the RSO flags inside "nd_na { ...actions... }".
  - It would be ugly to have something like
    "nd_na { router_flags, ...actions... }".

(Please note: there are 3 'Line length is >79-characters' warnings in ovn.at
which I couldn't resolve.)

Reported-at: https://bugzilla.redhat.com/show_bug.cgi?id=1567735
CC: Justin Pettit
CC: Mark Michelson
Signed-off-by: Numan Siddique
---
 include/ovn/actions.h     |  7 +++++++
 ovn/controller/pinctrl.c  | 23 +++++++++++++++--------
 ovn/lib/actions.c         | 22 ++++++++++++++++++++++
 ovn/northd/ovn-northd.c   |  2 +-
 ovn/utilities/ovn-trace.c |  1 +
 tests/ovn.at              |  5 +++++
 6 files changed, 51 insertions(+), 9 deletions(-)

diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index fb8f51509..638465193 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -68,6 +68,7 @@ struct ovn_extend_table;
     OVNACT(ICMP6,             ovnact_nest)            \
     OVNACT(TCP_RESET,         ovnact_nest)            \
     OVNACT(ND_NA,             ovnact_nest)            \
+    OVNACT(ND_NA_ROUTER,      ovnact_nest)            \
     OVNACT(GET_ARP,           ovnact_get_mac_bind)    \
     OVNACT(PUT_ARP,           ovnact_put_mac_bind)    \
     OVNACT(GET_ND,            ovnact_get_mac_bind)    \
@@ -444,6 +445,12 @@ enum action_opcode {
      * The actions, in OpenFlow 1.3 format, follow the action_header.
      */
     ACTION_OPCODE_TCP_RESET,
+
+    /* "nd_na_router { ...actions... }" with rso flag 'ND_RSO_ROUTER' set.
+     *
+     * The actions, in OpenFlow 1.3 format, follow the action_header.
+     */
+    ACTION_OPCODE_ND_NA_ROUTER,
 };
 
 /* Header. */
diff --git a/ovn/controller/pinctrl.c b/ovn/controller/pinctrl.c
index 6e6aa1caa..305f20649 100644
--- a/ovn/controller/pinctrl.c
+++ b/ovn/controller/pinctrl.c
@@ -81,7 +81,8 @@ static void send_garp_run(struct controller_ctx *ctx,
                           struct sset *active_tunnels);
 static void pinctrl_handle_nd_na(const struct flow *ip_flow,
                                  const struct match *md,
-                                 struct ofpbuf *userdata);
+                                 struct ofpbuf *userdata,
+                                 bool is_router);
 static void reload_metadata(struct ofpbuf *ofpacts, const struct match *md);
 
 static void pinctrl_handle_put_nd_ra_opts(
@@ -1154,7 +1155,11 @@ process_packet_in(const struct ofp_header *msg, struct controller_ctx *ctx)
         break;
 
     case ACTION_OPCODE_ND_NA:
-        pinctrl_handle_nd_na(&headers, &pin.flow_metadata, &userdata);
+        pinctrl_handle_nd_na(&headers, &pin.flow_metadata, &userdata, false);
+        break;
+
+    case ACTION_OPCODE_ND_NA_ROUTER:
+        pinctrl_handle_nd_na(&headers, &pin.flow_metadata, &userdata, true);
         break;
 
     case ACTION_OPCODE_PUT_ND:
@@ -2308,7 +2313,7 @@ reload_metadata(struct ofpbuf *ofpacts, const struct match *md)
 
 static void
 pinctrl_handle_nd_na(const struct flow *ip_flow, const struct match *md,
-                     struct ofpbuf *userdata)
+                     struct ofpbuf *userdata, bool is_router)
 {
     /* This action only works for IPv6 ND packets, and the switch should only
      * send us ND packets this way, but check here just to be sure. */
@@ -2322,13 +2327,15 @@ pinctrl_handle_nd_na(const struct flow *ip_flow, const struct match *md,
     struct dp_packet packet;
     dp_packet_use_stub(&packet, packet_stub, sizeof packet_stub);
 
-    /* xxx These flags are not exactly correct. Look at section 7.2.4
-     * xxx of RFC 4861. For example, we need to set ND_RSO_ROUTER for
-     * xxx router's interfaces and ND_RSO_SOLICITED only if it was
-     * xxx requested. */
+    /* These flags are not exactly correct. Look at section 7.2.4
+     * of RFC 4861. */
+    uint32_t rso_flags = ND_RSO_SOLICITED | ND_RSO_OVERRIDE;
+    if (is_router) {
+        rso_flags |= ND_RSO_ROUTER;
+    }
     compose_nd_na(&packet, ip_flow->dl_dst, ip_flow->dl_src,
                   &ip_flow->nd_target, &ip_flow->ipv6_src,
-                  htonl(ND_RSO_SOLICITED | ND_RSO_OVERRIDE));
+                  htonl(rso_flags));
 
     /* Reload previous packet metadata and set actions from userdata. */
     set_actions_and_enqueue_msg(&packet, md, userdata);
diff --git a/ovn/lib/actions.c b/ovn/lib/actions.c
index a6945812d..0669cc1c9 100644
--- a/ovn/lib/actions.c
+++ b/ovn/lib/actions.c
@@ -1155,6 +1155,12 @@ parse_ND_NA(struct action_context *ctx)
     parse_nested_action(ctx, OVNACT_ND_NA, "nd_ns");
 }
 
+static void
+parse_ND_NA_ROUTER(struct action_context *ctx)
+{
+    parse_nested_action(ctx, OVNACT_ND_NA_ROUTER, "nd_ns");
+}
+
 static void
 parse_ND_NS(struct action_context *ctx)
 {
@@ -1206,6 +1212,12 @@ format_ND_NA(const struct ovnact_nest *nest, struct ds *s)
     format_nested_action(nest, "nd_na", s);
 }
 
+static void
+format_ND_NA_ROUTER(const struct ovnact_nest *nest, struct ds *s)
+{
+    format_nested_action(nest, "nd_na_router", s);
+}
+
 static void
 format_ND_NS(const struct ovnact_nest *nest, struct ds *s)
 {
@@ -1282,6 +1294,14 @@ encode_ND_NA(const struct ovnact_nest *on,
     encode_nested_actions(on, ep, ACTION_OPCODE_ND_NA, ofpacts);
 }
 
+static void
+encode_ND_NA_ROUTER(const struct ovnact_nest *on,
+                    const struct ovnact_encode_params *ep,
+                    struct ofpbuf *ofpacts)
+{
+    encode_nested_actions(on, ep, ACTION_OPCODE_ND_NA_ROUTER, ofpacts);
+}
+
 static void
 encode_ND_NS(const struct ovnact_nest *on,
              const struct ovnact_encode_params *ep,
@@ -2305,6 +2325,8 @@ parse_action(struct action_context *ctx)
         parse_TCP_RESET(ctx);
     } else if (lexer_match_id(ctx->lexer, "nd_na")) {
         parse_ND_NA(ctx);
+    } else if (lexer_match_id(ctx->lexer, "nd_na_router")) {
+        parse_ND_NA_ROUTER(ctx);
     } else if (lexer_match_id(ctx->lexer, "nd_ns")) {
         parse_ND_NS(ctx);
     } else if (lexer_match_id(ctx->lexer, "get_arp")) {
diff --git a/ovn/northd/ovn-northd.c b/ovn/northd/ovn-northd.c
index ce472a536..b157cd1eb 100644
--- a/ovn/northd/ovn-northd.c
+++ b/ovn/northd/ovn-northd.c
@@ -5104,7 +5104,7 @@ build_lrouter_flows(struct hmap *datapaths, struct hmap *ports,
             ds_clear(&actions);
             ds_put_format(&actions,
                           "put_nd(inport, ip6.src, nd.sll); "
-                          "nd_na { "
+                          "nd_na_router { "
                           "eth.src = %s; "
                           "ip6.src = %s; "
                           "nd.target = %s; "
diff --git a/ovn/utilities/ovn-trace.c b/ovn/utilities/ovn-trace.c
index 9c19b5b9a..1fd48f22e 100644
--- a/ovn/utilities/ovn-trace.c
+++ b/ovn/utilities/ovn-trace.c
@@ -1927,6 +1927,7 @@ trace_actions(const struct ovnact *ovnacts, size_t ovnacts_len,
             break;
 
         case OVNACT_ND_NA:
+        case OVNACT_ND_NA_ROUTER:
             execute_nd_na(ovnact_get_ND_NA(a), dp, uflow, table_id,
                           pipeline, super);
             break;
diff --git a/tests/ovn.at b/tests/ovn.at
index 4a5316510..2c0ae9877 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -1033,6 +1033,11 @@ nd_na { eth.src = 12:34:56:78:9a:bc; nd.tll = 12:34:56:78:9a:bc; outport = inpor
     formats as nd_na { eth.src = 12:34:56:78:9a:bc; nd.tll = 12:34:56:78:9a:bc; outport = inport; inport = ""; output; };
     encodes as controller(userdata=00.00.00.03.00.00.00.00.00.19.00.10.80.00.08.06.12.34.56.78.9a.bc.00.00.00.19.00.10.80.00.42.06.12.34.56.78.9a.bc.00.00.ff.ff.00.18.00.00.23.20.00.06.00.20.00.00.00.00.00.01.1c.04.00.01.1e.04.00.19.00.10.00.01.1c.04.00.00.00.00.00.00.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.40.00.00.00)
     has prereqs nd_ns
+# nd_na_router
+nd_na_router { eth.src = 12:34:56:78:9a:bc; nd.tll = 12:34:56:78:9a:bc; outport = inport; inport = ""; /* Allow sending out inport. */ output; };
+    formats as nd_na_router { eth.src = 12:34:56:78:9a:bc; nd.tll = 12:34:56:78:9a:bc; outport = inport; inport = ""; output; };
+    encodes as controller(userdata=00.00.00.0c.00.00.00.00.00.19.00.10.80.00.08.06.12.34.56.78.9a.bc.00.00.00.19.00.10.80.00.42.06.12.34.56.78.9a.bc.00.00.ff.ff.00.18.00.00.23.20.00.06.00.20.00.00.00.00.00.01.1c.04.00.01.1e.04.00.19.00.10.00.01.1c.04.00.00.00.00.00.00.00.00.ff.ff.00.10.00.00.23.20.00.0e.ff.f8.40.00.00.00)
+    has prereqs nd_ns
 
 # get_nd
 get_nd(outport, ip6.dst);
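For reference, below is a minimal, standalone C sketch (illustration only, not
part of the patch) of the flag selection that the new 'is_router' parameter
drives in pinctrl_handle_nd_na() above. The ND_RSO_* constants are redefined
locally for the example; in OVS they come from lib/packets.h and correspond to
the Router/Solicited/Override bits of the Neighbor Advertisement flags word
(RFC 4861, section 4.4).

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Router/Solicited/Override bits of the NA flags word (RFC 4861, 4.4).
     * Redefined here for illustration; OVS defines these in lib/packets.h. */
    #define ND_RSO_ROUTER    0x80000000
    #define ND_RSO_SOLICITED 0x40000000
    #define ND_RSO_OVERRIDE  0x20000000

    /* Mirrors the flag selection in pinctrl_handle_nd_na(): Solicited and
     * Override are always set; the Router bit is added only for the new
     * 'nd_na_router' action. */
    static uint32_t
    na_rso_flags(bool is_router)
    {
        uint32_t rso_flags = ND_RSO_SOLICITED | ND_RSO_OVERRIDE;
        if (is_router) {
            rso_flags |= ND_RSO_ROUTER;
        }
        return rso_flags;
    }

    int
    main(void)
    {
        printf("nd_na:        0x%08" PRIx32 "\n", na_rso_flags(false));
        printf("nd_na_router: 0x%08" PRIx32 "\n", na_rso_flags(true));
        return 0;
    }

Compiled and run, this prints 0x60000000 for 'nd_na' and 0xe0000000 for
'nd_na_router'; in the patch the chosen value is passed through htonl() before
compose_nd_na() writes it into the advertisement.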