From patchwork Tue Sep 27 12:46:00 2016
X-Patchwork-Submitter: Paul Blakey
X-Patchwork-Id: 675564
From: Paul Blakey <paulb@mellanox.com>
To: dev@openvswitch.org
Date: Tue, 27 Sep 2016 15:46:00 +0300
Message-Id: <1474980364-9291-6-git-send-email-paulb@mellanox.com>
In-Reply-To: <1474980364-9291-1-git-send-email-paulb@mellanox.com>
References: <1474980364-9291-1-git-send-email-paulb@mellanox.com>
Cc: Shahar Klein, Andy Gospodarek, Rony Efraim, Paul Blakey, Simon Horman, Or Gerlitz
Subject: [ovs-dev] [PATCH ovs RFC 5/9] dpif-hw-netlink: converting a tc flow back to ovs flow

Add a new function that converts a tc flow back to a
dpif-flow.

Signed-off-by: Paul Blakey
Signed-off-by: Shahar Klein
---
 lib/dpif-hw-netlink.c | 169 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 169 insertions(+)

diff --git a/lib/dpif-hw-netlink.c b/lib/dpif-hw-netlink.c
index e14c64c..92ac6cb 100644
--- a/lib/dpif-hw-netlink.c
+++ b/lib/dpif-hw-netlink.c
@@ -381,6 +381,175 @@ get_policy(struct dpif_hw_netlink *dpif, const ovs_u128 * ovs_ufid)
     return data->offloading_policy;
 }
 
+static int
+dpif_hw_tc_flow_to_dpif_flow(struct dpif_hw_netlink *dpif,
+                             struct tc_flow *tc_flow,
+                             struct dpif_flow *dpif_flow, odp_port_t inport,
+                             struct ofpbuf *outflow, struct netdev *indev)
+{
+    struct ofpbuf mask_d, *mask = &mask_d;
+
+    ofpbuf_init(mask, 512);
+
+    dpif_flow->pmd_id = PMD_ID_NULL;
+
+    size_t key_offset = nl_msg_start_nested(outflow, OVS_FLOW_ATTR_KEY);
+    size_t mask_offset = nl_msg_start_nested(mask, OVS_FLOW_ATTR_MASK);
+
+    nl_msg_put_u32(outflow, OVS_KEY_ATTR_IN_PORT, inport);
+    nl_msg_put_u32(mask, OVS_KEY_ATTR_IN_PORT, 0xFFFFFFFF);
+
+    /* OVS_KEY_ATTR_ETHERNET */
+    struct ovs_key_ethernet *eth_key =
+        nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_ETHERNET,
+                                 sizeof (*eth_key));
+    struct ovs_key_ethernet *eth_key_mask =
+        nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_ETHERNET,
+                                 sizeof (*eth_key_mask));
+
+    memset(eth_key_mask, 0xFF, sizeof (*eth_key_mask));
+    eth_key->eth_src = tc_flow->src_mac;
+    eth_key->eth_dst = tc_flow->dst_mac;
+    eth_key_mask->eth_src = tc_flow->src_mac_mask;
+    eth_key_mask->eth_dst = tc_flow->dst_mac_mask;
+
+    nl_msg_put_be16(outflow, OVS_KEY_ATTR_ETHERTYPE, tc_flow->eth_type);
+    nl_msg_put_be16(mask, OVS_KEY_ATTR_ETHERTYPE, 0xFFFF);
+
+    /* OVS_KEY_ATTR_IPV4 */
+    if (tc_flow->ip_proto) {
+        struct ovs_key_ipv4 *ipv4 =
+            nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_IPV4,
+                                     sizeof (*ipv4));
+        struct ovs_key_ipv4 *ipv4_mask =
+            nl_msg_put_unspec_zero(mask, OVS_KEY_ATTR_IPV4,
+                                   sizeof (*ipv4_mask));
+
+        memset(&ipv4_mask->ipv4_proto, 0xFF, sizeof (ipv4_mask->ipv4_proto));
+        ipv4->ipv4_proto = tc_flow->ip_proto;
+        ipv4_mask->ipv4_frag = UINT8_MAX;
+
+        if (tc_flow->ip_type == 4) {
+            if (tc_flow->ipv4.ipv4_src)
+                ipv4->ipv4_src = tc_flow->ipv4.ipv4_src;
+            if (tc_flow->ipv4.ipv4_src_mask)
+                ipv4_mask->ipv4_src = tc_flow->ipv4.ipv4_src_mask;
+            if (tc_flow->ipv4.ipv4_dst)
+                ipv4->ipv4_dst = tc_flow->ipv4.ipv4_dst;
+            if (tc_flow->ipv4.ipv4_dst_mask)
+                ipv4_mask->ipv4_dst = tc_flow->ipv4.ipv4_dst_mask;
+        }
+
+        if (tc_flow->ip_proto == IPPROTO_ICMP) {
+            /* putting a masked out icmp */
+            struct ovs_key_icmp *icmp =
+                nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_ICMP,
+                                         sizeof (*icmp));
+            struct ovs_key_icmp *icmp_mask =
+                nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_ICMP,
+                                         sizeof (*icmp_mask));
+
+            icmp->icmp_type = 0;
+            icmp->icmp_code = 0;
+            memset(icmp_mask, 0, sizeof (*icmp_mask));
+        }
+        if (tc_flow->ip_proto == IPPROTO_TCP) {
+            struct ovs_key_tcp *tcp =
+                nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_TCP,
+                                         sizeof (*tcp));
+            struct ovs_key_tcp *tcp_mask =
+                nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_TCP,
+                                         sizeof (*tcp_mask));
+
+            memset(tcp_mask, 0x00, sizeof (*tcp_mask));
+
+            tcp->tcp_src = tc_flow->src_port;
+            tcp_mask->tcp_src = tc_flow->src_port_mask;
+            tcp->tcp_dst = tc_flow->dst_port;
+            tcp_mask->tcp_dst = tc_flow->dst_port_mask;
+        }
+        if (tc_flow->ip_proto == IPPROTO_UDP) {
+            struct ovs_key_udp *udp =
+                nl_msg_put_unspec_uninit(outflow, OVS_KEY_ATTR_UDP,
+                                         sizeof (*udp));
+            struct ovs_key_udp *udp_mask =
+                nl_msg_put_unspec_uninit(mask, OVS_KEY_ATTR_UDP,
+                                         sizeof (*udp_mask));
+
+            memset(udp_mask, 0xFF, sizeof (*udp_mask));
+
+            udp->udp_src = tc_flow->src_port;
+            udp_mask->udp_src = tc_flow->src_port_mask;
+            udp->udp_dst = tc_flow->dst_port;
+            udp_mask->udp_dst = tc_flow->dst_port_mask;
+        }
+    }
+    nl_msg_end_nested(outflow, key_offset);
+    nl_msg_end_nested(mask, mask_offset);
+
+    size_t actions_offset =
+        nl_msg_start_nested(outflow, OVS_FLOW_ATTR_ACTIONS);
+    if (tc_flow->ifindex_out) {
+        /* TODO: make this faster */
+        int ovsport = get_ovs_port(dpif, tc_flow->ifindex_out);
+
+        nl_msg_put_u32(outflow, OVS_ACTION_ATTR_OUTPUT, ovsport);
+    }
+    nl_msg_end_nested(outflow, actions_offset);
+
+    struct nlattr *mask_attr =
+        ofpbuf_at_assert(mask, mask_offset, sizeof *mask_attr);
+    void *mask_data = ofpbuf_put_uninit(outflow, mask_attr->nla_len);
+
+    memcpy(mask_data, mask_attr, mask_attr->nla_len);
+    mask_attr = mask_data;
+
+    struct nlattr *key_attr =
+        ofpbuf_at_assert(outflow, key_offset, sizeof *key_attr);
+    struct nlattr *actions_attr =
+        ofpbuf_at_assert(outflow, actions_offset, sizeof *actions_attr);
+
+    dpif_flow->key = nl_attr_get(key_attr);
+    dpif_flow->key_len = nl_attr_get_size(key_attr);
+    dpif_flow->mask = nl_attr_get(mask_attr);
+    dpif_flow->mask_len = nl_attr_get_size(mask_attr);
+    dpif_flow->actions = nl_attr_get(actions_attr);
+    dpif_flow->actions_len = nl_attr_get_size(actions_attr);
+
+    if (tc_flow->stats.n_packets.hi || tc_flow->stats.n_packets.lo) {
+        dpif_flow->stats.used = tc_flow->lastused ? tc_flow->lastused : 0;
+        dpif_flow->stats.n_packets =
+            get_32aligned_u64(&tc_flow->stats.n_packets);
+        dpif_flow->stats.n_bytes = get_32aligned_u64(&tc_flow->stats.n_bytes);
+    } else {
+        dpif_flow->stats.used = 0;
+        dpif_flow->stats.n_packets = 0;
+        dpif_flow->stats.n_bytes = 0;
+    }
+    dpif_flow->stats.tcp_flags = 0;
+
+    dpif_flow->ufid_present = false;
+
+    ovs_u128 *ovs_ufid =
+        findufid(dpif, inport, tc_flow->handle, tc_flow->eth_type);
+    if (ovs_ufid) {
+        VLOG_DBG("Found UFID!, handle: %d, ufid: %s\n", tc_flow->handle,
+                 printufid(ovs_ufid));
+        dpif_flow->ufid = *ovs_ufid;
+        dpif_flow->ufid_present = true;
+    } else {
+        VLOG_DBG("Creating new UFID\n");
+        ovs_assert(dpif_flow->key && dpif_flow->key_len);
+        dpif_flow_hash(&dpif->dpif, dpif_flow->key, dpif_flow->key_len,
+                       &dpif_flow->ufid);
+        dpif_flow->ufid_present = true;
+        puthandle(dpif, &dpif_flow->ufid, indev, inport, tc_flow->handle,
+                  tc_flow->eth_type);
+    }
+
+    return 0;
+}
+
 static struct dpif_hw_netlink *
 dpif_hw_netlink_cast(const struct dpif *dpif)
 {