From patchwork Thu Mar 10 18:54:20 2016
X-Patchwork-Submitter: Jarno Rajahalme
X-Patchwork-Id: 595920
From: Jarno Rajahalme
To: netfilter-devel@vger.kernel.org
Cc: dev@openvswitch.org, netdev@vger.kernel.org
Date: Thu, 10 Mar 2016 10:54:20 -0800
Message-Id: <1457636063-92746-6-git-send-email-jarno@ovn.org>
In-Reply-To: <1457636063-92746-1-git-send-email-jarno@ovn.org>
References: <1457636063-92746-1-git-send-email-jarno@ovn.org>
X-Mailer: git-send-email 2.1.4
Subject: [ovs-dev] [PATCH nf-next v10 5/8] openvswitch: Find existing conntrack entry after upcall.
Add a new function ovs_ct_find_existing() to find an existing conntrack
entry to which this packet was already applied.  This is only to be
called when there is evidence that the packet was already tracked and
committed, but we lost the ct reference due to a userspace upcall.

ovs_ct_find_existing() is called from skb_nfct_cached(), which can now
hide the fact that the ct reference may have been lost due to an
upcall.  This allows ovs_ct_commit() to be simplified.

This patch is needed by the later "openvswitch: Interface with NAT"
patch, as we need to be able to pass the packet through NAT using the
original ct reference even after the reference is lost due to an
upcall.

Signed-off-by: Jarno Rajahalme
Acked-by: Joe Stringer
---
 net/openvswitch/conntrack.c | 103 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 90 insertions(+), 13 deletions(-)

diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
index a487bb3..ae36fe2 100644
--- a/net/openvswitch/conntrack.c
+++ b/net/openvswitch/conntrack.c
@@ -356,14 +356,101 @@ ovs_ct_expect_find(struct net *net, const struct nf_conntrack_zone *zone,
 	return __nf_ct_expect_find(net, zone, &tuple);
 }
 
+/* This replicates logic from nf_conntrack_core.c that is not exported. */
+static enum ip_conntrack_info
+ovs_ct_get_info(const struct nf_conntrack_tuple_hash *h)
+{
+	const struct nf_conn *ct = nf_ct_tuplehash_to_ctrack(h);
+
+	if (NF_CT_DIRECTION(h) == IP_CT_DIR_REPLY)
+		return IP_CT_ESTABLISHED_REPLY;
+	/* Once we've had two way comms, always ESTABLISHED. */
+	if (test_bit(IPS_SEEN_REPLY_BIT, &ct->status))
+		return IP_CT_ESTABLISHED;
+	if (test_bit(IPS_EXPECTED_BIT, &ct->status))
+		return IP_CT_RELATED;
+	return IP_CT_NEW;
+}
+
+/* Find an existing connection which this packet belongs to without
+ * re-attributing statistics or modifying the connection state.  This allows an
+ * skb->nfct lost due to an upcall to be recovered during actions execution.
+ *
+ * Must be called with rcu_read_lock.
+ *
+ * On success, populates skb->nfct and skb->nfctinfo, and returns the
+ * connection.  Returns NULL if there is no existing entry.
+ */
+static struct nf_conn *
+ovs_ct_find_existing(struct net *net, const struct nf_conntrack_zone *zone,
+		     u8 l3num, struct sk_buff *skb)
+{
+	struct nf_conntrack_l3proto *l3proto;
+	struct nf_conntrack_l4proto *l4proto;
+	struct nf_conntrack_tuple tuple;
+	struct nf_conntrack_tuple_hash *h;
+	enum ip_conntrack_info ctinfo;
+	struct nf_conn *ct;
+	unsigned int dataoff;
+	u8 protonum;
+
+	l3proto = __nf_ct_l3proto_find(l3num);
+	if (!l3proto) {
+		pr_debug("ovs_ct_find_existing: Can't get l3proto\n");
+		return NULL;
+	}
+	if (l3proto->get_l4proto(skb, skb_network_offset(skb), &dataoff,
+				 &protonum) <= 0) {
+		pr_debug("ovs_ct_find_existing: Can't get protonum\n");
+		return NULL;
+	}
+	l4proto = __nf_ct_l4proto_find(l3num, protonum);
+	if (!l4proto) {
+		pr_debug("ovs_ct_find_existing: Can't get l4proto\n");
+		return NULL;
+	}
+	if (!nf_ct_get_tuple(skb, skb_network_offset(skb), dataoff, l3num,
+			     protonum, net, &tuple, l3proto, l4proto)) {
+		pr_debug("ovs_ct_find_existing: Can't get tuple\n");
+		return NULL;
+	}
+
+	/* look for tuple match */
+	h = nf_conntrack_find_get(net, zone, &tuple);
+	if (!h)
+		return NULL;   /* Not found. */
+
+	ct = nf_ct_tuplehash_to_ctrack(h);
+
+	ctinfo = ovs_ct_get_info(h);
+	if (ctinfo == IP_CT_NEW) {
+		/* This should not happen. */
+		WARN_ONCE(1, "ovs_ct_find_existing: new packet for %p\n", ct);
+	}
+	skb->nfct = &ct->ct_general;
+	skb->nfctinfo = ctinfo;
+	return ct;
+}
+
 /* Determine whether skb->nfct is equal to the result of conntrack lookup. */
-static bool skb_nfct_cached(const struct net *net, const struct sk_buff *skb,
-			    const struct ovs_conntrack_info *info)
+static bool skb_nfct_cached(struct net *net,
+			    const struct sw_flow_key *key,
+			    const struct ovs_conntrack_info *info,
+			    struct sk_buff *skb)
 {
 	enum ip_conntrack_info ctinfo;
 	struct nf_conn *ct;
 
 	ct = nf_ct_get(skb, &ctinfo);
+	/* If no ct, check if we have evidence that an existing conntrack entry
+	 * might be found for this skb.  This happens when we lose a skb->nfct
+	 * due to an upcall.  If the connection was not confirmed, it is not
+	 * cached and needs to be run through conntrack again.
+	 */
+	if (!ct && key->ct.state & OVS_CS_F_TRACKED &&
+	    !(key->ct.state & OVS_CS_F_INVALID) &&
+	    key->ct.zone == info->zone.id)
+		ct = ovs_ct_find_existing(net, &info->zone, info->family, skb);
 	if (!ct)
 		return false;
 	if (!net_eq(net, read_pnet(&ct->ct_net)))
@@ -396,7 +483,7 @@ static int __ovs_ct_lookup(struct net *net, struct sw_flow_key *key,
 	 * actually run the packet through conntrack twice unless it's for a
 	 * different zone.
 	 */
-	if (!skb_nfct_cached(net, skb, info)) {
+	if (!skb_nfct_cached(net, key, info, skb)) {
 		struct nf_conn *tmpl = info->ct;
 
 		/* Associate skb with specified zone. */
@@ -459,18 +546,8 @@ static int ovs_ct_commit(struct net *net, struct sw_flow_key *key,
 			 const struct ovs_conntrack_info *info,
 			 struct sk_buff *skb)
 {
-	u8 state;
 	int err;
 
-	state = key->ct.state;
-	if (key->ct.zone == info->zone.id &&
-	    ((state & OVS_CS_F_TRACKED) && !(state & OVS_CS_F_NEW))) {
-		/* Previous lookup has shown that this connection is already
-		 * tracked and committed. Skip committing.
-		 */
-		return 0;
-	}
-
 	err = __ovs_ct_lookup(net, key, info, skb);
 	if (err)
 		return err;
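
Editorial note (not part of the patch): for readers less familiar with the OVS
datapath, the sketch below restates, as a standalone predicate, the condition
under which the new skb_nfct_cached() attempts to recover a lost skb->nfct.
The helper name ovs_ct_should_recover_ref() is made up for illustration only;
the types and OVS_CS_F_* flags are the ones used in the diff above.

/* Illustrative sketch only.  Recovery is attempted only when the flow key
 * says conntrack already saw this packet (tracked and not invalid) in the
 * same zone as the current ct action, yet skb->nfct is gone -- which is
 * exactly the situation left behind by a userspace upcall.
 */
static bool ovs_ct_should_recover_ref(const struct nf_conn *ct,
				      const struct sw_flow_key *key,
				      const struct ovs_conntrack_info *info)
{
	return !ct &&
	       (key->ct.state & OVS_CS_F_TRACKED) &&
	       !(key->ct.state & OVS_CS_F_INVALID) &&
	       key->ct.zone == info->zone.id;
}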