get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
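
The methods above can be exercised with any HTTP client. The following is a minimal sketch using Python's requests library, not part of the recorded exchange below: the API token, the "Token" authorization scheme, and the new state value are assumptions, and write operations (PUT/PATCH) require an authenticated maintainer or delegate account.

import requests

URL = "http://patchwork.ozlabs.org/api/patches/852528/"
TOKEN = "0123456789abcdef"  # hypothetical API token

# GET: show the patch (no authentication needed for a public project)
resp = requests.get(URL)
resp.raise_for_status()
patch = resp.json()
print(patch["name"], patch["state"])

# PATCH: partially update the patch, e.g. change its state and archive it
# (field values here are illustrative only)
resp = requests.patch(
    URL,
    headers={"Authorization": f"Token {TOKEN}"},
    json={"state": "accepted", "archived": True},
)
resp.raise_for_status()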

GET /api/patches/852528/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 852528,
    "url": "http://patchwork.ozlabs.org/api/patches/852528/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/netdev/patch/20171222192732.13188-3-pablo@netfilter.org/",
    "project": {
        "id": 7,
        "url": "http://patchwork.ozlabs.org/api/projects/7/?format=api",
        "name": "Linux network development",
        "link_name": "netdev",
        "list_id": "netdev.vger.kernel.org",
        "list_email": "netdev@vger.kernel.org",
        "web_url": null,
        "scm_url": null,
        "webscm_url": null,
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20171222192732.13188-3-pablo@netfilter.org>",
    "list_archive_url": null,
    "date": "2017-12-22T19:27:27",
    "name": "[nf-next,v3,2/7] netfilter: add generic flow table infrastructure",
    "commit_ref": null,
    "pull_url": null,
    "state": "rfc",
    "archived": true,
    "hash": "fdf0c0d1d7bd6558f1e6312a78e774b04e19e037",
    "submitter": {
        "id": 1315,
        "url": "http://patchwork.ozlabs.org/api/people/1315/?format=api",
        "name": "Pablo Neira Ayuso",
        "email": "pablo@netfilter.org"
    },
    "delegate": {
        "id": 34,
        "url": "http://patchwork.ozlabs.org/api/users/34/?format=api",
        "username": "davem",
        "first_name": "David",
        "last_name": "Miller",
        "email": "davem@davemloft.net"
    },
    "mbox": "http://patchwork.ozlabs.org/project/netdev/patch/20171222192732.13188-3-pablo@netfilter.org/mbox/",
    "series": [
        {
            "id": 20090,
            "url": "http://patchwork.ozlabs.org/api/series/20090/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/netdev/list/?series=20090",
            "date": "2017-12-22T19:27:25",
            "name": "Flow offload infrastructure",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/20090/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/852528/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/852528/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<netdev-owner@vger.kernel.org>",
        "X-Original-To": "patchwork-incoming@ozlabs.org",
        "Delivered-To": "patchwork-incoming@ozlabs.org",
        "Authentication-Results": "ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=vger.kernel.org\n\t(client-ip=209.132.180.67; helo=vger.kernel.org;\n\tenvelope-from=netdev-owner@vger.kernel.org;\n\treceiver=<UNKNOWN>)",
        "Received": [
            "from vger.kernel.org (vger.kernel.org [209.132.180.67])\n\tby ozlabs.org (Postfix) with ESMTP id 3z3JSx0tGyz9sNr\n\tfor <patchwork-incoming@ozlabs.org>;\n\tSat, 23 Dec 2017 06:28:17 +1100 (AEDT)",
            "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1756816AbdLVT2P (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tFri, 22 Dec 2017 14:28:15 -0500",
            "from mail.us.es ([193.147.175.20]:42328 \"EHLO mail.us.es\"\n\trhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP\n\tid S1756716AbdLVT2F (ORCPT <rfc822;netdev@vger.kernel.org>);\n\tFri, 22 Dec 2017 14:28:05 -0500",
            "from antivirus1-rhel7.int (unknown [192.168.2.11])\n\tby mail.us.es (Postfix) with ESMTP id E4945EBAE6\n\tfor <netdev@vger.kernel.org>; Fri, 22 Dec 2017 20:28:03 +0100 (CET)",
            "from antivirus1-rhel7.int (localhost [127.0.0.1])\n\tby antivirus1-rhel7.int (Postfix) with ESMTP id D3D21F7326\n\tfor <netdev@vger.kernel.org>; Fri, 22 Dec 2017 20:28:03 +0100 (CET)",
            "by antivirus1-rhel7.int (Postfix, from userid 99)\n\tid C6863F731F; Fri, 22 Dec 2017 20:28:03 +0100 (CET)",
            "from antivirus1-rhel7.int (localhost [127.0.0.1])\n\tby antivirus1-rhel7.int (Postfix) with ESMTP id 1B540F7310;\n\tFri, 22 Dec 2017 20:28:01 +0100 (CET)",
            "from 192.168.1.97 (192.168.1.97) by antivirus1-rhel7.int\n\t(F-Secure/fsigk_smtp/550/antivirus1-rhel7.int); \n\tFri, 22 Dec 2017 20:28:01 +0100 (CET)",
            "from salvia.here (129.166.216.87.static.jazztel.es\n\t[87.216.166.129]) (Authenticated sender: pneira@us.es)\n\tby entrada.int (Postfix) with ESMTPA id 2EFFB4265A31;\n\tFri, 22 Dec 2017 20:28:00 +0100 (CET)"
        ],
        "X-Spam-Checker-Version": "SpamAssassin 3.4.1 (2015-04-28) on\n\tantivirus1-rhel7.int",
        "X-Spam-Level": "",
        "X-Spam-Status": "No, score=-108.2 required=7.5 tests=ALL_TRUSTED,BAYES_50,\n\tSMTPAUTH_US2,USER_IN_WHITELIST autolearn=disabled version=3.4.1",
        "X-Virus-Status": "clean(F-Secure/fsigk_smtp/550/antivirus1-rhel7.int)",
        "X-SMTPAUTHUS": "auth mail.us.es",
        "From": "Pablo Neira Ayuso <pablo@netfilter.org>",
        "To": "netfilter-devel@vger.kernel.org",
        "Cc": "netdev@vger.kernel.org, f.fainelli@gmail.com,\n\tsimon.horman@netronome.com, ronye@mellanox.com, jiri@mellanox.com,\n\tnbd@nbd.name, john@phrozen.org, kubakici@wp.pl, fw@strlen.de",
        "Subject": "[PATCH nf-next,\n\tv3 2/7] netfilter: add generic flow table infrastructure",
        "Date": "Fri, 22 Dec 2017 20:27:27 +0100",
        "Message-Id": "<20171222192732.13188-3-pablo@netfilter.org>",
        "X-Mailer": "git-send-email 2.11.0",
        "In-Reply-To": "<20171222192732.13188-1-pablo@netfilter.org>",
        "References": "<20171222192732.13188-1-pablo@netfilter.org>",
        "X-Virus-Scanned": "ClamAV using ClamSMTP",
        "Sender": "netdev-owner@vger.kernel.org",
        "Precedence": "bulk",
        "List-ID": "<netdev.vger.kernel.org>",
        "X-Mailing-List": "netdev@vger.kernel.org"
    },
    "content": "This patch defines the API to interact with flow tables, this allows to\nadd, delete and lookup for entries in the flow table. This also adds the\ngeneric garbage code that removes entries that have expired, ie. no\ntraffic has been seen for a while.\n\nUsers of the flow table infrastructure can delete entries via\nflow_offload_dead(), which sets the dying bit, this signals the garbage\ncollector to release an entry from user context.\n\nSigned-off-by: Pablo Neira Ayuso <pablo@netfilter.org>\n---\n include/net/netfilter/nf_flow_table.h |  94 ++++++++\n net/netfilter/Kconfig                 |   7 +\n net/netfilter/Makefile                |   3 +\n net/netfilter/nf_flow_table.c         | 434 ++++++++++++++++++++++++++++++++++\n 4 files changed, 538 insertions(+)\n create mode 100644 net/netfilter/nf_flow_table.c",
    "diff": "diff --git a/include/net/netfilter/nf_flow_table.h b/include/net/netfilter/nf_flow_table.h\nindex 3a0779589281..161f71ca78a0 100644\n--- a/include/net/netfilter/nf_flow_table.h\n+++ b/include/net/netfilter/nf_flow_table.h\n@@ -1,7 +1,12 @@\n #ifndef _NF_FLOW_TABLE_H\n #define _NF_FLOW_TABLE_H\n \n+#include <linux/in.h>\n+#include <linux/in6.h>\n+#include <linux/netdevice.h>\n #include <linux/rhashtable.h>\n+#include <linux/rcupdate.h>\n+#include <net/dst.h>\n \n struct nf_flowtable;\n \n@@ -20,4 +25,93 @@ struct nf_flowtable {\n \tstruct delayed_work\t\tgc_work;\n };\n \n+enum flow_offload_tuple_dir {\n+\tFLOW_OFFLOAD_DIR_ORIGINAL,\n+\tFLOW_OFFLOAD_DIR_REPLY,\n+\t__FLOW_OFFLOAD_DIR_MAX\t\t= FLOW_OFFLOAD_DIR_REPLY,\n+};\n+#define FLOW_OFFLOAD_DIR_MAX\t(__FLOW_OFFLOAD_DIR_MAX + 1)\n+\n+struct flow_offload_tuple {\n+\tunion {\n+\t\tstruct in_addr\t\tsrc_v4;\n+\t\tstruct in6_addr\t\tsrc_v6;\n+\t};\n+\tunion {\n+\t\tstruct in_addr\t\tdst_v4;\n+\t\tstruct in6_addr\t\tdst_v6;\n+\t};\n+\tstruct {\n+\t\t__be16\t\t\tsrc_port;\n+\t\t__be16\t\t\tdst_port;\n+\t};\n+\n+\tint\t\t\t\tiifidx;\n+\n+\tu8\t\t\t\tl3proto;\n+\tu8\t\t\t\tl4proto;\n+\tu8\t\t\t\tdir;\n+\n+\tint\t\t\t\toifidx;\n+\n+\tstruct dst_entry\t\t*dst_cache;\n+};\n+\n+struct flow_offload_tuple_rhash {\n+\tstruct rhash_head\t\tnode;\n+\tstruct flow_offload_tuple\ttuple;\n+};\n+\n+#define FLOW_OFFLOAD_SNAT\t0x1\n+#define FLOW_OFFLOAD_DNAT\t0x2\n+#define FLOW_OFFLOAD_DYING\t0x4\n+\n+struct flow_offload {\n+\tstruct flow_offload_tuple_rhash\t\ttuplehash[FLOW_OFFLOAD_DIR_MAX];\n+\tu32\t\t\t\t\tflags;\n+\tunion {\n+\t\t/* Your private driver data here. */\n+\t\tu32\t\ttimeout;\n+\t};\n+};\n+\n+#define NF_FLOW_TIMEOUT (30 * HZ)\n+\n+struct nf_flow_route {\n+\tstruct {\n+\t\tstruct dst_entry\t*dst;\n+\t\tint\t\t\tifindex;\n+\t} tuple[FLOW_OFFLOAD_DIR_MAX];\n+};\n+\n+struct flow_offload *flow_offload_alloc(struct nf_conn *ct,\n+\t\t\t\t\tstruct nf_flow_route *route);\n+void flow_offload_free(struct flow_offload *flow);\n+\n+int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow);\n+void flow_offload_del(struct nf_flowtable *flow_table, struct flow_offload *flow);\n+struct flow_offload_tuple_rhash *flow_offload_lookup(struct nf_flowtable *flow_table,\n+\t\t\t\t\t\t     struct flow_offload_tuple *tuple);\n+int nf_flow_table_iterate(struct nf_flowtable *flow_table,\n+\t\t\t  void (*iter)(struct flow_offload *flow, void *data),\n+\t\t\t  void *data);\n+void nf_flow_offload_work_gc(struct work_struct *work);\n+extern const struct rhashtable_params nf_flow_offload_rhash_params;\n+\n+void flow_offload_dead(struct flow_offload *flow);\n+\n+int nf_flow_snat_port(const struct flow_offload *flow,\n+\t\t      struct sk_buff *skb, unsigned int thoff,\n+\t\t      u8 protocol, enum flow_offload_tuple_dir dir);\n+int nf_flow_dnat_port(const struct flow_offload *flow,\n+\t\t      struct sk_buff *skb, unsigned int thoff,\n+\t\t      u8 protocol, enum flow_offload_tuple_dir dir);\n+\n+struct flow_ports {\n+\t__be16 source, dest;\n+};\n+\n+#define MODULE_ALIAS_NF_FLOWTABLE(family)\t\\\n+\tMODULE_ALIAS(\"nf-flowtable-\" __stringify(family))\n+\n #endif /* _FLOW_OFFLOAD_H */\ndiff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig\nindex e4a13cc8a2e7..af0f58322515 100644\n--- a/net/netfilter/Kconfig\n+++ b/net/netfilter/Kconfig\n@@ -649,6 +649,13 @@ endif # NF_TABLES_NETDEV\n \n endif # NF_TABLES\n \n+config NF_FLOW_TABLE\n+\ttristate \"Netfilter flow table module\"\n+\thelp\n+\t  This option adds the flow table core 
infrastructure.\n+\n+\t  To compile it as a module, choose M here.\n+\n config NETFILTER_XTABLES\n \ttristate \"Netfilter Xtables support (required for ip_tables)\"\n \tdefault m if NETFILTER_ADVANCED=n\ndiff --git a/net/netfilter/Makefile b/net/netfilter/Makefile\nindex d3891c93edd6..1f7d92bd571a 100644\n--- a/net/netfilter/Makefile\n+++ b/net/netfilter/Makefile\n@@ -106,6 +106,9 @@ obj-$(CONFIG_NFT_FIB_NETDEV)\t+= nft_fib_netdev.o\n obj-$(CONFIG_NFT_DUP_NETDEV)\t+= nft_dup_netdev.o\n obj-$(CONFIG_NFT_FWD_NETDEV)\t+= nft_fwd_netdev.o\n \n+# flow table infrastructure\n+obj-$(CONFIG_NF_FLOW_TABLE)\t+= nf_flow_table.o\n+\n # generic X tables \n obj-$(CONFIG_NETFILTER_XTABLES) += x_tables.o xt_tcpudp.o\n \ndiff --git a/net/netfilter/nf_flow_table.c b/net/netfilter/nf_flow_table.c\nnew file mode 100644\nindex 000000000000..e1024b17b910\n--- /dev/null\n+++ b/net/netfilter/nf_flow_table.c\n@@ -0,0 +1,434 @@\n+#include <linux/kernel.h>\n+#include <linux/init.h>\n+#include <linux/module.h>\n+#include <linux/netfilter.h>\n+#include <linux/rhashtable.h>\n+#include <linux/netdevice.h>\n+#include <net/netfilter/nf_flow_table.h>\n+#include <net/netfilter/nf_conntrack.h>\n+#include <net/netfilter/nf_conntrack_core.h>\n+#include <net/netfilter/nf_conntrack_tuple.h>\n+\n+struct flow_offload_entry {\n+\tstruct flow_offload\tflow;\n+\tstruct nf_conn\t\t*ct;\n+\tstruct rcu_head\t\trcu_head;\n+};\n+\n+struct flow_offload *\n+flow_offload_alloc(struct nf_conn *ct, struct nf_flow_route *route)\n+{\n+\tstruct flow_offload_entry *entry;\n+\tstruct flow_offload *flow;\n+\n+\tif (unlikely(nf_ct_is_dying(ct) ||\n+\t    !atomic_inc_not_zero(&ct->ct_general.use)))\n+\t\treturn NULL;\n+\n+\tentry = kzalloc(sizeof(*entry), GFP_ATOMIC);\n+\tif (!entry)\n+\t\tgoto err_ct_refcnt;\n+\n+\tflow = &entry->flow;\n+\n+\tif (!dst_hold_safe(route->tuple[FLOW_OFFLOAD_DIR_ORIGINAL].dst))\n+\t\tgoto err_dst_cache_original;\n+\n+\tif (!dst_hold_safe(route->tuple[FLOW_OFFLOAD_DIR_REPLY].dst))\n+\t\tgoto err_dst_cache_reply;\n+\n+\tentry->ct = ct;\n+\n+\tswitch (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num) {\n+\tcase NFPROTO_IPV4:\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_v4 =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.in;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v4 =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u3.in;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v4 =\n+\t\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u3.in;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_v4 =\n+\t\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3.in;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.l3proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.l4proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.l3proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.l4proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum;\n+\t\tbreak;\n+\tcase NFPROTO_IPV6:\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_v6 =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.in6;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_v6 =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u3.in6;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_v6 
=\n+\t\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u3.in6;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_v6 =\n+\t\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u3.in6;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.l3proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.l4proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.l3proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.l3num;\n+\t\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.l4proto =\n+\t\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum;\n+\t\tbreak;\n+\t}\n+\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_cache =\n+\t\t  route->tuple[FLOW_OFFLOAD_DIR_ORIGINAL].dst;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_cache =\n+\t\t  route->tuple[FLOW_OFFLOAD_DIR_REPLY].dst;\n+\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port =\n+\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.tcp.port;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_port =\n+\t\tct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u.tcp.port;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_port =\n+\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.src.u.tcp.port;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port =\n+\t\tct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.tcp.port;\n+\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dir =\n+\t\t\t\t\t\tFLOW_OFFLOAD_DIR_ORIGINAL;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dir =\n+\t\t\t\t\t\tFLOW_OFFLOAD_DIR_REPLY;\n+\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.iifidx =\n+\t\troute->tuple[FLOW_OFFLOAD_DIR_ORIGINAL].ifindex;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.oifidx =\n+\t\troute->tuple[FLOW_OFFLOAD_DIR_REPLY].ifindex;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.iifidx =\n+\t\troute->tuple[FLOW_OFFLOAD_DIR_REPLY].ifindex;\n+\tflow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.oifidx =\n+\t\troute->tuple[FLOW_OFFLOAD_DIR_ORIGINAL].ifindex;\n+\n+\tif (ct->status & IPS_SRC_NAT)\n+\t\tflow->flags |= FLOW_OFFLOAD_SNAT;\n+\telse if (ct->status & IPS_DST_NAT)\n+\t\tflow->flags |= FLOW_OFFLOAD_DNAT;\n+\n+\treturn flow;\n+\n+err_dst_cache_reply:\n+\tdst_release(route->tuple[FLOW_OFFLOAD_DIR_ORIGINAL].dst);\n+err_dst_cache_original:\n+\tkfree(entry);\n+err_ct_refcnt:\n+\tnf_ct_put(ct);\n+\n+\treturn NULL;\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_alloc);\n+\n+void flow_offload_free(struct flow_offload *flow)\n+{\n+\tstruct flow_offload_entry *e;\n+\n+\tdst_release(flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_cache);\n+\tdst_release(flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_cache);\n+\te = container_of(flow, struct flow_offload_entry, flow);\n+\tkfree(e);\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_free);\n+\n+void flow_offload_dead(struct flow_offload *flow)\n+{\n+\tflow->flags |= FLOW_OFFLOAD_DYING;\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_dead);\n+\n+int flow_offload_add(struct nf_flowtable *flow_table, struct flow_offload *flow)\n+{\n+\tflow->timeout = (u32)jiffies;\n+\n+\trhashtable_insert_fast(&flow_table->rhashtable,\n+\t\t\t       &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].node,\n+\t\t\t       *flow_table->type->params);\n+\trhashtable_insert_fast(&flow_table->rhashtable,\n+\t\t\t       &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].node,\n+\t\t\t       *flow_table->type->params);\n+\treturn 0;\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_add);\n+\n+void flow_offload_del(struct nf_flowtable 
*flow_table,\n+\t\t      struct flow_offload *flow)\n+{\n+\tstruct flow_offload_entry *e;\n+\n+\trhashtable_remove_fast(&flow_table->rhashtable,\n+\t\t\t       &flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].node,\n+\t\t\t       *flow_table->type->params);\n+\trhashtable_remove_fast(&flow_table->rhashtable,\n+\t\t\t       &flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].node,\n+\t\t\t       *flow_table->type->params);\n+\n+\te = container_of(flow, struct flow_offload_entry, flow);\n+\tkfree_rcu(e, rcu_head);\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_del);\n+\n+struct flow_offload_tuple_rhash *\n+flow_offload_lookup(struct nf_flowtable *flow_table,\n+\t\t    struct flow_offload_tuple *tuple)\n+{\n+\treturn rhashtable_lookup_fast(&flow_table->rhashtable, tuple,\n+\t\t\t\t      *flow_table->type->params);\n+}\n+EXPORT_SYMBOL_GPL(flow_offload_lookup);\n+\n+static void nf_flow_release_ct(const struct flow_offload *flow)\n+{\n+\tstruct flow_offload_entry *e;\n+\n+\te = container_of(flow, struct flow_offload_entry, flow);\n+\tnf_ct_delete(e->ct, 0, 0);\n+\tnf_ct_put(e->ct);\n+}\n+\n+int nf_flow_table_iterate(struct nf_flowtable *flow_table,\n+\t\t\t  void (*iter)(struct flow_offload *flow, void *data),\n+\t\t\t  void *data)\n+{\n+\tstruct flow_offload_tuple_rhash *tuplehash;\n+\tstruct rhashtable_iter hti;\n+\tstruct flow_offload *flow;\n+\tint err;\n+\n+\trhashtable_walk_init(&flow_table->rhashtable, &hti, GFP_KERNEL);\n+\terr = rhashtable_walk_start(&hti);\n+\tif (err && err != -EAGAIN)\n+\t\tgoto out;\n+\n+\twhile ((tuplehash = rhashtable_walk_next(&hti))) {\n+\t\tif (IS_ERR(tuplehash)) {\n+\t\t\terr = PTR_ERR(tuplehash);\n+\t\t\tif (err != -EAGAIN)\n+\t\t\t\tgoto out;\n+\n+\t\t\tcontinue;\n+\t\t}\n+\t\tif (tuplehash->tuple.dir)\n+\t\t\tcontinue;\n+\n+\t\tflow = container_of(tuplehash, struct flow_offload, tuplehash[0]);\n+\n+\t\titer(flow, data);\n+\t}\n+out:\n+\trhashtable_walk_stop(&hti);\n+\trhashtable_walk_exit(&hti);\n+\n+\treturn err;\n+}\n+EXPORT_SYMBOL_GPL(nf_flow_table_iterate);\n+\n+static inline bool nf_flow_has_expired(const struct flow_offload *flow)\n+{\n+\treturn (__s32)(flow->timeout - (u32)jiffies) <= 0;\n+}\n+\n+static inline bool nf_flow_is_dying(const struct flow_offload *flow)\n+{\n+\treturn flow->flags & FLOW_OFFLOAD_DYING;\n+}\n+\n+void nf_flow_offload_work_gc(struct work_struct *work)\n+{\n+\tstruct flow_offload_tuple_rhash *tuplehash;\n+\tstruct nf_flowtable *flow_table;\n+\tstruct rhashtable_iter hti;\n+\tstruct flow_offload *flow;\n+\tint err;\n+\n+\tflow_table = container_of(work, struct nf_flowtable, gc_work.work);\n+\n+\trhashtable_walk_init(&flow_table->rhashtable, &hti, GFP_KERNEL);\n+\terr = rhashtable_walk_start(&hti);\n+\tif (err && err != -EAGAIN)\n+\t\tgoto out;\n+\n+\twhile ((tuplehash = rhashtable_walk_next(&hti))) {\n+\t\tif (IS_ERR(tuplehash)) {\n+\t\t\terr = PTR_ERR(tuplehash);\n+\t\t\tif (err != -EAGAIN)\n+\t\t\t\tgoto out;\n+\n+\t\t\tcontinue;\n+\t\t}\n+\t\tif (tuplehash->tuple.dir)\n+\t\t\tcontinue;\n+\n+\t\tflow = container_of(tuplehash, struct flow_offload, tuplehash[0]);\n+\n+\t\tif (nf_flow_has_expired(flow) ||\n+\t\t    nf_flow_is_dying(flow)) {\n+\t\t\tflow_offload_del(flow_table, flow);\n+\t\t\tnf_flow_release_ct(flow);\n+\t\t}\n+\t}\n+\n+\trhashtable_walk_stop(&hti);\n+\trhashtable_walk_exit(&hti);\n+out:\n+\tqueue_delayed_work(system_power_efficient_wq, &flow_table->gc_work, HZ);\n+}\n+EXPORT_SYMBOL_GPL(nf_flow_offload_work_gc);\n+\n+static u32 flow_offload_hash(const void *data, u32 len, u32 seed)\n+{\n+\tconst struct flow_offload_tuple *tuple = 
data;\n+\n+\treturn jhash(tuple, offsetof(struct flow_offload_tuple, dir), seed);\n+}\n+\n+static u32 flow_offload_hash_obj(const void *data, u32 len, u32 seed)\n+{\n+\tconst struct flow_offload_tuple_rhash *tuplehash = data;\n+\n+\treturn jhash(&tuplehash->tuple, offsetof(struct flow_offload_tuple, dir), seed);\n+}\n+\n+static int flow_offload_hash_cmp(struct rhashtable_compare_arg *arg,\n+\t\t\t\t\tconst void *ptr)\n+{\n+\tconst struct flow_offload_tuple *tuple = arg->key;\n+\tconst struct flow_offload_tuple_rhash *x = ptr;\n+\n+\tif (memcmp(&x->tuple, tuple, offsetof(struct flow_offload_tuple, dir)))\n+\t\treturn 1;\n+\n+\treturn 0;\n+}\n+\n+const struct rhashtable_params nf_flow_offload_rhash_params = {\n+\t.head_offset\t\t= offsetof(struct flow_offload_tuple_rhash, node),\n+\t.hashfn\t\t\t= flow_offload_hash,\n+\t.obj_hashfn\t\t= flow_offload_hash_obj,\n+\t.obj_cmpfn\t\t= flow_offload_hash_cmp,\n+\t.automatic_shrinking\t= true,\n+};\n+EXPORT_SYMBOL_GPL(nf_flow_offload_rhash_params);\n+\n+static int nf_flow_nat_port_tcp(struct sk_buff *skb, unsigned int thoff,\n+\t\t\t\t__be16 port, __be16 new_port)\n+{\n+\tstruct tcphdr *tcph;\n+\n+\tif (!pskb_may_pull(skb, thoff + sizeof(*tcph)) ||\n+\t    skb_try_make_writable(skb, thoff + sizeof(*tcph)))\n+\t\treturn -1;\n+\n+\ttcph = (void *)(skb_network_header(skb) + thoff);\n+\tinet_proto_csum_replace2(&tcph->check, skb, port, new_port, true);\n+\n+\treturn 0;\n+}\n+\n+static int nf_flow_nat_port_udp(struct sk_buff *skb, unsigned int thoff,\n+\t\t\t\t__be16 port, __be16 new_port)\n+{\n+\tstruct udphdr *udph;\n+\n+\tif (!pskb_may_pull(skb, thoff + sizeof(*udph)) ||\n+\t    skb_try_make_writable(skb, thoff + sizeof(*udph)))\n+\t\treturn -1;\n+\n+\tudph = (void *)(skb_network_header(skb) + thoff);\n+\tif (udph->check || skb->ip_summed == CHECKSUM_PARTIAL) {\n+\t\tinet_proto_csum_replace2(&udph->check, skb, port,\n+\t\t\t\t\t new_port, true);\n+\t\tif (!udph->check)\n+\t\t\tudph->check = CSUM_MANGLED_0;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int nf_flow_nat_port(struct sk_buff *skb, unsigned int thoff,\n+\t\t\t    u8 protocol, __be16 port, __be16 new_port)\n+{\n+\tswitch (protocol) {\n+\tcase IPPROTO_TCP:\n+\t\tif (nf_flow_nat_port_tcp(skb, thoff, port, new_port) < 0)\n+\t\t\treturn NF_DROP;\n+\t\tbreak;\n+\tcase IPPROTO_UDP:\n+\t\tif (nf_flow_nat_port_udp(skb, thoff, port, new_port) < 0)\n+\t\t\treturn NF_DROP;\n+\t\tbreak;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int nf_flow_snat_port(const struct flow_offload *flow,\n+\t\t      struct sk_buff *skb, unsigned int thoff,\n+\t\t      u8 protocol, enum flow_offload_tuple_dir dir)\n+{\n+\tstruct flow_ports *hdr;\n+\t__be16 port, new_port;\n+\n+\tif (!pskb_may_pull(skb, thoff + sizeof(*hdr)) ||\n+\t    skb_try_make_writable(skb, thoff + sizeof(*hdr)))\n+\t\treturn -1;\n+\n+\thdr = (void *)(skb_network_header(skb) + thoff);\n+\n+\tswitch (dir) {\n+\tcase FLOW_OFFLOAD_DIR_ORIGINAL:\n+\t\tport = hdr->source;\n+\t\tnew_port = flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.dst_port;\n+\t\thdr->source = new_port;\n+\t\tbreak;\n+\tcase FLOW_OFFLOAD_DIR_REPLY:\n+\t\tport = hdr->dest;\n+\t\tnew_port = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.src_port;\n+\t\thdr->dest = new_port;\n+\t\tbreak;\n+\tdefault:\n+\t\treturn -1;\n+\t}\n+\n+\treturn nf_flow_nat_port(skb, thoff, protocol, port, new_port);\n+}\n+EXPORT_SYMBOL_GPL(nf_flow_snat_port);\n+\n+int nf_flow_dnat_port(const struct flow_offload *flow,\n+\t\t      struct sk_buff *skb, unsigned int thoff,\n+\t\t      u8 protocol, enum flow_offload_tuple_dir 
dir)\n+{\n+\tstruct flow_ports *hdr;\n+\t__be16 port, new_port;\n+\n+\tif (!pskb_may_pull(skb, thoff + sizeof(*hdr)) ||\n+\t    skb_try_make_writable(skb, thoff + sizeof(*hdr)))\n+\t\treturn -1;\n+\n+\thdr = (void *)(skb_network_header(skb) + thoff);\n+\n+\tswitch (dir) {\n+\tcase FLOW_OFFLOAD_DIR_ORIGINAL:\n+\t\tport = hdr->dest;\n+\t\tnew_port = flow->tuplehash[FLOW_OFFLOAD_DIR_REPLY].tuple.src_port;\n+\t\thdr->dest = new_port;\n+\t\tbreak;\n+\tcase FLOW_OFFLOAD_DIR_REPLY:\n+\t\tport = hdr->source;\n+\t\tnew_port = flow->tuplehash[FLOW_OFFLOAD_DIR_ORIGINAL].tuple.dst_port;\n+\t\thdr->source = new_port;\n+\t\tbreak;\n+\tdefault:\n+\t\treturn -1;\n+\t}\n+\n+\treturn nf_flow_nat_port(skb, thoff, protocol, port, new_port);\n+}\n+EXPORT_SYMBOL_GPL(nf_flow_dnat_port);\n+\n+MODULE_LICENSE(\"GPL\");\n+MODULE_AUTHOR(\"Pablo Neira Ayuso <pablo@netfilter.org>\");\n",
    "prefixes": [
        "nf-next",
        "v3",
        "2/7"
    ]
}
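
A minimal sketch of consuming the response shown above: it follows the "mbox" and "checks" URLs taken from the JSON body. The output file name and the check field names ("context", "state") are assumptions for illustration.

import requests

patch = requests.get("http://patchwork.ozlabs.org/api/patches/852528/").json()

# Download the patch mbox, e.g. for use with `git am`
mbox = requests.get(patch["mbox"]).text
with open("patch.mbox", "w") as f:
    f.write(mbox)

# List any CI checks recorded against the patch
for check in requests.get(patch["checks"]).json():
    print(check.get("context"), check.get("state"))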