{"id":2221010,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2221010/?format=json","web_url":"http://patchwork.ozlabs.org/project/openvswitch/patch/20260408170613.587902-7-aconole@redhat.com/","project":{"id":47,"url":"http://patchwork.ozlabs.org/api/1.1/projects/47/?format=json","name":"Open vSwitch","link_name":"openvswitch","list_id":"ovs-dev.openvswitch.org","list_email":"ovs-dev@openvswitch.org","web_url":"http://openvswitch.org/","scm_url":"git@github.com:openvswitch/ovs.git","webscm_url":"https://github.com/openvswitch/ovs"},"msgid":"<20260408170613.587902-7-aconole@redhat.com>","date":"2026-04-08T17:06:02","name":"[ovs-dev,RFC,06/12] ct-offload: Add batching support.","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"3040e618a54d0a45641b85a0571f2d4710e2a5b1","submitter":{"id":67184,"url":"http://patchwork.ozlabs.org/api/1.1/people/67184/?format=json","name":"Aaron Conole","email":"aconole@redhat.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/openvswitch/patch/20260408170613.587902-7-aconole@redhat.com/mbox/","series":[{"id":499163,"url":"http://patchwork.ozlabs.org/api/1.1/series/499163/?format=json","web_url":"http://patchwork.ozlabs.org/project/openvswitch/list/?series=499163","date":"2026-04-08T17:05:56","name":"ct-offload: Introduce a conntrack offload infrastructure.","version":1,"mbox":"http://patchwork.ozlabs.org/series/499163/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2221010/comments/","check":"warning","checks":"http://patchwork.ozlabs.org/api/patches/2221010/checks/","tags":{},"headers":{"Return-Path":"<ovs-dev-bounces@openvswitch.org>","X-Original-To":["incoming@patchwork.ozlabs.org","dev@openvswitch.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","ovs-dev@lists.linuxfoundation.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com 
header.a=rsa-sha256\n header.s=mimecast20190719 header.b=VVwKybX9;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org\n (client-ip=140.211.166.136; helo=smtp3.osuosl.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org)","smtp3.osuosl.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key)\n header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=VVwKybX9","smtp2.osuosl.org; dmarc=pass (p=quarantine dis=none)\n header.from=redhat.com","smtp2.osuosl.org;\n dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com\n header.a=rsa-sha256 header.s=mimecast20190719 header.b=VVwKybX9"],"Received":["from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4frTy435HYz1xv0\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 09 Apr 2026 03:07:08 +1000 (AEST)","from localhost (localhost [127.0.0.1])\n\tby smtp3.osuosl.org (Postfix) with ESMTP id 7CD4E6100E;\n\tWed,  8 Apr 2026 17:07:06 +0000 (UTC)","from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id IoQ2M2PvzhcW; Wed,  8 Apr 2026 17:07:01 +0000 (UTC)","from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56])\n\tby smtp3.osuosl.org (Postfix) with ESMTPS id 7812160FFA;\n\tWed,  8 Apr 2026 17:07:00 +0000 (UTC)","from lf-lists.osuosl.org (localhost [127.0.0.1])\n\tby lists.linuxfoundation.org (Postfix) with ESMTP id BC02DC0908;\n\tWed,  8 Apr 2026 17:06:59 +0000 (UTC)","from smtp2.osuosl.org (smtp2.osuosl.org [IPv6:2605:bc80:3010::133])\n by lists.linuxfoundation.org (Postfix) with ESMTP id 1B5BEC0902\n for <dev@openvswitch.org>; Wed,  8 Apr 2026 17:06:58 +0000 (UTC)","from 
localhost (localhost [127.0.0.1])\n by smtp2.osuosl.org (Postfix) with ESMTP id 77151407EA\n for <dev@openvswitch.org>; Wed,  8 Apr 2026 17:06:36 +0000 (UTC)","from smtp2.osuosl.org ([127.0.0.1])\n by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id X2MeECwsVGCQ for <dev@openvswitch.org>;\n Wed,  8 Apr 2026 17:06:35 +0000 (UTC)","from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [170.10.129.124])\n by smtp2.osuosl.org (Postfix) with ESMTPS id 4C976404AE\n for <dev@openvswitch.org>; Wed,  8 Apr 2026 17:06:35 +0000 (UTC)","from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com\n (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by\n relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,\n cipher=TLS_AES_256_GCM_SHA384) id us-mta-319-otJsty_fMc6N3erS-fXb8A-1; Wed,\n 08 Apr 2026 13:06:30 -0400","from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com\n (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n (No client certificate requested)\n by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS\n id D3F72195608D; Wed,  8 Apr 2026 17:06:29 +0000 (UTC)","from RHTRH0061144.redhat.com (unknown [10.22.89.172])\n by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP\n id D9885300019F; Wed,  8 Apr 2026 17:06:27 +0000 (UTC)"],"X-Virus-Scanned":["amavis at osuosl.org","amavis at osuosl.org"],"X-Comment":"SPF check N/A for local connections - client-ip=140.211.9.56;\n helo=lists.linuxfoundation.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=<UNKNOWN> ","DKIM-Filter":["OpenDKIM Filter v2.11.0 smtp3.osuosl.org 7812160FFA","OpenDKIM Filter v2.11.0 smtp2.osuosl.org 4C976404AE"],"Received-SPF":"Pass (mailfrom) identity=mailfrom; client-ip=170.10.129.124;\n 
helo=us-smtp-delivery-124.mimecast.com; envelope-from=aconole@redhat.com;\n receiver=<UNKNOWN>","DMARC-Filter":"OpenDMARC Filter v1.4.2 smtp2.osuosl.org 4C976404AE","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1775667994;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=N+/D41uh28tkAkpf338Kh8KYyfNnRmZ7o2qRawQuhfQ=;\n b=VVwKybX9JzKJHoAAhUAIRGIffTGWo1njO8vpZA6DPiMfNSRLNk3xBljUTciYcn+jZxmzpw\n oVekySo+9eMPhxJ+r9Fwitqjn6cRd1l2lV/ErbWLkXXf3SFdhoZj9cuXz8fqSYmJHQxbrT\n atOF3CnuwFQdHRajUXLFQ4G69inVH/o=","X-MC-Unique":"otJsty_fMc6N3erS-fXb8A-1","X-Mimecast-MFC-AGG-ID":"otJsty_fMc6N3erS-fXb8A_1775667989","To":"dev@openvswitch.org","Date":"Wed,  8 Apr 2026 13:06:02 -0400","Message-ID":"<20260408170613.587902-7-aconole@redhat.com>","In-Reply-To":"<20260408170613.587902-1-aconole@redhat.com>","References":"<20260408170613.587902-1-aconole@redhat.com>","MIME-Version":"1.0","X-Scanned-By":"MIMEDefang 3.4.1 on 10.30.177.4","X-Mimecast-Spam-Score":"0","X-Mimecast-MFC-PROC-ID":"qmEAItb1oXfx5LIdJnQIv1j_DdE-CG7ixK2yfRzCuSI_1775667989","X-Mimecast-Originator":"redhat.com","Subject":"[ovs-dev] [RFC 06/12] ct-offload: Add batching support.","X-BeenThere":"ovs-dev@openvswitch.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"<ovs-dev.openvswitch.org>","List-Unsubscribe":"<https://mail.openvswitch.org/mailman/options/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=unsubscribe>","List-Archive":"<http://mail.openvswitch.org/pipermail/ovs-dev/>","List-Post":"<mailto:ovs-dev@openvswitch.org>","List-Help":"<mailto:ovs-dev-request@openvswitch.org?subject=help>","List-Subscribe":"<https://mail.openvswitch.org/mailman/listinfo/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=subscribe>","From":"Aaron Conole via dev 
<ovs-dev@openvswitch.org>","Reply-To":"Aaron Conole <aconole@redhat.com>","Cc":"Eli Britstein <elibr@nvidia.com>, Florian Westphal <fwestpha@redhat.com>,\n Flavio Leitner <fbl@redhat.com>","Content-Type":"text/plain; charset=\"us-ascii\"","Content-Transfer-Encoding":"7bit","Errors-To":"ovs-dev-bounces@openvswitch.org","Sender":"\"dev\" <ovs-dev-bounces@openvswitch.org>"},"content":"The CT offload operations API currently operates on a\nsingle connection at a time.  However, there may be reason to\naccumulate offload API operations and execute them as a single large\nbatch of operations.  Provide a basic batch abstraction that allows\nfor accumulating operations and then executing them all at once.  This\nwill be used in an upcoming commit, especially with the ct expiration\nlogic.  A provider may also offer a batched abstraction that lets it\napply provider-specific optimizations.\n\nAs part of this extension, move the lock management up a level so that\nthe batching system can take a single bulk operations lock.\n\nSigned-off-by: Aaron Conole <aconole@redhat.com>\n---\n lib/ct-offload.c | 254 +++++++++++++++++++++++++++++++++++++++++++----\n lib/ct-offload.h |  54 ++++++++++\n 2 files changed, 286 insertions(+), 22 deletions(-)","diff":"diff --git a/lib/ct-offload.c b/lib/ct-offload.c\nindex 3bd6200e37..97c922dde1 100644\n--- a/lib/ct-offload.c\n+++ b/lib/ct-offload.c\n@@ -121,25 +121,33 @@ ct_offload_module_init(void)\n      * directly from their own module-init routines. */\n \n-/* ct_offload_conn_add() - notify all eligible providers of a new connection.\n+/* ct_offload_conn_add_() - notify all eligible providers of a new connection.\n  *\n  * Iterates over registered providers and calls conn_add() on each one that\n  * reports can_offload() == true for this context.  Returns the first non-zero\n  * error encountered, but continues notifying remaining providers.  This allows\n- * the underlying hardware conntrack details across providers function. 
*/\n-int\n-ct_offload_conn_add(const struct ct_offload_ctx *ctx)\n+ * the underlying hardware conntrack details across providers to function.\n+ */\n+static int\n+ct_offload_conn_add_(const struct ct_offload_ctx *ctx, bool batched)\n {\n     struct ct_offload_class_node *node;\n     int ret = 0;\n \n-    ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. */\n+            continue;\n+        }\n+\n         if (class->can_offload && !class->can_offload(ctx)) {\n             continue;\n         }\n+\n         if (class->conn_add) {\n             int error = class->conn_add(ctx);\n \n@@ -148,44 +156,83 @@ ct_offload_conn_add(const struct ct_offload_ctx *ctx)\n             }\n         }\n     }\n+\n+    return ret;\n+}\n+\n+int\n+ct_offload_conn_add(const struct ct_offload_ctx *ctx)\n+{\n+    int ret;\n+\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    ret = ct_offload_conn_add_(ctx, false);\n     ovs_mutex_unlock(&ct_offload_mutex);\n \n     return ret;\n }\n \n-/* ct_offload_conn_del() - notify all providers that a connection was removed.\n+/* ct_offload_conn_del_() - notify all providers that a connection was removed.\n  *\n  * Called unconditionally on all providers so that each can clean up any\n  * state it may have installed. 
*/\n-void\n-ct_offload_conn_del(const struct ct_offload_ctx *ctx)\n+static void\n+ct_offload_conn_del_(const struct ct_offload_ctx *ctx, bool batched)\n {\n     struct ct_offload_class_node *node;\n \n-    ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. */\n+            continue;\n+        }\n+\n         if (class->conn_del) {\n             class->conn_del(ctx);\n         }\n     }\n-    ovs_mutex_unlock(&ct_offload_mutex);\n }\n \n void\n-ct_offload_conn_established(const struct ct_offload_ctx *ctx)\n+ct_offload_conn_del(const struct ct_offload_ctx *ctx)\n+{\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    ct_offload_conn_del_(ctx, false);\n+    ovs_mutex_unlock(&ct_offload_mutex);\n+}\n+\n+static int\n+ct_offload_conn_established_(const struct ct_offload_ctx *ctx, bool batched)\n {\n     struct ct_offload_class_node *node;\n \n-    ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. 
*/\n+            continue;\n+        }\n+\n         if (class->conn_established) {\n             class->conn_established(ctx);\n         }\n     }\n+\n+    return 0;\n+}\n+\n+void\n+ct_offload_conn_established(const struct ct_offload_ctx *ctx)\n+{\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    (void) ct_offload_conn_established_(ctx, false);\n     ovs_mutex_unlock(&ct_offload_mutex);\n }\n \n@@ -194,16 +241,22 @@ ct_offload_conn_established(const struct ct_offload_ctx *ctx)\n  * Iterates over providers and returns the first non-zero timestamp returned\n  * by a provider's conn_update() callback.  Returns 0 if no provider\n  * supplies a timestamp. */\n-long long\n-ct_offload_conn_update(const struct ct_offload_ctx *ctx)\n+static long long\n+ct_offload_conn_update_(const struct ct_offload_ctx *ctx, bool batched)\n {\n     struct ct_offload_class_node *node;\n     long long last_used = 0;\n \n-    ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. */\n+            continue;\n+        }\n+\n         if (class->conn_update) {\n             long long ts = class->conn_update(ctx);\n \n@@ -213,45 +266,202 @@ ct_offload_conn_update(const struct ct_offload_ctx *ctx)\n             }\n         }\n     }\n+    return last_used;\n+}\n+\n+long long\n+ct_offload_conn_update(const struct ct_offload_ctx *ctx)\n+{\n+    long long ret;\n+\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    ret = ct_offload_conn_update_(ctx, false);\n     ovs_mutex_unlock(&ct_offload_mutex);\n \n-    return last_used;\n+    return ret;\n }\n \n /* ct_offload_can_offload() - returns true if any provider can offload ctx. 
*/\n-bool\n-ct_offload_can_offload(const struct ct_offload_ctx *ctx)\n+static bool\n+ct_offload_can_offload_(const struct ct_offload_ctx *ctx, bool batched)\n {\n     struct ct_offload_class_node *node;\n     bool result = false;\n \n-    ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. */\n+            continue;\n+        }\n+\n         if (class->can_offload && class->can_offload(ctx)) {\n             result = true;\n             break;\n         }\n     }\n-    ovs_mutex_unlock(&ct_offload_mutex);\n \n     return result;\n }\n \n+bool\n+ct_offload_can_offload(const struct ct_offload_ctx *ctx)\n+{\n+    bool can_offload;\n+\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    can_offload = ct_offload_can_offload_(ctx, false);\n+    ovs_mutex_unlock(&ct_offload_mutex);\n+\n+    return can_offload;\n+}\n+\n /* ct_offload_flush() - flush all offloaded connections from every provider. */\n+static void\n+ct_offload_flush_(bool batched)\n+{\n+    struct ct_offload_class_node *node;\n+\n+    LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n+        const struct ct_offload_class *class = node->class;\n+\n+        if (batched && class->batch_submit) {\n+            /* Called via the batched path - skip the providers\n+             * that support batched submits since they already processed\n+             * this. 
*/\n+            continue;\n+        }\n+\n+        if (class->flush) {\n+            class->flush();\n+        }\n+    }\n+}\n+\n void\n ct_offload_flush(void)\n+{\n+    ovs_mutex_lock(&ct_offload_mutex);\n+    ct_offload_flush_(false);\n+    ovs_mutex_unlock(&ct_offload_mutex);\n+}\n+\n+\n+/* Batch API\n+ * =========\n+ *\n+ * The default implementation serialises each operation in the batch through\n+ * the individual per-connection dispatch functions above.  All provider\n+ * callbacks are invoked under the ct_offload_mutex, so the per-operation\n+ * lock/unlock overhead of the single-op path is avoided across the batch.\n+ */\n+\n+#define CT_OFFLOAD_BATCH_INITIAL_SIZE 8\n+\n+/* ct_offload_op_batch_init() - prepare an empty batch for use. */\n+void\n+ct_offload_op_batch_init(struct ct_offload_op_batch *batch)\n+{\n+    batch->ops      = NULL;\n+    batch->n_ops    = 0;\n+    batch->allocated = 0;\n+}\n+\n+/* ct_offload_op_batch_add() - append one operation to the batch.\n+ *\n+ * The batch grows dynamically; callers need not pre-size it. */\n+void\n+ct_offload_op_batch_add(struct ct_offload_op_batch *batch,\n+                        enum ct_offload_op_type type,\n+                        const struct ct_offload_ctx *ctx)\n+{\n+    if (batch->n_ops == batch->allocated) {\n+        batch->allocated = batch->allocated\n+                           ? batch->allocated * 2\n+                           : CT_OFFLOAD_BATCH_INITIAL_SIZE;\n+        batch->ops = xrealloc(batch->ops,\n+                              batch->allocated * sizeof *batch->ops);\n+    }\n+\n+    struct ct_offload_op *op = &batch->ops[batch->n_ops++];\n+    op->type     = type;\n+    op->ctx      = *ctx;\n+    op->error    = 0;\n+}\n+\n+/* ct_offload_op_batch_submit() - execute every operation in the batch.\n+ *\n+ * Each op's 'error' field is set to the result of the corresponding\n+ * per-connection dispatch.  
The mutex is held across the\n+ * entire batch; providers are invoked directly rather than through the\n+ * public single-op wrappers to avoid repeated lock/unlock cycles. */\n+void\n+ct_offload_op_batch_submit(struct ct_offload_op_batch *batch)\n {\n     struct ct_offload_class_node *node;\n+    struct ct_offload_op *op;\n \n     ovs_mutex_lock(&ct_offload_mutex);\n     LIST_FOR_EACH (node, list_node, &ct_offload_classes) {\n         const struct ct_offload_class *class = node->class;\n \n-        if (class->flush) {\n-            class->flush();\n+        if (class->batch_submit) {\n+            class->batch_submit(batch);\n+        }\n+    }\n+\n+    CT_OFFLOAD_BATCH_OP_FOR_EACH (idx, op, batch) {\n+\n+        switch (op->type) {\n+        case CT_OFFLOAD_OP_ADD:\n+            op->error = ct_offload_conn_add_(&op->ctx, true);\n+            break;\n+\n+        case CT_OFFLOAD_OP_DEL:\n+            ct_offload_conn_del_(&op->ctx, true);\n+            op->error = 0;\n+            break;\n+\n+        case CT_OFFLOAD_OP_UPD: {\n+            long long ts = ct_offload_conn_update_(&op->ctx, true);\n+\n+            op->error = ts ? 0 : ENODATA;\n+            break;\n+        }\n+\n+        case CT_OFFLOAD_OP_POLICY:\n+            op->error = ct_offload_can_offload_(&op->ctx, true) ? 0 : EPERM;\n+            break;\n+\n+        case CT_OFFLOAD_OP_FLUSH:\n+            ct_offload_flush_(true);\n+            op->error = 0;\n+            break;\n+\n+        case CT_OFFLOAD_OP_EST:\n+            op->error = ct_offload_conn_established_(&op->ctx, true);\n+            break;\n+\n+        default:\n+            op->error = EINVAL;\n+            break;\n         }\n     }\n     ovs_mutex_unlock(&ct_offload_mutex);\n }\n+\n+/* ct_offload_op_batch_destroy() - release memory held by the batch.\n+ *\n+ * The batch may be re-initialised with ct_offload_op_batch_init() after\n+ * this call. 
*/\n+void\n+ct_offload_op_batch_destroy(struct ct_offload_op_batch *batch)\n+{\n+    free(batch->ops);\n+    batch->ops       = NULL;\n+    batch->n_ops     = 0;\n+    batch->allocated = 0;\n+}\ndiff --git a/lib/ct-offload.h b/lib/ct-offload.h\nindex 824b94a5c1..36871d12cb 100644\n--- a/lib/ct-offload.h\n+++ b/lib/ct-offload.h\n@@ -62,6 +62,10 @@ struct ct_offload_class {\n     /* Initialization routine for the provider. */\n     int (*init)(void);\n \n+    /* Interface to allow offload providers to operate in bulk.  This\n+     * will be called as part of batch processing.  If a provider does not\n+     * implement this, the fallback is the individual per-op calls. */\n+    void (*batch_submit)(struct ct_offload_op_batch *);\n     /* Per-connection operation callbacks get called for individual operations\n      * on the fast path or when batching is not in use. */\n     int  (*conn_add)(const struct ct_offload_ctx *);\n@@ -94,4 +98,54 @@ void      ct_offload_conn_established(const struct ct_offload_ctx *);\n bool      ct_offload_can_offload(const struct ct_offload_ctx *);\n void      ct_offload_flush(void);\n \n+/* Batch offload API.\n+ *\n+ * The default implementation dispatches each operation individually using the\n+ * per-connection API above.  
Providers that can handle a native batch\n+ * may do so by implementing the batch_submit callback in\n+ * struct ct_offload_class.\n+ *\n+ * Typical usage:\n+ *\n+ *   struct ct_offload_op_batch batch;\n+ *   ct_offload_op_batch_init(&batch);\n+ *\n+ *   ct_offload_op_batch_add(&batch, CT_OFFLOAD_OP_ADD, &ctx_a);\n+ *   ct_offload_op_batch_add(&batch, CT_OFFLOAD_OP_ADD, &ctx_b);\n+ *\n+ *   ct_offload_op_batch_submit(&batch);\n+ *   for_each_op inspect batch.ops[i].error\n+ *\n+ *   ct_offload_op_batch_destroy(&batch);\n+ *\n+ * For CT_OFFLOAD_OP_UPD, op->error is set to 0 when the hardware returned a\n+ * valid last-used timestamp (expiration was refreshed by the provider), or to\n+ * ENODATA when no hardware record was found.\n+ *\n+ * For CT_OFFLOAD_OP_POLICY, op->error is set to 0 when the connection is\n+ * eligible for offload, or EPERM when no provider will accept it.\n+ */\n+void ct_offload_op_batch_init(struct ct_offload_op_batch *);\n+void ct_offload_op_batch_add(struct ct_offload_op_batch *,\n+                             enum ct_offload_op_type,\n+                             const struct ct_offload_ctx *);\n+void ct_offload_op_batch_submit(struct ct_offload_op_batch *);\n+void ct_offload_op_batch_destroy(struct ct_offload_op_batch *);\n+\n+static inline size_t\n+ct_offload_op_batch_len(struct ct_offload_op_batch *batch)\n+{\n+    return batch->n_ops;\n+}\n+\n+static inline size_t\n+ct_offload_op_batch_size(struct ct_offload_op_batch *batch)\n+{\n+    return batch->allocated;\n+}\n+\n+#define CT_OFFLOAD_BATCH_OP_FOR_EACH(IDX, OP, BATCH) \\\n+    for (size_t IDX = 0; IDX < ct_offload_op_batch_len(BATCH); IDX++) \\\n+        if (OP = &((BATCH)->ops[IDX]), true)\n+\n #endif /* CT_OFFLOAD_H */\n","prefixes":["ovs-dev","RFC","06/12"]}