get:
Show a patch.

patch:
Update a patch (partial update: only the fields supplied in the request are changed).

put:
Update a patch (full update: the complete writable representation is replaced).
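The endpoints above can be exercised with any HTTP client. A minimal sketch using Python's `urllib.request`, constructing (but not sending) a read and a write request; the token value is a placeholder, and the writable `state` field is an assumption based on the response body below:

```python
import json
import urllib.request

PATCH_URL = "http://patchwork.ozlabs.org/api/patches/1525733/"

# GET needs no authentication for a public project.
get_req = urllib.request.Request(PATCH_URL, method="GET")

# PATCH sends only the fields to change; writes require an API
# token ("hypothetical-token" is a placeholder, not a real token).
body = json.dumps({"state": "accepted"}).encode("utf-8")
patch_req = urllib.request.Request(
    PATCH_URL,
    data=body,
    method="PATCH",
    headers={
        "Authorization": "Token hypothetical-token",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(patch_req) would submit the update; the
# requests are only constructed here so the sketch runs offline.
```

A PUT to the same URL would instead carry every writable field of the patch, replacing the stored representation wholesale.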

GET /api/patches/1525733/
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 1525733,
    "url": "http://patchwork.ozlabs.org/api/patches/1525733/",
    "web_url": "http://patchwork.ozlabs.org/project/openvswitch/patch/0d917810d36ddb53c378b5615fa7fb8e9f193215.1631094144.git.grive@u256.net/",
    "project": {
        "id": 47,
        "url": "http://patchwork.ozlabs.org/api/projects/47/",
        "name": "Open vSwitch",
        "link_name": "openvswitch",
        "list_id": "ovs-dev.openvswitch.org",
        "list_email": "ovs-dev@openvswitch.org",
        "web_url": "http://openvswitch.org/",
        "scm_url": "git@github.com:openvswitch/ovs.git",
        "webscm_url": "https://github.com/openvswitch/ovs",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<0d917810d36ddb53c378b5615fa7fb8e9f193215.1631094144.git.grive@u256.net>",
    "list_archive_url": null,
    "date": "2021-09-08T09:47:47",
    "name": "[ovs-dev,v5,23/27] dpif-netdev: Use lockless queue to manage offloads",
    "commit_ref": null,
    "pull_url": null,
    "state": "new",
    "archived": false,
    "hash": "b01b707f1cb77e3ede0eeae7fa933bfee68b8af5",
    "submitter": {
        "id": 78795,
        "url": "http://patchwork.ozlabs.org/api/people/78795/",
        "name": "Gaëtan Rivet",
        "email": "grive@u256.net"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/openvswitch/patch/0d917810d36ddb53c378b5615fa7fb8e9f193215.1631094144.git.grive@u256.net/mbox/",
    "series": [
        {
            "id": 261424,
            "url": "http://patchwork.ozlabs.org/api/series/261424/",
            "web_url": "http://patchwork.ozlabs.org/project/openvswitch/list/?series=261424",
            "date": "2021-09-08T09:47:24",
            "name": "dpif-netdev: Parallel offload processing",
            "version": 5,
            "mbox": "http://patchwork.ozlabs.org/series/261424/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/1525733/comments/",
    "check": "success",
    "checks": "http://patchwork.ozlabs.org/api/patches/1525733/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<ovs-dev-bounces@openvswitch.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "ovs-dev@openvswitch.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@bilbo.ozlabs.org",
            "ovs-dev@lists.linuxfoundation.org"
        ],
        "Authentication-Results": [
            "ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n unprotected) header.d=u256.net header.i=@u256.net header.a=rsa-sha256\n header.s=fm2 header.b=btEbuYGb;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n unprotected) header.d=messagingengine.com header.i=@messagingengine.com\n header.a=rsa-sha256 header.s=fm3 header.b=bTTz4bS7;\n\tdkim-atps=neutral",
            "ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org\n (client-ip=140.211.166.133; helo=smtp2.osuosl.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=<UNKNOWN>)",
            "smtp1.osuosl.org (amavisd-new);\n dkim=pass (2048-bit key) header.d=u256.net header.b=\"btEbuYGb\";\n dkim=pass (2048-bit key) header.d=messagingengine.com\n header.b=\"bTTz4bS7\""
        ],
        "Received": [
            "from smtp2.osuosl.org (smtp2.osuosl.org [140.211.166.133])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 4H4HSG73ztz9sW8\n\tfor <incoming@patchwork.ozlabs.org>; Wed,  8 Sep 2021 19:50:22 +1000 (AEST)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp2.osuosl.org (Postfix) with ESMTP id 1570A40499;\n\tWed,  8 Sep 2021 09:50:21 +0000 (UTC)",
            "from smtp2.osuosl.org ([127.0.0.1])\n\tby localhost (smtp2.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n\twith ESMTP id 5lIl6AZJ703r; Wed,  8 Sep 2021 09:50:18 +0000 (UTC)",
            "from lists.linuxfoundation.org (lf-lists.osuosl.org\n [IPv6:2605:bc80:3010:104::8cd3:938])\n\tby smtp2.osuosl.org (Postfix) with ESMTPS id 17F3C4072C;\n\tWed,  8 Sep 2021 09:50:17 +0000 (UTC)",
            "from lf-lists.osuosl.org (localhost [127.0.0.1])\n\tby lists.linuxfoundation.org (Postfix) with ESMTP id 00EDAC0020;\n\tWed,  8 Sep 2021 09:50:16 +0000 (UTC)",
            "from smtp1.osuosl.org (smtp1.osuosl.org [IPv6:2605:bc80:3010::138])\n by lists.linuxfoundation.org (Postfix) with ESMTP id 11DF8C0020\n for <ovs-dev@openvswitch.org>; Wed,  8 Sep 2021 09:50:13 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp1.osuosl.org (Postfix) with ESMTP id 368D4834EF\n for <ovs-dev@openvswitch.org>; Wed,  8 Sep 2021 09:48:41 +0000 (UTC)",
            "from smtp1.osuosl.org ([127.0.0.1])\n by localhost (smtp1.osuosl.org [127.0.0.1]) (amavisd-new, port 10024)\n with ESMTP id XVJ0rz3sMIZP for <ovs-dev@openvswitch.org>;\n Wed,  8 Sep 2021 09:48:39 +0000 (UTC)",
            "from wout3-smtp.messagingengine.com (wout3-smtp.messagingengine.com\n [64.147.123.19])\n by smtp1.osuosl.org (Postfix) with ESMTPS id B52B08364B\n for <ovs-dev@openvswitch.org>; Wed,  8 Sep 2021 09:48:38 +0000 (UTC)",
            "from compute2.internal (compute2.nyi.internal [10.202.2.42])\n by mailout.west.internal (Postfix) with ESMTP id 097D332009E5;\n Wed,  8 Sep 2021 05:48:37 -0400 (EDT)",
            "from mailfrontend2 ([10.202.2.163])\n by compute2.internal (MEProxy); Wed, 08 Sep 2021 05:48:38 -0400",
            "by mail.messagingengine.com (Postfix) with ESMTPA; Wed,\n 8 Sep 2021 05:48:36 -0400 (EDT)"
        ],
        "X-Virus-Scanned": [
            "amavisd-new at osuosl.org",
            "amavisd-new at osuosl.org"
        ],
        "X-Greylist": "from auto-whitelisted by SQLgrey-1.8.0",
        "DKIM-Signature": [
            "v=1; a=rsa-sha256; c=relaxed/relaxed; d=u256.net; h=from\n :to:cc:subject:date:message-id:in-reply-to:references\n :mime-version:content-transfer-encoding; s=fm2; bh=oahR8q2oKeV+K\n TiDuu2PYWnqdLlBYiK4Xsa0X64Oukg=; b=btEbuYGbKbsV0dIJMmzQBQ9gI0FlK\n 5C1RUOTidU1Vo6nmgJvV2NOSL/rX11ffsd47kVmo2YLOcwUeR1fU8p+ypenIhsp3\n sHvT3FDlkEGZV7fgxdzby1PjD/+evW91/Dm+69aJrYlBYtwdT/nyZ466MUF4Vrr4\n Xp6b1v2dWLGQfItDfRCOyQhY3xzyNFWdpLGeLX2MvmPFLC8IiQ0lEtdM7BYR9CkZ\n XXYhOpbjej6GCRnr8GAlRUvPT5vhyMli7fGtmfor3huxfb/sPVvZKhuMBz6abtJV\n qijPEA/wHj0svzc3Wuh0L7B6Q9oNMu9LIJSiiTIAie+91MZJqJJeZUqpw==",
            "v=1; a=rsa-sha256; c=relaxed/relaxed; d=\n messagingengine.com; h=cc:content-transfer-encoding:date:from\n :in-reply-to:message-id:mime-version:references:subject:to\n :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=\n fm3; bh=oahR8q2oKeV+KTiDuu2PYWnqdLlBYiK4Xsa0X64Oukg=; b=bTTz4bS7\n hB7Lj17x4i4IAeYd0zJPWjrab27/f9pVDvxI4rEwfU8By5+3OyZRIkGdnSw1env3\n ZQAUKexu1YeqxWzxmoRF9JPhEzBfakvpAL//zP8b4NWqeEE5/XEqVihZEHi5FNpa\n NN+PwsZCWM45nqzCJtFnPSlPuycjAuclPFTLJzL51aJwVcyDKYhh5+1TlSIEeQyi\n uqVSs+cvyZW5TtoSaYCSjQzUP6JpACWqS/IZaHETjBPS+hd8Pt5VtkTmSaT3rW6H\n ulSPqIQy23agR1FJZChcvWCvUzbB4rnXJ0HlBsZ7izd7Da7wyq5pp2NmGjxI48SP\n B6O2MGOE94lAqw=="
        ],
        "X-ME-Sender": "<xms:9YY4YSq3cxTR5eYa6dfiq-bsX7tA8-3UYFKUc2bbDX4_Aab0cuaCSA>\n <xme:9YY4YQrVgjqLypEnlKdAy_oRKP8tmiBiEBCvfnsj8oigfDoS2JD-ZurZ5v7P0kNxH\n BY5eNppHV85Les6rIQ>",
        "X-ME-Received": "\n <xmr:9YY4YXMJtH1AYBgue2bcfZEN625rWr6J4eYvHC2TM-ABJgd8VA7crhkfUGAGbA6q-HX1CRyCsZQfnm3F75yHmuf4ag>",
        "X-ME-Proxy-Cause": "\n gggruggvucftvghtrhhoucdtuddrgedvtddrudefjedgudekucetufdoteggodetrfdotf\n fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen\n uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne\n cujfgurhephffvufffkffojghfggfgsedtkeertdertddtnecuhfhrohhmpefirggvthgr\n nhcutfhivhgvthcuoehgrhhivhgvsehuvdehiedrnhgvtheqnecuggftrfgrthhtvghrnh\n ephefgveffkeetheetfeeifedvheelfeejfeehveduteejhfekuedtkeeiuedvteehnecu\n vehluhhsthgvrhfuihiivgepheenucfrrghrrghmpehmrghilhhfrhhomhepghhrihhvvg\n esuhdvheeirdhnvght",
        "X-ME-Proxy": "<xmx:9YY4YR7EDPtYCseDwF3B4x9v0oVI0ur0FYUtqqGgR0oTA_cbr0CeTQ>\n <xmx:9YY4YR45jmheSXTotRcACK7A9hPpLV79_0slGTI4Y2ZSsTUXXAr7sA>\n <xmx:9YY4YRjibXhw6OX7XV4a9C4iI7PEUDyZqli0pHfgn7Ap4kKqNOKKSA>\n <xmx:9YY4YTT1IGBYDDnYcuj8J4wGjAaUG0aytKdlg92pGWG0zXG-Od-WZw>",
        "From": "Gaetan Rivet <grive@u256.net>",
        "To": "ovs-dev@openvswitch.org",
        "Date": "Wed,  8 Sep 2021 11:47:47 +0200",
        "Message-Id": "\n <0d917810d36ddb53c378b5615fa7fb8e9f193215.1631094144.git.grive@u256.net>",
        "X-Mailer": "git-send-email 2.31.1",
        "In-Reply-To": "<cover.1631094144.git.grive@u256.net>",
        "References": "<cover.1631094144.git.grive@u256.net>",
        "MIME-Version": "1.0",
        "Cc": "Eli Britstein <elibr@nvidia.com>,\n Maxime Coquelin <maxime.coquelin@redhat.com>",
        "Subject": "[ovs-dev] [PATCH v5 23/27] dpif-netdev: Use lockless queue to\n\tmanage offloads",
        "X-BeenThere": "ovs-dev@openvswitch.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "<ovs-dev.openvswitch.org>",
        "List-Unsubscribe": "<https://mail.openvswitch.org/mailman/options/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=unsubscribe>",
        "List-Archive": "<http://mail.openvswitch.org/pipermail/ovs-dev/>",
        "List-Post": "<mailto:ovs-dev@openvswitch.org>",
        "List-Help": "<mailto:ovs-dev-request@openvswitch.org?subject=help>",
        "List-Subscribe": "<https://mail.openvswitch.org/mailman/listinfo/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=subscribe>",
        "Content-Type": "text/plain; charset=\"us-ascii\"",
        "Content-Transfer-Encoding": "7bit",
        "Errors-To": "ovs-dev-bounces@openvswitch.org",
        "Sender": "\"dev\" <ovs-dev-bounces@openvswitch.org>"
    },
    "content": "The dataplane threads (PMDs) send offloading commands to a dedicated\noffload management thread. The current implementation uses a lock\nand benchmarks show a high contention on the queue in some cases.\n\nWith high-contention, the mutex will more often lead to the locking\nthread yielding in wait, using a syscall. This should be avoided in\na userland dataplane.\n\nThe mpsc-queue can be used instead. It uses less cycles and has\nlower latency. Benchmarks show better behavior as multiple\nrevalidators and one or multiple PMDs writes to a single queue\nwhile another thread polls it.\n\nOne trade-off with the new scheme however is to be forced to poll\nthe queue from the offload thread. Without mutex, a cond_wait\ncannot be used for signaling. The offload thread is implementing\nan exponential backoff and will sleep in short increments when no\ndata is available. This makes the thread yield, at the price of\nsome latency to manage offloads after an inactivity period.\n\nSigned-off-by: Gaetan Rivet <grive@u256.net>\nReviewed-by: Eli Britstein <elibr@nvidia.com>\nReviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>\n---\n lib/dpif-netdev.c | 109 ++++++++++++++++++++++++----------------------\n 1 file changed, 57 insertions(+), 52 deletions(-)",
    "diff": "diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c\nindex bf5785981..4e91926fd 100644\n--- a/lib/dpif-netdev.c\n+++ b/lib/dpif-netdev.c\n@@ -55,6 +55,7 @@\n #include \"id-pool.h\"\n #include \"ipf.h\"\n #include \"mov-avg.h\"\n+#include \"mpsc-queue.h\"\n #include \"netdev.h\"\n #include \"netdev-offload.h\"\n #include \"netdev-provider.h\"\n@@ -366,25 +367,22 @@ union dp_offload_thread_data {\n };\n \n struct dp_offload_thread_item {\n-    struct ovs_list node;\n+    struct mpsc_queue_node node;\n     enum dp_offload_type type;\n     long long int timestamp;\n     union dp_offload_thread_data data[0];\n };\n \n struct dp_offload_thread {\n-    struct ovs_mutex mutex;\n-    struct ovs_list list;\n-    uint64_t enqueued_item;\n+    struct mpsc_queue queue;\n+    atomic_uint64_t enqueued_item;\n     struct mov_avg_cma cma;\n     struct mov_avg_ema ema;\n-    pthread_cond_t cond;\n };\n \n static struct dp_offload_thread dp_offload_thread = {\n-    .mutex = OVS_MUTEX_INITIALIZER,\n-    .list  = OVS_LIST_INITIALIZER(&dp_offload_thread.list),\n-    .enqueued_item = 0,\n+    .queue = MPSC_QUEUE_INITIALIZER(&dp_offload_thread.queue),\n+    .enqueued_item = ATOMIC_VAR_INIT(0),\n     .cma = MOV_AVG_CMA_INITIALIZER,\n     .ema = MOV_AVG_EMA_INITIALIZER(100),\n };\n@@ -2616,11 +2614,8 @@ dp_netdev_free_offload(struct dp_offload_thread_item *offload)\n static void\n dp_netdev_append_offload(struct dp_offload_thread_item *offload)\n {\n-    ovs_mutex_lock(&dp_offload_thread.mutex);\n-    ovs_list_push_back(&dp_offload_thread.list, &offload->node);\n-    dp_offload_thread.enqueued_item++;\n-    xpthread_cond_signal(&dp_offload_thread.cond);\n-    ovs_mutex_unlock(&dp_offload_thread.mutex);\n+    mpsc_queue_insert(&dp_offload_thread.queue, &offload->node);\n+    atomic_count_inc64(&dp_offload_thread.enqueued_item);\n }\n \n static int\n@@ -2765,58 +2760,68 @@ dp_offload_flush(struct dp_offload_thread_item *item)\n     ovs_barrier_block(flush->barrier);\n }\n 
\n+#define DP_NETDEV_OFFLOAD_BACKOFF_MIN 1\n+#define DP_NETDEV_OFFLOAD_BACKOFF_MAX 64\n #define DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US (10 * 1000) /* 10 ms */\n \n static void *\n dp_netdev_flow_offload_main(void *data OVS_UNUSED)\n {\n     struct dp_offload_thread_item *offload;\n-    struct ovs_list *list;\n+    struct mpsc_queue_node *node;\n+    struct mpsc_queue *queue;\n     long long int latency_us;\n     long long int next_rcu;\n     long long int now;\n+    uint64_t backoff;\n \n-    next_rcu = time_usec() + DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US;\n-    for (;;) {\n-        ovs_mutex_lock(&dp_offload_thread.mutex);\n-        if (ovs_list_is_empty(&dp_offload_thread.list)) {\n-            ovsrcu_quiesce_start();\n-            ovs_mutex_cond_wait(&dp_offload_thread.cond,\n-                                &dp_offload_thread.mutex);\n-            ovsrcu_quiesce_end();\n-            next_rcu = time_usec() + DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US;\n-        }\n-        list = ovs_list_pop_front(&dp_offload_thread.list);\n-        dp_offload_thread.enqueued_item--;\n-        offload = CONTAINER_OF(list, struct dp_offload_thread_item, node);\n-        ovs_mutex_unlock(&dp_offload_thread.mutex);\n-\n-        switch (offload->type) {\n-        case DP_OFFLOAD_FLOW:\n-            dp_offload_flow(offload);\n-            break;\n-        case DP_OFFLOAD_FLUSH:\n-            dp_offload_flush(offload);\n-            break;\n-        default:\n-            OVS_NOT_REACHED();\n+    queue = &dp_offload_thread.queue;\n+    mpsc_queue_acquire(queue);\n+\n+    while (true) {\n+        backoff = DP_NETDEV_OFFLOAD_BACKOFF_MIN;\n+        while (mpsc_queue_tail(queue) == NULL) {\n+            xnanosleep(backoff * 1E6);\n+            if (backoff < DP_NETDEV_OFFLOAD_BACKOFF_MAX) {\n+                backoff <<= 1;\n+            }\n         }\n \n-        now = time_usec();\n+        next_rcu = time_usec() + DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US;\n+        MPSC_QUEUE_FOR_EACH_POP (node, 
queue) {\n+            offload = CONTAINER_OF(node, struct dp_offload_thread_item, node);\n+            atomic_count_dec64(&dp_offload_thread.enqueued_item);\n \n-        latency_us = now - offload->timestamp;\n-        mov_avg_cma_update(&dp_offload_thread.cma, latency_us);\n-        mov_avg_ema_update(&dp_offload_thread.ema, latency_us);\n+            switch (offload->type) {\n+            case DP_OFFLOAD_FLOW:\n+                dp_offload_flow(offload);\n+                break;\n+            case DP_OFFLOAD_FLUSH:\n+                dp_offload_flush(offload);\n+                break;\n+            default:\n+                OVS_NOT_REACHED();\n+            }\n \n-        dp_netdev_free_offload(offload);\n+            now = time_usec();\n \n-        /* Do RCU synchronization at fixed interval. */\n-        if (now > next_rcu) {\n-            ovsrcu_quiesce();\n-            next_rcu = time_usec() + DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US;\n+            latency_us = now - offload->timestamp;\n+            mov_avg_cma_update(&dp_offload_thread.cma, latency_us);\n+            mov_avg_ema_update(&dp_offload_thread.ema, latency_us);\n+\n+            dp_netdev_free_offload(offload);\n+\n+            /* Do RCU synchronization at fixed interval. 
*/\n+            if (now > next_rcu) {\n+                ovsrcu_quiesce();\n+                next_rcu = time_usec() + DP_NETDEV_OFFLOAD_QUIESCE_INTERVAL_US;\n+            }\n         }\n     }\n \n+    OVS_NOT_REACHED();\n+    mpsc_queue_release(queue);\n+\n     return NULL;\n }\n \n@@ -2827,7 +2832,7 @@ queue_netdev_flow_del(struct dp_netdev_pmd_thread *pmd,\n     struct dp_offload_thread_item *offload;\n \n     if (ovsthread_once_start(&offload_thread_once)) {\n-        xpthread_cond_init(&dp_offload_thread.cond, NULL);\n+        mpsc_queue_init(&dp_offload_thread.queue);\n         ovs_thread_create(\"hw_offload\", dp_netdev_flow_offload_main, NULL);\n         ovsthread_once_done(&offload_thread_once);\n     }\n@@ -2917,7 +2922,7 @@ queue_netdev_flow_put(struct dp_netdev_pmd_thread *pmd,\n     }\n \n     if (ovsthread_once_start(&offload_thread_once)) {\n-        xpthread_cond_init(&dp_offload_thread.cond, NULL);\n+        mpsc_queue_init(&dp_offload_thread.queue);\n         ovs_thread_create(\"hw_offload\", dp_netdev_flow_offload_main, NULL);\n         ovsthread_once_done(&offload_thread_once);\n     }\n@@ -2969,7 +2974,7 @@ dp_netdev_offload_flush_enqueue(struct dp_netdev *dp,\n     struct dp_offload_flush_item *flush;\n \n     if (ovsthread_once_start(&offload_thread_once)) {\n-        xpthread_cond_init(&dp_offload_thread.cond, NULL);\n+        mpsc_queue_init(&dp_offload_thread.queue);\n         ovs_thread_create(\"hw_offload\", dp_netdev_flow_offload_main, NULL);\n         ovsthread_once_done(&offload_thread_once);\n     }\n@@ -4389,8 +4394,8 @@ dpif_netdev_offload_stats_get(struct dpif *dpif,\n     }\n     ovs_mutex_unlock(&dp->port_mutex);\n \n-    stats->counters[DP_NETDEV_HW_OFFLOADS_STATS_ENQUEUED].value =\n-        dp_offload_thread.enqueued_item;\n+    atomic_read_relaxed(&dp_offload_thread.enqueued_item,\n+        &stats->counters[DP_NETDEV_HW_OFFLOADS_STATS_ENQUEUED].value);\n     stats->counters[DP_NETDEV_HW_OFFLOADS_STATS_INSERTED].value = 
nb_offloads;\n     stats->counters[DP_NETDEV_HW_OFFLOADS_STATS_LAT_CMA_MEAN].value =\n         mov_avg_cma(&dp_offload_thread.cma);\n",
    "prefixes": [
        "ovs-dev",
        "v5",
        "23/27"
    ]
}
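A client consuming this response usually needs only a handful of fields. A sketch with Python's `json` module, run here against a trimmed copy of the payload above (only the fields used below are reproduced):

```python
import json

# Trimmed copy of the GET response body shown above.
payload = json.loads("""
{
    "id": 1525733,
    "state": "new",
    "check": "success",
    "submitter": {"name": "Gaëtan Rivet", "email": "grive@u256.net"},
    "series": [{"id": 261424, "version": 5,
                "name": "dpif-netdev: Parallel offload processing"}],
    "mbox": "http://patchwork.ozlabs.org/project/openvswitch/patch/0d917810d36ddb53c378b5615fa7fb8e9f193215.1631094144.git.grive@u256.net/mbox/"
}
""")

summary = {
    "id": payload["id"],
    "state": payload["state"],                  # workflow state, e.g. "new"
    "check": payload["check"],                  # aggregate CI check result
    "submitter": payload["submitter"]["email"],
    # A patch may belong to no series; guard the empty-list case.
    "series": payload["series"][0]["id"] if payload["series"] else None,
}
print(summary)
```

The `mbox` URL is the usual next step for tooling: fetching it yields the patch as an mbox file suitable for `git am`.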