get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch (full update; all writable fields).
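The GET response below can be consumed programmatically. A minimal sketch using only the Python standard library; the `SAMPLE` document and the `summarize` helper are illustrative trims of the real response shown below, not part of the Patchwork API itself:

```python
import json

# Sketch of consuming the patch-detail endpoint (GET /api/patches/<id>/).
# A live client could fetch the real JSON with the stdlib, e.g.:
#
#     from urllib.request import urlopen
#     patch = json.load(urlopen(
#         "http://patchwork.ozlabs.org/api/patches/2218459/"))
#
# To keep this sketch self-contained and offline, we parse a trimmed
# sample mirroring a few fields of the response shown below.
SAMPLE = """
{
    "id": 2218459,
    "name": "[ovs-dev,v3,04/11] netdev-dpdk-private: Refactor declarations from netdev-dpdk.",
    "state": "new",
    "check": "fail",
    "submitter": {"name": "Eli Britstein", "email": "elibr@nvidia.com"},
    "series": [{"id": 498297, "name": "netdev-doca", "version": 3}]
}
"""

def summarize(patch):
    """One-line summary: series name/version, patch name, state, CI result."""
    series = patch["series"][0] if patch["series"] else {}
    return "{} v{}: {} [state={}, check={}]".format(
        series.get("name", "?"), series.get("version", "?"),
        patch["name"], patch["state"], patch["check"])

patch = json.loads(SAMPLE)
print(summarize(patch))
```

For the PUT/PATCH methods above, writes typically require an API token sent as an `Authorization: Token <token>` header (the usual Patchwork convention) and are limited to fields such as `state` and `delegate`.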

GET /api/patches/2218459/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2218459,
    "url": "http://patchwork.ozlabs.org/api/patches/2218459/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/openvswitch/patch/20260401091318.2671624-5-elibr@nvidia.com/",
    "project": {
        "id": 47,
        "url": "http://patchwork.ozlabs.org/api/projects/47/?format=api",
        "name": "Open vSwitch",
        "link_name": "openvswitch",
        "list_id": "ovs-dev.openvswitch.org",
        "list_email": "ovs-dev@openvswitch.org",
        "web_url": "http://openvswitch.org/",
        "scm_url": "git@github.com:openvswitch/ovs.git",
        "webscm_url": "https://github.com/openvswitch/ovs",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260401091318.2671624-5-elibr@nvidia.com>",
    "list_archive_url": null,
    "date": "2026-04-01T09:13:11",
    "name": "[ovs-dev,v3,04/11] netdev-dpdk-private: Refactor declarations from netdev-dpdk.",
    "commit_ref": null,
    "pull_url": null,
    "state": "new",
    "archived": false,
    "hash": "4e4a963fb51e63111a731ef331740d0fdeeb565a",
    "submitter": {
        "id": 79848,
        "url": "http://patchwork.ozlabs.org/api/people/79848/?format=api",
        "name": "Eli Britstein",
        "email": "elibr@nvidia.com"
    },
    "delegate": {
        "id": 75123,
        "url": "http://patchwork.ozlabs.org/api/users/75123/?format=api",
        "username": "echaudron",
        "first_name": "Eelco",
        "last_name": "Chaudron",
        "email": "echaudro@redhat.com"
    },
    "mbox": "http://patchwork.ozlabs.org/project/openvswitch/patch/20260401091318.2671624-5-elibr@nvidia.com/mbox/",
    "series": [
        {
            "id": 498297,
            "url": "http://patchwork.ozlabs.org/api/series/498297/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/openvswitch/list/?series=498297",
            "date": "2026-04-01T09:13:07",
            "name": "netdev-doca",
            "version": 3,
            "mbox": "http://patchwork.ozlabs.org/series/498297/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2218459/comments/",
    "check": "fail",
    "checks": "http://patchwork.ozlabs.org/api/patches/2218459/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<ovs-dev-bounces@openvswitch.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "dev@openvswitch.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@legolas.ozlabs.org",
            "ovs-dev@lists.linuxfoundation.org"
        ],
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key;\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oFagxqSG;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=openvswitch.org\n (client-ip=2605:bc80:3010::133; helo=smtp2.osuosl.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=patchwork.ozlabs.org)",
            "smtp2.osuosl.org;\n\tdkim=fail reason=\"signature verification failed\" (2048-bit key,\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oFagxqSG",
            "smtp3.osuosl.org;\n dmarc=pass (p=reject dis=none) header.from=nvidia.com",
            "smtp3.osuosl.org; dkim=pass (2048-bit key,\n unprotected) header.d=Nvidia.com header.i=@Nvidia.com header.a=rsa-sha256\n header.s=selector2 header.b=oFagxqSG"
        ],
        "Received": [
            "from smtp2.osuosl.org (smtp2.osuosl.org [IPv6:2605:bc80:3010::133])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4flzrs3nqCz1yGH\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 01 Apr 2026 20:17:01 +1100 (AEDT)",
            "from localhost (localhost [127.0.0.1])\n\tby smtp2.osuosl.org (Postfix) with ESMTP id 217B44090B;\n\tWed,  1 Apr 2026 09:17:00 +0000 (UTC)",
            "from smtp2.osuosl.org ([127.0.0.1])\n by localhost (smtp2.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id 9CSzDtDOHHhy; Wed,  1 Apr 2026 09:16:56 +0000 (UTC)",
            "from lists.linuxfoundation.org (lf-lists.osuosl.org [140.211.9.56])\n\tby smtp2.osuosl.org (Postfix) with ESMTPS id 0A36E40865;\n\tWed,  1 Apr 2026 09:16:56 +0000 (UTC)",
            "from lf-lists.osuosl.org (localhost [127.0.0.1])\n\tby lists.linuxfoundation.org (Postfix) with ESMTP id E4893C0070;\n\tWed,  1 Apr 2026 09:16:55 +0000 (UTC)",
            "from smtp3.osuosl.org (smtp3.osuosl.org [140.211.166.136])\n by lists.linuxfoundation.org (Postfix) with ESMTP id 0EE4AC0070\n for <dev@openvswitch.org>; Wed,  1 Apr 2026 09:16:55 +0000 (UTC)",
            "from localhost (localhost [127.0.0.1])\n by smtp3.osuosl.org (Postfix) with ESMTP id 998B46103C\n for <dev@openvswitch.org>; Wed,  1 Apr 2026 09:15:33 +0000 (UTC)",
            "from smtp3.osuosl.org ([127.0.0.1])\n by localhost (smtp3.osuosl.org [127.0.0.1]) (amavis, port 10024) with ESMTP\n id 4BXO9nTsMoqC for <dev@openvswitch.org>;\n Wed,  1 Apr 2026 09:15:30 +0000 (UTC)",
            "from MW6PR02CU001.outbound.protection.outlook.com\n (mail-westus2azlp170120002.outbound.protection.outlook.com\n [IPv6:2a01:111:f403:c007::2])\n by smtp3.osuosl.org (Postfix) with ESMTPS id A4DB36101B\n for <dev@openvswitch.org>; Wed,  1 Apr 2026 09:15:28 +0000 (UTC)",
            "from PH5P220CA0009.NAMP220.PROD.OUTLOOK.COM (2603:10b6:510:34a::14)\n by DS0PR12MB8766.namprd12.prod.outlook.com (2603:10b6:8:14e::15) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9769.16; Wed, 1 Apr\n 2026 09:14:56 +0000",
            "from CY4PEPF0000E9D7.namprd05.prod.outlook.com\n (2603:10b6:510:34a:cafe::e1) by PH5P220CA0009.outlook.office365.com\n (2603:10b6:510:34a::14) with Microsoft SMTP Server (version=TLS1_3,\n cipher=TLS_AES_256_GCM_SHA384) id 15.20.9745.30 via Frontend Transport; Wed,\n 1 Apr 2026 09:15:00 +0000",
            "from mail.nvidia.com (216.228.117.160) by\n CY4PEPF0000E9D7.mail.protection.outlook.com (10.167.241.70) with Microsoft\n SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id\n 15.20.9769.17 via Frontend Transport; Wed, 1 Apr 2026 09:14:55 +0000",
            "from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com\n (10.129.200.66) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.2562.20; Wed, 1 Apr\n 2026 02:14:38 -0700",
            "from nvidia.com (10.126.231.35) by rnnvmail201.nvidia.com\n (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.2562.20; Wed, 1 Apr\n 2026 02:14:34 -0700"
        ],
        "X-Virus-Scanned": [
            "amavis at osuosl.org",
            "amavis at osuosl.org"
        ],
        "X-Comment": "SPF check N/A for local connections - client-ip=140.211.9.56;\n helo=lists.linuxfoundation.org;\n envelope-from=ovs-dev-bounces@openvswitch.org; receiver=<UNKNOWN> ",
        "DKIM-Filter": [
            "OpenDKIM Filter v2.11.0 smtp2.osuosl.org 0A36E40865",
            "OpenDKIM Filter v2.11.0 smtp3.osuosl.org A4DB36101B"
        ],
        "Received-SPF": [
            "Pass (mailfrom) identity=mailfrom;\n client-ip=2a01:111:f403:c007::2;\n helo=mw6pr02cu001.outbound.protection.outlook.com;\n envelope-from=elibr@nvidia.com; receiver=<UNKNOWN>",
            "Pass (protection.outlook.com: domain of nvidia.com designates\n 216.228.117.160 as permitted sender) receiver=protection.outlook.com;\n client-ip=216.228.117.160; helo=mail.nvidia.com; pr=C"
        ],
        "DMARC-Filter": "OpenDMARC Filter v1.4.2 smtp3.osuosl.org A4DB36101B",
        "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none;\n b=QRc+odm5nnx2w++1pwCrfMjgoZdFVvrhvBp70wulbBjAH+T1dZqzE2GY2Ss9h7CwUoxobJ6iF2VbgkE+YXLnPmeOHg/mO5OSI8qnxhehRT5piu+f3o8NfcV0KVuZv61RW9KpYLHLmhDZE99GmyYVsTlmurGNg7xT6vZEWIS4KulPa0jR88+uqvUPP0Zi3A709g0NuSJRomal7J6l1yWXlk4FVTMusdthU+MRE19dAuglEJEMXudYrwkBIoxNouKKNHZyEWMOZtaDKekoIv40/6DNmKXgDuUt8TB7YfMK/lS1QPpi5H5UEA7ZsZ6eNyrgr8nmo6A8twtx0cpxa3odeQ==",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector10001;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=JoNInTZH/DG+aowdbaMaEm0LcveLShOqq7x/SwXnUNE=;\n b=GLZieBF5Tu7Tltu+5FgFKkxpWE17pfh/z50rf28y+lTnrZsERvgbNJxp9yxJLtLuun4cQQ/94zlHX3Bb58lg1Gv2HTIibIMfzMtxSaxzQxWljA2x5hpwdLPbfmoCGC0uoIYTeTnDqxnNWS3T59T30h+CEDNlS2D8z+p+ciRcr3TqA474W67/4H9Qr47a538Y2RojweT9U+BKhBiB4ADsU/5RRnjHM1FcB3rN3DsRBqyU3DeByMLDdCpvNevU7ILBaKWJaKAEcWVRz9Udyl4AW0LnLT4uBiUpYxH1zz5hANI3LOReGunecS3MZiGEEvEOg6OC78TdGK7N8vLvkcCbOg==",
        "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass (sender ip is\n 216.228.117.160) smtp.rcpttodomain=openvswitch.org smtp.mailfrom=nvidia.com;\n dmarc=pass (p=reject sp=reject pct=100) action=none header.from=nvidia.com;\n dkim=none (message not signed); arc=none (0)",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;\n s=selector2;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=JoNInTZH/DG+aowdbaMaEm0LcveLShOqq7x/SwXnUNE=;\n b=oFagxqSGj3KAhNpPRPsPX/1KEF1EeEn77mxRNdOuleuseToqkeQjtLxjrbIIHH2bVQdXsoFmvYF/aCYeVHHigh/Nj0sNOM58rAq0nUoo8q+3cKyllySM8d5SpiY0PIUx9wxN4nk5jz1d7j7azsOIbZeYJ3UIED2bWoHBFKumMaMja6bqx85/v8rZMl0s6Qx2/nX1ibwAI9zMx1CAbK7r7NhyqN6lPOV0nENaV2N1c9CESa7rQ9jlupQbQOietDjd4QbgGbBlztPp1kQpL35BTHSL0QItpkgGpBXa8b8qwqjkBCbytZOooNx7KMPFFS2WEiKO03qrrqBrDNNTBsrduQ==",
        "X-MS-Exchange-Authentication-Results": "spf=pass (sender IP is 216.228.117.160)\n smtp.mailfrom=nvidia.com;\n dkim=none (message not signed)\n header.d=none;dmarc=pass action=none header.from=nvidia.com;",
        "To": "<dev@openvswitch.org>",
        "Date": "Wed, 1 Apr 2026 12:13:11 +0300",
        "Message-ID": "<20260401091318.2671624-5-elibr@nvidia.com>",
        "X-Mailer": "git-send-email 2.34.1",
        "In-Reply-To": "<20260401091318.2671624-1-elibr@nvidia.com>",
        "References": "<20260401091318.2671624-1-elibr@nvidia.com>",
        "MIME-Version": "1.0",
        "X-Originating-IP": "[10.126.231.35]",
        "X-ClientProxiedBy": "rnnvmail203.nvidia.com (10.129.68.9) To\n rnnvmail201.nvidia.com (10.129.68.8)",
        "X-EOPAttributedMessage": "0",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-TrafficTypeDiagnostic": "CY4PEPF0000E9D7:EE_|DS0PR12MB8766:EE_",
        "X-MS-Office365-Filtering-Correlation-Id": "07112494-04b7-4338-7008-08de8fcf241f",
        "X-MS-Exchange-SenderADCheck": "1",
        "X-MS-Exchange-AntiSpam-Relay": "0",
        "X-Microsoft-Antispam": "BCL:0;\n ARA:13230040|36860700016|376014|1800799024|82310400026|13003099007|18002099003|56012099003|22082099003;",
        "X-Microsoft-Antispam-Message-Info": "\n 7bwCoZ2hKYCIqSOopuIO1v5wHA5r2DDWU1tNeegtwNwz5od9LrpKLTv4RLTYixBpeu7LiJCtNtoU+aVHSIV2xz+RAxhGU7q/HHfKPJIoQPOFqp5/IpuOhRYsQNimBpMRXslMBDd6h/t+QSY5Ax3V4rotYGBhhIVpsXH+2uC7fCH5x2qiuxP1uZkpH1kwCtY7xGQRbCA7C0N4CY3xKOV3kXjBQIgGM9C2E4kGiwouvDu6sg8KiuOgk7VhQuSUDyp+SzR/yyLfdO1UbRAUh+MIDX1zUKUyLMOmdJtWl1s49qB5sYtsQAUZ2gs7fi/LPieyqydKBkVivNeiZQrwAhcGMzZVo7hALZC7i4YE6b+c1IMole1K2KxFEFEFc6d7I5i3jmCn4C1WXLf4buR1/qJ5S07vjI11UIqTn4xXQZkrgQiCgfdoB1zn5iaIuVEa3jKXUwN4NDM9xrlhdwdKDEO17m1C/Z129Kk6O5OUCQDR9MnD21pDAMcgWka2uF54qFv9VKD4iokf1TngdTdn6/OHVgWNaxddwJCxIY8Hp4SRzpEZpivGXxdafp8DA0v4UTHd8pshq9dkhnEsrAYKDcdH55z9VEx/Y0clIrmyJIUFWfjZdiNxXsuomU1KhWkS7KuClwF6uG1EplXlT6yXYMNjLswnX+xDzY/PCBlFdn5W2DgUnL40awp0oTKHgFG00JnzUzcy676GSeSNMF9WlQ8nCJwfddp2WazzEnnN2eVmcMmDEVsjotBsgniZtN71pF4+U61C0oBKOiCdjeoNYJArJA==",
        "X-Forefront-Antispam-Report": "CIP:216.228.117.160; CTRY:US; LANG:en; SCL:1;\n SRV:;\n IPV:NLI; SFV:NSPM; H:mail.nvidia.com; PTR:dc6edge1.nvidia.com; CAT:NONE;\n SFS:(13230040)(36860700016)(376014)(1800799024)(82310400026)(13003099007)(18002099003)(56012099003)(22082099003);\n DIR:OUT; SFP:1101;",
        "X-MS-Exchange-AntiSpam-MessageData-ChunkCount": "1",
        "X-MS-Exchange-AntiSpam-MessageData-0": "\n D7IUHXi8YhCtrtOkdtk5HiSC60w1X97AJLWai2mmJU6Uingu2bzJKyNQemyHsaMZDcB4malLkwYxwKWAxJIt0DYhiFRQQ1qmzELR+aVKxnheF5gxxhJv2cZfh3dGc9S66geQHnLiM/YReEL3ReRJ+RcpxmyuqqYZwSZxB4+1xeNIwftQgl57J1coPEj7JklKOv8AgvN1Cap6Nf+HhTXzvOvJsrklFeMdH+Am2NBdbf1yWDWb2OyAQ2DL6Ph2JchygsTxm8zLzm/Pr35VymvlGFoVzr9EYbAw+X24jKvf0cWgwjHyjbV6A21EMydRgw2jDEAjLsrQZm48Cefxtm49Q6zXj1+H2DyFLEmjUnhZq6mlBb5Gg9pX4RE1ECq6hwJ/k2H6D6jFU79lrhLRGAYRlrKBAtgY9vRhQfanKGsVDveLzixj3lnl26K9AUzEjlyx",
        "X-OriginatorOrg": "Nvidia.com",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "01 Apr 2026 09:14:55.8862 (UTC)",
        "X-MS-Exchange-CrossTenant-Network-Message-Id": "\n 07112494-04b7-4338-7008-08de8fcf241f",
        "X-MS-Exchange-CrossTenant-Id": "43083d15-7273-40c1-b7db-39efd9ccc17a",
        "X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp": "\n TenantId=43083d15-7273-40c1-b7db-39efd9ccc17a; Ip=[216.228.117.160];\n Helo=[mail.nvidia.com]",
        "X-MS-Exchange-CrossTenant-AuthSource": "\n CY4PEPF0000E9D7.namprd05.prod.outlook.com",
        "X-MS-Exchange-CrossTenant-AuthAs": "Anonymous",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "HybridOnPrem",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "DS0PR12MB8766",
        "Subject": "[ovs-dev] [PATCH v3 04/11] netdev-dpdk-private: Refactor\n declarations from netdev-dpdk.",
        "X-BeenThere": "ovs-dev@openvswitch.org",
        "X-Mailman-Version": "2.1.30",
        "Precedence": "list",
        "List-Id": "<ovs-dev.openvswitch.org>",
        "List-Unsubscribe": "<https://mail.openvswitch.org/mailman/options/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=unsubscribe>",
        "List-Archive": "<http://mail.openvswitch.org/pipermail/ovs-dev/>",
        "List-Post": "<mailto:ovs-dev@openvswitch.org>",
        "List-Help": "<mailto:ovs-dev-request@openvswitch.org?subject=help>",
        "List-Subscribe": "<https://mail.openvswitch.org/mailman/listinfo/ovs-dev>,\n <mailto:ovs-dev-request@openvswitch.org?subject=subscribe>",
        "From": "Eli Britstein via dev <ovs-dev@openvswitch.org>",
        "Reply-To": "Eli Britstein <elibr@nvidia.com>",
        "Cc": "Eli Britstein <elibr@nvidia.com>, Ilya Maximets <i.maximets@ovn.org>,\n David Marchand <david.marchand@redhat.com>, Maor Dickman <maord@nvidia.com>",
        "Content-Type": "text/plain; charset=\"us-ascii\"",
        "Content-Transfer-Encoding": "7bit",
        "Errors-To": "ovs-dev-bounces@openvswitch.org",
        "Sender": "\"dev\" <ovs-dev-bounces@openvswitch.org>"
    },
    "content": "As a pre-step towards introducing netdev-doca, that has common parts\nwith netdev-dpdk, refactor declarations to be non-static, declared in a\nnew file netdev-dpdk-private.\n\nSigned-off-by: Eli Britstein <elibr@nvidia.com>\n---\n lib/automake.mk           |    1 +\n lib/netdev-dpdk-private.h |  173 +++++\n lib/netdev-dpdk.c         | 1519 +++++++++++++++++--------------------\n 3 files changed, 880 insertions(+), 813 deletions(-)\n create mode 100644 lib/netdev-dpdk-private.h",
    "diff": "diff --git a/lib/automake.mk b/lib/automake.mk\nindex cb6458b0d..bab03c3e7 100644\n--- a/lib/automake.mk\n+++ b/lib/automake.mk\n@@ -209,6 +209,7 @@ lib_libopenvswitch_la_SOURCES = \\\n \tlib/multipath.h \\\n \tlib/namemap.c \\\n \tlib/netdev-dpdk.h \\\n+\tlib/netdev-dpdk-private.h \\\n \tlib/netdev-dummy.c \\\n \tlib/netdev-provider.h \\\n \tlib/netdev-vport.c \\\ndiff --git a/lib/netdev-dpdk-private.h b/lib/netdev-dpdk-private.h\nnew file mode 100644\nindex 000000000..9b82db750\n--- /dev/null\n+++ b/lib/netdev-dpdk-private.h\n@@ -0,0 +1,173 @@\n+/*\n+ * SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES.\n+ * All rights reserved.\n+ * SPDX-License-Identifier: Apache-2.0\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+#ifndef NETDEV_DPDK_PRIVATE_H\n+#define NETDEV_DPDK_PRIVATE_H\n+\n+#include <config.h>\n+\n+#include <rte_config.h>\n+#include <rte_ethdev.h>\n+#include <rte_spinlock.h>\n+\n+#include \"netdev-provider.h\"\n+#include \"util.h\"\n+\n+#include \"openvswitch/thread.h\"\n+\n+extern const struct rte_eth_conf port_conf;\n+\n+/* Defines. 
*/\n+\n+#define SOCKET0              0\n+\n+/*\n+ * need to reserve tons of extra space in the mbufs so we can align the\n+ * DMA addresses to 4KB.\n+ * The minimum mbuf size is limited to avoid scatter behaviour and drop in\n+ * performance for standard Ethernet MTU.\n+ */\n+#define ETHER_HDR_MAX_LEN           (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN \\\n+                                     + (2 * VLAN_HEADER_LEN))\n+#define MTU_TO_FRAME_LEN(mtu)       ((mtu) + RTE_ETHER_HDR_LEN + \\\n+                                     RTE_ETHER_CRC_LEN)\n+#define MTU_TO_MAX_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_MAX_LEN)\n+#define FRAME_LEN_TO_MTU(frame_len) ((frame_len)                    \\\n+                                     - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)\n+#define NETDEV_DPDK_MBUF_ALIGN      1024\n+\n+#define MP_CACHE_SZ          RTE_MEMPOOL_CACHE_MAX_SIZE\n+\n+/* Default size of Physical NIC RXQ */\n+#define NIC_PORT_DEFAULT_RXQ_SIZE 2048\n+/* Default size of Physical NIC TXQ */\n+#define NIC_PORT_DEFAULT_TXQ_SIZE 2048\n+\n+#define DPDK_ETH_PORT_ID_INVALID    RTE_MAX_ETHPORTS\n+\n+/* DPDK library uses uint16_t for port_id. */\n+typedef uint16_t dpdk_port_t;\n+#define DPDK_PORT_ID_FMT \"%\"PRIu16\n+\n+/* Enums. */\n+\n+enum dpdk_hw_ol_features {\n+    NETDEV_RX_CHECKSUM_OFFLOAD = 1 << 0,\n+    NETDEV_RX_HW_CRC_STRIP = 1 << 1,\n+    NETDEV_RX_HW_SCATTER = 1 << 2,\n+    NETDEV_TX_IPV4_CKSUM_OFFLOAD = 1 << 3,\n+    NETDEV_TX_TCP_CKSUM_OFFLOAD = 1 << 4,\n+    NETDEV_TX_UDP_CKSUM_OFFLOAD = 1 << 5,\n+    NETDEV_TX_SCTP_CKSUM_OFFLOAD = 1 << 6,\n+    NETDEV_TX_TSO_OFFLOAD = 1 << 7,\n+    NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD = 1 << 8,\n+    NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD = 1 << 9,\n+    NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD = 1 << 10,\n+    NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD = 1 << 11,\n+    NETDEV_TX_GRE_TNL_TSO_OFFLOAD = 1 << 12,\n+};\n+\n+/* Structs. 
*/\n+\n+#ifndef NETDEV_DPDK_TX_Q_TYPE\n+#error \"NETDEV_DPDK_TX_Q_TYPE must be defined before\"  \\\n+       \"including netdev-dpdk-private.h\"\n+#endif\n+\n+#ifndef NETDEV_DPDK_SW_STATS_TYPE\n+#error \"NETDEV_DPDK_SW_STATS_TYPE must be defined before\" \\\n+       \"including netdev-dpdk-private.h\"\n+#endif\n+\n+#ifndef NETDEV_DPDK_GLOBAL_MUTEX\n+#error \"NETDEV_DPDK_GLOBAL_MUTEX must be defined before\" \\\n+       \"including netdev-dpdk-private.h\"\n+#endif\n+\n+struct netdev_rxq_dpdk {\n+    struct netdev_rxq up;\n+    dpdk_port_t port_id;\n+};\n+\n+struct netdev_dpdk_common {\n+    PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline0,\n+        uint16_t port_id;\n+        bool attached;\n+        bool is_representor;\n+        bool started;\n+        struct eth_addr hwaddr;\n+        int mtu;\n+        int socket_id;\n+        int max_packet_len;\n+        enum netdev_flags flags;\n+        int link_reset_cnt;\n+        char *devargs;\n+        NETDEV_DPDK_TX_Q_TYPE *tx_q;\n+        struct rte_eth_link link;\n+    );\n+\n+    PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline1,\n+        struct ovs_mutex mutex OVS_ACQ_AFTER(NETDEV_DPDK_GLOBAL_MUTEX);\n+        struct dpdk_mp *dpdk_mp;\n+    );\n+\n+    PADDED_MEMBERS(CACHE_LINE_SIZE,\n+        struct netdev up;\n+        struct ovs_list list_node OVS_GUARDED_BY(NETDEV_DPDK_GLOBAL_MUTEX);\n+        bool rx_metadata_delivery_configured;\n+    );\n+\n+    PADDED_MEMBERS(CACHE_LINE_SIZE,\n+        struct netdev_stats stats;\n+        NETDEV_DPDK_SW_STATS_TYPE *sw_stats;\n+        rte_spinlock_t stats_lock;\n+    );\n+\n+    PADDED_MEMBERS(CACHE_LINE_SIZE,\n+        /* Configuration fields */\n+        int requested_mtu;\n+        int requested_n_txq;\n+        int user_n_rxq;\n+        int requested_n_rxq;\n+        int requested_rxq_size;\n+        int requested_txq_size;\n+        int rxq_size;\n+        int txq_size;\n+        int requested_socket_id;\n+        struct rte_eth_fc_conf 
fc_conf;\n+        uint32_t hw_ol_features;\n+        bool requested_lsc_interrupt_mode;\n+        bool lsc_interrupt_mode;\n+        struct eth_addr requested_hwaddr;\n+    );\n+\n+    PADDED_MEMBERS(CACHE_LINE_SIZE,\n+        struct rte_eth_xstat_name *rte_xstats_names;\n+        int rte_xstats_names_size;\n+        int rte_xstats_ids_size;\n+        uint64_t *rte_xstats_ids;\n+    );\n+};\n+\n+static inline struct netdev_dpdk_common *\n+netdev_dpdk_common_cast(const struct netdev *netdev)\n+{\n+    return CONTAINER_OF(netdev, struct netdev_dpdk_common, up);\n+}\n+\n+#endif /* NETDEV_DPDK_PRIVATE_H */\ndiff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c\nindex 54959ff0d..e34e96dd3 100644\n--- a/lib/netdev-dpdk.c\n+++ b/lib/netdev-dpdk.c\n@@ -17,6 +17,14 @@\n #include <config.h>\n #include \"netdev-dpdk.h\"\n \n+#include \"openvswitch/thread.h\"\n+\n+#define NETDEV_DPDK_TX_Q_TYPE struct dpdk_tx_queue\n+#define NETDEV_DPDK_SW_STATS_TYPE struct netdev_dpdk_sw_stats\n+static struct ovs_mutex dpdk_mutex;\n+#define NETDEV_DPDK_GLOBAL_MUTEX dpdk_mutex\n+#include \"netdev-dpdk-private.h\"\n+\n #include <errno.h>\n #include <signal.h>\n #include <stdint.h>\n@@ -94,20 +102,6 @@ static bool per_port_memory = false; /* Status of per port memory support */\n #define OVS_CACHE_LINE_SIZE CACHE_LINE_SIZE\n #define OVS_VPORT_DPDK \"ovs_dpdk\"\n \n-/*\n- * need to reserve tons of extra space in the mbufs so we can align the\n- * DMA addresses to 4KB.\n- * The minimum mbuf size is limited to avoid scatter behaviour and drop in\n- * performance for standard Ethernet MTU.\n- */\n-#define ETHER_HDR_MAX_LEN           (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN \\\n-                                     + (2 * VLAN_HEADER_LEN))\n-#define MTU_TO_FRAME_LEN(mtu)       ((mtu) + RTE_ETHER_HDR_LEN + \\\n-                                     RTE_ETHER_CRC_LEN)\n-#define MTU_TO_MAX_FRAME_LEN(mtu)   ((mtu) + ETHER_HDR_MAX_LEN)\n-#define FRAME_LEN_TO_MTU(frame_len) ((frame_len)                    \\\n-  
                                   - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)\n-#define NETDEV_DPDK_MBUF_ALIGN      1024\n #define NETDEV_DPDK_MAX_PKT_LEN     9728\n \n /* Max and min number of packets in the mempool. OVS tries to allocate a\n@@ -117,7 +111,6 @@ static bool per_port_memory = false; /* Status of per port memory support */\n \n #define MAX_NB_MBUF          (4096 * 64)\n #define MIN_NB_MBUF          (4096 * 4)\n-#define MP_CACHE_SZ          RTE_MEMPOOL_CACHE_MAX_SIZE\n \n /* MAX_NB_MBUF can be divided by 2 many times, until MIN_NB_MBUF */\n BUILD_ASSERT_DECL(MAX_NB_MBUF % ROUND_DOWN_POW2(MAX_NB_MBUF / MIN_NB_MBUF)\n@@ -128,24 +121,11 @@ BUILD_ASSERT_DECL(MAX_NB_MBUF % ROUND_DOWN_POW2(MAX_NB_MBUF / MIN_NB_MBUF)\n BUILD_ASSERT_DECL((MAX_NB_MBUF / ROUND_DOWN_POW2(MAX_NB_MBUF / MIN_NB_MBUF))\n                   % MP_CACHE_SZ == 0);\n \n-#define SOCKET0              0\n-\n-/* Default size of Physical NIC RXQ */\n-#define NIC_PORT_DEFAULT_RXQ_SIZE 2048\n-/* Default size of Physical NIC TXQ */\n-#define NIC_PORT_DEFAULT_TXQ_SIZE 2048\n-\n #define OVS_VHOST_MAX_QUEUE_NUM 1024  /* Maximum number of vHost TX queues. */\n #define OVS_VHOST_QUEUE_MAP_UNKNOWN (-1) /* Mapping not initialized. */\n #define OVS_VHOST_QUEUE_DISABLED    (-2) /* Queue was disabled by guest and not\n                                           * yet mapped to another queue. */\n \n-#define DPDK_ETH_PORT_ID_INVALID    RTE_MAX_ETHPORTS\n-\n-/* DPDK library uses uint16_t for port_id. */\n-typedef uint16_t dpdk_port_t;\n-#define DPDK_PORT_ID_FMT \"%\"PRIu16\n-\n /* Minimum amount of vhost tx retries, effectively a disable. */\n #define VHOST_ENQ_RETRY_MIN 0\n /* Maximum amount of vhost tx retries. */\n@@ -160,7 +140,7 @@ typedef uint16_t dpdk_port_t;\n \n #define IF_NAME_SZ (PATH_MAX > IFNAMSIZ ? 
PATH_MAX : IFNAMSIZ)\n \n-static const struct rte_eth_conf port_conf = {\n+const struct rte_eth_conf port_conf = {\n     .rxmode = {\n         .offloads = 0,\n     },\n@@ -402,22 +382,6 @@ struct ingress_policer {\n     rte_spinlock_t policer_lock;\n };\n \n-enum dpdk_hw_ol_features {\n-    NETDEV_RX_CHECKSUM_OFFLOAD = 1 << 0,\n-    NETDEV_RX_HW_CRC_STRIP = 1 << 1,\n-    NETDEV_RX_HW_SCATTER = 1 << 2,\n-    NETDEV_TX_IPV4_CKSUM_OFFLOAD = 1 << 3,\n-    NETDEV_TX_TCP_CKSUM_OFFLOAD = 1 << 4,\n-    NETDEV_TX_UDP_CKSUM_OFFLOAD = 1 << 5,\n-    NETDEV_TX_SCTP_CKSUM_OFFLOAD = 1 << 6,\n-    NETDEV_TX_TSO_OFFLOAD = 1 << 7,\n-    NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD = 1 << 8,\n-    NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD = 1 << 9,\n-    NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD = 1 << 10,\n-    NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD = 1 << 11,\n-    NETDEV_TX_GRE_TNL_TSO_OFFLOAD = 1 << 12,\n-};\n-\n enum dpdk_rx_steer_flags {\n     DPDK_RX_STEER_LACP = 1 << 0,\n };\n@@ -447,151 +411,40 @@ enum dpdk_rx_steer_flags {\n  *     struct netdev *netdev = netdev_from_name(name);\n  *     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n  *\n- *  Also, 'netdev' should be used instead of 'dev->up', where 'netdev' was\n- *  already defined.\n+ *  Also, 'netdev' should be used instead of 'dev->common.up',\n+ *  where 'netdev' was already defined.\n  */\n \n struct netdev_dpdk {\n-    PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline0,\n-        dpdk_port_t port_id;\n-\n-        /* If true, device was attached by rte_eth_dev_attach(). */\n-        bool attached;\n-        /* If true, rte_eth_dev_start() was successfully called. */\n-        bool started;\n-        /* If true, this is a port representor. */\n-        bool is_representor;\n-        struct eth_addr hwaddr;\n-        /* 1 pad bytes here. 
*/\n-        int mtu;\n-        int socket_id;\n-        int buf_size;\n-        int max_packet_len;\n-        enum dpdk_dev_type type;\n-        enum netdev_flags flags;\n-        int link_reset_cnt;\n-        union {\n-            /* Device arguments for dpdk ports. */\n-            char *devargs;\n-            /* Identifier used to distinguish vhost devices from each other. */\n-            char *vhost_id;\n-        };\n-        struct dpdk_tx_queue *tx_q;\n-        struct rte_eth_link link;\n-    );\n-\n-    PADDED_MEMBERS_CACHELINE_MARKER(CACHE_LINE_SIZE, cacheline1,\n-        struct ovs_mutex mutex OVS_ACQ_AFTER(dpdk_mutex);\n-        struct dpdk_mp *dpdk_mp;\n-\n-        /* virtio identifier for vhost devices */\n-        ovsrcu_index vid;\n-\n-        /* True if vHost device is 'up' and has been reconfigured at least once */\n-        bool vhost_reconfigured;\n-\n-        atomic_uint8_t vhost_tx_retries_max;\n-\n-        /* Flags for virtio features recovery mechanism. */\n-        uint8_t virtio_features_state;\n-\n-        /* 1 pad byte here. */\n-    );\n+    struct netdev_dpdk_common common;\n \n-    PADDED_MEMBERS(CACHE_LINE_SIZE,\n-        struct netdev up;\n-        /* In dpdk_list. */\n-        struct ovs_list list_node OVS_GUARDED_BY(dpdk_mutex);\n-\n-        /* QoS configuration and lock for the device */\n-        OVSRCU_TYPE(struct qos_conf *) qos_conf;\n-\n-        /* Ingress Policer */\n-        OVSRCU_TYPE(struct ingress_policer *) ingress_policer;\n-        uint32_t policer_rate;\n-        uint32_t policer_burst;\n-\n-        /* Array of vhost rxq states, see vring_state_changed. */\n-        bool *vhost_rxq_enabled;\n-\n-        /* Ensures that Rx metadata delivery is configured only once. 
*/\n-        bool rx_metadata_delivery_configured;\n-    );\n+    enum dpdk_dev_type type;\n+    int buf_size;\n \n-    PADDED_MEMBERS(CACHE_LINE_SIZE,\n-        struct netdev_stats stats;\n-        struct netdev_dpdk_sw_stats *sw_stats;\n-        /* Protects stats */\n-        rte_spinlock_t stats_lock;\n-        /* 36 pad bytes here. */\n-    );\n-\n-    PADDED_MEMBERS(CACHE_LINE_SIZE,\n-        /* The following properties cannot be changed when a device is running,\n-         * so we remember the request and update them next time\n-         * netdev_dpdk*_reconfigure() is called */\n-        int requested_mtu;\n-        int requested_n_txq;\n-        /* User input for n_rxq (see dpdk_set_rxq_config). */\n-        int user_n_rxq;\n-        /* user_n_rxq + an optional rx steering queue (see\n-         * netdev_dpdk_reconfigure). This field is different from the other\n-         * requested_* fields as it may contain a different value than the user\n-         * input. */\n-        int requested_n_rxq;\n-        int requested_rxq_size;\n-        int requested_txq_size;\n-\n-        /* Number of rx/tx descriptors for physical devices */\n-        int rxq_size;\n-        int txq_size;\n-\n-        /* Socket ID detected when vHost device is brought up */\n-        int requested_socket_id;\n-\n-        /* Ignored by DPDK for vhost-user backends, only for VDUSE. */\n-        uint8_t vhost_max_queue_pairs;\n-\n-        /* Denotes whether vHost port is client/server mode */\n-        uint64_t vhost_driver_flags;\n-\n-        /* DPDK-ETH Flow control */\n-        struct rte_eth_fc_conf fc_conf;\n-\n-        /* DPDK-ETH hardware offload features,\n-         * from the enum set 'dpdk_hw_ol_features' */\n-        uint32_t hw_ol_features;\n-\n-        /* Properties for link state change detection mode.\n-         * If lsc_interrupt_mode is set to false, poll mode is used,\n-         * otherwise interrupt mode is used. 
*/\n-        bool requested_lsc_interrupt_mode;\n-        bool lsc_interrupt_mode;\n-\n-        /* VF configuration. */\n-        struct eth_addr requested_hwaddr;\n-\n-        /* Requested rx queue steering flags,\n-         * from the enum set 'dpdk_rx_steer_flags'. */\n-        uint64_t requested_rx_steer_flags;\n-        uint64_t rx_steer_flags;\n-        size_t rx_steer_flows_num;\n-        struct rte_flow **rx_steer_flows;\n-    );\n-\n-    PADDED_MEMBERS(CACHE_LINE_SIZE,\n-        /* Names of all XSTATS counters */\n-        struct rte_eth_xstat_name *rte_xstats_names;\n-        int rte_xstats_names_size;\n-        int rte_xstats_ids_size;\n-        uint64_t *rte_xstats_ids;\n-    );\n+    /* vHost-specific fields */\n+    char *vhost_id;\n+    ovsrcu_index vid;\n+    bool vhost_reconfigured;\n+    atomic_uint8_t vhost_tx_retries_max;\n+    uint8_t virtio_features_state;\n+    bool *vhost_rxq_enabled;\n+    uint8_t vhost_max_queue_pairs;\n+    uint64_t vhost_driver_flags;\n+\n+    /* QoS fields */\n+    OVSRCU_TYPE(struct qos_conf *) qos_conf;\n+    OVSRCU_TYPE(struct ingress_policer *) ingress_policer;\n+    uint32_t policer_rate;\n+    uint32_t policer_burst;\n+\n+    /* Rx steering */\n+    uint64_t requested_rx_steer_flags;\n+    uint64_t rx_steer_flags;\n+    size_t rx_steer_flows_num;\n+    struct rte_flow **rx_steer_flows;\n };\n \n-struct netdev_rxq_dpdk {\n-    struct netdev_rxq up;\n-    dpdk_port_t port_id;\n-};\n+BUILD_ASSERT_DECL(offsetof(struct netdev_dpdk, common) == 0);\n \n static void netdev_dpdk_destruct(struct netdev *netdev);\n static void netdev_dpdk_vhost_destruct(struct netdev *netdev);\n@@ -763,10 +616,14 @@ dpdk_calculate_mbufs(struct netdev_dpdk *dev, int mtu)\n          * + <packets in the pmd threads>\n          * + <additional memory for corner cases>\n          */\n-        n_mbufs = dev->requested_n_rxq * dev->requested_rxq_size\n-                  + dev->requested_n_txq * dev->requested_txq_size\n-                  + 
MIN(RTE_MAX_LCORE, dev->requested_n_rxq) * NETDEV_MAX_BURST\n-                  + MIN_NB_MBUF;\n+        n_mbufs = dev->common.requested_n_rxq *\n+                  dev->common.requested_rxq_size +\n+                  dev->common.requested_n_txq *\n+                  dev->common.requested_txq_size +\n+                  MIN(RTE_MAX_LCORE,\n+                      dev->common.requested_n_rxq) *\n+                      NETDEV_MAX_BURST +\n+                  MIN_NB_MBUF;\n     }\n \n     return n_mbufs;\n@@ -776,8 +633,8 @@ static struct dpdk_mp *\n dpdk_mp_create(struct netdev_dpdk *dev, int mtu)\n {\n     char mp_name[RTE_MEMPOOL_NAMESIZE];\n-    const char *netdev_name = netdev_get_name(&dev->up);\n-    int socket_id = dev->requested_socket_id;\n+    const char *netdev_name = netdev_get_name(&dev->common.up);\n+    int socket_id = dev->common.requested_socket_id;\n     uint32_t n_mbufs = 0;\n     uint32_t mbuf_size = 0;\n     uint32_t aligned_mbuf_size = 0;\n@@ -823,7 +680,7 @@ dpdk_mp_create(struct netdev_dpdk *dev, int mtu)\n                   \"on socket %d for %d Rx and %d Tx queues, \"\n                   \"cache line size of %u\",\n                   netdev_name, n_mbufs, mbuf_size, socket_id,\n-                  dev->requested_n_rxq, dev->requested_n_txq,\n+                  dev->common.requested_n_rxq, dev->common.requested_n_txq,\n                   RTE_CACHE_LINE_SIZE);\n \n         /* The size of the mbuf's private area (i.e. 
area that holds OvS'\n@@ -895,10 +752,10 @@ dpdk_mp_get(struct netdev_dpdk *dev, int mtu)\n     if (!per_port_memory) {\n         /* If user has provided defined mempools, check if one is suitable\n          * and get new buffer size.*/\n-        mtu = dpdk_get_user_adjusted_mtu(mtu, dev->requested_mtu,\n-                                         dev->requested_socket_id);\n+        mtu = dpdk_get_user_adjusted_mtu(mtu, dev->common.requested_mtu,\n+                                         dev->common.requested_socket_id);\n         LIST_FOR_EACH (dmp, list_node, &dpdk_mp_list) {\n-            if (dmp->socket_id == dev->requested_socket_id\n+            if (dmp->socket_id == dev->common.requested_socket_id\n                 && dmp->mtu == mtu) {\n                 VLOG_DBG(\"Reusing mempool \\\"%s\\\"\", dmp->mp->name);\n                 dmp->refcount++;\n@@ -961,17 +818,17 @@ dpdk_mp_put(struct dpdk_mp *dmp)\n  * On error, device will be left unchanged. */\n static int\n netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    uint32_t buf_size = dpdk_buf_size(dev->requested_mtu);\n+    uint32_t buf_size = dpdk_buf_size(dev->common.requested_mtu);\n     struct dpdk_mp *dmp;\n     int ret = 0;\n \n     /* With shared memory we do not need to configure a mempool if the MTU\n      * and socket ID have not changed, the previous configuration is still\n      * valid so return 0 */\n-    if (!per_port_memory && dev->mtu == dev->requested_mtu\n-        && dev->socket_id == dev->requested_socket_id) {\n+    if (!per_port_memory && dev->common.mtu == dev->common.requested_mtu\n+        && dev->common.socket_id == dev->common.requested_socket_id) {\n         return ret;\n     }\n \n@@ -979,14 +836,16 @@ netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)\n     if (!dmp) {\n         VLOG_ERR(\"Failed to create memory pool for netdev \"\n                  \"%s, with MTU %d on socket %d: %s\\n\",\n-   
              dev->up.name, dev->requested_mtu, dev->requested_socket_id,\n+                 dev->common.up.name,\n+                 dev->common.requested_mtu,\n+                 dev->common.requested_socket_id,\n                  rte_strerror(rte_errno));\n         ret = rte_errno;\n     } else {\n         /* Check for any pre-existing dpdk_mp for the device before accessing\n          * the associated mempool.\n          */\n-        if (dev->dpdk_mp != NULL) {\n+        if (dev->common.dpdk_mp != NULL) {\n             /* A new MTU was requested, decrement the reference count for the\n              * devices current dpdk_mp. This is required even if a pointer to\n              * same dpdk_mp is returned by dpdk_mp_get. The refcount for dmp\n@@ -994,12 +853,12 @@ netdev_dpdk_mempool_configure(struct netdev_dpdk *dev)\n              * must be decremented to keep an accurate refcount for the\n              * dpdk_mp.\n              */\n-            dpdk_mp_put(dev->dpdk_mp);\n+            dpdk_mp_put(dev->common.dpdk_mp);\n         }\n-        dev->dpdk_mp = dmp;\n-        dev->mtu = dev->requested_mtu;\n-        dev->socket_id = dev->requested_socket_id;\n-        dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);\n+        dev->common.dpdk_mp = dmp;\n+        dev->common.mtu = dev->common.requested_mtu;\n+        dev->common.socket_id = dev->common.requested_socket_id;\n+        dev->common.max_packet_len = MTU_TO_FRAME_LEN(dev->common.mtu);\n     }\n \n     return ret;\n@@ -1010,27 +869,29 @@ check_link_status(struct netdev_dpdk *dev)\n {\n     struct rte_eth_link link;\n \n-    if (rte_eth_link_get_nowait(dev->port_id, &link) < 0) {\n+    if (rte_eth_link_get_nowait(dev->common.port_id, &link) < 0) {\n         VLOG_DBG_RL(&rl,\n                     \"Failed to retrieve link status for port \"DPDK_PORT_ID_FMT,\n-                    dev->port_id);\n+                    dev->common.port_id);\n         return;\n     }\n \n-    if (dev->link.link_status != 
link.link_status) {\n-        netdev_change_seq_changed(&dev->up);\n+    if (dev->common.link.link_status != link.link_status) {\n+        netdev_change_seq_changed(&dev->common.up);\n \n-        dev->link_reset_cnt++;\n-        dev->link = link;\n-        if (dev->link.link_status) {\n+        dev->common.link_reset_cnt++;\n+        dev->common.link = link;\n+        if (dev->common.link.link_status) {\n             VLOG_DBG_RL(&rl,\n                         \"Port \"DPDK_PORT_ID_FMT\" Link Up - speed %u Mbps - %s\",\n-                        dev->port_id, (unsigned) dev->link.link_speed,\n-                        (dev->link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)\n+                        dev->common.port_id,\n+                        (unsigned) dev->common.link.link_speed,\n+                        (dev->common.link.link_duplex ==\n+                         RTE_ETH_LINK_FULL_DUPLEX)\n                         ? \"full-duplex\" : \"half-duplex\");\n         } else {\n             VLOG_DBG_RL(&rl, \"Port \"DPDK_PORT_ID_FMT\" Link Down\",\n-                        dev->port_id);\n+                        dev->common.port_id);\n         }\n     }\n }\n@@ -1044,12 +905,12 @@ dpdk_watchdog(void *dummy OVS_UNUSED)\n \n     for (;;) {\n         ovs_mutex_lock(&dpdk_mutex);\n-        LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-            ovs_mutex_lock(&dev->mutex);\n+        LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+            ovs_mutex_lock(&dev->common.mutex);\n             if (dev->type == DPDK_DEV_ETH) {\n                 check_link_status(dev);\n             }\n-            ovs_mutex_unlock(&dev->mutex);\n+            ovs_mutex_unlock(&dev->common.mutex);\n         }\n         ovs_mutex_unlock(&dpdk_mutex);\n         xsleep(DPDK_PORT_WATCHDOG_INTERVAL);\n@@ -1062,11 +923,11 @@ static void\n netdev_dpdk_update_netdev_flag(struct netdev_dpdk *dev,\n                                enum dpdk_hw_ol_features hw_ol_features,\n                              
  enum netdev_ol_flags flag)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    struct netdev *netdev = &dev->up;\n+    struct netdev *netdev = &dev->common.up;\n \n-    if (dev->hw_ol_features & hw_ol_features) {\n+    if (dev->common.hw_ol_features & hw_ol_features) {\n         netdev->ol_flags |= flag;\n     } else {\n         netdev->ol_flags &= ~flag;\n@@ -1075,7 +936,7 @@ netdev_dpdk_update_netdev_flag(struct netdev_dpdk *dev,\n \n static void\n netdev_dpdk_update_netdev_flags(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     netdev_dpdk_update_netdev_flag(dev, NETDEV_TX_IPV4_CKSUM_OFFLOAD,\n                                    NETDEV_TX_OFFLOAD_IPV4_CKSUM);\n@@ -1113,60 +974,60 @@ dpdk_eth_dev_port_config(struct netdev_dpdk *dev,\n      * scatter to support jumbo RX.\n      * Setting scatter for the device is done after checking for\n      * scatter support in the device capabilites. */\n-    if (dev->mtu > RTE_ETHER_MTU) {\n-        if (dev->hw_ol_features & NETDEV_RX_HW_SCATTER) {\n+    if (dev->common.mtu > RTE_ETHER_MTU) {\n+        if (dev->common.hw_ol_features & NETDEV_RX_HW_SCATTER) {\n             conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;\n         }\n     }\n \n-    conf.intr_conf.lsc = dev->lsc_interrupt_mode;\n+    conf.intr_conf.lsc = dev->common.lsc_interrupt_mode;\n \n-    if (dev->hw_ol_features & NETDEV_RX_CHECKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_RX_CHECKSUM_OFFLOAD) {\n         conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;\n     }\n \n-    if (!(dev->hw_ol_features & NETDEV_RX_HW_CRC_STRIP)\n+    if (!(dev->common.hw_ol_features & NETDEV_RX_HW_CRC_STRIP)\n         && info->rx_offload_capa & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {\n         conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_IPV4_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & 
NETDEV_TX_IPV4_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_TCP_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_TCP_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_UDP_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_UDP_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_SCTP_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_SCTP_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_TSO_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_TSO_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_GRE_TNL_TSO_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_GRE_TNL_TSO_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;\n     }\n \n-    if (dev->hw_ol_features & NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD) {\n+    if (dev->common.hw_ol_features & NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD) {\n         conf.txmode.offloads |= 
RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;\n     }\n \n@@ -1189,36 +1050,38 @@ dpdk_eth_dev_port_config(struct netdev_dpdk *dev,\n             VLOG_INFO(\"Retrying setup with (rxq:%d txq:%d)\", n_rxq, n_txq);\n         }\n \n-        diag = rte_eth_dev_configure(dev->port_id, n_rxq, n_txq, &conf);\n+        diag = rte_eth_dev_configure(dev->common.port_id, n_rxq, n_txq, &conf);\n         if (diag) {\n             VLOG_WARN(\"Interface %s eth_dev setup error %s\\n\",\n-                      dev->up.name, rte_strerror(-diag));\n+                      dev->common.up.name, rte_strerror(-diag));\n             break;\n         }\n \n-        diag = rte_eth_dev_set_mtu(dev->port_id, dev->mtu);\n+        diag = rte_eth_dev_set_mtu(dev->common.port_id, dev->common.mtu);\n         if (diag) {\n             /* A device may not support rte_eth_dev_set_mtu, in this case\n              * flag a warning to the user and include the devices configured\n              * MTU value that will be used instead. */\n             if (-ENOTSUP == diag) {\n-                rte_eth_dev_get_mtu(dev->port_id, &conf_mtu);\n+                rte_eth_dev_get_mtu(dev->common.port_id, &conf_mtu);\n                 VLOG_WARN(\"Interface %s does not support MTU configuration, \"\n                           \"max packet size supported is %\"PRIu16\".\",\n-                          dev->up.name, conf_mtu);\n+                          dev->common.up.name, conf_mtu);\n             } else {\n                 VLOG_ERR(\"Interface %s MTU (%d) setup error: %s\",\n-                         dev->up.name, dev->mtu, rte_strerror(-diag));\n+                         dev->common.up.name, dev->common.mtu,\n+                         rte_strerror(-diag));\n                 break;\n             }\n         }\n \n         for (i = 0; i < n_txq; i++) {\n-            diag = rte_eth_tx_queue_setup(dev->port_id, i, dev->txq_size,\n-                                          dev->socket_id, NULL);\n+            diag = 
rte_eth_tx_queue_setup(dev->common.port_id,\n+                                         i, dev->common.txq_size,\n+                                          dev->common.socket_id, NULL);\n             if (diag) {\n                 VLOG_INFO(\"Interface %s unable to setup txq(%d): %s\",\n-                          dev->up.name, i, rte_strerror(-diag));\n+                          dev->common.up.name, i, rte_strerror(-diag));\n                 break;\n             }\n         }\n@@ -1230,12 +1093,13 @@ dpdk_eth_dev_port_config(struct netdev_dpdk *dev,\n         }\n \n         for (i = 0; i < n_rxq; i++) {\n-            diag = rte_eth_rx_queue_setup(dev->port_id, i, dev->rxq_size,\n-                                          dev->socket_id, NULL,\n-                                          dev->dpdk_mp->mp);\n+            diag = rte_eth_rx_queue_setup(dev->common.port_id, i,\n+                                          dev->common.rxq_size,\n+                                          dev->common.socket_id, NULL,\n+                                          dev->common.dpdk_mp->mp);\n             if (diag) {\n                 VLOG_INFO(\"Interface %s unable to setup rxq(%d): %s\",\n-                          dev->up.name, i, rte_strerror(-diag));\n+                          dev->common.up.name, i, rte_strerror(-diag));\n                 break;\n             }\n         }\n@@ -1246,8 +1110,8 @@ dpdk_eth_dev_port_config(struct netdev_dpdk *dev,\n             continue;\n         }\n \n-        dev->up.n_rxq = n_rxq;\n-        dev->up.n_txq = n_txq;\n+        dev->common.up.n_rxq = n_rxq;\n+        dev->common.up.n_txq = n_txq;\n \n         return 0;\n     }\n@@ -1256,11 +1120,12 @@ dpdk_eth_dev_port_config(struct netdev_dpdk *dev,\n }\n \n static void\n-dpdk_eth_flow_ctrl_setup(struct netdev_dpdk *dev) OVS_REQUIRES(dev->mutex)\n+dpdk_eth_flow_ctrl_setup(struct netdev_dpdk *dev)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    if (rte_eth_dev_flow_ctrl_set(dev->port_id, 
&dev->fc_conf)) {\n+    if (rte_eth_dev_flow_ctrl_set(dev->common.port_id, &dev->common.fc_conf)) {\n         VLOG_WARN(\"Failed to enable flow control on device \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n     }\n }\n \n@@ -1270,7 +1135,7 @@ dpdk_eth_dev_init_rx_metadata(struct netdev_dpdk *dev)\n     uint64_t rx_metadata = 0;\n     int ret;\n \n-    if (dev->rx_metadata_delivery_configured) {\n+    if (dev->common.rx_metadata_delivery_configured) {\n         return;\n     }\n \n@@ -1282,30 +1147,30 @@ dpdk_eth_dev_init_rx_metadata(struct netdev_dpdk *dev)\n     rx_metadata |= RTE_ETH_RX_METADATA_TUNNEL_ID;\n #endif /* ALLOW_EXPERIMENTAL_API */\n \n-    ret = rte_eth_rx_metadata_negotiate(dev->port_id, &rx_metadata);\n+    ret = rte_eth_rx_metadata_negotiate(dev->common.port_id, &rx_metadata);\n     if (ret == 0) {\n         if (!(rx_metadata & RTE_ETH_RX_METADATA_USER_MARK)) {\n             VLOG_DBG(\"%s: The NIC will not provide per-packet USER_MARK\",\n-                     netdev_get_name(&dev->up));\n+                     netdev_get_name(&dev->common.up));\n         }\n #ifdef ALLOW_EXPERIMENTAL_API\n         if (!(rx_metadata & RTE_ETH_RX_METADATA_TUNNEL_ID)) {\n             VLOG_DBG(\"%s: The NIC will not provide per-packet TUNNEL_ID\",\n-                     netdev_get_name(&dev->up));\n+                     netdev_get_name(&dev->common.up));\n         }\n #endif /* ALLOW_EXPERIMENTAL_API */\n     } else {\n         VLOG(ret == -ENOTSUP ? 
VLL_DBG : VLL_WARN,\n              \"%s: Cannot negotiate Rx metadata: %s\",\n-             netdev_get_name(&dev->up), rte_strerror(-ret));\n+             netdev_get_name(&dev->common.up), rte_strerror(-ret));\n     }\n \n-    dev->rx_metadata_delivery_configured = true;\n+    dev->common.rx_metadata_delivery_configured = true;\n }\n \n static int\n dpdk_eth_dev_init(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     struct rte_pktmbuf_pool_private *mbp_priv;\n     struct rte_eth_dev_info info;\n@@ -1328,142 +1193,143 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev)\n         dpdk_eth_dev_init_rx_metadata(dev);\n     }\n \n-    diag = rte_eth_dev_info_get(dev->port_id, &info);\n+    diag = rte_eth_dev_info_get(dev->common.port_id, &info);\n     if (diag < 0) {\n         VLOG_ERR(\"Interface %s rte_eth_dev_info_get error: %s\",\n-                 dev->up.name, rte_strerror(-diag));\n+                 dev->common.up.name, rte_strerror(-diag));\n         return -diag;\n     }\n \n-    dev->is_representor = !!(*info.dev_flags & RTE_ETH_DEV_REPRESENTOR);\n+    dev->common.is_representor = !!(*info.dev_flags & RTE_ETH_DEV_REPRESENTOR);\n \n     if (strstr(info.driver_name, \"vf\") != NULL) {\n         VLOG_INFO(\"Virtual function detected, HW_CRC_STRIP will be enabled\");\n-        dev->hw_ol_features |= NETDEV_RX_HW_CRC_STRIP;\n+        dev->common.hw_ol_features |= NETDEV_RX_HW_CRC_STRIP;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_RX_HW_CRC_STRIP;\n+        dev->common.hw_ol_features &= ~NETDEV_RX_HW_CRC_STRIP;\n     }\n \n     if ((info.rx_offload_capa & rx_chksm_offload_capa) !=\n             rx_chksm_offload_capa) {\n         VLOG_WARN(\"Rx checksum offload is not supported on port \"\n-                  DPDK_PORT_ID_FMT, dev->port_id);\n-        dev->hw_ol_features &= ~NETDEV_RX_CHECKSUM_OFFLOAD;\n+                  DPDK_PORT_ID_FMT, dev->common.port_id);\n+        dev->common.hw_ol_features &= 
~NETDEV_RX_CHECKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features |= NETDEV_RX_CHECKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_RX_CHECKSUM_OFFLOAD;\n     }\n \n     if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_SCATTER) {\n-        dev->hw_ol_features |= NETDEV_RX_HW_SCATTER;\n+        dev->common.hw_ol_features |= NETDEV_RX_HW_SCATTER;\n     } else {\n         /* Do not warn on lack of scatter support */\n-        dev->hw_ol_features &= ~NETDEV_RX_HW_SCATTER;\n+        dev->common.hw_ol_features &= ~NETDEV_RX_HW_SCATTER;\n     }\n \n     if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n     }\n \n     if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_TCP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_TCP_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_TCP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_TCP_CKSUM_OFFLOAD;\n     }\n \n     if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_UDP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_UDP_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_UDP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_UDP_CKSUM_OFFLOAD;\n     }\n \n     if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n     }\n \n     
if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_OUTER_IP_CKSUM_OFFLOAD;\n     }\n \n     if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {\n-        dev->hw_ol_features |= NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features |= NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD;\n     } else {\n-        dev->hw_ol_features &= ~NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD;\n+        dev->common.hw_ol_features &= ~NETDEV_TX_OUTER_UDP_CKSUM_OFFLOAD;\n     }\n \n-    dev->hw_ol_features &= ~NETDEV_TX_TSO_OFFLOAD;\n+    dev->common.hw_ol_features &= ~NETDEV_TX_TSO_OFFLOAD;\n     if (userspace_tso_enabled()) {\n         if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) {\n-            dev->hw_ol_features |= NETDEV_TX_TSO_OFFLOAD;\n+            dev->common.hw_ol_features |= NETDEV_TX_TSO_OFFLOAD;\n         } else {\n             VLOG_WARN(\"%s: Tx TSO offload is not supported.\",\n-                      netdev_get_name(&dev->up));\n+                      netdev_get_name(&dev->common.up));\n         }\n \n         if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) {\n-            dev->hw_ol_features |= NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD;\n+            dev->common.hw_ol_features |= NETDEV_TX_VXLAN_TNL_TSO_OFFLOAD;\n         } else {\n             VLOG_WARN(\"%s: Tx Vxlan tunnel TSO offload is not supported.\",\n-                      netdev_get_name(&dev->up));\n+                      netdev_get_name(&dev->common.up));\n         }\n \n         if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO) {\n-            dev->hw_ol_features |= NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD;\n+            dev->common.hw_ol_features |= NETDEV_TX_GENEVE_TNL_TSO_OFFLOAD;\n         } else 
{\n             VLOG_WARN(\"%s: Tx Geneve tunnel TSO offload is not supported.\",\n-                      netdev_get_name(&dev->up));\n+                      netdev_get_name(&dev->common.up));\n         }\n \n         if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO) {\n-            dev->hw_ol_features |= NETDEV_TX_GRE_TNL_TSO_OFFLOAD;\n+            dev->common.hw_ol_features |= NETDEV_TX_GRE_TNL_TSO_OFFLOAD;\n         } else {\n             VLOG_WARN(\"%s: Tx GRE tunnel TSO offload is not supported.\",\n-                      netdev_get_name(&dev->up));\n+                      netdev_get_name(&dev->common.up));\n         }\n     }\n \n-    n_rxq = MIN(info.max_rx_queues, dev->up.n_rxq);\n-    n_txq = MIN(info.max_tx_queues, dev->up.n_txq);\n+    n_rxq = MIN(info.max_rx_queues, dev->common.up.n_rxq);\n+    n_txq = MIN(info.max_tx_queues, dev->common.up.n_txq);\n \n     diag = dpdk_eth_dev_port_config(dev, &info, n_rxq, n_txq);\n     if (diag) {\n         VLOG_ERR(\"Interface %s(rxq:%d txq:%d lsc interrupt mode:%s) \"\n                  \"configure error: %s\",\n-                 dev->up.name, n_rxq, n_txq,\n-                 dev->lsc_interrupt_mode ? \"true\" : \"false\",\n+                 dev->common.up.name, n_rxq, n_txq,\n+                 dev->common.lsc_interrupt_mode ? 
\"true\" : \"false\",\n                  rte_strerror(-diag));\n         return -diag;\n     }\n \n-    diag = rte_eth_dev_start(dev->port_id);\n+    diag = rte_eth_dev_start(dev->common.port_id);\n     if (diag) {\n-        VLOG_ERR(\"Interface %s start error: %s\", dev->up.name,\n+        VLOG_ERR(\"Interface %s start error: %s\", dev->common.up.name,\n                  rte_strerror(-diag));\n         return -diag;\n     }\n-    dev->started = true;\n+    dev->common.started = true;\n \n     netdev_dpdk_configure_xstats(dev);\n \n-    rte_eth_promiscuous_enable(dev->port_id);\n-    rte_eth_allmulticast_enable(dev->port_id);\n+    rte_eth_promiscuous_enable(dev->common.port_id);\n+    rte_eth_allmulticast_enable(dev->common.port_id);\n \n     memset(&eth_addr, 0x0, sizeof(eth_addr));\n-    rte_eth_macaddr_get(dev->port_id, &eth_addr);\n+    rte_eth_macaddr_get(dev->common.port_id, &eth_addr);\n     VLOG_INFO_RL(&rl, \"Port \"DPDK_PORT_ID_FMT\": \"ETH_ADDR_FMT,\n-                 dev->port_id, ETH_ADDR_BYTES_ARGS(eth_addr.addr_bytes));\n+                 dev->common.port_id,\n+                 ETH_ADDR_BYTES_ARGS(eth_addr.addr_bytes));\n \n-    memcpy(dev->hwaddr.ea, eth_addr.addr_bytes, ETH_ADDR_LEN);\n-    if (rte_eth_link_get_nowait(dev->port_id, &dev->link) < 0) {\n-        memset(&dev->link, 0, sizeof dev->link);\n+    memcpy(dev->common.hwaddr.ea, eth_addr.addr_bytes, ETH_ADDR_LEN);\n+    if (rte_eth_link_get_nowait(dev->common.port_id, &dev->common.link) < 0) {\n+        memset(&dev->common.link, 0, sizeof dev->common.link);\n     }\n \n-    mbp_priv = rte_mempool_get_priv(dev->dpdk_mp->mp);\n+    mbp_priv = rte_mempool_get_priv(dev->common.dpdk_mp->mp);\n     dev->buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;\n     return 0;\n }\n@@ -1471,7 +1337,9 @@ dpdk_eth_dev_init(struct netdev_dpdk *dev)\n static struct netdev_dpdk *\n netdev_dpdk_cast(const struct netdev *netdev)\n {\n-    return CONTAINER_OF(netdev, struct netdev_dpdk, up);\n+    
struct netdev_dpdk_common *common = netdev_dpdk_common_cast(netdev);\n+\n+    return CONTAINER_OF(common, struct netdev_dpdk, common);\n }\n \n static struct netdev *\n@@ -1481,7 +1349,7 @@ netdev_dpdk_alloc(void)\n \n     dev = dpdk_rte_mzalloc(sizeof *dev);\n     if (dev) {\n-        return &dev->up;\n+        return &dev->common.up;\n     }\n \n     return NULL;\n@@ -1512,26 +1380,26 @@ common_construct(struct netdev *netdev, dpdk_port_t port_no,\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_init(&dev->mutex);\n+    ovs_mutex_init(&dev->common.mutex);\n \n-    rte_spinlock_init(&dev->stats_lock);\n+    rte_spinlock_init(&dev->common.stats_lock);\n \n     /* If the 'sid' is negative, it means that the kernel fails\n      * to obtain the pci numa info.  In that situation, always\n      * use 'SOCKET0'. */\n-    dev->socket_id = socket_id < 0 ? SOCKET0 : socket_id;\n-    dev->requested_socket_id = dev->socket_id;\n-    dev->port_id = port_no;\n+    dev->common.socket_id = socket_id < 0 ? 
SOCKET0 : socket_id;\n+    dev->common.requested_socket_id = dev->common.socket_id;\n+    dev->common.port_id = port_no;\n     dev->type = type;\n-    dev->flags = 0;\n-    dev->requested_mtu = RTE_ETHER_MTU;\n-    dev->max_packet_len = MTU_TO_FRAME_LEN(dev->mtu);\n-    dev->requested_lsc_interrupt_mode = 0;\n+    dev->common.flags = 0;\n+    dev->common.requested_mtu = RTE_ETHER_MTU;\n+    dev->common.max_packet_len = MTU_TO_FRAME_LEN(dev->common.mtu);\n+    dev->common.requested_lsc_interrupt_mode = 0;\n     ovsrcu_index_init(&dev->vid, -1);\n     dev->vhost_reconfigured = false;\n     dev->virtio_features_state = OVS_VIRTIO_F_CLEAN;\n-    dev->attached = false;\n-    dev->started = false;\n+    dev->common.attached = false;\n+    dev->common.started = false;\n \n     ovsrcu_init(&dev->qos_conf, NULL);\n \n@@ -1541,38 +1409,39 @@ common_construct(struct netdev *netdev, dpdk_port_t port_no,\n \n     netdev->n_rxq = 0;\n     netdev->n_txq = 0;\n-    dev->user_n_rxq = NR_QUEUE;\n-    dev->requested_n_rxq = NR_QUEUE;\n-    dev->requested_n_txq = NR_QUEUE;\n-    dev->requested_rxq_size = NIC_PORT_DEFAULT_RXQ_SIZE;\n-    dev->requested_txq_size = NIC_PORT_DEFAULT_TXQ_SIZE;\n+    dev->common.user_n_rxq = NR_QUEUE;\n+    dev->common.requested_n_rxq = NR_QUEUE;\n+    dev->common.requested_n_txq = NR_QUEUE;\n+    dev->common.requested_rxq_size = NIC_PORT_DEFAULT_RXQ_SIZE;\n+    dev->common.requested_txq_size = NIC_PORT_DEFAULT_TXQ_SIZE;\n     dev->requested_rx_steer_flags = 0;\n     dev->rx_steer_flags = 0;\n     dev->rx_steer_flows_num = 0;\n     dev->rx_steer_flows = NULL;\n \n     /* Initialize the flow control to NULL */\n-    memset(&dev->fc_conf, 0, sizeof dev->fc_conf);\n+    memset(&dev->common.fc_conf, 0, sizeof dev->common.fc_conf);\n \n     /* Initilize the hardware offload flags to 0 */\n-    dev->hw_ol_features = 0;\n+    dev->common.hw_ol_features = 0;\n \n-    dev->rx_metadata_delivery_configured = false;\n+    dev->common.rx_metadata_delivery_configured = 
false;\n \n-    dev->flags = NETDEV_UP | NETDEV_PROMISC;\n+    dev->common.flags = NETDEV_UP | NETDEV_PROMISC;\n \n-    ovs_list_push_back(&dpdk_list, &dev->list_node);\n+    ovs_list_push_back(&dpdk_list, &dev->common.list_node);\n \n     netdev_request_reconfigure(netdev);\n \n-    dev->rte_xstats_names = NULL;\n-    dev->rte_xstats_names_size = 0;\n+    dev->common.rte_xstats_names = NULL;\n+    dev->common.rte_xstats_names_size = 0;\n \n-    dev->rte_xstats_ids = NULL;\n-    dev->rte_xstats_ids_size = 0;\n+    dev->common.rte_xstats_ids = NULL;\n+    dev->common.rte_xstats_ids_size = 0;\n \n-    dev->sw_stats = xzalloc(sizeof *dev->sw_stats);\n-    dev->sw_stats->tx_retries = (dev->type == DPDK_DEV_VHOST) ? 0 : UINT64_MAX;\n+    dev->common.sw_stats = xzalloc(sizeof *dev->common.sw_stats);\n+    dev->common.sw_stats->tx_retries =\n+        (dev->type == DPDK_DEV_VHOST) ? 0 : UINT64_MAX;\n \n     return 0;\n }\n@@ -1589,8 +1458,8 @@ vhost_common_construct(struct netdev *netdev)\n     if (!dev->vhost_rxq_enabled) {\n         return ENOMEM;\n     }\n-    dev->tx_q = netdev_dpdk_alloc_txq(OVS_VHOST_MAX_QUEUE_NUM);\n-    if (!dev->tx_q) {\n+    dev->common.tx_q = netdev_dpdk_alloc_txq(OVS_VHOST_MAX_QUEUE_NUM);\n+    if (!dev->common.tx_q) {\n         rte_free(dev->vhost_rxq_enabled);\n         return ENOMEM;\n     }\n@@ -1716,16 +1585,16 @@ netdev_dpdk_construct(struct netdev *netdev)\n static void\n common_destruct(struct netdev_dpdk *dev)\n     OVS_REQUIRES(dpdk_mutex)\n-    OVS_EXCLUDED(dev->mutex)\n+    OVS_EXCLUDED(dev->common.mutex)\n {\n-    rte_free(dev->tx_q);\n-    dpdk_mp_put(dev->dpdk_mp);\n+    rte_free(dev->common.tx_q);\n+    dpdk_mp_put(dev->common.dpdk_mp);\n \n-    ovs_list_remove(&dev->list_node);\n+    ovs_list_remove(&dev->common.list_node);\n     free(ovsrcu_get_protected(struct ingress_policer *,\n                               &dev->ingress_policer));\n-    free(dev->sw_stats);\n-    ovs_mutex_destroy(&dev->mutex);\n+    
free(dev->common.sw_stats);\n+    ovs_mutex_destroy(&dev->common.mutex);\n }\n \n static void dpdk_rx_steer_unconfigure(struct netdev_dpdk *);\n@@ -1740,10 +1609,10 @@ netdev_dpdk_destruct(struct netdev *netdev)\n     /* Destroy any rx-steering flows to allow RXQs to be removed. */\n     dpdk_rx_steer_unconfigure(dev);\n \n-    rte_eth_dev_stop(dev->port_id);\n-    dev->started = false;\n+    rte_eth_dev_stop(dev->common.port_id);\n+    dev->common.started = false;\n \n-    if (dev->attached) {\n+    if (dev->common.attached) {\n         bool dpdk_resources_still_used = false;\n         struct rte_eth_dev_info dev_info;\n         dpdk_port_t sibling_port_id;\n@@ -1751,16 +1620,16 @@ netdev_dpdk_destruct(struct netdev *netdev)\n \n         /* Check if this netdev has siblings (i.e. shares DPDK resources) among\n          * other OVS netdevs. */\n-        RTE_ETH_FOREACH_DEV_SIBLING (sibling_port_id, dev->port_id) {\n+        RTE_ETH_FOREACH_DEV_SIBLING (sibling_port_id, dev->common.port_id) {\n             struct netdev_dpdk *sibling;\n \n-            /* RTE_ETH_FOREACH_DEV_SIBLING lists dev->port_id as part of the\n-             * loop. */\n-            if (sibling_port_id == dev->port_id) {\n+            /* RTE_ETH_FOREACH_DEV_SIBLING lists dev->common.port_id\n+             * as part of the loop. */\n+            if (sibling_port_id == dev->common.port_id) {\n                 continue;\n             }\n-            LIST_FOR_EACH (sibling, list_node, &dpdk_list) {\n-                if (sibling->port_id != sibling_port_id) {\n+            LIST_FOR_EACH (sibling, common.list_node, &dpdk_list) {\n+                if (sibling->common.port_id != sibling_port_id) {\n                     continue;\n                 }\n                 dpdk_resources_still_used = true;\n@@ -1772,10 +1641,10 @@ netdev_dpdk_destruct(struct netdev *netdev)\n         }\n \n         /* Retrieve eth device data before closing it. 
*/\n-        diag = rte_eth_dev_info_get(dev->port_id, &dev_info);\n+        diag = rte_eth_dev_info_get(dev->common.port_id, &dev_info);\n \n         /* Remove the eth device. */\n-        rte_eth_dev_close(dev->port_id);\n+        rte_eth_dev_close(dev->common.port_id);\n \n         /* Remove the rte device if no associated eth device is used by OVS.\n          * Note: any remaining eth devices associated to this rte device are\n@@ -1787,33 +1656,33 @@ netdev_dpdk_destruct(struct netdev *netdev)\n \n             if (diag < 0) {\n                 VLOG_ERR(\"Device '%s' can not be detached: %s.\",\n-                         dev->devargs, rte_strerror(-diag));\n+                         dev->common.devargs, rte_strerror(-diag));\n             } else {\n                 /* Device was closed and detached. */\n                 VLOG_INFO(\"Device '%s' has been removed and detached\",\n-                    dev->devargs);\n+                    dev->common.devargs);\n             }\n         } else {\n             /* Device was only closed. rte_dev_remove() was not called. */\n-            VLOG_INFO(\"Device '%s' has been removed\", dev->devargs);\n+            VLOG_INFO(\"Device '%s' has been removed\", dev->common.devargs);\n         }\n     }\n \n     netdev_dpdk_clear_xstats(dev);\n-    free(dev->devargs);\n+    free(dev->common.devargs);\n     common_destruct(dev);\n \n     ovs_mutex_unlock(&dpdk_mutex);\n }\n \n /* rte_vhost_driver_unregister() can call back destroy_device(), which will\n- * try to acquire 'dpdk_mutex' and possibly 'dev->mutex'.  To avoid a\n+ * try to acquire 'dpdk_mutex' and possibly 'dev->common.mutex'.  To avoid a\n  * deadlock, none of the mutexes must be held while calling this function. 
*/\n static int\n dpdk_vhost_driver_unregister(struct netdev_dpdk *dev OVS_UNUSED,\n                              char *vhost_id)\n     OVS_EXCLUDED(dpdk_mutex)\n-    OVS_EXCLUDED(dev->mutex)\n+    OVS_EXCLUDED(dev->common.mutex)\n {\n     return rte_vhost_driver_unregister(vhost_id);\n }\n@@ -1870,24 +1739,24 @@ netdev_dpdk_dealloc(struct netdev *netdev)\n \n static void\n netdev_dpdk_clear_xstats(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    free(dev->rte_xstats_names);\n-    dev->rte_xstats_names = NULL;\n-    dev->rte_xstats_names_size = 0;\n-    free(dev->rte_xstats_ids);\n-    dev->rte_xstats_ids = NULL;\n-    dev->rte_xstats_ids_size = 0;\n+    free(dev->common.rte_xstats_names);\n+    dev->common.rte_xstats_names = NULL;\n+    dev->common.rte_xstats_names_size = 0;\n+    free(dev->common.rte_xstats_ids);\n+    dev->common.rte_xstats_ids = NULL;\n+    dev->common.rte_xstats_ids_size = 0;\n }\n \n static const char *\n netdev_dpdk_get_xstat_name(struct netdev_dpdk *dev, uint64_t id)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    if (id >= dev->rte_xstats_names_size) {\n+    if (id >= dev->common.rte_xstats_names_size) {\n         return \"UNKNOWN\";\n     }\n-    return dev->rte_xstats_names[id].name;\n+    return dev->common.rte_xstats_names[id].name;\n }\n \n static bool\n@@ -1902,7 +1771,7 @@ is_queue_stat(const char *s)\n \n static void\n netdev_dpdk_configure_xstats(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     struct rte_eth_xstat_name *rte_xstats_names = NULL;\n     struct rte_eth_xstat *rte_xstats = NULL;\n@@ -1913,39 +1782,40 @@ netdev_dpdk_configure_xstats(struct netdev_dpdk *dev)\n \n     netdev_dpdk_clear_xstats(dev);\n \n-    rte_xstats_names_size = rte_eth_xstats_get_names(dev->port_id, NULL, 0);\n+    rte_xstats_names_size =\n+        rte_eth_xstats_get_names(dev->common.port_id, NULL, 0);\n     if 
(rte_xstats_names_size < 0) {\n         VLOG_WARN(\"Cannot get XSTATS names for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n         goto out;\n     }\n \n     rte_xstats_names = xcalloc(rte_xstats_names_size,\n                                sizeof *rte_xstats_names);\n-    rte_xstats_len = rte_eth_xstats_get_names(dev->port_id,\n+    rte_xstats_len = rte_eth_xstats_get_names(dev->common.port_id,\n                                               rte_xstats_names,\n                                               rte_xstats_names_size);\n     if (rte_xstats_len < 0 || rte_xstats_len != rte_xstats_names_size) {\n         VLOG_WARN(\"Cannot get XSTATS names for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n         goto out;\n     }\n \n     rte_xstats = xcalloc(rte_xstats_names_size, sizeof *rte_xstats);\n-    rte_xstats_len = rte_eth_xstats_get(dev->port_id, rte_xstats,\n+    rte_xstats_len = rte_eth_xstats_get(dev->common.port_id, rte_xstats,\n                                         rte_xstats_names_size);\n     if (rte_xstats_len < 0 || rte_xstats_len != rte_xstats_names_size) {\n         VLOG_WARN(\"Cannot get XSTATS for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n         goto out;\n     }\n \n-    dev->rte_xstats_names = rte_xstats_names;\n+    dev->common.rte_xstats_names = rte_xstats_names;\n     rte_xstats_names = NULL;\n-    dev->rte_xstats_names_size = rte_xstats_names_size;\n+    dev->common.rte_xstats_names_size = rte_xstats_names_size;\n \n-    dev->rte_xstats_ids = xcalloc(rte_xstats_names_size,\n-                                  sizeof *dev->rte_xstats_ids);\n+    dev->common.rte_xstats_ids = xcalloc(rte_xstats_names_size,\n+                                  sizeof *dev->common.rte_xstats_ids);\n     for (unsigned int i = 0; i < rte_xstats_names_size; i++) {\n         id = 
rte_xstats[i].id;\n         name = netdev_dpdk_get_xstat_name(dev, id);\n@@ -1957,8 +1827,8 @@ netdev_dpdk_configure_xstats(struct netdev_dpdk *dev)\n             strstr(name, \"_management_\") ||\n             string_ends_with(name, \"_dropped\")) {\n \n-            dev->rte_xstats_ids[dev->rte_xstats_ids_size] = id;\n-            dev->rte_xstats_ids_size++;\n+            dev->common.rte_xstats_ids[dev->common.rte_xstats_ids_size] = id;\n+            dev->common.rte_xstats_ids_size++;\n         }\n     }\n \n@@ -1972,44 +1842,44 @@ netdev_dpdk_get_config(const struct netdev *netdev, struct smap *args)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n-    if (dev->devargs && dev->devargs[0]) {\n-        smap_add_format(args, \"dpdk-devargs\", \"%s\", dev->devargs);\n+    if (dev->common.devargs && dev->common.devargs[0]) {\n+        smap_add_format(args, \"dpdk-devargs\", \"%s\", dev->common.devargs);\n     }\n \n-    smap_add_format(args, \"n_rxq\", \"%d\", dev->user_n_rxq);\n+    smap_add_format(args, \"n_rxq\", \"%d\", dev->common.user_n_rxq);\n \n-    if (dev->fc_conf.mode == RTE_ETH_FC_TX_PAUSE ||\n-        dev->fc_conf.mode == RTE_ETH_FC_FULL) {\n+    if (dev->common.fc_conf.mode == RTE_ETH_FC_TX_PAUSE ||\n+        dev->common.fc_conf.mode == RTE_ETH_FC_FULL) {\n         smap_add(args, \"rx-flow-ctrl\", \"true\");\n     }\n \n-    if (dev->fc_conf.mode == RTE_ETH_FC_RX_PAUSE ||\n-        dev->fc_conf.mode == RTE_ETH_FC_FULL) {\n+    if (dev->common.fc_conf.mode == RTE_ETH_FC_RX_PAUSE ||\n+        dev->common.fc_conf.mode == RTE_ETH_FC_FULL) {\n         smap_add(args, \"tx-flow-ctrl\", \"true\");\n     }\n \n-    if (dev->fc_conf.autoneg) {\n+    if (dev->common.fc_conf.autoneg) {\n         smap_add(args, \"flow-ctrl-autoneg\", \"true\");\n     }\n \n-    smap_add_format(args, \"n_rxq_desc\", \"%d\", dev->rxq_size);\n-    smap_add_format(args, \"n_txq_desc\", 
\"%d\", dev->txq_size);\n+    smap_add_format(args, \"n_rxq_desc\", \"%d\", dev->common.rxq_size);\n+    smap_add_format(args, \"n_txq_desc\", \"%d\", dev->common.txq_size);\n \n     if (dev->rx_steer_flags == DPDK_RX_STEER_LACP) {\n         smap_add(args, \"rx-steering\", \"rss+lacp\");\n     }\n \n     smap_add(args, \"dpdk-lsc-interrupt\",\n-             dev->lsc_interrupt_mode ? \"true\" : \"false\");\n+             dev->common.lsc_interrupt_mode ? \"true\" : \"false\");\n \n-    if (dev->is_representor) {\n+    if (dev->common.is_representor) {\n         smap_add_format(args, \"dpdk-vf-mac\", ETH_ADDR_FMT,\n-                        ETH_ADDR_ARGS(dev->requested_hwaddr));\n+                        ETH_ADDR_ARGS(dev->common.requested_hwaddr));\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -2020,8 +1890,8 @@ netdev_dpdk_lookup_by_port_id(dpdk_port_t port_id)\n {\n     struct netdev_dpdk *dev;\n \n-    LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-        if (dev->port_id == port_id) {\n+    LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+        if (dev->common.port_id == port_id) {\n             return dev;\n         }\n     }\n@@ -2099,7 +1969,7 @@ netdev_dpdk_process_devargs(struct netdev_dpdk *dev,\n                 new_port_id = netdev_dpdk_get_port_by_devargs(devargs);\n                 if (rte_eth_dev_is_valid_port(new_port_id)) {\n                     /* Attach successful */\n-                    dev->attached = true;\n+                    dev->common.attached = true;\n                     VLOG_INFO(\"Device '%s' attached to DPDK\", devargs);\n                 } else {\n                     /* Attach unsuccessful */\n@@ -2155,11 +2025,11 @@ netdev_dpdk_run(const struct netdev_class *netdev_class OVS_UNUSED)\n             ovs_mutex_lock(&dpdk_mutex);\n             dev = netdev_dpdk_lookup_by_port_id(port_id);\n             if (dev) {\n-                
ovs_mutex_lock(&dev->mutex);\n-                netdev_request_reconfigure(&dev->up);\n+                ovs_mutex_lock(&dev->common.mutex);\n+                netdev_request_reconfigure(&dev->common.up);\n                 VLOG_DBG_RL(&rl, \"%s: Device reset requested.\",\n-                            netdev_get_name(&dev->up));\n-                ovs_mutex_unlock(&dev->mutex);\n+                            netdev_get_name(&dev->common.up));\n+                ovs_mutex_unlock(&dev->common.mutex);\n             }\n             ovs_mutex_unlock(&dpdk_mutex);\n         }\n@@ -2185,14 +2055,14 @@ dpdk_eth_event_callback(dpdk_port_t port_id, enum rte_eth_event_type type,\n \n static void\n dpdk_set_rxq_config(struct netdev_dpdk *dev, const struct smap *args)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     int new_n_rxq;\n \n     new_n_rxq = MAX(smap_get_int(args, \"n_rxq\", NR_QUEUE), 1);\n-    if (new_n_rxq != dev->user_n_rxq) {\n-        dev->user_n_rxq = new_n_rxq;\n-        netdev_request_reconfigure(&dev->up);\n+    if (new_n_rxq != dev->common.user_n_rxq) {\n+        dev->common.user_n_rxq = new_n_rxq;\n+        netdev_request_reconfigure(&dev->common.up);\n     }\n }\n \n@@ -2209,14 +2079,14 @@ dpdk_process_queue_size(struct netdev *netdev, const struct smap *args,\n     if (is_rx) {\n         default_size = NIC_PORT_DEFAULT_RXQ_SIZE;\n         new_requested_size = smap_get_int(args, \"n_rxq_desc\", default_size);\n-        cur_size = dev->rxq_size;\n-        cur_requested_size = &dev->requested_rxq_size;\n+        cur_size = dev->common.rxq_size;\n+        cur_requested_size = &dev->common.requested_rxq_size;\n         lim = info ? 
&info->rx_desc_lim : NULL;\n     } else {\n         default_size = NIC_PORT_DEFAULT_TXQ_SIZE;\n         new_requested_size = smap_get_int(args, \"n_txq_desc\", default_size);\n-        cur_size = dev->txq_size;\n-        cur_requested_size = &dev->requested_txq_size;\n+        cur_size = dev->common.txq_size;\n+        cur_requested_size = &dev->common.requested_txq_size;\n         lim = info ? &info->tx_desc_lim : NULL;\n     }\n \n@@ -2302,7 +2172,7 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n     int err = 0;\n \n     ovs_mutex_lock(&dpdk_mutex);\n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     dpdk_set_rx_steer_config(netdev, dev, args, errp);\n \n@@ -2310,7 +2180,8 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n \n     new_devargs = smap_get(args, \"dpdk-devargs\");\n \n-    if (dev->devargs && new_devargs && strcmp(new_devargs, dev->devargs)) {\n+    if (dev->common.devargs && new_devargs &&\n+        strcmp(new_devargs, dev->common.devargs)) {\n         /* The user requested a new device.  If we return error, the caller\n          * will delete this netdev and try to recreate it. 
*/\n         err = EAGAIN;\n@@ -2321,14 +2192,14 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n     if (new_devargs && new_devargs[0]) {\n         /* Don't process dpdk-devargs if value is unchanged and port id\n          * is valid */\n-        if (!(dev->devargs && !strcmp(dev->devargs, new_devargs)\n-               && rte_eth_dev_is_valid_port(dev->port_id))) {\n+        if (!(dev->common.devargs && !strcmp(dev->common.devargs, new_devargs)\n+               && rte_eth_dev_is_valid_port(dev->common.port_id))) {\n             dpdk_port_t new_port_id = netdev_dpdk_process_devargs(dev,\n                                                                   new_devargs,\n                                                                   errp);\n             if (!rte_eth_dev_is_valid_port(new_port_id)) {\n                 err = EINVAL;\n-            } else if (new_port_id == dev->port_id) {\n+            } else if (new_port_id == dev->common.port_id) {\n                 /* Already configured, do not reconfigure again */\n                 err = 0;\n             } else {\n@@ -2339,15 +2210,15 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n                     VLOG_WARN_BUF(errp, \"'%s' is trying to use device '%s' \"\n                                   \"which is already in use by '%s'\",\n                                   netdev_get_name(netdev), new_devargs,\n-                                  netdev_get_name(&dup_dev->up));\n+                                  netdev_get_name(&dup_dev->common.up));\n                     err = EADDRINUSE;\n                 } else {\n                     int sid = rte_eth_dev_socket_id(new_port_id);\n \n-                    dev->requested_socket_id = sid < 0 ? 
SOCKET0 : sid;\n-                    dev->devargs = xstrdup(new_devargs);\n-                    dev->port_id = new_port_id;\n-                    netdev_request_reconfigure(&dev->up);\n+                    dev->common.requested_socket_id = sid < 0 ? SOCKET0 : sid;\n+                    dev->common.devargs = xstrdup(new_devargs);\n+                    dev->common.port_id = new_port_id;\n+                    netdev_request_reconfigure(&dev->common.up);\n                     err = 0;\n                 }\n             }\n@@ -2363,7 +2234,7 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n         goto out;\n     }\n \n-    err = -rte_eth_dev_info_get(dev->port_id, &info);\n+    err = -rte_eth_dev_info_get(dev->common.port_id, &info);\n     if (err) {\n         VLOG_WARN_BUF(errp, \"%s: Failed to get device info: %s\" ,\n                       netdev_get_name(netdev), rte_strerror(err));\n@@ -2377,7 +2248,7 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n     if (vf_mac) {\n         struct eth_addr mac;\n \n-        if (!dev->is_representor) {\n+        if (!dev->common.is_representor) {\n             VLOG_WARN(\"'%s' is trying to set the VF MAC '%s' \"\n                       \"but 'options:dpdk-vf-mac' is only supported for \"\n                       \"VF representors.\",\n@@ -2388,8 +2259,8 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n         } else if (eth_addr_is_multicast(mac)) {\n             VLOG_WARN(\"interface '%s': cannot set VF MAC to multicast \"\n                       \"address '%s'.\", netdev_get_name(netdev), vf_mac);\n-        } else if (!eth_addr_equals(dev->requested_hwaddr, mac)) {\n-            dev->requested_hwaddr = mac;\n+        } else if (!eth_addr_equals(dev->common.requested_hwaddr, mac)) {\n+            dev->common.requested_hwaddr = mac;\n             netdev_request_reconfigure(netdev);\n         }\n     }\n@@ -2406,8 +2277,8 @@ 
netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n                  netdev_get_name(netdev));\n         lsc_interrupt_mode = false;\n     }\n-    if (dev->requested_lsc_interrupt_mode != lsc_interrupt_mode) {\n-        dev->requested_lsc_interrupt_mode = lsc_interrupt_mode;\n+    if (dev->common.requested_lsc_interrupt_mode != lsc_interrupt_mode) {\n+        dev->common.requested_lsc_interrupt_mode = lsc_interrupt_mode;\n         netdev_request_reconfigure(netdev);\n     }\n \n@@ -2426,7 +2297,8 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n     }\n \n     /* Get the Flow control configuration. */\n-    err = -rte_eth_dev_flow_ctrl_get(dev->port_id, &dev->fc_conf);\n+    err = -rte_eth_dev_flow_ctrl_get(dev->common.port_id,\n+                                     &dev->common.fc_conf);\n     if (err) {\n         if (err == ENOTSUP) {\n             if (flow_control_requested) {\n@@ -2441,14 +2313,15 @@ netdev_dpdk_set_config(struct netdev *netdev, const struct smap *args,\n         goto out;\n     }\n \n-    if (dev->fc_conf.mode != fc_mode || autoneg != dev->fc_conf.autoneg) {\n-        dev->fc_conf.mode = fc_mode;\n-        dev->fc_conf.autoneg = autoneg;\n+    if (dev->common.fc_conf.mode != fc_mode ||\n+        autoneg != dev->common.fc_conf.autoneg) {\n+        dev->common.fc_conf.mode = fc_mode;\n+        dev->common.fc_conf.autoneg = autoneg;\n         dpdk_eth_flow_ctrl_setup(dev);\n     }\n \n out:\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     ovs_mutex_unlock(&dpdk_mutex);\n \n     return err;\n@@ -2461,7 +2334,7 @@ netdev_dpdk_vhost_client_get_config(const struct netdev *netdev,\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int tx_retries_max;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     if (dev->vhost_id) {\n         smap_add(args, \"vhost-server-path\", dev->vhost_id);\n@@ -2472,7 +2345,7 @@ 
netdev_dpdk_vhost_client_get_config(const struct netdev *netdev,\n         smap_add_format(args, \"tx-retries-max\", \"%d\", tx_retries_max);\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -2487,7 +2360,7 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,\n     int max_tx_retries, cur_max_tx_retries;\n     uint32_t max_queue_pairs;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     if (!(dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT)) {\n         path = smap_get(args, \"vhost-server-path\");\n         if (!nullable_string_is_equal(path, dev->vhost_id)) {\n@@ -2518,7 +2391,7 @@ netdev_dpdk_vhost_client_set_config(struct netdev *netdev,\n         VLOG_INFO(\"Max Tx retries for vhost device '%s' set to %d\",\n                   netdev_get_name(netdev), max_tx_retries);\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -2528,7 +2401,7 @@ netdev_dpdk_get_numa_id(const struct netdev *netdev)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    return dev->socket_id;\n+    return dev->common.socket_id;\n }\n \n /* Sets the number of tx queues for the dpdk interface. 
*/\n@@ -2537,17 +2410,17 @@ netdev_dpdk_set_tx_multiq(struct netdev *netdev, unsigned int n_txq)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n-    if (dev->requested_n_txq == n_txq) {\n+    if (dev->common.requested_n_txq == n_txq) {\n         goto out;\n     }\n \n-    dev->requested_n_txq = n_txq;\n+    dev->common.requested_n_txq = n_txq;\n     netdev_request_reconfigure(netdev);\n \n out:\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return 0;\n }\n \n@@ -2575,9 +2448,9 @@ netdev_dpdk_rxq_construct(struct netdev_rxq *rxq)\n     struct netdev_rxq_dpdk *rx = netdev_rxq_dpdk_cast(rxq);\n     struct netdev_dpdk *dev = netdev_dpdk_cast(rxq->netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n-    rx->port_id = dev->port_id;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    rx->port_id = dev->common.port_id;\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -2635,8 +2508,8 @@ netdev_dpdk_prep_hwol_packet(struct netdev_dpdk *dev, struct rte_mbuf *mbuf)\n \n     if (OVS_UNLIKELY(unexpected)) {\n         VLOG_WARN_RL(&rl, \"%s: Unexpected Tx offload flags: %#\"PRIx64,\n-                     netdev_get_name(&dev->up), unexpected);\n-        netdev_dpdk_mbuf_dump(netdev_get_name(&dev->up),\n+                     netdev_get_name(&dev->common.up), unexpected);\n+        netdev_dpdk_mbuf_dump(netdev_get_name(&dev->common.up),\n                               \"Packet with unexpected ol_flags\", mbuf);\n         return false;\n     }\n@@ -2737,11 +2610,12 @@ netdev_dpdk_prep_hwol_packet(struct netdev_dpdk *dev, struct rte_mbuf *mbuf)\n             hdr_len += mbuf->outer_l2_len + mbuf->outer_l3_len;\n         }\n \n-        if (OVS_UNLIKELY((hdr_len + mbuf->tso_segsz) > dev->max_packet_len)) {\n+        if (OVS_UNLIKELY((hdr_len + mbuf->tso_segsz) >\n+                         
dev->common.max_packet_len)) {\n             VLOG_WARN_RL(&rl, \"%s: Oversized TSO packet. hdr: %\"PRIu32\", \"\n                          \"gso: %\"PRIu32\", max len: %\"PRIu32\"\",\n-                         dev->up.name, hdr_len, mbuf->tso_segsz,\n-                         dev->max_packet_len);\n+                         dev->common.up.name, hdr_len, mbuf->tso_segsz,\n+                         dev->common.max_packet_len);\n             return false;\n         }\n         mbuf->ol_flags |= RTE_MBUF_F_TX_TCP_SEG;\n@@ -2822,19 +2696,20 @@ netdev_dpdk_eth_tx_burst(struct netdev_dpdk *dev, int qid,\n     uint32_t nb_tx = 0;\n     uint16_t nb_tx_prep = cnt;\n \n-    nb_tx_prep = rte_eth_tx_prepare(dev->port_id, qid, pkts, cnt);\n+    nb_tx_prep = rte_eth_tx_prepare(dev->common.port_id, qid, pkts, cnt);\n     if (nb_tx_prep != cnt) {\n         VLOG_WARN_RL(&rl, \"%s: Output batch contains invalid packets. \"\n-                     \"Only %u/%u are valid: %s\", netdev_get_name(&dev->up),\n+                     \"Only %u/%u are valid: %s\",\n+                     netdev_get_name(&dev->common.up),\n                      nb_tx_prep, cnt, rte_strerror(rte_errno));\n-        netdev_dpdk_mbuf_dump(netdev_get_name(&dev->up),\n+        netdev_dpdk_mbuf_dump(netdev_get_name(&dev->common.up),\n                               \"First invalid packet\", pkts[nb_tx_prep]);\n     }\n \n     while (nb_tx != nb_tx_prep) {\n         uint32_t ret;\n \n-        ret = rte_eth_tx_burst(dev->port_id, qid, pkts + nb_tx,\n+        ret = rte_eth_tx_burst(dev->common.port_id, qid, pkts + nb_tx,\n                                nb_tx_prep - nb_tx);\n         if (!ret) {\n             break;\n@@ -2928,11 +2803,11 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,\n     int vid = netdev_dpdk_get_vid(dev);\n \n     if (OVS_UNLIKELY(vid < 0 || !dev->vhost_reconfigured\n-                     || !(dev->flags & NETDEV_UP))) {\n+                     || !(dev->common.flags & NETDEV_UP))) {\n         
return EAGAIN;\n     }\n \n-    nb_rx = rte_vhost_dequeue_burst(vid, qid, dev->dpdk_mp->mp,\n+    nb_rx = rte_vhost_dequeue_burst(vid, qid, dev->common.dpdk_mp->mp,\n                                     (struct rte_mbuf **) batch->packets,\n                                     NETDEV_MAX_BURST);\n     if (!nb_rx) {\n@@ -2958,10 +2833,10 @@ netdev_dpdk_vhost_rxq_recv(struct netdev_rxq *rxq,\n     }\n \n     if (OVS_UNLIKELY(qos_drops)) {\n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.rx_dropped += qos_drops;\n-        dev->sw_stats->rx_qos_drops += qos_drops;\n-        rte_spinlock_unlock(&dev->stats_lock);\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.rx_dropped += qos_drops;\n+        dev->common.sw_stats->rx_qos_drops += qos_drops;\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n     }\n \n     batch->count = nb_rx;\n@@ -2988,7 +2863,7 @@ netdev_dpdk_rxq_recv(struct netdev_rxq *rxq, struct dp_packet_batch *batch,\n     int nb_rx;\n     int dropped = 0;\n \n-    if (OVS_UNLIKELY(!(dev->flags & NETDEV_UP))) {\n+    if (OVS_UNLIKELY(!(dev->common.flags & NETDEV_UP))) {\n         return EAGAIN;\n     }\n \n@@ -3017,10 +2892,10 @@ netdev_dpdk_rxq_recv(struct netdev_rxq *rxq, struct dp_packet_batch *batch,\n \n     /* Update stats to reflect dropped packets */\n     if (OVS_UNLIKELY(dropped)) {\n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.rx_dropped += dropped;\n-        dev->sw_stats->rx_qos_drops += dropped;\n-        rte_spinlock_unlock(&dev->stats_lock);\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.rx_dropped += dropped;\n+        dev->common.sw_stats->rx_qos_drops += dropped;\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n     }\n \n     batch->count = nb_rx;\n@@ -3056,11 +2931,12 @@ netdev_dpdk_filter_packet_len(struct netdev_dpdk *dev, struct rte_mbuf **pkts,\n      * during the offloading preparation for performance reasons. 
*/\n     for (i = 0; i < pkt_cnt; i++) {\n         pkt = pkts[i];\n-        if (OVS_UNLIKELY((pkt->pkt_len > dev->max_packet_len)\n+        if (OVS_UNLIKELY((pkt->pkt_len > dev->common.max_packet_len)\n             && !pkt->tso_segsz)) {\n             VLOG_WARN_RL(&rl, \"%s: Too big size %\" PRIu32 \" \"\n-                         \"max_packet_len %d\", dev->up.name, pkt->pkt_len,\n-                         dev->max_packet_len);\n+                         \"max_packet_len %d\",\n+                         dev->common.up.name, pkt->pkt_len,\n+                         dev->common.max_packet_len);\n             rte_pktmbuf_free(pkt);\n             continue;\n         }\n@@ -3241,7 +3117,8 @@ dpdk_copy_batch_to_mbuf(struct netdev *netdev, struct dp_packet_batch *batch)\n         } else {\n             struct dp_packet *pktcopy;\n \n-            pktcopy = dpdk_copy_dp_packet_to_mbuf(dev->dpdk_mp->mp, packet);\n+            pktcopy = dpdk_copy_dp_packet_to_mbuf(\n+                dev->common.dpdk_mp->mp, packet);\n             if (pktcopy) {\n                 dp_packet_batch_refill(batch, pktcopy, i);\n             }\n@@ -3313,19 +3190,19 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid,\n     int retries;\n \n     batch_cnt = cnt = dp_packet_batch_size(batch);\n-    qid = dev->tx_q[qid % netdev->n_txq].map;\n+    qid = dev->common.tx_q[qid % netdev->n_txq].map;\n     if (OVS_UNLIKELY(vid < 0 || !dev->vhost_reconfigured || qid < 0\n-                     || !(dev->flags & NETDEV_UP))) {\n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.tx_dropped += cnt;\n-        rte_spinlock_unlock(&dev->stats_lock);\n+                     || !(dev->common.flags & NETDEV_UP))) {\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.tx_dropped += cnt;\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n         dp_packet_delete_batch(batch, true);\n         return 0;\n     }\n \n-    if 
(OVS_UNLIKELY(!rte_spinlock_trylock(&dev->tx_q[qid].tx_lock))) {\n+    if (OVS_UNLIKELY(!rte_spinlock_trylock(&dev->common.tx_q[qid].tx_lock))) {\n         COVERAGE_INC(vhost_tx_contention);\n-        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);\n+        rte_spinlock_lock(&dev->common.tx_q[qid].tx_lock);\n     }\n \n     cnt = netdev_dpdk_common_send(netdev, batch, &stats);\n@@ -3357,23 +3234,23 @@ netdev_dpdk_vhost_send(struct netdev *netdev, int qid,\n         }\n     } while (cnt && (retries++ < max_retries));\n \n-    rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);\n+    rte_spinlock_unlock(&dev->common.tx_q[qid].tx_lock);\n \n     stats.tx_failure_drops += cnt;\n     dropped += cnt;\n     stats.tx_retries = MIN(retries, max_retries);\n \n     if (OVS_UNLIKELY(dropped || stats.tx_retries)) {\n-        struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;\n+        struct netdev_dpdk_sw_stats *sw_stats = dev->common.sw_stats;\n \n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.tx_dropped += dropped;\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.tx_dropped += dropped;\n         sw_stats->tx_retries += stats.tx_retries;\n         sw_stats->tx_failure_drops += stats.tx_failure_drops;\n         sw_stats->tx_mtu_exceeded_drops += stats.tx_mtu_exceeded_drops;\n         sw_stats->tx_qos_drops += stats.tx_qos_drops;\n         sw_stats->tx_invalid_hwol_drops += stats.tx_invalid_hwol_drops;\n-        rte_spinlock_unlock(&dev->stats_lock);\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n     }\n \n     pkts = (struct rte_mbuf **) batch->packets;\n@@ -3392,17 +3269,17 @@ netdev_dpdk_eth_send(struct netdev *netdev, int qid,\n     struct netdev_dpdk_sw_stats stats;\n     int cnt, dropped;\n \n-    if (OVS_UNLIKELY(!(dev->flags & NETDEV_UP))) {\n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.tx_dropped += dp_packet_batch_size(batch);\n-        rte_spinlock_unlock(&dev->stats_lock);\n+    if 
(OVS_UNLIKELY(!(dev->common.flags & NETDEV_UP))) {\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.tx_dropped += dp_packet_batch_size(batch);\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n         dp_packet_delete_batch(batch, true);\n         return 0;\n     }\n \n     if (OVS_UNLIKELY(concurrent_txq)) {\n-        qid = qid % dev->up.n_txq;\n-        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);\n+        qid = qid % dev->common.up.n_txq;\n+        rte_spinlock_lock(&dev->common.tx_q[qid].tx_lock);\n     }\n \n     cnt = netdev_dpdk_common_send(netdev, batch, &stats);\n@@ -3411,19 +3288,19 @@ netdev_dpdk_eth_send(struct netdev *netdev, int qid,\n     stats.tx_failure_drops += dropped;\n     dropped += batch_cnt - cnt;\n     if (OVS_UNLIKELY(dropped)) {\n-        struct netdev_dpdk_sw_stats *sw_stats = dev->sw_stats;\n+        struct netdev_dpdk_sw_stats *sw_stats = dev->common.sw_stats;\n \n-        rte_spinlock_lock(&dev->stats_lock);\n-        dev->stats.tx_dropped += dropped;\n+        rte_spinlock_lock(&dev->common.stats_lock);\n+        dev->common.stats.tx_dropped += dropped;\n         sw_stats->tx_failure_drops += stats.tx_failure_drops;\n         sw_stats->tx_mtu_exceeded_drops += stats.tx_mtu_exceeded_drops;\n         sw_stats->tx_qos_drops += stats.tx_qos_drops;\n         sw_stats->tx_invalid_hwol_drops += stats.tx_invalid_hwol_drops;\n-        rte_spinlock_unlock(&dev->stats_lock);\n+        rte_spinlock_unlock(&dev->common.stats_lock);\n     }\n \n     if (OVS_UNLIKELY(concurrent_txq)) {\n-        rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);\n+        rte_spinlock_unlock(&dev->common.tx_q[qid].tx_lock);\n     }\n \n     return 0;\n@@ -3431,7 +3308,7 @@ netdev_dpdk_eth_send(struct netdev *netdev, int qid,\n \n static int\n netdev_dpdk_set_etheraddr__(struct netdev_dpdk *dev, const struct eth_addr mac)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     int err = 0;\n \n@@ -3439,13 
+3316,13 @@ netdev_dpdk_set_etheraddr__(struct netdev_dpdk *dev, const struct eth_addr mac)\n         struct rte_ether_addr ea;\n \n         memcpy(ea.addr_bytes, mac.ea, ETH_ADDR_LEN);\n-        err = -rte_eth_dev_default_mac_addr_set(dev->port_id, &ea);\n+        err = -rte_eth_dev_default_mac_addr_set(dev->common.port_id, &ea);\n     }\n     if (!err) {\n-        dev->hwaddr = mac;\n+        dev->common.hwaddr = mac;\n     } else {\n         VLOG_WARN(\"%s: Failed to set requested mac(\"ETH_ADDR_FMT\"): %s\",\n-                  netdev_get_name(&dev->up), ETH_ADDR_ARGS(mac),\n+                  netdev_get_name(&dev->common.up), ETH_ADDR_ARGS(mac),\n                   rte_strerror(err));\n     }\n \n@@ -3458,14 +3335,14 @@ netdev_dpdk_set_etheraddr(struct netdev *netdev, const struct eth_addr mac)\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int err = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n-    if (!eth_addr_equals(dev->hwaddr, mac)) {\n+    ovs_mutex_lock(&dev->common.mutex);\n+    if (!eth_addr_equals(dev->common.hwaddr, mac)) {\n         err = netdev_dpdk_set_etheraddr__(dev, mac);\n         if (!err) {\n             netdev_change_seq_changed(netdev);\n         }\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return err;\n }\n@@ -3475,9 +3352,9 @@ netdev_dpdk_get_etheraddr(const struct netdev *netdev, struct eth_addr *mac)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n-    *mac = dev->hwaddr;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    *mac = dev->common.hwaddr;\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -3487,9 +3364,9 @@ netdev_dpdk_get_mtu(const struct netdev *netdev, int *mtup)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n-    *mtup = dev->mtu;\n-    ovs_mutex_unlock(&dev->mutex);\n+    
ovs_mutex_lock(&dev->common.mutex);\n+    *mtup = dev->common.mtu;\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -3512,16 +3389,16 @@ netdev_dpdk_set_mtu(struct netdev *netdev, int mtu)\n      */\n     if (MTU_TO_MAX_FRAME_LEN(mtu) > NETDEV_DPDK_MAX_PKT_LEN\n         || mtu < RTE_ETHER_MIN_MTU) {\n-        VLOG_WARN(\"%s: unsupported MTU %d\\n\", dev->up.name, mtu);\n+        VLOG_WARN(\"%s: unsupported MTU %d\\n\", dev->common.up.name, mtu);\n         return EINVAL;\n     }\n \n-    ovs_mutex_lock(&dev->mutex);\n-    if (dev->requested_mtu != mtu) {\n-        dev->requested_mtu = mtu;\n+    ovs_mutex_lock(&dev->common.mutex);\n+    if (dev->common.requested_mtu != mtu) {\n+        dev->common.requested_mtu = mtu;\n         netdev_request_reconfigure(netdev);\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -3538,7 +3415,7 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n     int qid;\n     int vid;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     if (!is_vhost_running(dev)) {\n         err = EPROTO;\n@@ -3580,11 +3457,11 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n     VHOST_RXQ_STAT(rx_1024_to_1522_packets, \"size_1024_1518_packets\") \\\n     VHOST_RXQ_STAT(rx_1523_to_max_packets,  \"size_1519_max_packets\")\n \n-#define VHOST_RXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0;\n+#define VHOST_RXQ_STAT(MEMBER, NAME) dev->common.stats.MEMBER = 0;\n     VHOST_RXQ_STATS;\n #undef VHOST_RXQ_STAT\n \n-    for (int q = 0; q < dev->up.n_rxq; q++) {\n+    for (int q = 0; q < dev->common.up.n_rxq; q++) {\n         qid = q * VIRTIO_QNUM + VIRTIO_TXQ;\n \n         err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,\n@@ -3597,7 +3474,7 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n         for (int i = 0; i < vhost_stats_count; i++) {\n #define VHOST_RXQ_STAT(MEMBER, NAME)                                 
\\\n             if (string_ends_with(vhost_stats_names[i].name, NAME)) { \\\n-                dev->stats.MEMBER += vhost_stats[i].value;           \\\n+                dev->common.stats.MEMBER += vhost_stats[i].value;    \\\n                 continue;                                            \\\n             }\n             VHOST_RXQ_STATS;\n@@ -3609,11 +3486,12 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n      * Since vhost only reports good packets and has no error counter,\n      * rx_undersized_errors is highjacked (see above) to retrieve\n      * \"undersize_packets\". */\n-    dev->stats.rx_1_to_64_packets += dev->stats.rx_undersized_errors;\n-    memset(&dev->stats.rx_undersized_errors, 0xff,\n-           sizeof dev->stats.rx_undersized_errors);\n+    dev->common.stats.rx_1_to_64_packets +=\n+        dev->common.stats.rx_undersized_errors;\n+    memset(&dev->common.stats.rx_undersized_errors, 0xff,\n+           sizeof dev->common.stats.rx_undersized_errors);\n \n-#define VHOST_RXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER;\n+#define VHOST_RXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->common.stats.MEMBER;\n     VHOST_RXQ_STATS;\n #undef VHOST_RXQ_STAT\n \n@@ -3655,11 +3533,11 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n     VHOST_TXQ_STAT(tx_1024_to_1522_packets, \"size_1024_1518_packets\") \\\n     VHOST_TXQ_STAT(tx_1523_to_max_packets,  \"size_1519_max_packets\")\n \n-#define VHOST_TXQ_STAT(MEMBER, NAME) dev->stats.MEMBER = 0;\n+#define VHOST_TXQ_STAT(MEMBER, NAME) dev->common.stats.MEMBER = 0;\n     VHOST_TXQ_STATS;\n #undef VHOST_TXQ_STAT\n \n-    for (int q = 0; q < dev->up.n_txq; q++) {\n+    for (int q = 0; q < dev->common.up.n_txq; q++) {\n         qid = q * VIRTIO_QNUM;\n \n         err = rte_vhost_vring_stats_get(vid, qid, vhost_stats,\n@@ -3672,7 +3550,7 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n         for (int i = 0; i < vhost_stats_count; i++) {\n#define 
VHOST_TXQ_STAT(MEMBER, NAME)                                 \\\n             if (string_ends_with(vhost_stats_names[i].name, NAME)) { \\\n-                dev->stats.MEMBER += vhost_stats[i].value;           \\\n+                dev->common.stats.MEMBER += vhost_stats[i].value;    \\\n                 continue;                                            \\\n             }\n             VHOST_TXQ_STATS;\n@@ -3682,23 +3560,24 @@ netdev_dpdk_vhost_get_stats(const struct netdev *netdev,\n \n     /* OVS reports 64 bytes and smaller packets into \"tx_1_to_64_packets\".\n      * Same as for rx, rx_undersized_errors is highjacked. */\n-    dev->stats.tx_1_to_64_packets += dev->stats.rx_undersized_errors;\n-    memset(&dev->stats.rx_undersized_errors, 0xff,\n-           sizeof dev->stats.rx_undersized_errors);\n+    dev->common.stats.tx_1_to_64_packets +=\n+        dev->common.stats.rx_undersized_errors;\n+    memset(&dev->common.stats.rx_undersized_errors, 0xff,\n+           sizeof dev->common.stats.rx_undersized_errors);\n \n-#define VHOST_TXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->stats.MEMBER;\n+#define VHOST_TXQ_STAT(MEMBER, NAME) stats->MEMBER = dev->common.stats.MEMBER;\n     VHOST_TXQ_STATS;\n #undef VHOST_TXQ_STAT\n \n-    rte_spinlock_lock(&dev->stats_lock);\n-    stats->rx_dropped = dev->stats.rx_dropped;\n-    stats->tx_dropped = dev->stats.tx_dropped;\n-    rte_spinlock_unlock(&dev->stats_lock);\n+    rte_spinlock_lock(&dev->common.stats_lock);\n+    stats->rx_dropped = dev->common.stats.rx_dropped;\n+    stats->tx_dropped = dev->common.stats.tx_dropped;\n+    rte_spinlock_unlock(&dev->common.stats_lock);\n \n     err = 0;\n out:\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     free(vhost_stats);\n     free(vhost_stats_names);\n@@ -3723,7 +3602,7 @@ netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,\n     netdev_dpdk_get_sw_custom_stats(netdev, custom_stats);\n     stat_offset = 
custom_stats->size;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     if (!is_vhost_running(dev)) {\n         goto out;\n@@ -3745,8 +3624,8 @@ netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,\n     }\n     vhost_txq_stats_count = err;\n \n-    stat_offset += dev->up.n_rxq * vhost_rxq_stats_count;\n-    stat_offset += dev->up.n_txq * vhost_txq_stats_count;\n+    stat_offset += dev->common.up.n_rxq * vhost_rxq_stats_count;\n+    stat_offset += dev->common.up.n_txq * vhost_txq_stats_count;\n     custom_stats->counters = xrealloc(custom_stats->counters,\n                                       stat_offset *\n                                       sizeof *custom_stats->counters);\n@@ -3756,7 +3635,7 @@ netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,\n                                 sizeof *vhost_stats_names);\n     vhost_stats = xcalloc(vhost_rxq_stats_count, sizeof *vhost_stats);\n \n-    for (int q = 0; q < dev->up.n_rxq; q++) {\n+    for (int q = 0; q < dev->common.up.n_rxq; q++) {\n         qid = q * VIRTIO_QNUM + VIRTIO_TXQ;\n \n         err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,\n@@ -3790,7 +3669,7 @@ netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,\n                                 sizeof *vhost_stats_names);\n     vhost_stats = xcalloc(vhost_txq_stats_count, sizeof *vhost_stats);\n \n-    for (int q = 0; q < dev->up.n_txq; q++) {\n+    for (int q = 0; q < dev->common.up.n_txq; q++) {\n         qid = q * VIRTIO_QNUM;\n \n         err = rte_vhost_vring_stats_get_names(vid, qid, vhost_stats_names,\n@@ -3816,7 +3695,7 @@ netdev_dpdk_vhost_get_custom_stats(const struct netdev *netdev,\n     }\n \n out:\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     custom_stats->size = stat_offset;\n     free(vhost_stats_names);\n@@ -3879,24 +3758,24 @@ netdev_dpdk_get_stats(const struct netdev *netdev, struct netdev_stats *stats)\n   
  bool gg;\n \n     netdev_dpdk_get_carrier(netdev, &gg);\n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     struct rte_eth_xstat *rte_xstats = NULL;\n     struct rte_eth_xstat_name *rte_xstats_names = NULL;\n     int rte_xstats_len, rte_xstats_new_len, rte_xstats_ret;\n \n-    if (rte_eth_stats_get(dev->port_id, &rte_stats)) {\n+    if (rte_eth_stats_get(dev->common.port_id, &rte_stats)) {\n         VLOG_ERR(\"Can't get ETH statistics for port: \"DPDK_PORT_ID_FMT,\n-                 dev->port_id);\n-        ovs_mutex_unlock(&dev->mutex);\n+                 dev->common.port_id);\n+        ovs_mutex_unlock(&dev->common.mutex);\n         return EPROTO;\n     }\n \n     /* Get length of statistics */\n-    rte_xstats_len = rte_eth_xstats_get_names(dev->port_id, NULL, 0);\n+    rte_xstats_len = rte_eth_xstats_get_names(dev->common.port_id, NULL, 0);\n     if (rte_xstats_len < 0) {\n         VLOG_WARN(\"Cannot get XSTATS values for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n         goto out;\n     }\n     /* Reserve memory for xstats names and values */\n@@ -3904,24 +3783,24 @@ netdev_dpdk_get_stats(const struct netdev *netdev, struct netdev_stats *stats)\n     rte_xstats = xcalloc(rte_xstats_len, sizeof *rte_xstats);\n \n     /* Retreive xstats names */\n-    rte_xstats_new_len = rte_eth_xstats_get_names(dev->port_id,\n+    rte_xstats_new_len = rte_eth_xstats_get_names(dev->common.port_id,\n                                                   rte_xstats_names,\n                                                   rte_xstats_len);\n     if (rte_xstats_new_len != rte_xstats_len) {\n         VLOG_WARN(\"Cannot get XSTATS names for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n         goto out;\n     }\n     /* Retreive xstats values */\n     memset(rte_xstats, 0xff, sizeof *rte_xstats * rte_xstats_len);\n-    
rte_xstats_ret = rte_eth_xstats_get(dev->port_id, rte_xstats,\n+    rte_xstats_ret = rte_eth_xstats_get(dev->common.port_id, rte_xstats,\n                                         rte_xstats_len);\n     if (rte_xstats_ret > 0 && rte_xstats_ret <= rte_xstats_len) {\n         netdev_dpdk_convert_xstats(stats, rte_xstats, rte_xstats_names,\n                                    rte_xstats_len);\n     } else {\n         VLOG_WARN(\"Cannot get XSTATS values for port: \"DPDK_PORT_ID_FMT,\n-                  dev->port_id);\n+                  dev->common.port_id);\n     }\n \n out:\n@@ -3935,17 +3814,17 @@ out:\n     stats->rx_errors = rte_stats.ierrors;\n     stats->tx_errors = rte_stats.oerrors;\n \n-    rte_spinlock_lock(&dev->stats_lock);\n-    stats->tx_dropped = dev->stats.tx_dropped;\n-    stats->rx_dropped = dev->stats.rx_dropped;\n-    rte_spinlock_unlock(&dev->stats_lock);\n+    rte_spinlock_lock(&dev->common.stats_lock);\n+    stats->tx_dropped = dev->common.stats.tx_dropped;\n+    stats->rx_dropped = dev->common.stats.rx_dropped;\n+    rte_spinlock_unlock(&dev->common.stats_lock);\n \n     /* These are the available DPDK counters for packets not received due to\n      * local resource constraints in DPDK and NIC respectively. 
*/\n     stats->rx_dropped += rte_stats.rx_nombuf + rte_stats.imissed;\n     stats->rx_missed_errors = rte_stats.imissed;\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -3961,18 +3840,20 @@ netdev_dpdk_get_custom_stats(const struct netdev *netdev,\n \n     netdev_dpdk_get_sw_custom_stats(netdev, custom_stats);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n-    if (dev->rte_xstats_ids_size > 0) {\n-        uint64_t *values = xcalloc(dev->rte_xstats_ids_size,\n+    if (dev->common.rte_xstats_ids_size > 0) {\n+        uint64_t *values = xcalloc(dev->common.rte_xstats_ids_size,\n                                    sizeof(uint64_t));\n \n         rte_xstats_ret =\n-                rte_eth_xstats_get_by_id(dev->port_id, dev->rte_xstats_ids,\n-                                         values, dev->rte_xstats_ids_size);\n+                rte_eth_xstats_get_by_id(dev->common.port_id,\n+                                         dev->common.rte_xstats_ids,\n+                                         values,\n+                                         dev->common.rte_xstats_ids_size);\n \n         if (rte_xstats_ret > 0 &&\n-            rte_xstats_ret <= dev->rte_xstats_ids_size) {\n+            rte_xstats_ret <= dev->common.rte_xstats_ids_size) {\n \n             sw_stats_size = custom_stats->size;\n             custom_stats->size += rte_xstats_ret;\n@@ -3982,20 +3863,20 @@ netdev_dpdk_get_custom_stats(const struct netdev *netdev,\n \n             for (i = 0; i < rte_xstats_ret; i++) {\n                 ovs_strlcpy(custom_stats->counters[sw_stats_size + i].name,\n-                            netdev_dpdk_get_xstat_name(dev,\n-                                                       dev->rte_xstats_ids[i]),\n+                            netdev_dpdk_get_xstat_name(\n+                                dev, dev->common.rte_xstats_ids[i]),\n                             
NETDEV_CUSTOM_STATS_NAME_SIZE);\n                 custom_stats->counters[sw_stats_size + i].value = values[i];\n             }\n         } else {\n             VLOG_WARN(\"Cannot get XSTATS values for port: \"DPDK_PORT_ID_FMT,\n-                      dev->port_id);\n+                      dev->common.port_id);\n         }\n \n         free(values);\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -4021,17 +3902,17 @@ netdev_dpdk_get_sw_custom_stats(const struct netdev *netdev,\n     custom_stats->counters = xcalloc(custom_stats->size,\n                                      sizeof *custom_stats->counters);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n-    rte_spinlock_lock(&dev->stats_lock);\n+    rte_spinlock_lock(&dev->common.stats_lock);\n     i = 0;\n #define SW_CSTAT(NAME) \\\n-    custom_stats->counters[i++].value = dev->sw_stats->NAME;\n+    custom_stats->counters[i++].value = dev->common.sw_stats->NAME;\n     SW_CSTATS;\n #undef SW_CSTAT\n-    rte_spinlock_unlock(&dev->stats_lock);\n+    rte_spinlock_unlock(&dev->common.stats_lock);\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     i = 0;\n     n = 0;\n@@ -4061,9 +3942,9 @@ netdev_dpdk_get_features(const struct netdev *netdev,\n     struct rte_eth_link link;\n     uint32_t feature = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n-    link = dev->link;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    link = dev->common.link;\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     /* Match against OpenFlow defined link speed values. 
*/\n     if (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) {\n@@ -4124,10 +4005,10 @@ netdev_dpdk_get_speed(const struct netdev *netdev, uint32_t *current,\n     struct rte_eth_link link;\n     int diag;\n \n-    ovs_mutex_lock(&dev->mutex);\n-    link = dev->link;\n-    diag = rte_eth_dev_info_get(dev->port_id, &dev_info);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    link = dev->common.link;\n+    diag = rte_eth_dev_info_get(dev->common.port_id, &dev_info);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     *current = link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN\n                ? link.link_speed : 0;\n@@ -4179,13 +4060,14 @@ netdev_dpdk_get_duplex(const struct netdev *netdev, bool *full_duplex)\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int err = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n-    if (dev->link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN) {\n-        *full_duplex = dev->link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX;\n+    ovs_mutex_lock(&dev->common.mutex);\n+    if (dev->common.link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN) {\n+        *full_duplex = dev->common.link.link_duplex ==\n+                       RTE_ETH_LINK_FULL_DUPLEX;\n     } else {\n         err = EOPNOTSUPP;\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return err;\n }\n@@ -4240,7 +4122,7 @@ netdev_dpdk_set_policing(struct netdev* netdev, uint32_t policer_rate,\n                      : !policer_burst ? 
8000\n                      : policer_burst);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     policer = ovsrcu_get_protected(struct ingress_policer *,\n                                     &dev->ingress_policer);\n@@ -4248,7 +4130,7 @@ netdev_dpdk_set_policing(struct netdev* netdev, uint32_t policer_rate,\n     if (dev->policer_rate == policer_rate &&\n         dev->policer_burst == policer_burst) {\n         /* Assume that settings haven't changed since we last set them. */\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n         return 0;\n     }\n \n@@ -4265,7 +4147,7 @@ netdev_dpdk_set_policing(struct netdev* netdev, uint32_t policer_rate,\n     ovsrcu_set(&dev->ingress_policer, policer);\n     dev->policer_rate = policer_rate;\n     dev->policer_burst = policer_burst;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -4275,12 +4157,12 @@ netdev_dpdk_get_ifindex(const struct netdev *netdev)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     /* Calculate hash from the netdev name. 
Ensure that ifindex is a 24-bit\n      * postive integer to meet RFC 2863 recommendations.\n      */\n     int ifindex = hash_string(netdev->name, 0) % 0xfffffe + 1;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return ifindex;\n }\n@@ -4290,11 +4172,11 @@ netdev_dpdk_get_carrier(const struct netdev *netdev, bool *carrier)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     check_link_status(dev);\n-    *carrier = dev->link.link_status;\n+    *carrier = dev->common.link.link_status;\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -4304,7 +4186,7 @@ netdev_dpdk_vhost_get_carrier(const struct netdev *netdev, bool *carrier)\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     if (is_vhost_running(dev)) {\n         *carrier = 1;\n@@ -4312,7 +4194,7 @@ netdev_dpdk_vhost_get_carrier(const struct netdev *netdev, bool *carrier)\n         *carrier = 0;\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return 0;\n }\n@@ -4323,9 +4205,9 @@ netdev_dpdk_get_carrier_resets(const struct netdev *netdev)\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     long long int carrier_resets;\n \n-    ovs_mutex_lock(&dev->mutex);\n-    carrier_resets = dev->link_reset_cnt;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    carrier_resets = dev->common.link_reset_cnt;\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return carrier_resets;\n }\n@@ -4341,46 +4223,46 @@ static int\n netdev_dpdk_update_flags__(struct netdev_dpdk *dev,\n                            enum netdev_flags off, enum netdev_flags on,\n                            enum netdev_flags *old_flagsp)\n-    OVS_REQUIRES(dev->mutex)\n+  
  OVS_REQUIRES(dev->common.mutex)\n {\n     if ((off | on) & ~(NETDEV_UP | NETDEV_PROMISC)) {\n         return EINVAL;\n     }\n \n-    *old_flagsp = dev->flags;\n-    dev->flags |= on;\n-    dev->flags &= ~off;\n+    *old_flagsp = dev->common.flags;\n+    dev->common.flags |= on;\n+    dev->common.flags &= ~off;\n \n-    if (dev->flags == *old_flagsp) {\n+    if (dev->common.flags == *old_flagsp) {\n         return 0;\n     }\n \n     if (dev->type == DPDK_DEV_ETH) {\n \n-        if ((dev->flags ^ *old_flagsp) & NETDEV_UP) {\n+        if ((dev->common.flags ^ *old_flagsp) & NETDEV_UP) {\n             int err;\n \n-            if (dev->flags & NETDEV_UP) {\n-                err = rte_eth_dev_set_link_up(dev->port_id);\n+            if (dev->common.flags & NETDEV_UP) {\n+                err = rte_eth_dev_set_link_up(dev->common.port_id);\n             } else {\n-                err = rte_eth_dev_set_link_down(dev->port_id);\n+                err = rte_eth_dev_set_link_down(dev->common.port_id);\n             }\n             if (err == -ENOTSUP) {\n                 VLOG_INFO(\"Interface %s does not support link state \"\n-                          \"configuration\", netdev_get_name(&dev->up));\n+                          \"configuration\", netdev_get_name(&dev->common.up));\n             } else if (err < 0) {\n                 VLOG_ERR(\"Interface %s link change error: %s\",\n-                         netdev_get_name(&dev->up), rte_strerror(-err));\n-                dev->flags = *old_flagsp;\n+                         netdev_get_name(&dev->common.up), rte_strerror(-err));\n+                dev->common.flags = *old_flagsp;\n                 return -err;\n             }\n         }\n \n-        if (dev->flags & NETDEV_PROMISC) {\n-            rte_eth_promiscuous_enable(dev->port_id);\n+        if (dev->common.flags & NETDEV_PROMISC) {\n+            rte_eth_promiscuous_enable(dev->common.port_id);\n         }\n \n-        netdev_change_seq_changed(&dev->up);\n+        
netdev_change_seq_changed(&dev->common.up);\n     } else {\n         /* If DPDK_DEV_VHOST device's NETDEV_UP flag was changed and vhost is\n          * running then change netdev's change_seq to trigger link state\n@@ -4388,14 +4270,15 @@ netdev_dpdk_update_flags__(struct netdev_dpdk *dev,\n \n         if ((NETDEV_UP & ((*old_flagsp ^ on) | (*old_flagsp ^ off)))\n             && is_vhost_running(dev)) {\n-            netdev_change_seq_changed(&dev->up);\n+            netdev_change_seq_changed(&dev->common.up);\n \n             /* Clear statistics if device is getting up. */\n             if (NETDEV_UP & on) {\n-                rte_spinlock_lock(&dev->stats_lock);\n-                memset(&dev->stats, 0, sizeof dev->stats);\n-                memset(dev->sw_stats, 0, sizeof *dev->sw_stats);\n-                rte_spinlock_unlock(&dev->stats_lock);\n+                rte_spinlock_lock(&dev->common.stats_lock);\n+                memset(&dev->common.stats, 0, sizeof dev->common.stats);\n+                memset(dev->common.sw_stats, 0,\n+                       sizeof *dev->common.sw_stats);\n+                rte_spinlock_unlock(&dev->common.stats_lock);\n             }\n         }\n     }\n@@ -4411,9 +4294,9 @@ netdev_dpdk_update_flags(struct netdev *netdev,\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int error;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     error = netdev_dpdk_update_flags__(dev, off, on, old_flagsp);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -4424,7 +4307,7 @@ netdev_dpdk_vhost_user_get_status(const struct netdev *netdev,\n {\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     bool client_mode = dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT;\n     smap_add_format(args, \"mode\", \"%s\", client_mode ? 
\"client\" : \"server\");\n@@ -4432,7 +4315,7 @@ netdev_dpdk_vhost_user_get_status(const struct netdev *netdev,\n     int vid = netdev_dpdk_get_vid(dev);\n     if (vid < 0) {\n         smap_add_format(args, \"status\", \"disconnected\");\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n         return 0;\n     } else {\n         smap_add_format(args, \"status\", \"connected\");\n@@ -4480,7 +4363,7 @@ netdev_dpdk_vhost_user_get_status(const struct netdev *netdev,\n     smap_add_format(args, \"n_rxq\", \"%d\", netdev->n_rxq);\n     smap_add_format(args, \"n_txq\", \"%d\", netdev->n_txq);\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return 0;\n }\n \n@@ -4519,28 +4402,28 @@ netdev_dpdk_get_status(const struct netdev *netdev, struct smap *args)\n     int n_rxq;\n     int diag;\n \n-    if (!rte_eth_dev_is_valid_port(dev->port_id)) {\n+    if (!rte_eth_dev_is_valid_port(dev->common.port_id)) {\n         return ENODEV;\n     }\n \n     ovs_mutex_lock(&dpdk_mutex);\n-    ovs_mutex_lock(&dev->mutex);\n-    diag = rte_eth_dev_info_get(dev->port_id, &dev_info);\n-    link_speed = dev->link.link_speed;\n+    ovs_mutex_lock(&dev->common.mutex);\n+    diag = rte_eth_dev_info_get(dev->common.port_id, &dev_info);\n+    link_speed = dev->common.link.link_speed;\n     rx_steer_flags = dev->rx_steer_flags;\n     rx_steer_flows_num = dev->rx_steer_flows_num;\n     n_rxq = netdev->n_rxq;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     ovs_mutex_unlock(&dpdk_mutex);\n \n-    smap_add_format(args, \"port_no\", DPDK_PORT_ID_FMT, dev->port_id);\n+    smap_add_format(args, \"port_no\", DPDK_PORT_ID_FMT, dev->common.port_id);\n     smap_add_format(args, \"numa_id\", \"%d\",\n-                           rte_eth_dev_socket_id(dev->port_id));\n+                           rte_eth_dev_socket_id(dev->common.port_id));\n     if (!diag) {\n         smap_add_format(args, 
\"driver_name\", \"%s\", dev_info.driver_name);\n         smap_add_format(args, \"min_rx_bufsize\", \"%u\", dev_info.min_rx_bufsize);\n     }\n-    smap_add_format(args, \"max_rx_pktlen\", \"%u\", dev->max_packet_len);\n+    smap_add_format(args, \"max_rx_pktlen\", \"%u\", dev->common.max_packet_len);\n     if (!diag) {\n         smap_add_format(args, \"max_rx_queues\", \"%u\", dev_info.max_rx_queues);\n         smap_add_format(args, \"max_tx_queues\", \"%u\", dev_info.max_tx_queues);\n@@ -4555,7 +4438,7 @@ netdev_dpdk_get_status(const struct netdev *netdev, struct smap *args)\n     smap_add_format(args, \"n_txq\", \"%d\", netdev->n_txq);\n \n     smap_add(args, \"rx_csum_offload\",\n-             dev->hw_ol_features & NETDEV_RX_CHECKSUM_OFFLOAD\n+             dev->common.hw_ol_features & NETDEV_RX_CHECKSUM_OFFLOAD\n              ? \"true\" : \"false\");\n \n     /* Querying the DPDK library for iftype may be done in future, pending\n@@ -4581,9 +4464,9 @@ netdev_dpdk_get_status(const struct netdev *netdev, struct smap *args)\n     smap_add(args, \"link_speed\",\n              netdev_dpdk_link_speed_to_str__(link_speed));\n \n-    if (dev->is_representor) {\n+    if (dev->common.is_representor) {\n         smap_add_format(args, \"dpdk-vf-mac\", ETH_ADDR_FMT,\n-                        ETH_ADDR_ARGS(dev->hwaddr));\n+                        ETH_ADDR_ARGS(dev->common.hwaddr));\n     }\n \n     if (rx_steer_flags && !rx_steer_flows_num) {\n@@ -4609,7 +4492,7 @@ netdev_dpdk_get_status(const struct netdev *netdev, struct smap *args)\n \n static void\n netdev_dpdk_set_admin_state__(struct netdev_dpdk *dev, bool admin_state)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     enum netdev_flags old_flags;\n \n@@ -4641,9 +4524,9 @@ netdev_dpdk_set_admin_state(struct unixctl_conn *conn, int argc,\n         if (netdev && is_dpdk_class(netdev->netdev_class)) {\n             struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-            
ovs_mutex_lock(&dev->mutex);\n+            ovs_mutex_lock(&dev->common.mutex);\n             netdev_dpdk_set_admin_state__(dev, up);\n-            ovs_mutex_unlock(&dev->mutex);\n+            ovs_mutex_unlock(&dev->common.mutex);\n \n             netdev_close(netdev);\n         } else {\n@@ -4655,10 +4538,10 @@ netdev_dpdk_set_admin_state(struct unixctl_conn *conn, int argc,\n         struct netdev_dpdk *dev;\n \n         ovs_mutex_lock(&dpdk_mutex);\n-        LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-            ovs_mutex_lock(&dev->mutex);\n+        LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+            ovs_mutex_lock(&dev->common.mutex);\n             netdev_dpdk_set_admin_state__(dev, up);\n-            ovs_mutex_unlock(&dev->mutex);\n+            ovs_mutex_unlock(&dev->common.mutex);\n         }\n         ovs_mutex_unlock(&dpdk_mutex);\n     }\n@@ -4692,13 +4575,13 @@ netdev_dpdk_detach(struct unixctl_conn *conn, int argc OVS_UNUSED,\n     RTE_ETH_FOREACH_DEV_SIBLING (sibling_port_id, port_id) {\n         struct netdev_dpdk *dev;\n \n-        LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-            if (dev->port_id != sibling_port_id) {\n+        LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+            if (dev->common.port_id != sibling_port_id) {\n                 continue;\n             }\n             used = true;\n             ds_put_format(&used_interfaces, \" %s\",\n-                          netdev_get_name(&dev->up));\n+                          netdev_get_name(&dev->common.up));\n             break;\n         }\n     }\n@@ -4762,20 +4645,20 @@ netdev_dpdk_get_mempool_info(struct unixctl_conn *conn,\n     if (netdev) {\n         struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-        ovs_mutex_lock(&dev->mutex);\n+        ovs_mutex_lock(&dev->common.mutex);\n         ovs_mutex_lock(&dpdk_mp_mutex);\n \n-        if (dev->dpdk_mp) {\n-            rte_mempool_dump(stream, dev->dpdk_mp->mp);\n+        if 
(dev->common.dpdk_mp) {\n+            rte_mempool_dump(stream, dev->common.dpdk_mp->mp);\n             fprintf(stream, \"    count: avail (%u), in use (%u)\\n\",\n-                    rte_mempool_avail_count(dev->dpdk_mp->mp),\n-                    rte_mempool_in_use_count(dev->dpdk_mp->mp));\n+                    rte_mempool_avail_count(dev->common.dpdk_mp->mp),\n+                    rte_mempool_in_use_count(dev->common.dpdk_mp->mp));\n         } else {\n             error = \"Not allocated\";\n         }\n \n         ovs_mutex_unlock(&dpdk_mp_mutex);\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n     } else {\n         ovs_mutex_lock(&dpdk_mp_mutex);\n         rte_mempool_list_dump(stream);\n@@ -4813,16 +4696,16 @@ set_irq_status(int vid)\n  */\n static void\n netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     int *enabled_queues, n_enabled = 0;\n-    int i, k, total_txqs = dev->up.n_txq;\n+    int i, k, total_txqs = dev->common.up.n_txq;\n \n     enabled_queues = xcalloc(total_txqs, sizeof *enabled_queues);\n \n     for (i = 0; i < total_txqs; i++) {\n         /* Enabled queues always mapped to themselves. 
*/\n-        if (dev->tx_q[i].map == i) {\n+        if (dev->common.tx_q[i].map == i) {\n             enabled_queues[n_enabled++] = i;\n         }\n     }\n@@ -4834,8 +4717,8 @@ netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)\n \n     k = 0;\n     for (i = 0; i < total_txqs; i++) {\n-        if (dev->tx_q[i].map != i) {\n-            dev->tx_q[i].map = enabled_queues[k];\n+        if (dev->common.tx_q[i].map != i) {\n+            dev->common.tx_q[i].map = enabled_queues[k];\n             k = (k + 1) % n_enabled;\n         }\n     }\n@@ -4844,9 +4727,10 @@ netdev_dpdk_remap_txqs(struct netdev_dpdk *dev)\n         struct ds mapping = DS_EMPTY_INITIALIZER;\n \n         ds_put_format(&mapping, \"TX queue mapping for port '%s':\\n\",\n-                      netdev_get_name(&dev->up));\n+                      netdev_get_name(&dev->common.up));\n         for (i = 0; i < total_txqs; i++) {\n-            ds_put_format(&mapping, \"%2d --> %2d\\n\", i, dev->tx_q[i].map);\n+            ds_put_format(&mapping, \"%2d --> %2d\\n\",\n+                          i, dev->common.tx_q[i].map);\n         }\n \n         VLOG_DBG(\"%s\", ds_cstr(&mapping));\n@@ -4870,9 +4754,10 @@ new_device(int vid)\n     rte_vhost_get_ifname(vid, ifname, sizeof ifname);\n \n     ovs_mutex_lock(&dpdk_mutex);\n+\n     /* Add device to the vhost port with the same name as that passed down. 
*/\n-    LIST_FOR_EACH(dev, list_node, &dpdk_list) {\n-        ovs_mutex_lock(&dev->mutex);\n+    LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+        ovs_mutex_lock(&dev->common.mutex);\n         if (nullable_string_is_equal(ifname, dev->vhost_id)) {\n             uint32_t qp_num = rte_vhost_get_vring_num(vid) / VIRTIO_QNUM;\n             uint64_t features;\n@@ -4884,19 +4769,19 @@ new_device(int vid)\n                 VLOG_INFO(\"Error getting NUMA info for vHost Device '%s'\",\n                           ifname);\n #endif\n-                newnode = dev->socket_id;\n+                newnode = dev->common.socket_id;\n             }\n \n             dev->virtio_features_state |= OVS_VIRTIO_F_NEGOTIATED;\n \n-            if (dev->requested_n_txq < qp_num\n-                || dev->requested_n_rxq < qp_num\n-                || dev->requested_socket_id != newnode\n-                || dev->dpdk_mp == NULL) {\n-                dev->requested_socket_id = newnode;\n-                dev->requested_n_rxq = qp_num;\n-                dev->requested_n_txq = qp_num;\n-                netdev_request_reconfigure(&dev->up);\n+            if (dev->common.requested_n_txq < qp_num\n+                || dev->common.requested_n_rxq < qp_num\n+                || dev->common.requested_socket_id != newnode\n+                || dev->common.dpdk_mp == NULL) {\n+                dev->common.requested_socket_id = newnode;\n+                dev->common.requested_n_rxq = qp_num;\n+                dev->common.requested_n_txq = qp_num;\n+                netdev_request_reconfigure(&dev->common.up);\n             } else {\n                 /* Reconfiguration not required. 
*/\n                 dev->vhost_reconfigured = true;\n@@ -4907,13 +4792,13 @@ new_device(int vid)\n                           \"vHost Device '%s'\", dev->vhost_id);\n             } else {\n                 if (features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {\n-                    dev->hw_ol_features |= NETDEV_TX_TCP_CKSUM_OFFLOAD;\n-                    dev->hw_ol_features |= NETDEV_TX_UDP_CKSUM_OFFLOAD;\n-                    dev->hw_ol_features |= NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n+                    dev->common.hw_ol_features |= NETDEV_TX_TCP_CKSUM_OFFLOAD;\n+                    dev->common.hw_ol_features |= NETDEV_TX_UDP_CKSUM_OFFLOAD;\n+                    dev->common.hw_ol_features |= NETDEV_TX_SCTP_CKSUM_OFFLOAD;\n \n                     /* There is no support in virtio net to offload IPv4 csum,\n                      * but the vhost library handles IPv4 csum offloading. */\n-                    dev->hw_ol_features |= NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n+                    dev->common.hw_ol_features |= NETDEV_TX_IPV4_CKSUM_OFFLOAD;\n                 }\n \n                 if (userspace_tso_enabled()\n@@ -4922,12 +4807,12 @@ new_device(int vid)\n                     if (features & (1ULL << VIRTIO_NET_F_GUEST_TSO4)\n                         && features & (1ULL << VIRTIO_NET_F_GUEST_TSO6)) {\n \n-                        dev->hw_ol_features |= NETDEV_TX_TSO_OFFLOAD;\n+                        dev->common.hw_ol_features |= NETDEV_TX_TSO_OFFLOAD;\n                         VLOG_DBG(\"%s: TSO enabled on vhost port\",\n-                                 netdev_get_name(&dev->up));\n+                                 netdev_get_name(&dev->common.up));\n                     } else {\n                         VLOG_WARN(\"%s: Tx TSO offload is not supported.\",\n-                                  netdev_get_name(&dev->up));\n+                                  netdev_get_name(&dev->common.up));\n                     }\n                 }\n             }\n@@ -4939,11 +4824,11 @@ 
new_device(int vid)\n \n             /* Disable notifications. */\n             set_irq_status(vid);\n-            netdev_change_seq_changed(&dev->up);\n-            ovs_mutex_unlock(&dev->mutex);\n+            netdev_change_seq_changed(&dev->common.up);\n+            ovs_mutex_unlock(&dev->common.mutex);\n             break;\n         }\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n     }\n     ovs_mutex_unlock(&dpdk_mutex);\n \n@@ -4962,12 +4847,12 @@ new_device(int vid)\n /* Clears mapping for all available queues of vhost interface. */\n static void\n netdev_dpdk_txq_map_clear(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n     int i;\n \n-    for (i = 0; i < dev->up.n_txq; i++) {\n-        dev->tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;\n+    for (i = 0; i < dev->common.up.n_txq; i++) {\n+        dev->common.tx_q[i].map = OVS_VHOST_QUEUE_MAP_UNKNOWN;\n     }\n }\n \n@@ -4987,22 +4872,22 @@ destroy_device(int vid)\n     rte_vhost_get_ifname(vid, ifname, sizeof ifname);\n \n     ovs_mutex_lock(&dpdk_mutex);\n-    LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n+    LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n         if (netdev_dpdk_get_vid(dev) == vid) {\n \n-            ovs_mutex_lock(&dev->mutex);\n+            ovs_mutex_lock(&dev->common.mutex);\n             dev->vhost_reconfigured = false;\n             ovsrcu_index_set(&dev->vid, -1);\n             memset(dev->vhost_rxq_enabled, 0,\n-                   dev->up.n_rxq * sizeof *dev->vhost_rxq_enabled);\n+                   dev->common.up.n_rxq * sizeof *dev->vhost_rxq_enabled);\n             netdev_dpdk_txq_map_clear(dev);\n \n             /* Clear offload capabilities before next new_device. 
*/\n-            dev->hw_ol_features = 0;\n+            dev->common.hw_ol_features = 0;\n             netdev_dpdk_update_netdev_flags(dev);\n \n-            netdev_change_seq_changed(&dev->up);\n-            ovs_mutex_unlock(&dev->mutex);\n+            netdev_change_seq_changed(&dev->common.up);\n+            ovs_mutex_unlock(&dev->common.mutex);\n             exists = true;\n             break;\n         }\n@@ -5047,29 +4932,29 @@ vring_state_changed__(struct vhost_state_change *sc)\n     bool is_rx = (sc->queue_id % VIRTIO_QNUM) == VIRTIO_TXQ;\n \n     ovs_mutex_lock(&dpdk_mutex);\n-    LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-        ovs_mutex_lock(&dev->mutex);\n+    LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+        ovs_mutex_lock(&dev->common.mutex);\n         if (nullable_string_is_equal(sc->ifname, dev->vhost_id)) {\n             if (is_rx) {\n                 bool old_state = dev->vhost_rxq_enabled[qid];\n \n                 dev->vhost_rxq_enabled[qid] = sc->enable != 0;\n                 if (old_state != dev->vhost_rxq_enabled[qid]) {\n-                    netdev_change_seq_changed(&dev->up);\n+                    netdev_change_seq_changed(&dev->common.up);\n                 }\n             } else {\n                 if (sc->enable) {\n-                    dev->tx_q[qid].map = qid;\n+                    dev->common.tx_q[qid].map = qid;\n                 } else {\n-                    dev->tx_q[qid].map = OVS_VHOST_QUEUE_DISABLED;\n+                    dev->common.tx_q[qid].map = OVS_VHOST_QUEUE_DISABLED;\n                 }\n                 netdev_dpdk_remap_txqs(dev);\n             }\n             exists = true;\n-            ovs_mutex_unlock(&dev->mutex);\n+            ovs_mutex_unlock(&dev->common.mutex);\n             break;\n         }\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n     }\n     ovs_mutex_unlock(&dpdk_mutex);\n \n@@ -5153,8 +5038,8 @@ destroy_connection(int vid)\n     
rte_vhost_get_ifname(vid, ifname, sizeof ifname);\n \n     ovs_mutex_lock(&dpdk_mutex);\n-    LIST_FOR_EACH (dev, list_node, &dpdk_list) {\n-        ovs_mutex_lock(&dev->mutex);\n+    LIST_FOR_EACH (dev, common.list_node, &dpdk_list) {\n+        ovs_mutex_lock(&dev->common.mutex);\n         if (nullable_string_is_equal(ifname, dev->vhost_id)) {\n             uint32_t qp_num = NR_QUEUE;\n \n@@ -5164,11 +5049,11 @@ destroy_connection(int vid)\n             }\n \n             /* Restore the number of queue pairs to default. */\n-            if (dev->requested_n_txq != qp_num\n-                || dev->requested_n_rxq != qp_num) {\n-                dev->requested_n_rxq = qp_num;\n-                dev->requested_n_txq = qp_num;\n-                netdev_request_reconfigure(&dev->up);\n+            if (dev->common.requested_n_txq != qp_num\n+                || dev->common.requested_n_rxq != qp_num) {\n+                dev->common.requested_n_rxq = qp_num;\n+                dev->common.requested_n_txq = qp_num;\n+                netdev_request_reconfigure(&dev->common.up);\n             }\n \n             if (!(dev->virtio_features_state & OVS_VIRTIO_F_NEGOTIATED)) {\n@@ -5205,15 +5090,15 @@ destroy_connection(int vid)\n                 }\n                 if (!(dev->virtio_features_state & OVS_VIRTIO_F_NEGOTIATED)) {\n                     dev->virtio_features_state |= OVS_VIRTIO_F_RECONF_PENDING;\n-                    netdev_request_reconfigure(&dev->up);\n+                    netdev_request_reconfigure(&dev->common.up);\n                 }\n             }\n \n-            ovs_mutex_unlock(&dev->mutex);\n+            ovs_mutex_unlock(&dev->common.mutex);\n             exists = true;\n             break;\n         }\n-        ovs_mutex_unlock(&dev->mutex);\n+        ovs_mutex_unlock(&dev->common.mutex);\n     }\n     ovs_mutex_unlock(&dpdk_mutex);\n \n@@ -5355,7 +5240,7 @@ netdev_dpdk_get_qos(const struct netdev *netdev,\n     struct qos_conf *qos_conf;\n     int error = 
0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n     if (qos_conf) {\n         *typep = qos_conf->ops->qos_name;\n@@ -5365,7 +5250,7 @@ netdev_dpdk_get_qos(const struct netdev *netdev,\n         /* No QoS configuration set, return an empty string */\n         *typep = \"\";\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5379,7 +5264,7 @@ netdev_dpdk_set_qos(struct netdev *netdev, const char *type,\n     struct qos_conf *qos_conf, *new_qos_conf = NULL;\n     int error = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n \n@@ -5409,7 +5294,7 @@ netdev_dpdk_set_qos(struct netdev *netdev, const char *type,\n         }\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5422,7 +5307,7 @@ netdev_dpdk_get_queue(const struct netdev *netdev, uint32_t queue_id,\n     struct qos_conf *qos_conf;\n     int error = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n     if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_get) {\n@@ -5431,7 +5316,7 @@ netdev_dpdk_get_queue(const struct netdev *netdev, uint32_t queue_id,\n         error = qos_conf->ops->qos_queue_get(details, queue_id, qos_conf);\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5444,7 +5329,7 @@ netdev_dpdk_set_queue(struct netdev *netdev, uint32_t queue_id,\n     struct qos_conf *qos_conf;\n     int error = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, 
&dev->qos_conf);\n     if (!qos_conf || !qos_conf->ops || !qos_conf->ops->qos_queue_construct) {\n@@ -5459,7 +5344,7 @@ netdev_dpdk_set_queue(struct netdev *netdev, uint32_t queue_id,\n                  queue_id, netdev_get_name(netdev), rte_strerror(error));\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5471,7 +5356,7 @@ netdev_dpdk_delete_queue(struct netdev *netdev, uint32_t queue_id)\n     struct qos_conf *qos_conf;\n     int error = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n     if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_destruct) {\n@@ -5480,7 +5365,7 @@ netdev_dpdk_delete_queue(struct netdev *netdev, uint32_t queue_id)\n         error =  EOPNOTSUPP;\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5493,7 +5378,7 @@ netdev_dpdk_get_queue_stats(const struct netdev *netdev, uint32_t queue_id,\n     struct qos_conf *qos_conf;\n     int error = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n     if (qos_conf && qos_conf->ops && qos_conf->ops->qos_queue_get_stats) {\n@@ -5502,7 +5387,7 @@ netdev_dpdk_get_queue_stats(const struct netdev *netdev, uint32_t queue_id,\n         error = EOPNOTSUPP;\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5514,7 +5399,7 @@ netdev_dpdk_queue_dump_start(const struct netdev *netdev, void **statep)\n     struct qos_conf *qos_conf;\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     qos_conf = ovsrcu_get_protected(struct qos_conf *, &dev->qos_conf);\n     if (qos_conf && 
qos_conf->ops\n@@ -5527,7 +5412,7 @@ netdev_dpdk_queue_dump_start(const struct netdev *netdev, void **statep)\n         error = EOPNOTSUPP;\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -5541,7 +5426,7 @@ netdev_dpdk_queue_dump_next(const struct netdev *netdev, void *state_,\n     struct qos_conf *qos_conf;\n     int error = EOF;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     while (state->cur_queue < state->n_queues) {\n         uint32_t queue_id = state->queues[state->cur_queue++];\n@@ -5554,7 +5439,7 @@ netdev_dpdk_queue_dump_next(const struct netdev *netdev, void *state_,\n         }\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return error;\n }\n@@ -6007,7 +5892,7 @@ dpdk_rx_steer_add_flow(struct netdev_dpdk *dev,\n         {\n             .type = RTE_FLOW_ACTION_TYPE_QUEUE,\n             .conf = &(const struct rte_flow_action_queue) {\n-                .index = dev->up.n_rxq - 1,\n+                .index = dev->common.up.n_rxq - 1,\n             },\n         },\n         { .type = RTE_FLOW_ACTION_TYPE_END },\n@@ -6018,19 +5903,20 @@ dpdk_rx_steer_add_flow(struct netdev_dpdk *dev,\n     int err;\n \n     set_error(&error, RTE_FLOW_ERROR_TYPE_NONE);\n-    err = rte_flow_validate(dev->port_id, &attr, items, actions, &error);\n+    err = rte_flow_validate(dev->common.port_id, &attr,\n+                            items, actions, &error);\n     if (err) {\n         VLOG_WARN(\"%s: rx-steering: device does not support %s flow: %s\",\n-                  netdev_get_name(&dev->up), desc,\n+                  netdev_get_name(&dev->common.up), desc,\n                   error.message ? 
error.message : \"\");\n         goto out;\n     }\n \n     set_error(&error, RTE_FLOW_ERROR_TYPE_NONE);\n-    flow = rte_flow_create(dev->port_id, &attr, items, actions, &error);\n+    flow = rte_flow_create(dev->common.port_id, &attr, items, actions, &error);\n     if (flow == NULL) {\n         VLOG_WARN(\"%s: rx-steering: failed to add %s flow: %s\",\n-                  netdev_get_name(&dev->up), desc,\n+                  netdev_get_name(&dev->common.up), desc,\n                   error.message ? error.message : \"\");\n         err = rte_errno;\n         goto out;\n@@ -6042,7 +5928,8 @@ dpdk_rx_steer_add_flow(struct netdev_dpdk *dev,\n     dev->rx_steer_flows_num = num;\n \n     VLOG_INFO(\"%s: rx-steering: redirected %s traffic to rx queue %d\",\n-              netdev_get_name(&dev->up), desc, dev->up.n_rxq - 1);\n+              netdev_get_name(&dev->common.up), desc,\n+              dev->common.up.n_rxq - 1);\n out:\n     return err;\n }\n@@ -6056,10 +5943,10 @@ dpdk_rx_steer_rss_configure(struct netdev_dpdk *dev, int rss_n_rxq)\n     struct rte_eth_dev_info info;\n     int err;\n \n-    err = rte_eth_dev_info_get(dev->port_id, &info);\n+    err = rte_eth_dev_info_get(dev->common.port_id, &info);\n     if (err < 0) {\n         VLOG_WARN(\"%s: failed to query RSS info: %s\",\n-                  netdev_get_name(&dev->up), rte_strerror(-err));\n+                  netdev_get_name(&dev->common.up), rte_strerror(-err));\n         goto error;\n     }\n \n@@ -6101,10 +5988,11 @@ dpdk_rx_steer_rss_configure(struct netdev_dpdk *dev, int rss_n_rxq)\n         reta_conf[idx].reta[shift] = i % rss_n_rxq;\n     }\n \n-    err = rte_eth_dev_rss_reta_update(dev->port_id, reta_conf, info.reta_size);\n+    err = rte_eth_dev_rss_reta_update(dev->common.port_id,\n+                                      reta_conf, info.reta_size);\n     if (err < 0) {\n         VLOG_WARN(\"%s: failed to configure RSS redirection table: err=%d\",\n-                  netdev_get_name(&dev->up), 
err);\n+                  netdev_get_name(&dev->common.up), err);\n     }\n \n error:\n@@ -6116,10 +6004,10 @@ dpdk_rx_steer_configure(struct netdev_dpdk *dev)\n {\n     int err = 0;\n \n-    if (dev->up.n_rxq < 2) {\n+    if (dev->common.up.n_rxq < 2) {\n         err = ENOTSUP;\n         VLOG_WARN(\"%s: rx-steering: not enough available rx queues\",\n-                  netdev_get_name(&dev->up));\n+                  netdev_get_name(&dev->common.up));\n         goto out;\n     }\n \n@@ -6144,16 +6032,17 @@ dpdk_rx_steer_configure(struct netdev_dpdk *dev)\n \n     if (dev->rx_steer_flows_num) {\n         /* Reconfigure RSS reta in all but the rx steering queue. */\n-        err = dpdk_rx_steer_rss_configure(dev, dev->up.n_rxq - 1);\n+        err = dpdk_rx_steer_rss_configure(dev, dev->common.up.n_rxq - 1);\n         if (err) {\n             goto out;\n         }\n-        if (dev->up.n_rxq == 2) {\n+        if (dev->common.up.n_rxq == 2) {\n             VLOG_INFO(\"%s: rx-steering: redirected other traffic to \"\n-                      \"rx queue 0\", netdev_get_name(&dev->up));\n+                      \"rx queue 0\", netdev_get_name(&dev->common.up));\n         } else {\n-            VLOG_INFO(\"%s: rx-steering: applied rss on rx queues 0-%u\",\n-                      netdev_get_name(&dev->up), dev->up.n_rxq - 2);\n+            VLOG_INFO(\"%s: rx-steering: applied rss on rx queues\"\n+                      \" 0-%u\", netdev_get_name(&dev->common.up),\n+                      dev->common.up.n_rxq - 2);\n         }\n     }\n \n@@ -6170,13 +6059,14 @@ dpdk_rx_steer_unconfigure(struct netdev_dpdk *dev)\n         return;\n     }\n \n-    VLOG_DBG(\"%s: rx-steering: reset flows\", netdev_get_name(&dev->up));\n+    VLOG_DBG(\"%s: rx-steering: reset flows\", netdev_get_name(&dev->common.up));\n \n     for (int i = 0; i < dev->rx_steer_flows_num; i++) {\n         set_error(&error, RTE_FLOW_ERROR_TYPE_NONE);\n-        if (rte_flow_destroy(dev->port_id, dev->rx_steer_flows[i], 
&error)) {\n+        if (rte_flow_destroy(dev->common.port_id,\n+                            dev->rx_steer_flows[i], &error)) {\n             VLOG_WARN(\"%s: rx-steering: failed to destroy flow: %s\",\n-                      netdev_get_name(&dev->up),\n+                      netdev_get_name(&dev->common.up),\n                       error.message ? error.message : \"\");\n         }\n     }\n@@ -6198,27 +6088,28 @@ netdev_dpdk_reconfigure(struct netdev *netdev)\n     bool try_rx_steer;\n     int err = 0;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     try_rx_steer = dev->requested_rx_steer_flags != 0;\n-    dev->requested_n_rxq = dev->user_n_rxq;\n+    dev->common.requested_n_rxq = dev->common.user_n_rxq;\n     if (try_rx_steer) {\n-        dev->requested_n_rxq += 1;\n+        dev->common.requested_n_rxq += 1;\n     }\n \n-    atomic_read_relaxed(&netdev_dpdk_pending_reset[dev->port_id],\n+    atomic_read_relaxed(&netdev_dpdk_pending_reset[dev->common.port_id],\n                         &pending_reset);\n \n-    if (netdev->n_txq == dev->requested_n_txq\n-        && netdev->n_rxq == dev->requested_n_rxq\n+    if (netdev->n_txq == dev->common.requested_n_txq\n+        && netdev->n_rxq == dev->common.requested_n_rxq\n         && dev->rx_steer_flags == dev->requested_rx_steer_flags\n-        && dev->mtu == dev->requested_mtu\n-        && dev->lsc_interrupt_mode == dev->requested_lsc_interrupt_mode\n-        && dev->rxq_size == dev->requested_rxq_size\n-        && dev->txq_size == dev->requested_txq_size\n-        && eth_addr_equals(dev->hwaddr, dev->requested_hwaddr)\n-        && dev->socket_id == dev->requested_socket_id\n-        && dev->started && !pending_reset) {\n+        && dev->common.mtu == dev->common.requested_mtu\n+        && dev->common.lsc_interrupt_mode ==\n+           dev->common.requested_lsc_interrupt_mode\n+        && dev->common.rxq_size == dev->common.requested_rxq_size\n+        && dev->common.txq_size == 
dev->common.requested_txq_size\n+        && eth_addr_equals(dev->common.hwaddr, dev->common.requested_hwaddr)\n+        && dev->common.socket_id == dev->common.requested_socket_id\n+        && dev->common.started && !pending_reset) {\n         /* Reconfiguration is unnecessary */\n \n         goto out;\n@@ -6232,33 +6123,34 @@ retry:\n          * Set false before reset to avoid missing a new reset interrupt event\n          * in a race with event callback.\n          */\n-        atomic_store_relaxed(&netdev_dpdk_pending_reset[dev->port_id], false);\n-        rte_eth_dev_reset(dev->port_id);\n+        atomic_store_relaxed(\n+            &netdev_dpdk_pending_reset[dev->common.port_id], false);\n+        rte_eth_dev_reset(dev->common.port_id);\n         if_notifier_manual_report();\n     } else {\n-        rte_eth_dev_stop(dev->port_id);\n+        rte_eth_dev_stop(dev->common.port_id);\n     }\n \n-    dev->started = false;\n+    dev->common.started = false;\n \n     err = netdev_dpdk_mempool_configure(dev);\n     if (err && err != EEXIST) {\n         goto out;\n     }\n \n-    dev->lsc_interrupt_mode = dev->requested_lsc_interrupt_mode;\n+    dev->common.lsc_interrupt_mode = dev->common.requested_lsc_interrupt_mode;\n \n-    netdev->n_txq = dev->requested_n_txq;\n-    netdev->n_rxq = dev->requested_n_rxq;\n+    netdev->n_txq = dev->common.requested_n_txq;\n+    netdev->n_rxq = dev->common.requested_n_rxq;\n \n-    dev->rxq_size = dev->requested_rxq_size;\n-    dev->txq_size = dev->requested_txq_size;\n+    dev->common.rxq_size = dev->common.requested_rxq_size;\n+    dev->common.txq_size = dev->common.requested_txq_size;\n \n-    rte_free(dev->tx_q);\n-    dev->tx_q = NULL;\n+    rte_free(dev->common.tx_q);\n+    dev->common.tx_q = NULL;\n \n-    if (!eth_addr_equals(dev->hwaddr, dev->requested_hwaddr)) {\n-        err = netdev_dpdk_set_etheraddr__(dev, dev->requested_hwaddr);\n+    if (!eth_addr_equals(dev->common.hwaddr, dev->common.requested_hwaddr)) {\n+        
err = netdev_dpdk_set_etheraddr__(dev, dev->common.requested_hwaddr);\n         if (err) {\n             goto out;\n         }\n@@ -6280,7 +6172,7 @@ retry:\n      * configured by the user, as netdev_dpdk_set_etheraddr__()\n      * will have succeeded to get to this point.\n      */\n-    dev->requested_hwaddr = dev->hwaddr;\n+    dev->common.requested_hwaddr = dev->common.hwaddr;\n \n     if (try_rx_steer) {\n         err = dpdk_rx_steer_configure(dev);\n@@ -6291,46 +6183,47 @@ retry:\n              * The extra queue must be explicitly removed here to ensure that\n              * it is unconfigured immediately.\n              */\n-            dev->requested_n_rxq = dev->user_n_rxq;\n+            dev->common.requested_n_rxq = dev->common.user_n_rxq;\n             goto retry;\n         }\n     } else {\n-        VLOG_INFO(\"%s: rx-steering: default rss\", netdev_get_name(&dev->up));\n+        VLOG_INFO(\"%s: rx-steering: default rss\",\n+                  netdev_get_name(&dev->common.up));\n     }\n     dev->rx_steer_flags = dev->requested_rx_steer_flags;\n \n-    dev->tx_q = netdev_dpdk_alloc_txq(netdev->n_txq);\n-    if (!dev->tx_q) {\n+    dev->common.tx_q = netdev_dpdk_alloc_txq(netdev->n_txq);\n+    if (!dev->common.tx_q) {\n         err = ENOMEM;\n     }\n \n     netdev_change_seq_changed(netdev);\n \n out:\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return err;\n }\n \n static int\n dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)\n-    OVS_REQUIRES(dev->mutex)\n+    OVS_REQUIRES(dev->common.mutex)\n {\n-    dev->up.n_txq = dev->requested_n_txq;\n-    dev->up.n_rxq = dev->requested_n_rxq;\n+    dev->common.up.n_txq = dev->common.requested_n_txq;\n+    dev->common.up.n_rxq = dev->common.requested_n_rxq;\n \n     /* Always keep RX queue 0 enabled for implementations that won't\n      * report vring states. */\n     dev->vhost_rxq_enabled[0] = true;\n \n     /* Enable TX queue 0 by default if it wasn't disabled. 
*/\n-    if (dev->tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {\n-        dev->tx_q[0].map = 0;\n+    if (dev->common.tx_q[0].map == OVS_VHOST_QUEUE_MAP_UNKNOWN) {\n+        dev->common.tx_q[0].map = 0;\n     }\n \n-    rte_spinlock_lock(&dev->stats_lock);\n-    memset(&dev->stats, 0, sizeof dev->stats);\n-    memset(dev->sw_stats, 0, sizeof *dev->sw_stats);\n-    rte_spinlock_unlock(&dev->stats_lock);\n+    rte_spinlock_lock(&dev->common.stats_lock);\n+    memset(&dev->common.stats, 0, sizeof dev->common.stats);\n+    memset(dev->common.sw_stats, 0, sizeof *dev->common.sw_stats);\n+    rte_spinlock_unlock(&dev->common.stats_lock);\n \n     netdev_dpdk_remap_txqs(dev);\n \n@@ -6340,7 +6233,7 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)\n         err = netdev_dpdk_mempool_configure(dev);\n         if (!err) {\n             /* A new mempool was created or re-used. */\n-            netdev_change_seq_changed(&dev->up);\n+            netdev_change_seq_changed(&dev->common.up);\n         } else if (err != EEXIST) {\n             return err;\n         }\n@@ -6348,7 +6241,7 @@ dpdk_vhost_reconfigure_helper(struct netdev_dpdk *dev)\n         if (dev->vhost_reconfigured == false) {\n             dev->vhost_reconfigured = true;\n             /* Carrier status may need updating. 
*/\n-            netdev_change_seq_changed(&dev->up);\n+            netdev_change_seq_changed(&dev->common.up);\n         }\n     }\n \n@@ -6363,9 +6256,9 @@ netdev_dpdk_vhost_reconfigure(struct netdev *netdev)\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int err;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     err = dpdk_vhost_reconfigure_helper(dev);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return err;\n }\n@@ -6378,7 +6271,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n     char *vhost_id;\n     int err;\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     if (dev->vhost_driver_flags & RTE_VHOST_USER_CLIENT && dev->vhost_id\n         && dev->virtio_features_state & OVS_VIRTIO_F_RECONF_PENDING) {\n@@ -6391,13 +6284,13 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n         unregister = true;\n     }\n \n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     if (unregister) {\n         dpdk_vhost_driver_unregister(dev, vhost_id);\n     }\n \n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n \n     /* Configure vHost client mode if requested and if the following criteria\n      * are met:\n@@ -6451,14 +6344,14 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n             dev->vhost_driver_flags |= vhost_flags;\n             VLOG_INFO(\"vHost User device '%s' created in 'client' mode, \"\n                       \"using client socket '%s'\",\n-                      dev->up.name, dev->vhost_id);\n+                      dev->common.up.name, dev->vhost_id);\n         }\n \n         err = rte_vhost_driver_callback_register(dev->vhost_id,\n                                                  &virtio_net_device_ops);\n         if (err) {\n             VLOG_ERR(\"rte_vhost_driver_callback_register failed for \"\n-              
       \"vhost user client port: %s\\n\", dev->up.name);\n+                     \"vhost user client port: %s\\n\", dev->common.up.name);\n             goto unlock;\n         }\n \n@@ -6466,7 +6359,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n             virtio_unsup_features = 1ULL << VIRTIO_NET_F_HOST_ECN\n                                     | 1ULL << VIRTIO_NET_F_HOST_UFO;\n             VLOG_DBG(\"%s: TSO enabled on vhost port\",\n-                     netdev_get_name(&dev->up));\n+                     netdev_get_name(&dev->common.up));\n         } else {\n             /* Advertise checksum offloading to the guest, but explicitly\n              * disable TSO and friends.\n@@ -6481,7 +6374,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n                                                 virtio_unsup_features);\n         if (err) {\n             VLOG_ERR(\"rte_vhost_driver_disable_features failed for \"\n-                     \"vhost user client port: %s\\n\", dev->up.name);\n+                     \"vhost user client port: %s\\n\", dev->common.up.name);\n             goto unlock;\n         }\n \n@@ -6492,7 +6385,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n             err = rte_vhost_driver_set_max_queue_num(dev->vhost_id, max_qp);\n             if (err) {\n                 VLOG_ERR(\"rte_vhost_driver_set_max_queue_num failed for \"\n-                         \"vhost-user client port: %s\\n\", dev->up.name);\n+                         \"vhost-user client port: %s\\n\", dev->common.up.name);\n                 goto unlock;\n             }\n         }\n@@ -6500,7 +6393,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n         err = rte_vhost_driver_start(dev->vhost_id);\n         if (err) {\n             VLOG_ERR(\"rte_vhost_driver_start failed for vhost user \"\n-                     \"client port: %s\\n\", dev->up.name);\n+                     \"client port: %s\\n\", 
dev->common.up.name);\n             goto unlock;\n         }\n     }\n@@ -6508,7 +6401,7 @@ netdev_dpdk_vhost_client_reconfigure(struct netdev *netdev)\n     err = dpdk_vhost_reconfigure_helper(dev);\n \n unlock:\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n \n     return err;\n }\n@@ -6524,9 +6417,9 @@ netdev_dpdk_get_port_id(struct netdev *netdev)\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = dev->port_id;\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret = dev->common.port_id;\n+    ovs_mutex_unlock(&dev->common.mutex);\n out:\n     return ret;\n }\n@@ -6549,7 +6442,7 @@ netdev_dpdk_flow_api_supported(struct netdev *netdev, bool check_only)\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n     if (dev->type == DPDK_DEV_ETH) {\n         if (dev->requested_rx_steer_flags && !check_only) {\n             VLOG_WARN(\"%s: rx-steering is mutually exclusive with hw-offload,\"\n@@ -6561,7 +6454,7 @@ netdev_dpdk_flow_api_supported(struct netdev *netdev, bool check_only)\n         /* TODO: Check if we able to offload some minimal flow. 
*/\n         ret = true;\n     }\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n out:\n     return ret;\n }\n@@ -6574,7 +6467,7 @@ netdev_dpdk_rte_flow_destroy(struct netdev *netdev,\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n     int ret;\n \n-    ret = rte_flow_destroy(dev->port_id, rte_flow, error);\n+    ret = rte_flow_destroy(dev->common.port_id, rte_flow, error);\n     return ret;\n }\n \n@@ -6588,7 +6481,7 @@ netdev_dpdk_rte_flow_create(struct netdev *netdev,\n     struct rte_flow *flow;\n     struct netdev_dpdk *dev = netdev_dpdk_cast(netdev);\n \n-    flow = rte_flow_create(dev->port_id, attr, items, actions, error);\n+    flow = rte_flow_create(dev->common.port_id, attr, items, actions, error);\n     return flow;\n }\n \n@@ -6616,7 +6509,7 @@ netdev_dpdk_rte_flow_query_count(struct netdev *netdev,\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ret = rte_flow_query(dev->port_id, rte_flow, actions, query, error);\n+    ret = rte_flow_query(dev->common.port_id, rte_flow, actions, query, error);\n     return ret;\n }\n \n@@ -6637,10 +6530,10 @@ netdev_dpdk_rte_flow_tunnel_decap_set(struct netdev *netdev,\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = rte_flow_tunnel_decap_set(dev->port_id, tunnel, actions,\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret = rte_flow_tunnel_decap_set(dev->common.port_id, tunnel, actions,\n                                     num_of_actions, error);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return ret;\n }\n \n@@ -6659,10 +6552,10 @@ netdev_dpdk_rte_flow_tunnel_match(struct netdev *netdev,\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = rte_flow_tunnel_match(dev->port_id, tunnel, items, num_of_items,\n-                                error);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret 
= rte_flow_tunnel_match(dev->common.port_id, tunnel,\n+                                items, num_of_items, error);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return ret;\n }\n \n@@ -6681,9 +6574,9 @@ netdev_dpdk_rte_flow_get_restore_info(struct netdev *netdev,\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = rte_flow_get_restore_info(dev->port_id, m, info, error);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret = rte_flow_get_restore_info(dev->common.port_id, m, info, error);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return ret;\n }\n \n@@ -6702,10 +6595,10 @@ netdev_dpdk_rte_flow_tunnel_action_decap_release(\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = rte_flow_tunnel_action_decap_release(dev->port_id, actions,\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret = rte_flow_tunnel_action_decap_release(dev->common.port_id, actions,\n                                                num_of_actions, error);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return ret;\n }\n \n@@ -6723,10 +6616,10 @@ netdev_dpdk_rte_flow_tunnel_item_release(struct netdev *netdev,\n     }\n \n     dev = netdev_dpdk_cast(netdev);\n-    ovs_mutex_lock(&dev->mutex);\n-    ret = rte_flow_tunnel_item_release(dev->port_id, items, num_of_items,\n-                                       error);\n-    ovs_mutex_unlock(&dev->mutex);\n+    ovs_mutex_lock(&dev->common.mutex);\n+    ret = rte_flow_tunnel_item_release(dev->common.port_id,\n+                                        items, num_of_items, error);\n+    ovs_mutex_unlock(&dev->common.mutex);\n     return ret;\n }\n \n",
    "prefixes": [
        "ovs-dev",
        "v3",
        "04/11"
    ]
}