From patchwork Tue Oct 11 05:56:47 2022
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1688463
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 02/19] gve: Add a jumbo-frame device option.
Date: Tue, 11 Oct 2022 01:56:47 -0400
Message-Id: <20221011055704.642271-3-khalid.elmously@canonical.com>
In-Reply-To: <20221011055704.642271-1-khalid.elmously@canonical.com>
References: <20221011055704.642271-1-khalid.elmously@canonical.com>

From: Shailend Chand

BugLink: https://bugs.launchpad.net/bugs/1953575

A widely deployed driver has a bug that will cause it not to load when
a max_mtu greater than 2048 is present in the device descriptor. To
avoid triggering this bug while still enabling jumbo frames, we present
a lower max_mtu in the device descriptor and pass the actual max_mtu in
a separate device option.

The driver supports two different queue formats. To enable features on
one queue format but not the other, a supported_features mask was added
to the device options in the device descriptor.

Signed-off-by: Shailend Chand
Signed-off-by: Jeroen de Borst
Signed-off-by: David S. Miller
(cherry picked from commit 255489f5b33ccec046be689dd45b5ccdec2b2a32)
Signed-off-by: Khalid Elmously
---
 drivers/net/ethernet/google/gve/gve_adminq.c | 58 ++++++++++++++++++--
 drivers/net/ethernet/google/gve/gve_adminq.h | 14 +++++
 2 files changed, 68 insertions(+), 4 deletions(-)
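[ For reviewers unfamiliar with the scheme above, here is a minimal,
  self-contained C sketch of the negotiation the commit message
  describes: the descriptor advertises a conservative max_mtu, and the
  true max_mtu arrives in a separate option gated by a feature bit.
  This is NOT gve code; jumbo_frames_option and SUP_JUMBO_FRAMES_MASK
  are simplified stand-ins for the real definitions in gve_adminq.h. ]

/* sketch.c -- illustration only, not driver code */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>

#define SUP_JUMBO_FRAMES_MASK (1u << 2) /* mirrors GVE_SUP_JUMBO_FRAMES_MASK */

struct jumbo_frames_option {            /* stand-in for the 8-byte wire struct */
	uint32_t supported_features_mask;   /* big-endian on the wire */
	uint16_t max_mtu;                   /* big-endian on the wire */
	uint8_t  padding[2];
};

int main(void)
{
	/* The descriptor advertises a capped MTU so buggy drivers still load. */
	uint16_t descriptor_max_mtu = 1460;
	uint16_t max_mtu = descriptor_max_mtu;

	/* Pretend the device also sent the jumbo-frames option (wire order). */
	struct jumbo_frames_option opt = {
		.supported_features_mask = htonl(SUP_JUMBO_FRAMES_MASK),
		.max_mtu = htons(9000),
	};

	/* Only honor the larger MTU when the feature bit is set, as the
	 * patch does in gve_enable_supported_features(). */
	if (ntohl(opt.supported_features_mask) & SUP_JUMBO_FRAMES_MASK)
		max_mtu = ntohs(opt.max_mtu);

	printf("descriptor max_mtu=%u, effective max_mtu=%u\n",
	       descriptor_max_mtu, max_mtu);
	return 0;
}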
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
index ce507464f3d6..fbe652454722 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.c
+++ b/drivers/net/ethernet/google/gve/gve_adminq.c
@@ -38,7 +38,8 @@ void gve_parse_device_option(struct gve_priv *priv,
 			     struct gve_device_option *option,
 			     struct gve_device_option_gqi_rda **dev_op_gqi_rda,
 			     struct gve_device_option_gqi_qpl **dev_op_gqi_qpl,
-			     struct gve_device_option_dqo_rda **dev_op_dqo_rda)
+			     struct gve_device_option_dqo_rda **dev_op_dqo_rda,
+			     struct gve_device_option_jumbo_frames **dev_op_jumbo_frames)
 {
 	u32 req_feat_mask = be32_to_cpu(option->required_features_mask);
 	u16 option_length = be16_to_cpu(option->option_length);
@@ -111,6 +112,24 @@ void gve_parse_device_option(struct gve_priv *priv,
 		}
 		*dev_op_dqo_rda = (void *)(option + 1);
 		break;
+	case GVE_DEV_OPT_ID_JUMBO_FRAMES:
+		if (option_length < sizeof(**dev_op_jumbo_frames) ||
+		    req_feat_mask != GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES) {
+			dev_warn(&priv->pdev->dev, GVE_DEVICE_OPTION_ERROR_FMT,
+				 "Jumbo Frames",
+				 (int)sizeof(**dev_op_jumbo_frames),
+				 GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES,
+				 option_length, req_feat_mask);
+			break;
+		}
+
+		if (option_length > sizeof(**dev_op_jumbo_frames)) {
+			dev_warn(&priv->pdev->dev,
+				 GVE_DEVICE_OPTION_TOO_BIG_FMT,
+				 "Jumbo Frames");
+		}
+		*dev_op_jumbo_frames = (void *)(option + 1);
+		break;
 	default:
 		/* If we don't recognize the option just continue
 		 * without doing anything.
@@ -126,7 +145,8 @@ gve_process_device_options(struct gve_priv *priv,
 			   struct gve_device_descriptor *descriptor,
 			   struct gve_device_option_gqi_rda **dev_op_gqi_rda,
 			   struct gve_device_option_gqi_qpl **dev_op_gqi_qpl,
-			   struct gve_device_option_dqo_rda **dev_op_dqo_rda)
+			   struct gve_device_option_dqo_rda **dev_op_dqo_rda,
+			   struct gve_device_option_jumbo_frames **dev_op_jumbo_frames)
 {
 	const int num_options = be16_to_cpu(descriptor->num_device_options);
 	struct gve_device_option *dev_opt;
@@ -146,7 +166,7 @@ gve_process_device_options(struct gve_priv *priv,
 
 		gve_parse_device_option(priv, descriptor, dev_opt,
 					dev_op_gqi_rda, dev_op_gqi_qpl,
-					dev_op_dqo_rda);
+					dev_op_dqo_rda, dev_op_jumbo_frames);
 		dev_opt = next_opt;
 	}
 
@@ -660,12 +680,31 @@ gve_set_desc_cnt_dqo(struct gve_priv *priv,
 	return 0;
 }
 
+static void gve_enable_supported_features(struct gve_priv *priv,
+					  u32 supported_features_mask,
+					  const struct gve_device_option_jumbo_frames
+						  *dev_op_jumbo_frames)
+{
+	/* Before control reaches this point, the page-size-capped max MTU from
+	 * the gve_device_descriptor field has already been stored in
+	 * priv->dev->max_mtu. We overwrite it with the true max MTU below.
+	 */
+	if (dev_op_jumbo_frames &&
+	    (supported_features_mask & GVE_SUP_JUMBO_FRAMES_MASK)) {
+		dev_info(&priv->pdev->dev,
+			 "JUMBO FRAMES device option enabled.\n");
+		priv->dev->max_mtu = be16_to_cpu(dev_op_jumbo_frames->max_mtu);
+	}
+}
+
 int gve_adminq_describe_device(struct gve_priv *priv)
 {
+	struct gve_device_option_jumbo_frames *dev_op_jumbo_frames = NULL;
 	struct gve_device_option_gqi_rda *dev_op_gqi_rda = NULL;
 	struct gve_device_option_gqi_qpl *dev_op_gqi_qpl = NULL;
 	struct gve_device_option_dqo_rda *dev_op_dqo_rda = NULL;
 	struct gve_device_descriptor *descriptor;
+	u32 supported_features_mask = 0;
 	union gve_adminq_command cmd;
 	dma_addr_t descriptor_bus;
 	int err = 0;
@@ -689,7 +728,8 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 		goto free_device_descriptor;
 
 	err = gve_process_device_options(priv, descriptor, &dev_op_gqi_rda,
-					 &dev_op_gqi_qpl, &dev_op_dqo_rda);
+					 &dev_op_gqi_qpl, &dev_op_dqo_rda,
+					 &dev_op_jumbo_frames);
 	if (err)
 		goto free_device_descriptor;
 
@@ -704,12 +744,19 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 		priv->queue_format = GVE_DQO_RDA_FORMAT;
 		dev_info(&priv->pdev->dev,
 			 "Driver is running with DQO RDA queue format.\n");
+		supported_features_mask =
+			be32_to_cpu(dev_op_dqo_rda->supported_features_mask);
 	} else if (dev_op_gqi_rda) {
 		priv->queue_format = GVE_GQI_RDA_FORMAT;
 		dev_info(&priv->pdev->dev,
 			 "Driver is running with GQI RDA queue format.\n");
+		supported_features_mask =
+			be32_to_cpu(dev_op_gqi_rda->supported_features_mask);
 	} else {
 		priv->queue_format = GVE_GQI_QPL_FORMAT;
+		if (dev_op_gqi_qpl)
+			supported_features_mask =
+				be32_to_cpu(dev_op_gqi_qpl->supported_features_mask);
 		dev_info(&priv->pdev->dev,
 			 "Driver is running with GQI QPL queue format.\n");
 	}
@@ -746,6 +793,9 @@ int gve_adminq_describe_device(struct gve_priv *priv)
 	}
 	priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues);
 
+	gve_enable_supported_features(priv, supported_features_mask,
+				      dev_op_jumbo_frames);
+
 free_device_descriptor:
 	dma_free_coherent(&priv->pdev->dev, PAGE_SIZE, descriptor,
 			  descriptor_bus);
diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h
index 3953f6f7a427..83c0b40cd2d9 100644
--- a/drivers/net/ethernet/google/gve/gve_adminq.h
+++ b/drivers/net/ethernet/google/gve/gve_adminq.h
@@ -108,6 +108,14 @@ struct gve_device_option_dqo_rda {
 
 static_assert(sizeof(struct gve_device_option_dqo_rda) == 8);
 
+struct gve_device_option_jumbo_frames {
+	__be32 supported_features_mask;
+	__be16 max_mtu;
+	u8 padding[2];
+};
+
+static_assert(sizeof(struct gve_device_option_jumbo_frames) == 8);
+
 /* Terminology:
  *
  * RDA - Raw DMA Addressing - Buffers associated with SKBs are directly DMA
@@ -121,6 +129,7 @@ enum gve_dev_opt_id {
 	GVE_DEV_OPT_ID_GQI_RDA = 0x2,
 	GVE_DEV_OPT_ID_GQI_QPL = 0x3,
 	GVE_DEV_OPT_ID_DQO_RDA = 0x4,
+	GVE_DEV_OPT_ID_JUMBO_FRAMES = 0x8,
 };
 
 enum gve_dev_opt_req_feat_mask {
@@ -128,6 +137,11 @@ enum gve_dev_opt_req_feat_mask {
 	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_RDA = 0x0,
 	GVE_DEV_OPT_REQ_FEAT_MASK_GQI_QPL = 0x0,
 	GVE_DEV_OPT_REQ_FEAT_MASK_DQO_RDA = 0x0,
+	GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES = 0x0,
+};
+
+enum gve_sup_feature_mask {
+	GVE_SUP_JUMBO_FRAMES_MASK = 1 << 2,
 };
 
 #define GVE_DEV_OPT_LEN_GQI_RAW_ADDRESSING 0x0
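[ As a closing illustration of the validation pattern the new
  GVE_DEV_OPT_ID_JUMBO_FRAMES case follows in gve_parse_device_option():
  reject an option whose length is too short or whose required-features
  mask is unexpected, warn but accept one that is longer than expected.
  This is a hypothetical standalone sketch, not driver code; the names
  parse_jumbo_option, OPT_ID_JUMBO_FRAMES and REQ_FEAT_MASK_JUMBO are
  simplified stand-ins for the gve definitions. ]

/* validate.c -- illustration only, not driver code */
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

#define OPT_ID_JUMBO_FRAMES  0x8  /* mirrors GVE_DEV_OPT_ID_JUMBO_FRAMES */
#define REQ_FEAT_MASK_JUMBO  0x0  /* mirrors GVE_DEV_OPT_REQ_FEAT_MASK_JUMBO_FRAMES */

struct jumbo_frames_option {
	uint32_t supported_features_mask;
	uint16_t max_mtu;
	uint8_t  padding[2];
};

/* The real header static_asserts the option body is exactly 8 bytes. */
static_assert(sizeof(struct jumbo_frames_option) == 8, "wire size");

static int parse_jumbo_option(uint16_t option_length, uint32_t req_feat_mask)
{
	if (option_length < sizeof(struct jumbo_frames_option) ||
	    req_feat_mask != REQ_FEAT_MASK_JUMBO) {
		fprintf(stderr, "unexpected jumbo-frames option, skipping\n");
		return -1;  /* option ignored; the driver still loads */
	}
	if (option_length > sizeof(struct jumbo_frames_option))
		fprintf(stderr, "option longer than expected, using prefix\n");
	return 0;
}

int main(void)
{
	printf("valid:     %d\n", parse_jumbo_option(8, 0x0)); /* accepted */
	printf("too short: %d\n", parse_jumbo_option(4, 0x0)); /* rejected */
	return 0;
}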