From patchwork Tue Nov 3 17:46:48 2020 X-Patchwork-Submitter: David Awogbemila X-Patchwork-Id: 1393267 X-Patchwork-Delegate: kuba@kernel.org Date: Tue, 3 Nov 2020 09:46:48 -0800 In-Reply-To: <20201103174651.590586-1-awogbemila@google.com> Message-Id:
<20201103174651.590586-2-awogbemila@google.com> Mime-Version: 1.0 References: <20201103174651.590586-1-awogbemila@google.com> X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog Subject: [PATCH 1/4] gve: Add support for raw addressing device option From: David Awogbemila To: netdev@vger.kernel.org Cc: Catherine Sullivan, Yangchun Fu, David Awogbemila Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Catherine Sullivan Add support for parsing device options as part of describing the device. As the first device option, add raw addressing. "Raw Addressing" mode (as opposed to the current "qpl" mode) is an operational mode which allows the driver to avoid the bounce buffer copies it currently performs using pre-allocated qpls (queue_page_lists) when sending and receiving packets. For egress packets, the provided skb data addresses will be dma_map'ed and passed to the device, allowing the NIC to perform DMA directly - the driver will not have to copy the buffer content into pre-allocated buffers/qpls (as in qpl mode). For ingress packets, copies are also eliminated as buffers are handed to the networking stack and then recycled or re-allocated as necessary, avoiding the use of skb_copy_to_linear_data(). This patch only introduces the option to the driver. Subsequent patches will add the ingress and egress functionality. Reviewed-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 1 + drivers/net/ethernet/google/gve/gve_adminq.c | 52 ++++++++++++++++++++ drivers/net/ethernet/google/gve/gve_adminq.h | 15 ++++-- drivers/net/ethernet/google/gve/gve_main.c | 9 ++++ 4 files changed, 73 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index f5c80229ea96..80cdae06ee39 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -199,6 +199,7 @@ struct gve_priv { u64 num_registered_pages; /* num pages registered with NIC */ u32 rx_copybreak; /* copy packets smaller than this */ u16 default_num_queues; /* default num queues to set up */ + bool raw_addressing; /* true if this dev supports raw addressing */ struct gve_queue_config tx_cfg; struct gve_queue_config rx_cfg; diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 24ae6a28a806..0b7a2653fe33 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -460,11 +460,14 @@ int gve_adminq_destroy_rx_queues(struct gve_priv *priv, u32 num_queues) int gve_adminq_describe_device(struct gve_priv *priv) { struct gve_device_descriptor *descriptor; + struct gve_device_option *dev_opt; union gve_adminq_command cmd; dma_addr_t descriptor_bus; + u16 num_options; int err = 0; u8 *mac; u16 mtu; + int i; memset(&cmd, 0, sizeof(cmd)); descriptor = dma_alloc_coherent(&priv->pdev->dev, PAGE_SIZE, @@ -518,6 +521,55 @@ int gve_adminq_describe_device(struct gve_priv *priv) priv->rx_desc_cnt = priv->rx_pages_per_qpl; } priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues); + dev_opt = (void *)(descriptor + 1); + + num_options = be16_to_cpu(descriptor->num_device_options); + for (i = 0; i < num_options; i++) { + u16 option_length = be16_to_cpu(dev_opt->option_length); + u16 option_id = be16_to_cpu(dev_opt->option_id); + void *option_end; + + option_end = (void *)dev_opt + sizeof(*dev_opt) + option_length; + if (option_end > (void *)descriptor +
be16_to_cpu(descriptor->total_length)) { + dev_err(&priv->dev->dev, + "options exceed device_descriptor's total length.\n"); + err = -EINVAL; + goto free_device_descriptor; + } + + switch (option_id) { + case GVE_DEV_OPT_ID_RAW_ADDRESSING: + /* If the length or feature mask doesn't match, + * continue without enabling the feature. + */ + if (option_length != GVE_DEV_OPT_LEN_RAW_ADDRESSING || + dev_opt->feat_mask != + cpu_to_be32(GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING)) { + dev_warn(&priv->pdev->dev, + "Raw addressing option error:\n" + " Expected: length=%d, feature_mask=%x.\n" + " Actual: length=%d, feature_mask=%x.\n", + GVE_DEV_OPT_LEN_RAW_ADDRESSING, + cpu_to_be32(GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING), + option_length, dev_opt->feat_mask); + priv->raw_addressing = false; + } else { + dev_info(&priv->pdev->dev, + "Raw addressing device option enabled.\n"); + priv->raw_addressing = true; + } + break; + default: + /* If we don't recognize the option just continue + * without doing anything. + */ + dev_dbg(&priv->pdev->dev, + "Unrecognized device option 0x%hx not enabled.\n", + option_id); + break; + } + dev_opt = (void *)dev_opt + sizeof(*dev_opt) + option_length; + } free_device_descriptor: dma_free_coherent(&priv->pdev->dev, sizeof(*descriptor), descriptor, diff --git a/drivers/net/ethernet/google/gve/gve_adminq.h b/drivers/net/ethernet/google/gve/gve_adminq.h index 281de8326bc5..af5f586167bd 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.h +++ b/drivers/net/ethernet/google/gve/gve_adminq.h @@ -79,12 +79,17 @@ struct gve_device_descriptor { static_assert(sizeof(struct gve_device_descriptor) == 40); -struct device_option { - __be32 option_id; - __be32 option_length; +struct gve_device_option { + __be16 option_id; + __be16 option_length; + __be32 feat_mask; }; -static_assert(sizeof(struct device_option) == 8); +static_assert(sizeof(struct gve_device_option) == 8); + +#define GVE_DEV_OPT_ID_RAW_ADDRESSING 0x1 +#define GVE_DEV_OPT_LEN_RAW_ADDRESSING 0x0 +#define GVE_DEV_OPT_FEAT_MASK_RAW_ADDRESSING 0x0 struct gve_adminq_configure_device_resources { __be64 counter_array; @@ -111,6 +116,8 @@ struct gve_adminq_unregister_page_list { static_assert(sizeof(struct gve_adminq_unregister_page_list) == 4); +#define GVE_RAW_ADDRESSING_QPL_ID 0xFFFFFFFF + struct gve_adminq_create_tx_queue { __be32 queue_id; __be32 reserved; diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 48a433154ce0..70685c10db0e 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -678,6 +678,10 @@ static int gve_alloc_qpls(struct gve_priv *priv) int i, j; int err; + /* Raw addressing means no QPLs */ + if (priv->raw_addressing) + return 0; + priv->qpls = kvzalloc(num_qpls * sizeof(*priv->qpls), GFP_KERNEL); if (!priv->qpls) return -ENOMEM; @@ -718,6 +722,10 @@ static void gve_free_qpls(struct gve_priv *priv) int num_qpls = gve_num_tx_qpls(priv) + gve_num_rx_qpls(priv); int i; + /* Raw addressing means no QPLs */ + if (priv->raw_addressing) + return; + kvfree(priv->qpl_cfg.qpl_id_map); for (i = 0; i < num_qpls; i++) @@ -1078,6 +1086,7 @@ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device) if (skip_describe_device) goto setup_device; + priv->raw_addressing = false; /* Get the initial information we need from the device */ err = gve_adminq_describe_device(priv); if (err) { From patchwork Tue Nov 3 17:46:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: David Awogbemila X-Patchwork-Id: 1393268 X-Patchwork-Delegate: kuba@kernel.org Date: Tue, 3 Nov 2020 09:46:49 -0800 In-Reply-To: <20201103174651.590586-1-awogbemila@google.com> Message-Id: <20201103174651.590586-3-awogbemila@google.com> References: <20201103174651.590586-1-awogbemila@google.com>
X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog Subject: [PATCH 2/4] gve: Add support for raw addressing to the rx path From: David Awogbemila To: netdev@vger.kernel.org Cc: Catherine Sullivan , Yangchun Fu , David Awogbemila Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Catherine Sullivan Add support to use raw dma addresses in the rx path. Due to this new support we can alloc a new buffer instead of making a copy. RX buffers are handed to the networking stack and are re-allocated as needed, avoiding the need to use skb_copy_to_linear_data() as in "qpl" mode. Reviewed-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 9 +- drivers/net/ethernet/google/gve/gve_adminq.c | 14 +- drivers/net/ethernet/google/gve/gve_desc.h | 10 +- drivers/net/ethernet/google/gve/gve_main.c | 3 +- drivers/net/ethernet/google/gve/gve_rx.c | 220 +++++++++++++++---- 5 files changed, 203 insertions(+), 53 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 80cdae06ee39..5c48f9e2d04c 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -68,6 +68,7 @@ struct gve_rx_data_queue { dma_addr_t data_bus; /* dma mapping of the slots */ struct gve_rx_slot_page_info *page_info; /* page info of the buffers */ struct gve_queue_page_list *qpl; /* qpl assigned to this queue */ + bool raw_addressing; /* use raw_addressing? */ }; struct gve_priv; @@ -82,6 +83,7 @@ struct gve_rx_ring { u32 cnt; /* free-running total number of completed packets */ u32 fill_cnt; /* free-running total number of descs and buffs posted */ u32 mask; /* masks the cnt and fill_cnt to the size of the ring */ + u32 db_threshold; /* threshold for posting new buffs and descs */ u64 rx_copybreak_pkt; /* free-running count of copybreak packets */ u64 rx_copied_pkt; /* free-running total number of copied packets */ u64 rx_skb_alloc_fail; /* free-running count of skb alloc fails */ @@ -194,7 +196,7 @@ struct gve_priv { u16 tx_desc_cnt; /* num desc per ring */ u16 rx_desc_cnt; /* num desc per ring */ u16 tx_pages_per_qpl; /* tx buffer length */ - u16 rx_pages_per_qpl; /* rx buffer length */ + u16 rx_data_slot_cnt; /* rx buffer length */ u64 max_registered_pages; u64 num_registered_pages; /* num pages registered with NIC */ u32 rx_copybreak; /* copy packets smaller than this */ @@ -444,7 +446,10 @@ static inline u32 gve_num_tx_qpls(struct gve_priv *priv) */ static inline u32 gve_num_rx_qpls(struct gve_priv *priv) { - return priv->rx_cfg.num_queues; + if (priv->raw_addressing) + return 0; + else + return priv->rx_cfg.num_queues; } /* Returns a pointer to the next available tx qpl in the list of qpls diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 0b7a2653fe33..3a7a12b3d144 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -357,8 +357,10 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) { struct gve_rx_ring *rx = &priv->rx[queue_index]; union gve_adminq_command cmd; + u32 qpl_id; int err; + qpl_id = priv->raw_addressing ? 
GVE_RAW_ADDRESSING_QPL_ID : rx->data.qpl->id; memset(&cmd, 0, sizeof(cmd)); cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_RX_QUEUE); cmd.create_rx_queue = (struct gve_adminq_create_rx_queue) { @@ -369,7 +371,7 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index) .queue_resources_addr = cpu_to_be64(rx->q_resources_bus), .rx_desc_ring_addr = cpu_to_be64(rx->desc.bus), .rx_data_ring_addr = cpu_to_be64(rx->data.data_bus), - .queue_page_list_id = cpu_to_be32(rx->data.qpl->id), + .queue_page_list_id = cpu_to_be32(qpl_id), }; err = gve_adminq_issue_cmd(priv, &cmd); @@ -514,11 +516,11 @@ int gve_adminq_describe_device(struct gve_priv *priv) mac = descriptor->mac; dev_info(&priv->pdev->dev, "MAC addr: %pM\n", mac); priv->tx_pages_per_qpl = be16_to_cpu(descriptor->tx_pages_per_qpl); - priv->rx_pages_per_qpl = be16_to_cpu(descriptor->rx_pages_per_qpl); - if (priv->rx_pages_per_qpl < priv->rx_desc_cnt) { - dev_err(&priv->pdev->dev, "rx_pages_per_qpl cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", - priv->rx_pages_per_qpl); - priv->rx_desc_cnt = priv->rx_pages_per_qpl; + priv->rx_data_slot_cnt = be16_to_cpu(descriptor->rx_pages_per_qpl); + if (priv->rx_data_slot_cnt < priv->rx_desc_cnt) { + dev_err(&priv->pdev->dev, "rx_data_slot_cnt cannot be smaller than rx_desc_cnt, setting rx_desc_cnt down to %d.\n", + priv->rx_data_slot_cnt); + priv->rx_desc_cnt = priv->rx_data_slot_cnt; } priv->default_num_queues = be16_to_cpu(descriptor->default_num_queues); dev_opt = (void *)(descriptor + 1); diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h index 54779871d52e..0aad314aefaf 100644 --- a/drivers/net/ethernet/google/gve/gve_desc.h +++ b/drivers/net/ethernet/google/gve/gve_desc.h @@ -72,12 +72,14 @@ struct gve_rx_desc { } __packed; static_assert(sizeof(struct gve_rx_desc) == 64); -/* As with the Tx ring format, the qpl_offset entries below are offsets into an - * ordered list of registered pages. +/* If the device supports raw dma addressing then the addr in data slot is + * the dma address of the buffer. + * If the device only supports registered segments than the addr is a byte + * offset into the registered segment (an ordered list of pages) where the + * buffer is. 
*/ struct gve_rx_data_slot { - /* byte offset into the rx registered segment of this slot */ - __be64 qpl_offset; + __be64 addr; }; /* GVE Recive Packet Descriptor Seq No */ diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c index 70685c10db0e..225e17dd1ae5 100644 --- a/drivers/net/ethernet/google/gve/gve_main.c +++ b/drivers/net/ethernet/google/gve/gve_main.c @@ -596,6 +596,7 @@ int gve_alloc_page(struct gve_priv *priv, struct device *dev, if (dma_mapping_error(dev, *dma)) { priv->dma_mapping_error++; put_page(*page); + *page = NULL; return -ENOMEM; } return 0; @@ -694,7 +695,7 @@ static int gve_alloc_qpls(struct gve_priv *priv) } for (; i < num_qpls; i++) { err = gve_alloc_queue_page_list(priv, i, - priv->rx_pages_per_qpl); + priv->rx_data_slot_cnt); if (err) goto free_qpls; } diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 008fa897a3e6..6d1ca6985eec 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -16,12 +16,39 @@ static void gve_rx_remove_from_block(struct gve_priv *priv, int queue_idx) block->rx = NULL; } +static void gve_rx_free_buffer(struct device *dev, + struct gve_rx_slot_page_info *page_info, + struct gve_rx_data_slot *data_slot) +{ + dma_addr_t dma = (dma_addr_t)(be64_to_cpu(data_slot->addr) - + page_info->page_offset); + + gve_free_page(dev, page_info->page, dma, DMA_FROM_DEVICE); +} + +static void gve_rx_unfill_pages(struct gve_priv *priv, struct gve_rx_ring *rx) +{ + u32 slots = rx->mask + 1; + int i; + + if (rx->data.raw_addressing) { + for (i = 0; i < slots; i++) + gve_rx_free_buffer(&priv->pdev->dev, &rx->data.page_info[i], + &rx->data.data_ring[i]); + } else { + gve_unassign_qpl(priv, rx->data.qpl->id); + rx->data.qpl = NULL; + } + kvfree(rx->data.page_info); + rx->data.page_info = NULL; +} + static void gve_rx_free_ring(struct gve_priv *priv, int idx) { struct gve_rx_ring *rx = &priv->rx[idx]; struct device *dev = &priv->pdev->dev; + u32 slots = rx->mask + 1; size_t bytes; - u32 slots; gve_rx_remove_from_block(priv, idx); @@ -33,11 +60,8 @@ static void gve_rx_free_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; - gve_unassign_qpl(priv, rx->data.qpl->id); - rx->data.qpl = NULL; - kvfree(rx->data.page_info); + gve_rx_unfill_pages(priv, rx); - slots = rx->mask + 1; bytes = sizeof(*rx->data.data_ring) * slots; dma_free_coherent(dev, bytes, rx->data.data_ring, rx->data.data_bus); @@ -52,13 +76,14 @@ static void gve_setup_rx_buffer(struct gve_rx_slot_page_info *page_info, page_info->page = page; page_info->page_offset = 0; page_info->page_address = page_address(page); - slot->qpl_offset = cpu_to_be64(addr); + slot->addr = cpu_to_be64(addr); } static int gve_prefill_rx_pages(struct gve_rx_ring *rx) { struct gve_priv *priv = rx->gve; u32 slots; + int err; int i; /* Allocate one page per Rx queue slot. 
Each page is split into two @@ -71,12 +96,31 @@ static int gve_prefill_rx_pages(struct gve_rx_ring *rx) if (!rx->data.page_info) return -ENOMEM; - rx->data.qpl = gve_assign_rx_qpl(priv); - + if (!rx->data.raw_addressing) + rx->data.qpl = gve_assign_rx_qpl(priv); for (i = 0; i < slots; i++) { - struct page *page = rx->data.qpl->pages[i]; - dma_addr_t addr = i * PAGE_SIZE; + struct page *page; + dma_addr_t addr; + + if (rx->data.raw_addressing) { + err = gve_alloc_page(priv, &priv->pdev->dev, &page, + &addr, DMA_FROM_DEVICE); + if (err) { + int j; + u64_stats_update_begin(&rx->statss); + rx->rx_buf_alloc_fail++; + u64_stats_update_end(&rx->statss); + for (j = 0; j < i; j++) + gve_rx_free_buffer(&priv->pdev->dev, + &rx->data.page_info[j], + &rx->data.data_ring[j]); + return err; + } + } else { + page = rx->data.qpl->pages[i]; + addr = i * PAGE_SIZE; + } gve_setup_rx_buffer(&rx->data.page_info[i], &rx->data.data_ring[i], addr, page); } @@ -110,8 +154,9 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->gve = priv; rx->q_num = idx; - slots = priv->rx_pages_per_qpl; + slots = priv->rx_data_slot_cnt; rx->mask = slots - 1; + rx->data.raw_addressing = priv->raw_addressing; /* alloc rx data ring */ bytes = sizeof(*rx->data.data_ring) * slots; @@ -156,8 +201,8 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) err = -ENOMEM; goto abort_with_q_resources; } - rx->mask = slots - 1; rx->cnt = 0; + rx->db_threshold = priv->rx_desc_cnt / 2; rx->desc.seqno = 1; gve_rx_add_to_block(priv, idx); @@ -168,7 +213,7 @@ static int gve_rx_alloc_ring(struct gve_priv *priv, int idx) rx->q_resources, rx->q_resources_bus); rx->q_resources = NULL; abort_filled: - kvfree(rx->data.page_info); + gve_rx_unfill_pages(priv, rx); abort_with_slots: bytes = sizeof(*rx->data.data_ring) * slots; dma_free_coherent(hdev, bytes, rx->data.data_ring, rx->data.data_bus); @@ -251,8 +296,7 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx, return skb; } -static struct sk_buff *gve_rx_add_frags(struct net_device *dev, - struct napi_struct *napi, +static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi, struct gve_rx_slot_page_info *page_info, u16 len) { @@ -268,14 +312,49 @@ static struct sk_buff *gve_rx_add_frags(struct net_device *dev, return skb; } +static int gve_rx_alloc_buffer(struct gve_priv *priv, struct device *dev, + struct gve_rx_slot_page_info *page_info, + struct gve_rx_data_slot *data_slot, + struct gve_rx_ring *rx) +{ + struct page *page; + dma_addr_t dma; + int err; + + err = gve_alloc_page(priv, dev, &page, &dma, DMA_FROM_DEVICE); + if (err) { + u64_stats_update_begin(&rx->statss); + rx->rx_buf_alloc_fail++; + u64_stats_update_end(&rx->statss); + return err; + } + + gve_setup_rx_buffer(page_info, data_slot, dma, page); + return 0; +} + static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, struct gve_rx_data_slot *data_ring) { - u64 addr = be64_to_cpu(data_ring->qpl_offset); + u64 addr = be64_to_cpu(data_ring->addr); page_info->page_offset ^= PAGE_SIZE / 2; addr ^= PAGE_SIZE / 2; - data_ring->qpl_offset = cpu_to_be64(addr); + data_ring->addr = cpu_to_be64(addr); +} + +static struct sk_buff * +gve_rx_raw_addressing(struct device *dev, struct net_device *netdev, + struct gve_rx_slot_page_info *page_info, u16 len, + struct napi_struct *napi, + struct gve_rx_data_slot *data_slot) +{ + struct sk_buff *skb = gve_rx_add_frags(napi, page_info, len); + + if (!skb) + return NULL; + + return skb; } static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, 
@@ -285,7 +364,9 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, struct gve_priv *priv = rx->gve; struct napi_struct *napi = &priv->ntfy_blocks[rx->ntfy_id].napi; struct net_device *dev = priv->dev; - struct sk_buff *skb; + struct gve_rx_data_slot *data_slot; + struct sk_buff *skb = NULL; + dma_addr_t page_bus; int pagecount; u16 len; @@ -294,18 +375,18 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_begin(&rx->statss); rx->rx_desc_err_dropped_pkt++; u64_stats_update_end(&rx->statss); - return true; + return false; } len = be16_to_cpu(rx_desc->len) - GVE_RX_PAD; page_info = &rx->data.page_info[idx]; - dma_sync_single_for_cpu(&priv->pdev->dev, rx->data.qpl->page_buses[idx], - PAGE_SIZE, DMA_FROM_DEVICE); - /* gvnic can only receive into registered segments. If the buffer - * can't be recycled, our only choice is to copy the data out of - * it so that we can return it to the device. - */ + data_slot = &rx->data.data_ring[idx]; + page_bus = (rx->data.raw_addressing) ? + be64_to_cpu(data_slot->addr) - page_info->page_offset : + rx->data.qpl->page_buses[idx]; + dma_sync_single_for_cpu(&priv->pdev->dev, page_bus, + PAGE_SIZE, DMA_FROM_DEVICE); if (PAGE_SIZE == 4096) { if (len <= priv->rx_copybreak) { @@ -316,6 +397,12 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_end(&rx->statss); goto have_skb; } + if (rx->data.raw_addressing) { + skb = gve_rx_raw_addressing(&priv->pdev->dev, dev, + page_info, len, napi, + data_slot); + goto have_skb; + } if (unlikely(!gve_can_recycle_pages(dev))) { skb = gve_rx_copy(rx, dev, napi, page_info, len); goto have_skb; @@ -326,12 +413,12 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, * the page fragment to a new SKB and pass it up the * stack. 
*/ - skb = gve_rx_add_frags(dev, napi, page_info, len); + skb = gve_rx_add_frags(napi, page_info, len); if (!skb) { u64_stats_update_begin(&rx->statss); rx->rx_skb_alloc_fail++; u64_stats_update_end(&rx->statss); - return true; + return false; } /* Make sure the kernel stack can't release the page */ get_page(page_info->page); @@ -347,7 +434,12 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, return false; } } else { - skb = gve_rx_copy(rx, dev, napi, page_info, len); + if (rx->data.raw_addressing) + skb = gve_rx_raw_addressing(&priv->pdev->dev, dev, + page_info, len, napi, + data_slot); + else + skb = gve_rx_copy(rx, dev, napi, page_info, len); } have_skb: @@ -358,7 +450,7 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, u64_stats_update_begin(&rx->statss); rx->rx_skb_alloc_fail++; u64_stats_update_end(&rx->statss); - return true; + return false; } if (likely(feat & NETIF_F_RXCSUM)) { @@ -399,19 +491,45 @@ static bool gve_rx_work_pending(struct gve_rx_ring *rx) return (GVE_SEQNO(flags_seq) == rx->desc.seqno); } +static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx) +{ + bool empty = rx->fill_cnt == rx->cnt; + u32 fill_cnt = rx->fill_cnt; + + while (empty || ((fill_cnt & rx->mask) != (rx->cnt & rx->mask))) { + struct gve_rx_slot_page_info *page_info; + struct device *dev = &priv->pdev->dev; + struct gve_rx_data_slot *data_slot; + u32 idx = fill_cnt & rx->mask; + + page_info = &rx->data.page_info[idx]; + data_slot = &rx->data.data_ring[idx]; + gve_rx_free_buffer(dev, page_info, data_slot); + page_info->page = NULL; + if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot, rx)) + break; + empty = false; + fill_cnt++; + } + rx->fill_cnt = fill_cnt; + return true; +} + bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, netdev_features_t feat) { struct gve_priv *priv = rx->gve; + u32 work_done = 0, packets = 0; struct gve_rx_desc *desc; u32 cnt = rx->cnt; u32 idx = cnt & rx->mask; - u32 work_done = 0; u64 bytes = 0; desc = rx->desc.desc_ring + idx; while ((GVE_SEQNO(desc->flags_seq) == rx->desc.seqno) && work_done < budget) { + bool dropped; + netif_info(priv, rx_status, priv->dev, "[%d] idx=%d desc=%p desc->flags_seq=0x%x\n", rx->q_num, idx, desc, desc->flags_seq); @@ -419,9 +537,11 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, "[%d] seqno=%d rx->desc.seqno=%d\n", rx->q_num, GVE_SEQNO(desc->flags_seq), rx->desc.seqno); - bytes += be16_to_cpu(desc->len) - GVE_RX_PAD; - if (!gve_rx(rx, desc, feat, idx)) - gve_schedule_reset(priv); + dropped = !gve_rx(rx, desc, feat, idx); + if (!dropped) { + bytes += be16_to_cpu(desc->len) - GVE_RX_PAD; + packets++; + } cnt++; idx = cnt & rx->mask; desc = rx->desc.desc_ring + idx; @@ -429,15 +549,35 @@ bool gve_clean_rx_done(struct gve_rx_ring *rx, int budget, work_done++; } - if (!work_done) + if (!work_done && rx->fill_cnt - cnt > rx->db_threshold) { return false; + } else if (work_done) { + u64_stats_update_begin(&rx->statss); + rx->rpackets += packets; + rx->rbytes += bytes; + u64_stats_update_end(&rx->statss); + rx->cnt = cnt; + } - u64_stats_update_begin(&rx->statss); - rx->rpackets += work_done; - rx->rbytes += bytes; - u64_stats_update_end(&rx->statss); - rx->cnt = cnt; - rx->fill_cnt += work_done; + /* restock ring slots */ + if (!rx->data.raw_addressing) { + /* In QPL mode buffs are refilled as the desc are processed */ + rx->fill_cnt += work_done; + } else if (rx->fill_cnt - cnt <= rx->db_threshold) { + /* In raw addressing mode buffs are only 
refilled if the avail + * falls below a threshold. + */ + if (!gve_rx_refill_buffers(priv, rx)) + return false; + + /* If we were not able to completely refill buffers, we'll want + * to schedule this queue for work again to refill buffers. + */ + if (rx->fill_cnt - cnt <= rx->db_threshold) { + gve_rx_write_doorbell(priv, rx); + return true; + } + } gve_rx_write_doorbell(priv, rx); return gve_rx_work_pending(rx); From patchwork Tue Nov 3 17:46:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Awogbemila X-Patchwork-Id: 1393269 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=iV71W+tw; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4CQcfw50hMz9sVM for ; Wed, 4 Nov 2020 04:47:04 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728896AbgKCRrC (ORCPT ); Tue, 3 Nov 2020 12:47:02 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728877AbgKCRrB (ORCPT ); Tue, 3 Nov 2020 12:47:01 -0500 Received: from mail-qk1-x749.google.com (mail-qk1-x749.google.com [IPv6:2607:f8b0:4864:20::749]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4DED6C061A04 for ; Tue, 3 Nov 2020 09:47:01 -0800 (PST) Received: by mail-qk1-x749.google.com with SMTP id x5so11306293qkn.2 for ; Tue, 03 Nov 2020 09:47:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=bMZb9mwyOJY38gfdzXXjSIBkTAilg4nHedpeg0wrhW8=; b=iV71W+twJgE9xrm4FeyD3SZoAPA07zP/DtyFxLrgT5zeynrdxms+IX/89tA22lkUK2 qMd7+HKvaJUnWwqLA6ZPyx8jl1TvE5QTESSN2645dypaPL8Bkg7n3GDjNkSfdRs89SmH htgdVwRYKAz90Pd6GcTEbnLP2dbCMF9/hRQaIcEKG97d0Ck6k2hMhdlFGHW9g1QuutbU yXp0Pb2V0YClxYcL2e7aOCLHsnTC0LwSWpMJO6cCVUrAhO+jFl+BpPlHXRDhL/z6bjKt OEdpmsa3uq8zBLc8kB/0EgjHCAQQO57b3R1gAdAJuFrePojPONQ172AqqXeYzdSAmNM+ 3SAg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bMZb9mwyOJY38gfdzXXjSIBkTAilg4nHedpeg0wrhW8=; b=lZ2r459JZnCSqoD7DrJGOAL0ON2nNN3Uh4ktzaDc55dO+jzGxruBnTkEwuC/YhquH3 wUW7DgOGeJq6YyMLRk+m0Gk8XuURiBTf40342ghYrX34u2WxAHqNcXNBNdF3pzIdp8Fv DR/l5A8Z0a6JJavfhjwDtJVXaS7kyPtWia3GDaQdD8j7Xwdok2t27TigsjZgGNeAe01s SbwFBQ6Eu1epviKlYMZYdK8n0gGr9Dh0ALiiGfb0Rfyd+Sn79rh/371iDWFhUaDe0bCZ 1VOEbvkfMNIQHiO2ef2enc3nmp5AaAfhXhEFx3vLtahsiXyvUGrls9mcDjwvTHU5trb4 ugLw== X-Gm-Message-State: AOAM530KEFk9dehxRYeLFExvF6cNziIxeB6x77E7u8pO13J8mzqA0FRA gDs7V9PNgiVL1IpCtsff/ht8UeehXqRsq+hGGDx1SMfR/OCG4ux/zEo1/v7rUJFUkLri4V5KEBp uMkWa1SxJ3cV55snGr04ioHFdGRE19Jc9xPEWaGGHvdzzp2YtLfum+1Om9H4VodFtPCXLP8US X-Google-Smtp-Source: 
ABdhPJz8yYBfXLjULr0023A8646Rryo+FmmiRV59SxqUiakL4j8UNGA7j0tjfPqioDy56oAodgraWBE5azD70K4f Sender: "awogbemila via sendgmr" X-Received: from awogbemila.sea.corp.google.com ([2620:15c:100:202:1ea0:b8ff:fe73:6cc0]) (user=awogbemila job=sendgmr) by 2002:ad4:4e84:: with SMTP id dy4mr15829508qvb.47.1604425620360; Tue, 03 Nov 2020 09:47:00 -0800 (PST) Date: Tue, 3 Nov 2020 09:46:50 -0800 In-Reply-To: <20201103174651.590586-1-awogbemila@google.com> Message-Id: <20201103174651.590586-4-awogbemila@google.com> Mime-Version: 1.0 References: <20201103174651.590586-1-awogbemila@google.com> X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog Subject: [PATCH 3/4] gve: Rx Buffer Recycling From: David Awogbemila To: netdev@vger.kernel.org Cc: David Awogbemila Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org This patch lets the driver reuse buffers that have been freed by the networking stack. In the raw addressing case, this allows the driver avoid allocating new buffers. In the qpl case, the driver can avoid copies. Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 10 +- drivers/net/ethernet/google/gve/gve_rx.c | 194 +++++++++++++++-------- 2 files changed, 129 insertions(+), 75 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index 5c48f9e2d04c..d1a2a526a250 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -50,6 +50,7 @@ struct gve_rx_slot_page_info { struct page *page; void *page_address; u32 page_offset; /* offset to write to in page */ + bool can_flip; }; /* A list of pages registered with the device during setup and used by a queue @@ -503,15 +504,6 @@ static inline enum dma_data_direction gve_qpl_dma_dir(struct gve_priv *priv, return DMA_FROM_DEVICE; } -/* Returns true if the max mtu allows page recycling */ -static inline bool gve_can_recycle_pages(struct net_device *dev) -{ - /* We can't recycle the pages if we can't fit a packet into half a - * page. - */ - return dev->max_mtu <= PAGE_SIZE / 2; -} - /* buffers */ int gve_alloc_page(struct gve_priv *priv, struct device *dev, struct page **page, dma_addr_t *dma, diff --git a/drivers/net/ethernet/google/gve/gve_rx.c b/drivers/net/ethernet/google/gve/gve_rx.c index 6d1ca6985eec..2e9d5c6fc411 100644 --- a/drivers/net/ethernet/google/gve/gve_rx.c +++ b/drivers/net/ethernet/google/gve/gve_rx.c @@ -270,8 +270,7 @@ static enum pkt_hash_types gve_rss_type(__be16 pkt_flags) return PKT_HASH_TYPE_L2; } -static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx, - struct net_device *dev, +static struct sk_buff *gve_rx_copy(struct net_device *dev, struct napi_struct *napi, struct gve_rx_slot_page_info *page_info, u16 len) @@ -289,10 +288,6 @@ static struct sk_buff *gve_rx_copy(struct gve_rx_ring *rx, skb->protocol = eth_type_trans(skb, dev); - u64_stats_update_begin(&rx->statss); - rx->rx_copied_pkt++; - u64_stats_update_end(&rx->statss); - return skb; } @@ -338,22 +333,90 @@ static void gve_rx_flip_buff(struct gve_rx_slot_page_info *page_info, { u64 addr = be64_to_cpu(data_ring->addr); + /* "flip" to other packet buffer on this page */ page_info->page_offset ^= PAGE_SIZE / 2; addr ^= PAGE_SIZE / 2; data_ring->addr = cpu_to_be64(addr); } +static bool gve_rx_can_flip_buffers(struct net_device *netdev) +{ +#if PAGE_SIZE == 4096 + /* We can't flip a buffer if we can't fit a packet + * into half a page. 
+ */ + return netdev->mtu + GVE_RX_PAD + ETH_HLEN <= PAGE_SIZE / 2; +#else + /* PAGE_SIZE != 4096 - don't try to reuse */ + return false; +#endif +} + +static int gve_rx_can_recycle_buffer(struct page *page) +{ + int pagecount = page_count(page); + + /* This page is not being used by any SKBs - reuse */ + if (pagecount == 1) + return 1; + /* This page is still being used by an SKB - we can't reuse */ + else if (pagecount >= 2) + return 0; + WARN(pagecount < 1, "Pagecount should never be < 1"); + return -1; +} + static struct sk_buff * gve_rx_raw_addressing(struct device *dev, struct net_device *netdev, struct gve_rx_slot_page_info *page_info, u16 len, struct napi_struct *napi, - struct gve_rx_data_slot *data_slot) + struct gve_rx_data_slot *data_slot, bool can_flip) { - struct sk_buff *skb = gve_rx_add_frags(napi, page_info, len); + struct sk_buff *skb; + skb = gve_rx_add_frags(napi, page_info, len); if (!skb) return NULL; + /* Optimistically stop the kernel from freeing the page by increasing + * the page bias. We will check the refcount in refill to determine if + * we need to alloc a new page. + */ + get_page(page_info->page); + page_info->can_flip = can_flip; + + return skb; +} + +static struct sk_buff * +gve_rx_qpl(struct device *dev, struct net_device *netdev, + struct gve_rx_ring *rx, struct gve_rx_slot_page_info *page_info, + u16 len, struct napi_struct *napi, + struct gve_rx_data_slot *data_slot, bool recycle) +{ + struct sk_buff *skb; + + /* if raw_addressing mode is not enabled gvnic can only receive into + * registered segments. If the buffer can't be recycled, our only + * choice is to copy the data out of it so that we can return it to the + * device. + */ + if (recycle) { + skb = gve_rx_add_frags(napi, page_info, len); + /* No point in recycling if we didn't get the skb */ + if (skb) { + /* Make sure the networking stack can't free the page */ + get_page(page_info->page); + gve_rx_flip_buff(page_info, data_slot); + } + } else { + skb = gve_rx_copy(netdev, napi, page_info, len); + if (skb) { + u64_stats_update_begin(&rx->statss); + rx->rx_copied_pkt++; + u64_stats_update_end(&rx->statss); + } + } return skb; } @@ -367,7 +430,6 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, struct gve_rx_data_slot *data_slot; struct sk_buff *skb = NULL; dma_addr_t page_bus; - int pagecount; u16 len; /* drop this packet */ @@ -388,64 +450,36 @@ static bool gve_rx(struct gve_rx_ring *rx, struct gve_rx_desc *rx_desc, dma_sync_single_for_cpu(&priv->pdev->dev, page_bus, PAGE_SIZE, DMA_FROM_DEVICE); - if (PAGE_SIZE == 4096) { - if (len <= priv->rx_copybreak) { - /* Just copy small packets */ - skb = gve_rx_copy(rx, dev, napi, page_info, len); - u64_stats_update_begin(&rx->statss); - rx->rx_copybreak_pkt++; - u64_stats_update_end(&rx->statss); - goto have_skb; + if (len <= priv->rx_copybreak) { + /* Just copy small packets */ + skb = gve_rx_copy(dev, napi, page_info, len); + u64_stats_update_begin(&rx->statss); + rx->rx_copied_pkt++; + rx->rx_copybreak_pkt++; + u64_stats_update_end(&rx->statss); + } else { + bool can_flip = gve_rx_can_flip_buffers(dev); + int recycle = 0; + + if (can_flip) { + recycle = gve_rx_can_recycle_buffer(page_info->page); + if (recycle < 0) { + gve_schedule_reset(priv); + return false; + } } if (rx->data.raw_addressing) { skb = gve_rx_raw_addressing(&priv->pdev->dev, dev, page_info, len, napi, - data_slot); - goto have_skb; - } - if (unlikely(!gve_can_recycle_pages(dev))) { - skb = gve_rx_copy(rx, dev, napi, page_info, len); - goto have_skb; - } - 
pagecount = page_count(page_info->page); - if (pagecount == 1) { - /* No part of this page is used by any SKBs; we attach - * the page fragment to a new SKB and pass it up the - * stack. - */ - skb = gve_rx_add_frags(napi, page_info, len); - if (!skb) { - u64_stats_update_begin(&rx->statss); - rx->rx_skb_alloc_fail++; - u64_stats_update_end(&rx->statss); - return false; - } - /* Make sure the kernel stack can't release the page */ - get_page(page_info->page); - /* "flip" to other packet buffer on this page */ - gve_rx_flip_buff(page_info, &rx->data.data_ring[idx]); - } else if (pagecount >= 2) { - /* We have previously passed the other half of this - * page up the stack, but it has not yet been freed. - */ - skb = gve_rx_copy(rx, dev, napi, page_info, len); + data_slot, + can_flip && recycle); } else { - WARN(pagecount < 1, "Pagecount should never be < 1"); - return false; + skb = gve_rx_qpl(&priv->pdev->dev, dev, rx, + page_info, len, napi, data_slot, + can_flip && recycle); } - } else { - if (rx->data.raw_addressing) - skb = gve_rx_raw_addressing(&priv->pdev->dev, dev, - page_info, len, napi, - data_slot); - else - skb = gve_rx_copy(rx, dev, napi, page_info, len); } -have_skb: - /* We didn't manage to allocate an skb but we haven't had any - * reset worthy failures. - */ if (!skb) { u64_stats_update_begin(&rx->statss); rx->rx_skb_alloc_fail++; @@ -498,16 +532,44 @@ static bool gve_rx_refill_buffers(struct gve_priv *priv, struct gve_rx_ring *rx) while (empty || ((fill_cnt & rx->mask) != (rx->cnt & rx->mask))) { struct gve_rx_slot_page_info *page_info; - struct device *dev = &priv->pdev->dev; - struct gve_rx_data_slot *data_slot; u32 idx = fill_cnt & rx->mask; page_info = &rx->data.page_info[idx]; - data_slot = &rx->data.data_ring[idx]; - gve_rx_free_buffer(dev, page_info, data_slot); - page_info->page = NULL; - if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot, rx)) - break; + if (page_info->can_flip) { + /* The other half of the page is free because it was + * free when we processed the descriptor. Flip to it. + */ + struct gve_rx_data_slot *data_slot = + &rx->data.data_ring[idx]; + + gve_rx_flip_buff(page_info, data_slot); + page_info->can_flip = false; + } else { + /* It is possible that the networking stack has already + * finished processing all outstanding packets in the buffer + * and it can be reused. + * Flipping is unnecessary here - if the networking stack still + * owns half the page it is impossible to tell which half. Either + * the whole page is free or it needs to be replaced. 
+ */ + int recycle = gve_rx_can_recycle_buffer(page_info->page); + + if (recycle < 0) { + gve_schedule_reset(priv); + return false; + } + if (!recycle) { + /* We can't reuse the buffer - alloc a new one*/ + struct gve_rx_data_slot *data_slot = + &rx->data.data_ring[idx]; + struct device *dev = &priv->pdev->dev; + + gve_rx_free_buffer(dev, page_info, data_slot); + page_info->page = NULL; + if (gve_rx_alloc_buffer(priv, dev, page_info, data_slot, rx)) + break; + } + } empty = false; fill_cnt++; } From patchwork Tue Nov 3 17:46:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Awogbemila X-Patchwork-Id: 1393270 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Original-To: patchwork-incoming-netdev@ozlabs.org Delivered-To: patchwork-incoming-netdev@ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=23.128.96.18; helo=vger.kernel.org; envelope-from=netdev-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=google.com header.i=@google.com header.a=rsa-sha256 header.s=20161025 header.b=pO5JMcY3; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by ozlabs.org (Postfix) with ESMTP id 4CQcfy1nJ5z9sVK for ; Wed, 4 Nov 2020 04:47:06 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728946AbgKCRrF (ORCPT ); Tue, 3 Nov 2020 12:47:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50162 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728877AbgKCRrE (ORCPT ); Tue, 3 Nov 2020 12:47:04 -0500 Received: from mail-pf1-x449.google.com (mail-pf1-x449.google.com [IPv6:2607:f8b0:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D577AC0613D1 for ; Tue, 3 Nov 2020 09:47:02 -0800 (PST) Received: by mail-pf1-x449.google.com with SMTP id z12so12633669pfa.22 for ; Tue, 03 Nov 2020 09:47:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=sender:date:in-reply-to:message-id:mime-version:references:subject :from:to:cc; bh=hGY1FYp6Nq0rRGmlj+4ze1TEZmIz8WFYr271IGUjpiA=; b=pO5JMcY37RjpkcoBSwVymM6cZ8KoR9T1Emf9cLWTAbqm8qHSztVJWuxO2aydPXFFOj qef9MsQsnRrd/vf+kOyVX3q3Ig0obTrgQgSj+ieuH6zkuSVVY7e42/CULNW/YxwSk9y+ eiwcc59NYrPCQqI/aUawnMeZGmTMGBQoMKM0qf4M8j29xa1+WzLlm5+76ipx2IVuj/Cw oL+zQGSzVviuufqesmIZLOvRcCUtOzxpqizgu+hATolJW8N/psD5oI2waPoNJZHGQwcz vXzQwt3lN4zCErqBt1Hai5b+LmCARWSO1UIRZo1HoheRdKiIqvr9Ko4BUAdoMYQjsrh+ BD5Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=hGY1FYp6Nq0rRGmlj+4ze1TEZmIz8WFYr271IGUjpiA=; b=ZOsnZqHoKqr/l8uxup1CREB+ci0q0lLqBhoxghUUaAaWTPPy7p1GRiw1I52LEIA3Jn We15TtJet/eTRL1rR9BWMTdfvdotaJ1KE4XgDzqqRNVwVTUrbc6u/Ug+2a8gVM+RpzGn mSbPF5SFo/aT27ontEU7+J1+xRLVVZ0uF2Bf1vy9XDG8FPE1YJEVQgCpoePW+xH8tlJx pzSyfn1Kte7QjoDPPPQnydZUIYv64C97KigXYS47C1f9KE6eH4eNxWrQ/OdUrp3ch2ou OiivNfqNOI9HhqPpQG6nVpcDZeAAK/TykAVxpVzSuQmDXGvuy+JZ8DMx4rOvZd0y6vMr JoKQ== X-Gm-Message-State: AOAM531VgEOG4WJuCcEWMdV/oUg+L/AGililIhccmr4v0FsjfPzlZgg2 qWhlzW2tyzh4KS5pgcKa8Pz/U9U17jxpgSFzH/uV6S/1nxA3aMqvP6Sbo9K5MTeDoj0aFOUudmH cRb959ALsrmVhVhQXeSAlSa5Qlp4rZ/3wWkP1DComL/tudy706Mbm4GqXukLPLHuZU0U63kw8 
X-Google-Smtp-Source: ABdhPJyT/98tJM9hfbQ46WZjhd2ISXhohGOcXUDrigVwIN+6Ei06vUtONDrRAJM3SzIaQOmANvV2qFDbObIm/k48 Sender: "awogbemila via sendgmr" X-Received: from awogbemila.sea.corp.google.com ([2620:15c:100:202:1ea0:b8ff:fe73:6cc0]) (user=awogbemila job=sendgmr) by 2002:a17:902:bb81:b029:d5:b437:edb4 with SMTP id m1-20020a170902bb81b02900d5b437edb4mr25542553pls.6.1604425622263; Tue, 03 Nov 2020 09:47:02 -0800 (PST) Date: Tue, 3 Nov 2020 09:46:51 -0800 In-Reply-To: <20201103174651.590586-1-awogbemila@google.com> Message-Id: <20201103174651.590586-5-awogbemila@google.com> Mime-Version: 1.0 References: <20201103174651.590586-1-awogbemila@google.com> X-Mailer: git-send-email 2.29.1.341.ge80a0c044ae-goog Subject: [PATCH 4/4] gve: Add support for raw addressing in the tx path From: David Awogbemila To: netdev@vger.kernel.org Cc: Catherine Sullivan , Yangchun Fu , David Awogbemila Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: Catherine Sullivan During TX, skbs' data addresses are dma_map'ed and passed to the NIC. This means that the device can perform DMA directly from these addresses and the driver does not have to copy the buffer content into pre-allocated buffers/qpls (as in qpl mode). Reviewed-by: Yangchun Fu Signed-off-by: Catherine Sullivan Signed-off-by: David Awogbemila --- drivers/net/ethernet/google/gve/gve.h | 18 +- drivers/net/ethernet/google/gve/gve_adminq.c | 4 +- drivers/net/ethernet/google/gve/gve_desc.h | 8 +- drivers/net/ethernet/google/gve/gve_tx.c | 207 +++++++++++++++---- 4 files changed, 194 insertions(+), 43 deletions(-) diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h index d1a2a526a250..37a449fb032e 100644 --- a/drivers/net/ethernet/google/gve/gve.h +++ b/drivers/net/ethernet/google/gve/gve.h @@ -110,12 +110,20 @@ struct gve_tx_iovec { u32 iov_padding; /* padding associated with this segment */ }; +struct gve_tx_dma_buf { + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); +}; + /* Tracks the memory in the fifo occupied by the skb. Mapped 1:1 to a desc * ring entry but only used for a pkt_desc not a seg_desc */ struct gve_tx_buffer_state { struct sk_buff *skb; /* skb for this pkt */ - struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */ + union { + struct gve_tx_iovec iov[GVE_TX_MAX_IOVEC]; /* segments of this pkt */ + struct gve_tx_dma_buf buf; + }; }; /* A TX buffer - each queue has one */ @@ -138,13 +146,16 @@ struct gve_tx_ring { __be32 last_nic_done ____cacheline_aligned; /* NIC tail pointer */ u64 pkt_done; /* free-running - total packets completed */ u64 bytes_done; /* free-running - total bytes completed */ + u32 dropped_pkt; /* free-running - total packets dropped */ /* Cacheline 2 -- Read-mostly fields */ union gve_tx_desc *desc ____cacheline_aligned; struct gve_tx_buffer_state *info; /* Maps 1:1 to a desc */ struct netdev_queue *netdev_txq; struct gve_queue_resources *q_resources; /* head and tail pointer idx */ + struct device *dev; u32 mask; /* masks req and done down to queue size */ + bool raw_addressing; /* use raw_addressing? 
*/ /* Slow-path fields */ u32 q_num ____cacheline_aligned; /* queue idx */ @@ -440,7 +451,10 @@ static inline u32 gve_rx_idx_to_ntfy(struct gve_priv *priv, u32 queue_idx) */ static inline u32 gve_num_tx_qpls(struct gve_priv *priv) { - return priv->tx_cfg.num_queues; + if (priv->raw_addressing) + return 0; + else + return priv->tx_cfg.num_queues; } /* Returns the number of rx queue page lists diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c index 3a7a12b3d144..1b3715e21ae4 100644 --- a/drivers/net/ethernet/google/gve/gve_adminq.c +++ b/drivers/net/ethernet/google/gve/gve_adminq.c @@ -318,8 +318,10 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) { struct gve_tx_ring *tx = &priv->tx[queue_index]; union gve_adminq_command cmd; + u32 qpl_id; int err; + qpl_id = priv->raw_addressing ? GVE_RAW_ADDRESSING_QPL_ID : tx->tx_fifo.qpl->id; memset(&cmd, 0, sizeof(cmd)); cmd.opcode = cpu_to_be32(GVE_ADMINQ_CREATE_TX_QUEUE); cmd.create_tx_queue = (struct gve_adminq_create_tx_queue) { @@ -328,7 +330,7 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index) .queue_resources_addr = cpu_to_be64(tx->q_resources_bus), .tx_ring_addr = cpu_to_be64(tx->bus), - .queue_page_list_id = cpu_to_be32(tx->tx_fifo.qpl->id), + .queue_page_list_id = cpu_to_be32(qpl_id), .ntfy_id = cpu_to_be32(tx->ntfy_id), }; diff --git a/drivers/net/ethernet/google/gve/gve_desc.h b/drivers/net/ethernet/google/gve/gve_desc.h index 0aad314aefaf..a7da364e81c8 100644 --- a/drivers/net/ethernet/google/gve/gve_desc.h +++ b/drivers/net/ethernet/google/gve/gve_desc.h @@ -16,9 +16,11 @@ * Base addresses encoded in seg_addr are not assumed to be physical * addresses. The ring format assumes these come from some linear address * space. This could be physical memory, kernel virtual memory, user virtual - * memory. gVNIC uses lists of registered pages. Each queue is assumed - * to be associated with a single such linear address space to ensure a - * consistent meaning for seg_addrs posted to its rings. + * memory. + * If raw dma addressing is not supported then gVNIC uses lists of registered + * pages. Each queue is assumed to be associated with a single such linear + * address space to ensure a consistent meaning for seg_addrs posted to its + * rings. 
*/ struct gve_tx_pkt_desc { diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c index d0244feb0301..b7935ded1736 100644 --- a/drivers/net/ethernet/google/gve/gve_tx.c +++ b/drivers/net/ethernet/google/gve/gve_tx.c @@ -158,9 +158,11 @@ static void gve_tx_free_ring(struct gve_priv *priv, int idx) tx->q_resources, tx->q_resources_bus); tx->q_resources = NULL; - gve_tx_fifo_release(priv, &tx->tx_fifo); - gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); - tx->tx_fifo.qpl = NULL; + if (!tx->raw_addressing) { + gve_tx_fifo_release(priv, &tx->tx_fifo); + gve_unassign_qpl(priv, tx->tx_fifo.qpl->id); + tx->tx_fifo.qpl = NULL; + } bytes = sizeof(*tx->desc) * slots; dma_free_coherent(hdev, bytes, tx->desc, tx->bus); @@ -206,11 +208,15 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) if (!tx->desc) goto abort_with_info; - tx->tx_fifo.qpl = gve_assign_tx_qpl(priv); + tx->raw_addressing = priv->raw_addressing; + tx->dev = &priv->pdev->dev; + if (!tx->raw_addressing) { + tx->tx_fifo.qpl = gve_assign_tx_qpl(priv); - /* map Tx FIFO */ - if (gve_tx_fifo_init(priv, &tx->tx_fifo)) - goto abort_with_desc; + /* map Tx FIFO */ + if (gve_tx_fifo_init(priv, &tx->tx_fifo)) + goto abort_with_desc; + } tx->q_resources = dma_alloc_coherent(hdev, @@ -228,7 +234,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx) return 0; abort_with_fifo: - gve_tx_fifo_release(priv, &tx->tx_fifo); + if (!tx->raw_addressing) + gve_tx_fifo_release(priv, &tx->tx_fifo); abort_with_desc: dma_free_coherent(hdev, bytes, tx->desc, tx->bus); tx->desc = NULL; @@ -301,27 +308,47 @@ static inline int gve_skb_fifo_bytes_required(struct gve_tx_ring *tx, return bytes; } -/* The most descriptors we could need are 3 - 1 for the headers, 1 for - * the beginning of the payload at the end of the FIFO, and 1 if the - * payload wraps to the beginning of the FIFO. +/* The most descriptors we could need is MAX_SKB_FRAGS + 3 : 1 for each skb frag, + * +1 for the skb linear portion, +1 for when tcp hdr needs to be in separate descriptor, + * and +1 if the payload wraps to the beginning of the FIFO. */ -#define MAX_TX_DESC_NEEDED 3 +#define MAX_TX_DESC_NEEDED (MAX_SKB_FRAGS + 3) +static void gve_tx_unmap_buf(struct device *dev, struct gve_tx_buffer_state *info) +{ + if (info->skb) { + dma_unmap_single(dev, dma_unmap_addr(&info->buf, dma), + dma_unmap_len(&info->buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(&info->buf, len, 0); + } else { + dma_unmap_page(dev, dma_unmap_addr(&info->buf, dma), + dma_unmap_len(&info->buf, len), + DMA_TO_DEVICE); + dma_unmap_len_set(&info->buf, len, 0); + } +} /* Check if sufficient resources (descriptor ring space, FIFO space) are * available to transmit the given number of bytes. */ static inline bool gve_can_tx(struct gve_tx_ring *tx, int bytes_required) { - return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && - gve_tx_fifo_can_alloc(&tx->tx_fifo, bytes_required)); + bool can_alloc = true; + + if (!tx->raw_addressing) + can_alloc = gve_tx_fifo_can_alloc(&tx->tx_fifo, bytes_required); + + return (gve_tx_avail(tx) >= MAX_TX_DESC_NEEDED && can_alloc); } /* Stops the queue if the skb cannot be transmitted. 
*/ static int gve_maybe_stop_tx(struct gve_tx_ring *tx, struct sk_buff *skb) { - int bytes_required; + int bytes_required = 0; + + if (!tx->raw_addressing) + bytes_required = gve_skb_fifo_bytes_required(tx, skb); - bytes_required = gve_skb_fifo_bytes_required(tx, skb); if (likely(gve_can_tx(tx, bytes_required))) return 0; @@ -390,22 +417,22 @@ static void gve_tx_fill_seg_desc(union gve_tx_desc *seg_desc, seg_desc->seg.seg_addr = cpu_to_be64(addr); } -static void gve_dma_sync_for_device(struct device *dev, dma_addr_t *page_buses, +static void gve_dma_sync_for_device(struct gve_priv *priv, + dma_addr_t *page_buses, u64 iov_offset, u64 iov_len) { u64 last_page = (iov_offset + iov_len - 1) / PAGE_SIZE; u64 first_page = iov_offset / PAGE_SIZE; - dma_addr_t dma; u64 page; for (page = first_page; page <= last_page; page++) { - dma = page_buses[page]; - dma_sync_single_for_device(dev, dma, PAGE_SIZE, DMA_TO_DEVICE); + dma_addr_t dma = page_buses[page]; + + dma_sync_single_for_device(&priv->pdev->dev, dma, PAGE_SIZE, DMA_TO_DEVICE); } } -static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, - struct device *dev) +static int gve_tx_add_skb_copy(struct gve_priv *priv, struct gve_tx_ring *tx, struct sk_buff *skb) { int pad_bytes, hlen, hdr_nfrags, payload_nfrags, l4_hdr_offset; union gve_tx_desc *pkt_desc, *seg_desc; @@ -429,7 +456,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, hlen = is_gso ? l4_hdr_offset + tcp_hdrlen(skb) : skb_headlen(skb); - info->skb = skb; + info->skb = skb; /* We don't want to split the header, so if necessary, pad to the end * of the fifo and then put the header at the beginning of the fifo. */ @@ -447,7 +474,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, skb_copy_bits(skb, 0, tx->tx_fifo.base + info->iov[hdr_nfrags - 1].iov_offset, hlen); - gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, + gve_dma_sync_for_device(priv, tx->tx_fifo.qpl->page_buses, info->iov[hdr_nfrags - 1].iov_offset, info->iov[hdr_nfrags - 1].iov_len); copy_offset = hlen; @@ -463,7 +490,7 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, skb_copy_bits(skb, copy_offset, tx->tx_fifo.base + info->iov[i].iov_offset, info->iov[i].iov_len); - gve_dma_sync_for_device(dev, tx->tx_fifo.qpl->page_buses, + gve_dma_sync_for_device(priv, tx->tx_fifo.qpl->page_buses, info->iov[i].iov_offset, info->iov[i].iov_len); copy_offset += info->iov[i].iov_len; @@ -472,6 +499,98 @@ static int gve_tx_add_skb(struct gve_tx_ring *tx, struct sk_buff *skb, return 1 + payload_nfrags; } +static int gve_tx_add_skb_no_copy(struct gve_priv *priv, struct gve_tx_ring *tx, + struct sk_buff *skb) +{ + const struct skb_shared_info *shinfo = skb_shinfo(skb); + int hlen, payload_nfrags, l4_hdr_offset, seg_idx_bias; + union gve_tx_desc *pkt_desc, *seg_desc; + struct gve_tx_buffer_state *info; + bool is_gso = skb_is_gso(skb); + u32 idx = tx->req & tx->mask; + struct gve_tx_dma_buf *buf; + int last_mapped = 0; + u64 addr; + u32 len; + int i; + + info = &tx->info[idx]; + pkt_desc = &tx->desc[idx]; + + l4_hdr_offset = skb_checksum_start_offset(skb); + /* If the skb is gso, then we want only up to the tcp header in the first segment + * to efficiently replicate on each segment otherwise we want the linear portion + * of the skb (which will contain the checksum because skb->csum_start and + * skb->csum_offset are given relative to skb->head) in the first segment. + */ + hlen = is_gso ? 
l4_hdr_offset + tcp_hdrlen(skb) : + skb_headlen(skb); + len = skb_headlen(skb); + + info->skb = skb; + + addr = dma_map_single(tx->dev, skb->data, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(tx->dev, addr))) { + priv->dma_mapping_error++; + goto drop; + } + buf = &info->buf; + dma_unmap_len_set(buf, len, len); + dma_unmap_addr_set(buf, dma, addr); + + payload_nfrags = shinfo->nr_frags; + if (hlen < len) { + /* For gso the rest of the linear portion of the skb needs to + * be in its own descriptor. + */ + payload_nfrags++; + gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset, + 1 + payload_nfrags, hlen, addr); + + len -= hlen; + addr += hlen; + seg_desc = &tx->desc[(tx->req + 1) & tx->mask]; + seg_idx_bias = 2; + gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr); + } else { + seg_idx_bias = 1; + gve_tx_fill_pkt_desc(pkt_desc, skb, is_gso, l4_hdr_offset, + 1 + payload_nfrags, hlen, addr); + } + idx = (tx->req + seg_idx_bias) & tx->mask; + + for (i = 0; i < payload_nfrags - (seg_idx_bias - 1); i++) { + const skb_frag_t *frag = &shinfo->frags[i]; + + seg_desc = &tx->desc[idx]; + len = skb_frag_size(frag); + addr = skb_frag_dma_map(tx->dev, frag, 0, len, DMA_TO_DEVICE); + if (unlikely(dma_mapping_error(tx->dev, addr))) { + priv->dma_mapping_error++; + goto unmap_drop; + } + buf = &tx->info[idx].buf; + tx->info[idx].skb = NULL; + dma_unmap_len_set(buf, len, len); + dma_unmap_addr_set(buf, dma, addr); + + gve_tx_fill_seg_desc(seg_desc, skb, is_gso, len, addr); + idx = (idx + 1) & tx->mask; + } + + return 1 + payload_nfrags; + +unmap_drop: + i--; + for (last_mapped = i + seg_idx_bias; last_mapped >= 0; last_mapped--) { + idx = (tx->req + last_mapped) & tx->mask; + gve_tx_unmap_buf(tx->dev, &tx->info[idx]); + } +drop: + tx->dropped_pkt++; + return 0; +} + netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) { struct gve_priv *priv = netdev_priv(dev); @@ -490,12 +609,20 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev) gve_tx_put_doorbell(priv, tx->q_resources, tx->req); return NETDEV_TX_BUSY; } - nsegs = gve_tx_add_skb(tx, skb, &priv->pdev->dev); - - netdev_tx_sent_queue(tx->netdev_txq, skb->len); - skb_tx_timestamp(skb); + if (tx->raw_addressing) + nsegs = gve_tx_add_skb_no_copy(priv, tx, skb); + else + nsegs = gve_tx_add_skb_copy(priv, tx, skb); + + /* If the packet is getting sent, we need to update the skb */ + if (nsegs) { + netdev_tx_sent_queue(tx->netdev_txq, skb->len); + skb_tx_timestamp(skb); + } - /* give packets to NIC */ + /* Give packets to NIC. Even if this packet failed to send the doorbell + * might need to be rung because of xmit_more. 
+ */ tx->req += nsegs; if (!netif_xmit_stopped(tx->netdev_txq) && netdev_xmit_more()) @@ -525,24 +652,30 @@ static int gve_clean_tx_done(struct gve_priv *priv, struct gve_tx_ring *tx, info = &tx->info[idx]; skb = info->skb; + /* Unmap the buffer */ + if (tx->raw_addressing) + gve_tx_unmap_buf(tx->dev, info); /* Mark as free */ if (skb) { info->skb = NULL; bytes += skb->len; pkts++; dev_consume_skb_any(skb); - /* FIFO free */ - for (i = 0; i < ARRAY_SIZE(info->iov); i++) { - space_freed += info->iov[i].iov_len + - info->iov[i].iov_padding; - info->iov[i].iov_len = 0; - info->iov[i].iov_padding = 0; + if (!tx->raw_addressing) { + /* FIFO free */ + for (i = 0; i < ARRAY_SIZE(info->iov); i++) { + space_freed += info->iov[i].iov_len + + info->iov[i].iov_padding; + info->iov[i].iov_len = 0; + info->iov[i].iov_padding = 0; + } } } tx->done++; } - gve_tx_free_fifo(&tx->tx_fifo, space_freed); + if (!tx->raw_addressing) + gve_tx_free_fifo(&tx->tx_fifo, space_freed); u64_stats_update_begin(&tx->statss); tx->bytes_done += bytes; tx->pkt_done += pkts;