From patchwork Thu Oct 8 05:32:54 2020
X-Patchwork-Submitter: Ike Panhc
X-Patchwork-Id: 1378412
From: Ike Panhc
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/3] net: hns3: add hns3_gro_complete for HW GRO process
Date: Thu, 8 Oct 2020 13:32:54 +0800
Message-Id: <20201008053254.12504-1-ike.pan@canonical.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201008053159.12336-1-ike.pan@canonical.com>
References: <20201008053159.12336-1-ike.pan@canonical.com>

From: Yunsheng Lin

BugLink: https://launchpad.net/bugs/1893711

When a GRO packet is received by the driver, the cwr field in the
struct tcphdr needs to be checked to decide whether to set
SKB_GSO_TCP_ECN in skb_shinfo(skb)->gso_type. So this patch adds
hns3_gro_complete to do that, and adds hns3_handle_bdinfo to call
hns3_gro_complete and hns3_rx_checksum.

Signed-off-by: Yunsheng Lin
Signed-off-by: Huazhong Tan
Signed-off-by: David S. Miller
(backported from commit d474d88f882610850abbf0ec6cf81ff90014c8ed)
Signed-off-by: Ike Panhc
---
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   | 202 +++++++++++-------
 1 file changed, 123 insertions(+), 79 deletions(-)

diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index 4fda2c39889f..376c121f7912 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -2311,6 +2311,44 @@ static void hns3_nic_reuse_page(struct sk_buff *skb, int i,
 	}
 }
 
+static int hns3_gro_complete(struct sk_buff *skb)
+{
+	__be16 type = skb->protocol;
+	struct tcphdr *th;
+	int depth = 0;
+
+	while (type == htons(ETH_P_8021Q)) {
+		struct vlan_hdr *vh;
+
+		if ((depth + VLAN_HLEN) > skb_headlen(skb))
+			return -EFAULT;
+
+		vh = (struct vlan_hdr *)(skb->data + depth);
+		type = vh->h_vlan_encapsulated_proto;
+		depth += VLAN_HLEN;
+	}
+
+	if (type == htons(ETH_P_IP)) {
+		depth += sizeof(struct iphdr);
+	} else if (type == htons(ETH_P_IPV6)) {
+		depth += sizeof(struct ipv6hdr);
+	} else {
+		netdev_err(skb->dev,
+			   "Error: FW GRO supports only IPv4/IPv6, not 0x%04x, depth: %d\n",
+			   be16_to_cpu(type), depth);
+		return -EFAULT;
+	}
+
+	th = (struct tcphdr *)(skb->data + depth);
+	skb_shinfo(skb)->gso_segs = NAPI_GRO_CB(skb)->count;
+	if (th->cwr)
+		skb_shinfo(skb)->gso_type |= SKB_GSO_TCP_ECN;
+
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+	return 0;
+}
+
 static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb,
 			     u32 l234info, u32 bd_base_info)
 {
@@ -2325,12 +2363,6 @@ static void hns3_rx_checksum(struct hns3_enet_ring *ring, struct sk_buff *skb,
 	if (!(netdev->features & NETIF_F_RXCSUM))
 		return;
 
-	/* We MUST enable hardware checksum before enabling hardware GRO */
-	if (skb_shinfo(skb)->gso_size) {
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		return;
-	}
-
 	/* check if hardware has done checksum */
 	if (!hnae3_get_bit(bd_base_info, HNS3_RXD_L3L4P_B))
 		return;
@@ -2526,8 +2558,9 @@ static int hns3_add_frag(struct hns3_enet_ring *ring, struct hns3_desc *desc,
 	return 0;
 }
 
-static void hns3_set_gro_param(struct sk_buff *skb, u32 l234info,
-			       u32 bd_base_info)
+static int hns3_set_gro_and_checksum(struct hns3_enet_ring *ring,
+				     struct sk_buff *skb, u32 l234info,
+				     u32 bd_base_info)
 {
 	u16 gro_count;
 	u32 l3_type;
@@ -2535,12 +2568,11 @@ static void hns3_set_gro_param(struct sk_buff *skb, u32 l234info,
 	gro_count = hnae3_get_field(l234info, HNS3_RXD_GRO_COUNT_M,
 				    HNS3_RXD_GRO_COUNT_S);
 	/* if there is no HW GRO, do not set gro params */
-	if (!gro_count)
-		return;
+	if (!gro_count) {
+		hns3_rx_checksum(ring, skb, l234info, bd_base_info);
+		return 0;
+	}
 
-	/* tcp_gro_complete() will copy NAPI_GRO_CB(skb)->count
-	 * to skb_shinfo(skb)->gso_segs
-	 */
 	NAPI_GRO_CB(skb)->count = gro_count;
 
 	l3_type = hnae3_get_field(l234info, HNS3_RXD_L3ID_M,
@@ -2550,13 +2582,13 @@ static void hns3_set_gro_param(struct sk_buff *skb, u32 l234info,
 				  HNS3_RXD_L3ID_S);
 	if (l3_type == HNS3_L3_TYPE_IPV4)
 		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
 	else if (l3_type == HNS3_L3_TYPE_IPV6)
 		skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
 	else
-		return;
+		return -EFAULT;
 
 	skb_shinfo(skb)->gso_size = hnae3_get_field(bd_base_info,
 						    HNS3_RXD_GRO_SIZE_M,
 						    HNS3_RXD_GRO_SIZE_S);
-	if (skb_shinfo(skb)->gso_size)
-		tcp_gro_complete(skb);
+
+	return hns3_gro_complete(skb);
 }
 
 static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring,
@@ -2581,16 +2613,85 @@ static void hns3_set_rx_skb_rss_type(struct hns3_enet_ring *ring,
 	skb_set_hash(skb, le32_to_cpu(desc->rx.rss_hash), rss_type);
 }
 
-static int hns3_handle_rx_bd(struct hns3_enet_ring *ring,
-			     struct sk_buff **out_skb)
+static int hns3_handle_bdinfo(struct hns3_enet_ring *ring, struct sk_buff *skb,
+			      struct hns3_desc *desc)
 {
 	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
+	u32 bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
+	u32 l234info = le32_to_cpu(desc->rx.l234_info);
 	enum hns3_pkt_l2t_type l2_frame_type;
+	unsigned int len;
+	int ret;
+
+	/* Based on hw strategy, the tag offloaded will be stored at
+	 * ot_vlan_tag in two layer tag case, and stored at vlan_tag
+	 * in one layer tag case.
+	 */
+	if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
+		u16 vlan_tag;
+
+		if (hns3_parse_vlan_tag(ring, desc, l234info, &vlan_tag))
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+					       vlan_tag);
+	}
+
+	if (unlikely(!(bd_base_info & BIT(HNS3_RXD_VLD_B)))) {
+		u64_stats_update_begin(&ring->syncp);
+		ring->stats.non_vld_descs++;
+		u64_stats_update_end(&ring->syncp);
+
+		return -EINVAL;
+	}
+
+	if (unlikely(!desc->rx.pkt_len || (l234info & (BIT(HNS3_RXD_TRUNCAT_B) |
+						       BIT(HNS3_RXD_L2E_B))))) {
+		u64_stats_update_begin(&ring->syncp);
+		if (l234info & BIT(HNS3_RXD_L2E_B))
+			ring->stats.l2_err++;
+		else
+			ring->stats.err_pkt_len++;
+		u64_stats_update_end(&ring->syncp);
+
+		return -EFAULT;
+	}
+
+	len = skb->len;
+
+	/* Do update ip stack process */
+	skb->protocol = eth_type_trans(skb, netdev);
+
+	/* This is needed in order to enable forwarding support */
+	ret = hns3_set_gro_and_checksum(ring, skb, l234info, bd_base_info);
+	if (unlikely(ret)) {
+		u64_stats_update_begin(&ring->syncp);
+		ring->stats.rx_err_cnt++;
+		u64_stats_update_end(&ring->syncp);
+		return ret;
+	}
+
+	l2_frame_type = hnae3_get_field(l234info, HNS3_RXD_DMAC_M,
+					HNS3_RXD_DMAC_S);
+
+	u64_stats_update_begin(&ring->syncp);
+	ring->stats.rx_pkts++;
+	ring->stats.rx_bytes += len;
+
+	if (l2_frame_type == HNS3_L2_TYPE_MULTICAST)
+		ring->stats.rx_multicast++;
+
+	u64_stats_update_end(&ring->syncp);
+
+	ring->tqp_vector->rx_group.total_bytes += len;
+	return 0;
+}
+
+static int hns3_handle_rx_bd(struct hns3_enet_ring *ring,
+			     struct sk_buff **out_skb)
+{
 	struct sk_buff *skb = ring->skb;
 	struct hns3_desc_cb *desc_cb;
 	struct hns3_desc *desc;
 	u32 bd_base_info;
-	u32 l234info;
 	int length;
 	int ret;
@@ -2650,66 +2751,12 @@ static int hns3_handle_rx_bd(struct hns3_enet_ring *ring,
 				ALIGN(ring->pull_len, sizeof(long)));
 	}
 
-	l234info = le32_to_cpu(desc->rx.l234_info);
-	bd_base_info = le32_to_cpu(desc->rx.bd_base_info);
-
-	/* Based on hw strategy, the tag offloaded will be stored at
-	 * ot_vlan_tag in two layer tag case, and stored at vlan_tag
-	 * in one layer tag case.
-	 */
-	if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
-		u16 vlan_tag;
-
-		if (hns3_parse_vlan_tag(ring, desc, l234info, &vlan_tag))
-			__vlan_hwaccel_put_tag(skb,
-					       htons(ETH_P_8021Q),
-					       vlan_tag);
-	}
-
-	if (unlikely(!hnae3_get_bit(bd_base_info, HNS3_RXD_VLD_B))) {
-		u64_stats_update_begin(&ring->syncp);
-		ring->stats.non_vld_descs++;
-		u64_stats_update_end(&ring->syncp);
-
-		dev_kfree_skb_any(skb);
-		return -EINVAL;
-	}
-
-	if (unlikely((!desc->rx.pkt_len) ||
-		     hnae3_get_bit(l234info, HNS3_RXD_TRUNCAT_B))) {
-		u64_stats_update_begin(&ring->syncp);
-		ring->stats.err_pkt_len++;
-		u64_stats_update_end(&ring->syncp);
-
-		dev_kfree_skb_any(skb);
-		return -EFAULT;
-	}
-
-	if (unlikely(hnae3_get_bit(l234info, HNS3_RXD_L2E_B))) {
-		u64_stats_update_begin(&ring->syncp);
-		ring->stats.l2_err++;
-		u64_stats_update_end(&ring->syncp);
-
+	ret = hns3_handle_bdinfo(ring, skb, desc);
+	if (unlikely(ret)) {
 		dev_kfree_skb_any(skb);
-		return -EFAULT;
+		return ret;
 	}
 
-	l2_frame_type = hnae3_get_field(l234info, HNS3_RXD_DMAC_M,
-					HNS3_RXD_DMAC_S);
-	u64_stats_update_begin(&ring->syncp);
-	if (l2_frame_type == HNS3_L2_TYPE_MULTICAST)
-		ring->stats.rx_multicast++;
-
-	ring->stats.rx_pkts++;
-	ring->stats.rx_bytes += skb->len;
-	u64_stats_update_end(&ring->syncp);
-
-	ring->tqp_vector->rx_group.total_bytes += skb->len;
-
-	/* This is needed in order to enable forwarding support */
-	hns3_set_gro_param(skb, l234info, bd_base_info);
-
-	hns3_rx_checksum(ring, skb, l234info, bd_base_info);
 	*out_skb = skb;
 	hns3_set_rx_skb_rss_type(ring, skb);
@@ -2721,7 +2768,6 @@ int hns3_clean_rx_ring(
 		void (*rx_fn)(struct hns3_enet_ring *, struct sk_buff *))
 {
 #define RCB_NOF_ALLOC_RX_BUFF_ONCE 16
-	struct net_device *netdev = ring->tqp->handle->kinfo.netdev;
 	int recv_pkts, recv_bds, clean_count, err;
 	int unused_count = hns3_desc_unused(ring) - ring->pending_buf;
 	struct sk_buff *skb = ring->skb;
@@ -2758,8 +2804,6 @@ int hns3_clean_rx_ring(
 			continue;
 		}
 
-		/* Do update ip stack process */
-		skb->protocol = eth_type_trans(skb, netdev);
 		rx_fn(ring, skb);
 		recv_bds += ring->pending_buf;
 		clean_count += ring->pending_buf;
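
For readers following the logic outside the kernel tree: hns3_gro_complete()
above walks any stacked 802.1Q tags starting at skb->data (which
eth_type_trans() has already advanced past the Ethernet header), steps over
the IPv4/IPv6 header, and then reads the cwr bit from the TCP flags byte to
decide whether to set SKB_GSO_TCP_ECN. The userspace sketch below mirrors
that walk under the same assumptions; frame_cwr_set() and the sample buffer
are illustrative names only, not part of the driver.

/* Illustrative sketch of the header walk in hns3_gro_complete().
 * frame_cwr_set() is a hypothetical name, not a driver function.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>		/* ntohs() */

#define ETH_P_8021Q	0x8100
#define ETH_P_IP	0x0800
#define ETH_P_IPV6	0x86DD
#define VLAN_HLEN	4	/* TCI (2 B) + encapsulated proto (2 B) */
#define TCP_FLAG_CWR	0x80	/* top bit of the TCP flags byte */

/* 'buf' points just past the Ethernet header, as skb->data does after
 * eth_type_trans(); 'type' plays the role of skb->protocol and 'len'
 * the role of skb_headlen().
 */
static bool frame_cwr_set(const uint8_t *buf, size_t len, uint16_t type)
{
	size_t depth = 0;
	uint16_t proto;

	while (type == ETH_P_8021Q) {	/* skip stacked VLAN tags */
		if (depth + VLAN_HLEN > len)
			return false;
		/* h_vlan_encapsulated_proto: last 2 bytes of the tag */
		memcpy(&proto, buf + depth + 2, sizeof(proto));
		type = ntohs(proto);
		depth += VLAN_HLEN;
	}

	if (type == ETH_P_IP)
		depth += 20;		/* sizeof(struct iphdr), no options */
	else if (type == ETH_P_IPV6)
		depth += 40;		/* fixed-size IPv6 header */
	else
		return false;		/* HW GRO covers only IPv4/IPv6 TCP */

	if (depth + 14 > len)		/* need the flags byte at offset 13 */
		return false;

	/* TCP flags live at byte 13 of the TCP header; CWR is bit 7. */
	return buf[depth + 13] & TCP_FLAG_CWR;
}

int main(void)
{
	/* Minimal IPv4 (20 B) + TCP (20 B) buffer with CWR|ACK set. */
	uint8_t pkt[40] = { 0 };

	pkt[0] = 0x45;			/* IPv4, ihl = 5 */
	pkt[20 + 13] = 0x90;		/* TCP flags: CWR | ACK */

	printf("cwr set: %d\n", frame_cwr_set(pkt, sizeof(pkt), ETH_P_IP));
	return 0;
}

Like the sketch, the driver advances by sizeof(struct iphdr), i.e. it
assumes a 20-byte option-less IPv4 header before the TCP header it reads.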