From patchwork Wed Mar 21 12:14:25 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dan Streetman
X-Patchwork-Id: 888775
From: Dan Streetman
To: kernel-team@lists.ubuntu.com
Cc: Dan Streetman
Subject: [Xenial][Artful][PATCH 2/2] i40e/i40evf: Account for frags split over multiple descriptors in check linearize
Date: Wed, 21 Mar 2018 08:14:25 -0400
Message-Id: <20180321121425.26886-2-ddstreet@canonical.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180321121425.26886-1-ddstreet@canonical.com>
References: <20180321121425.26886-1-ddstreet@canonical.com>

From: Alexander Duyck

BugLink: http://bugs.launchpad.net/bugs/1723127

The original code for __i40e_chk_linearize didn't account for the fact
that if a fragment is 16K in size or larger it has to be split over two
descriptors, and the smaller of those two descriptors will be on the
trailing edge of the transmit. As a result we could get into situations
where the check failed to catch requests that could result in a Tx hang.

This patch takes care of that by subtracting the length of all but the
trailing edge of the stale fragment before we test the sum. By doing
this we can guarantee that we have all cases covered, including the case
of a fragment that spans multiple descriptors. We don't need to worry
about checking the inner portions of this since 12K is the maximum
aligned DMA size and that is larger than any MSS will ever be, since the
MTU limit for jumbo frames is on the order of 9K.

Signed-off-by: Alexander Duyck
Tested-by: Andrew Bowers
Signed-off-by: Jeff Kirsher
(cherry picked from commit 248de22e638f10bd5bfc7624a357f940f66ba137)
Signed-off-by: Dan Streetman
---
Note that this patch applies to both Xenial and Artful, but the previous
patch in the series is only required in Xenial.
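As additional review context (not part of the commit): a minimal
userspace sketch of the descriptor-splitting arithmetic the new code
performs. The constants mirror the driver's I40E_* definitions;
trailing_descriptor_size() is an illustrative helper name for this
sketch, not a function in the driver.

#include <stdio.h>

#define I40E_MAX_READ_REQ_SIZE		4096
#define I40E_MAX_DATA_PER_TXD		(16 * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
	(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))

/* Mirror of the loop this patch adds: peel off the data on the front
 * of an oversized fragment (the alignment pad up to the next 4K
 * read-request boundary, then full 12K aligned chunks, each consuming
 * its own descriptor) and return the size of the trailing descriptor.
 */
static int trailing_descriptor_size(int page_offset, int size)
{
	if (size > I40E_MAX_DATA_PER_TXD) {
		/* bytes up to the next 4K read-request boundary */
		int align_pad = -page_offset & (I40E_MAX_READ_REQ_SIZE - 1);

		size -= align_pad;
		do {
			size -= I40E_MAX_DATA_PER_TXD_ALIGNED;
		} while (size > I40E_MAX_DATA_PER_TXD);
	}
	return size;
}

int main(void)
{
	/* A 32K fragment starting 1K into a 4K-aligned buffer splits
	 * into a 3K pad descriptor (3072 bytes up to the boundary),
	 * two 12K descriptors, and a 5K trailing descriptor; only the
	 * 5K remains "stale" on the trailing edge of the transmit.
	 */
	printf("%d\n", trailing_descriptor_size(1024, 32 * 1024)); /* 5120 */
	return 0;
}

Only that trailing descriptor should still count against the sliding
window when the fragment goes stale; the pad and 12K chunks have
already aged out of the 8-descriptor window the hardware enforces.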
 drivers/net/ethernet/intel/i40e/i40e_txrx.c   | 26 +++++++++++++++++++++++---
 drivers/net/ethernet/intel/i40evf/i40e_txrx.c | 26 +++++++++++++++++++++++---
 2 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index a18c91e33dea..2c83a6ce1b47 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2640,10 +2640,30 @@ bool __i40e_chk_linearize(struct sk_buff *skb)
 	/* Walk through fragments adding latest fragment, testing it, and
 	 * then removing stale fragments from the sum.
 	 */
-	stale = &skb_shinfo(skb)->frags[0];
-	for (;;) {
+	for (stale = &skb_shinfo(skb)->frags[0];; stale++) {
+		int stale_size = skb_frag_size(stale);
+
 		sum += skb_frag_size(frag++);
 
+		/* The stale fragment may present us with a smaller
+		 * descriptor than the actual fragment size. To account
+		 * for that we need to remove all the data on the front and
+		 * figure out what the remainder would be in the last
+		 * descriptor associated with the fragment.
+		 */
+		if (stale_size > I40E_MAX_DATA_PER_TXD) {
+			int align_pad = -(stale->page_offset) &
+					(I40E_MAX_READ_REQ_SIZE - 1);
+
+			sum -= align_pad;
+			stale_size -= align_pad;
+
+			do {
+				sum -= I40E_MAX_DATA_PER_TXD_ALIGNED;
+				stale_size -= I40E_MAX_DATA_PER_TXD_ALIGNED;
+			} while (stale_size > I40E_MAX_DATA_PER_TXD);
+		}
+
 		/* if sum is negative we failed to make sufficient progress */
 		if (sum < 0)
 			return true;
@@ -2651,7 +2671,7 @@ bool __i40e_chk_linearize(struct sk_buff *skb)
 		if (!nr_frags--)
 			break;
 
-		sum -= skb_frag_size(stale++);
+		sum -= stale_size;
 	}
 
 	return false;
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
index f2163735528d..1f48bcdf1ea5 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
@@ -1843,10 +1843,30 @@ bool __i40evf_chk_linearize(struct sk_buff *skb)
 	/* Walk through fragments adding latest fragment, testing it, and
 	 * then removing stale fragments from the sum.
 	 */
-	stale = &skb_shinfo(skb)->frags[0];
-	for (;;) {
+	for (stale = &skb_shinfo(skb)->frags[0];; stale++) {
+		int stale_size = skb_frag_size(stale);
+
 		sum += skb_frag_size(frag++);
 
+		/* The stale fragment may present us with a smaller
+		 * descriptor than the actual fragment size. To account
+		 * for that we need to remove all the data on the front and
+		 * figure out what the remainder would be in the last
+		 * descriptor associated with the fragment.
+		 */
+		if (stale_size > I40E_MAX_DATA_PER_TXD) {
+			int align_pad = -(stale->page_offset) &
+					(I40E_MAX_READ_REQ_SIZE - 1);
+
+			sum -= align_pad;
+			stale_size -= align_pad;
+
+			do {
+				sum -= I40E_MAX_DATA_PER_TXD_ALIGNED;
+				stale_size -= I40E_MAX_DATA_PER_TXD_ALIGNED;
+			} while (stale_size > I40E_MAX_DATA_PER_TXD);
+		}
+
 		/* if sum is negative we failed to make sufficient progress */
 		if (sum < 0)
 			return true;
@@ -1854,7 +1874,7 @@ bool __i40evf_chk_linearize(struct sk_buff *skb)
 		if (!nr_frags--)
 			break;
 
-		sum -= skb_frag_size(stale++);
+		sum -= stale_size;
 	}
 
 	return false;
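For completeness, a userspace model of the walk itself, showing a
fragment layout that the old accounting waves through but the fixed
accounting flags. This is a simplification: the skb plumbing is elided,
all fragments are assumed 4K-aligned (so align_pad is zero and the peel
loop alone captures the difference), and needs_linearize() is an
illustrative name, not the driver function.

#include <stdbool.h>
#include <stdio.h>

#define I40E_MAX_BUFFER_TXD		8	/* hw: descriptors per segment */
#define I40E_MAX_READ_REQ_SIZE		4096
#define I40E_MAX_DATA_PER_TXD		(16 * 1024 - 1)
#define I40E_MAX_DATA_PER_TXD_ALIGNED \
	(I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))

/* Model of the __i40e_chk_linearize walk over an array of fragment
 * sizes: slide a window through the list and verify the six stale
 * fragments plus the newest one always cover at least gso_size bytes.
 * When "fixed" is set, the front of an oversized stale fragment is
 * peeled out of the sum as soon as it enters the window, as this
 * patch does; otherwise the whole fragment stays in the sum until it
 * ages out, masking windows that span too many descriptors.
 */
static bool needs_linearize(const int *frag_size, int nr_frags,
			    int gso_size, bool fixed)
{
	int sum = 1 - gso_size, i, stale;

	if (nr_frags < I40E_MAX_BUFFER_TXD - 1)
		return false;
	nr_frags -= I40E_MAX_BUFFER_TXD - 2;

	for (i = 0; i < 5; i++)		/* initial window: frags 0-4 */
		sum += frag_size[i];

	for (stale = 0;; stale++) {
		int stale_size = frag_size[stale];

		sum += frag_size[i++];
		if (fixed && stale_size > I40E_MAX_DATA_PER_TXD) {
			do {
				sum -= I40E_MAX_DATA_PER_TXD_ALIGNED;
				stale_size -= I40E_MAX_DATA_PER_TXD_ALIGNED;
			} while (stale_size > I40E_MAX_DATA_PER_TXD);
		}
		if (sum < 0)
			return true;
		if (!nr_frags--)
			break;
		sum -= stale_size;	/* already reduced on the fixed path */
	}
	return false;
}

int main(void)
{
	/* 32K first fragment, a run of tiny fragments, then MSS-sized
	 * ones: the old walk never sees a negative sum, while the
	 * fixed walk flags the transmit at the point the 32K fragment
	 * goes stale and only its 8K trailing descriptor may remain.
	 */
	int frags[] = { 32768, 128, 128, 128, 128, 128, 9000, 9000, 9000 };

	printf("old:   %d\n", needs_linearize(frags, 9, 9000, false)); /* 0 */
	printf("fixed: %d\n", needs_linearize(frags, 9, 9000, true));  /* 1 */
	return 0;
}

Compiled with gcc, this should print old: 0 / fixed: 1 for the example
layout, i.e. exactly the class of request the old check let through.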