From patchwork Thu Aug 21 11:39:30 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andy Whitcroft
X-Patchwork-Id: 381947
From: Andy Whitcroft
To: kernel-team@lists.ubuntu.com
Cc: Andy Whitcroft
Subject: [trusty 1/2] bnx2x: Fix kernel crash and data miscompare after EEH recovery
Date: Thu, 21 Aug 2014 12:39:30 +0100
Message-Id: <1408621171-20164-2-git-send-email-apw@canonical.com>
X-Mailer: git-send-email 2.1.0.rc1
In-Reply-To: <1408621171-20164-1-git-send-email-apw@canonical.com>
References: <1408621171-20164-1-git-send-email-apw@canonical.com>
List-Id: Kernel team discussions
Sender: kernel-team-bounces@lists.ubuntu.com

From: "wenxiong@linux.vnet.ibm.com"

A rmb() is required to ensure that the CQE is not read before it
is written by the adapter DMA.  PCI ordering rules will make sure
the other fields are written before the marker at the end of
struct eth_fast_path_rx_cqe but without rmb() a weakly ordered
processor can process stale data.

Without the barrier we have observed various crashes, including
bnx2x_tpa_start being called on queues not stopped (resulting in
the message "start of bin not in stop") and NULL pointer
exceptions from bnx2x_rx_int.

Signed-off-by: Milton Miller
Signed-off-by: Wen Xiong
Signed-off-by: David S. Miller
(cherry picked from commit 9aaae044abe95de182d09004cc3fa181bf22e6e0)
BugLink: http://bugs.launchpad.net/bugs/1353105
Signed-off-by: Andy Whitcroft
---
 drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
index 4265df2..74e6040 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
@@ -862,6 +862,18 @@ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
 		bd_prod = RX_BD(bd_prod);
 		bd_cons = RX_BD(bd_cons);
 
+		/* A rmb() is required to ensure that the CQE is not read
+		 * before it is written by the adapter DMA. PCI ordering
+		 * rules will make sure the other fields are written before
+		 * the marker at the end of struct eth_fast_path_rx_cqe
+		 * but without rmb() a weakly ordered processor can process
+		 * stale data. Without the barrier TPA state-machine might
+		 * enter inconsistent state and kernel stack might be
+		 * provided with incorrect packet description - these lead
+		 * to various kernel crashed.
+		 */
+		rmb();
+
 		cqe_fp_flags = cqe_fp->type_error_flags;
 		cqe_fp_type = cqe_fp_flags & ETH_FAST_PATH_RX_CQE_TYPE;
 
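
For reviewers who want to see the ordering hazard in isolation: below is a
minimal, self-contained userspace sketch. It is not part of the patch and
deliberately does not use the kernel's rmb(); it models the same
publish/consume pattern with C11 atomics. A "producer" thread stands in for
the adapter DMA (it writes the descriptor body, then a completion marker)
and the consumer must order its reads after observing the marker. All names
here (fake_cqe, marker, payload) are hypothetical.

/*
 * Illustrative sketch only -- not the bnx2x code.  The producer fills the
 * descriptor body and then publishes a completion marker with a release
 * store.  The consumer spins on the marker and then issues an acquire
 * fence (the analogue of the rmb() added by the patch) before reading the
 * body; without that fence a weakly ordered CPU may observe stale data.
 */
#include <inttypes.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct fake_cqe {
	uint32_t payload;	/* descriptor body, written first */
	atomic_uint marker;	/* completion marker, written last */
};

static struct fake_cqe cqe;

static void *producer(void *arg)
{
	(void)arg;
	cqe.payload = 0xcafef00d;	/* plain store: the descriptor body */
	/* release store orders the payload write before the marker */
	atomic_store_explicit(&cqe.marker, 1, memory_order_release);
	return NULL;
}

static void *consumer(void *arg)
{
	(void)arg;
	/* wait until the completion marker becomes visible */
	while (atomic_load_explicit(&cqe.marker, memory_order_relaxed) == 0)
		;
	/* read barrier: do not let the payload read be satisfied early */
	atomic_thread_fence(memory_order_acquire);
	printf("payload = 0x%" PRIx32 "\n", cqe.payload);
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}

Build with something like "cc -pthread sketch.c". On a strongly ordered
x86 machine the missing barrier rarely, if ever, bites, which is consistent
with the problem only showing up on weakly ordered processors.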