From patchwork Wed Feb 6 23:00:19 2019
X-Patchwork-Submitter: Cong Wang
X-Patchwork-Id: 1037786
X-Patchwork-Delegate: davem@davemloft.net
From: Cong Wang
To: netdev@vger.kernel.org
Cc: Cong Wang, Saeed Mahameed, Tariq Toukan
Subject: [Patch net-next v2] mlx5: use RCU lock in mlx5_eq_cq_get()
Date: Wed, 6 Feb 2019 15:00:19 -0800
Message-Id: <20190206230019.1303-1-xiyou.wangcong@gmail.com>
X-Mailer: git-send-email 2.20.1

mlx5_eq_cq_get() is called in the IRQ
handler, and the spinlock inside it becomes heavily contended when we
test a heavy workload with 60 RX queues and 80 CPUs; the contention
shows up clearly in the flame graph.

In fact, radix_tree_lookup() is perfectly safe under the RCU read lock,
so we don't have to take a spinlock on this hot path. This is very
similar to commit 291c566a2891 ("net/mlx4_core: Fix racy CQ (Completion
Queue) free"). Slow paths are still serialized with the spinlock, and
with synchronize_irq() it should be safe to simply move the fast path
to the RCU read lock.

This patch by itself reduces latency by about 50% for our memcached
workload on the 4.14 kernel we tested. Upstream, as Saeed pointed out,
this spinlock was reworked in commit 02d92f790364 ("net/mlx5: CQ
Database per EQ"), so the improvement may be smaller there.

Cc: Saeed Mahameed
Cc: Tariq Toukan
Acked-by: Saeed Mahameed
Signed-off-by: Cong Wang
---
 drivers/net/ethernet/mellanox/mlx5/core/eq.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index ee04aab65a9f..7092457705a2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -114,11 +114,11 @@ static struct mlx5_core_cq *mlx5_eq_cq_get(struct mlx5_eq *eq, u32 cqn)
 	struct mlx5_cq_table *table = &eq->cq_table;
 	struct mlx5_core_cq *cq = NULL;
 
-	spin_lock(&table->lock);
+	rcu_read_lock();
 	cq = radix_tree_lookup(&table->tree, cqn);
 	if (likely(cq))
 		mlx5_cq_hold(cq);
-	spin_unlock(&table->lock);
+	rcu_read_unlock();
 
 	return cq;
 }
@@ -371,9 +371,9 @@ int mlx5_eq_add_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
 	struct mlx5_cq_table *table = &eq->cq_table;
 	int err;
 
-	spin_lock_irq(&table->lock);
+	spin_lock(&table->lock);
 	err = radix_tree_insert(&table->tree, cq->cqn, cq);
-	spin_unlock_irq(&table->lock);
+	spin_unlock(&table->lock);
 
 	return err;
 }
@@ -383,9 +383,9 @@ int mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq)
 	struct mlx5_cq_table *table = &eq->cq_table;
 	struct mlx5_core_cq *tmp;
 
-	spin_lock_irq(&table->lock);
+	spin_lock(&table->lock);
 	tmp = radix_tree_delete(&table->tree, cq->cqn);
-	spin_unlock_irq(&table->lock);
+	spin_unlock(&table->lock);
 
 	if (!tmp) {
 		mlx5_core_warn(eq->dev, "cq 0x%x not found in eq 0x%x tree\n",
			       eq->eqn, cq->cqn);
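
For reference, the locking pattern the patch relies on can be sketched
outside of mlx5 as follows. This is a minimal illustration under stated
assumptions, not mlx5 code: the names demo_table, demo_obj, demo_lookup,
demo_insert, and demo_remove are hypothetical, and refcount_t stands in
for the driver's mlx5_cq_hold()/mlx5_cq_put() reference counting. The
key point is that radix tree lookups are documented to be safe under
rcu_read_lock(), so the IRQ-path reader only needs RCU plus a reference
taken inside the critical section, while writers keep serializing
against each other with the spinlock.

/* Sketch of the lock-free lookup pattern this patch adopts.
 * All names here are hypothetical, not taken from the mlx5 driver.
 */
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>
#include <linux/refcount.h>
#include <linux/spinlock.h>

struct demo_obj {
	refcount_t refcount;
};

struct demo_table {
	spinlock_t lock;		/* serializes writers only */
	struct radix_tree_root tree;
};

/* Fast path (IRQ context): no spinlock, just RCU plus a reference. */
static struct demo_obj *demo_lookup(struct demo_table *table, unsigned long id)
{
	struct demo_obj *obj;

	rcu_read_lock();
	obj = radix_tree_lookup(&table->tree, id);
	if (obj)
		refcount_inc(&obj->refcount);	/* pin it before leaving RCU */
	rcu_read_unlock();

	return obj;
}

/* Slow path: writers still serialize against each other with the lock. */
static int demo_insert(struct demo_table *table, unsigned long id,
		       struct demo_obj *obj)
{
	int err;

	spin_lock(&table->lock);
	err = radix_tree_insert(&table->tree, id, obj);
	spin_unlock(&table->lock);

	return err;
}

/* Removal: delete under the lock. Before freeing the object, the caller
 * must ensure no IRQ-path reader can still be inside demo_lookup() for
 * it (the commit message relies on synchronize_irq() for this) and that
 * every reference taken by readers has been dropped.
 */
static struct demo_obj *demo_remove(struct demo_table *table, unsigned long id)
{
	struct demo_obj *obj;

	spin_lock(&table->lock);
	obj = radix_tree_delete(&table->tree, id);
	spin_unlock(&table->lock);

	return obj;
}

This also shows why the patch can downgrade spin_lock_irq() to
spin_lock() on the slow paths: once the IRQ handler no longer takes the
spinlock, the writers no longer need to disable interrupts to avoid
deadlocking against it.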