From patchwork Sun Oct 6 15:59:53 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 1172547
X-Patchwork-Delegate: davem@davemloft.net
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Or Gerlitz, Yamin Friedman, Saeed Mahameed, linux-netdev
Subject: [PATCH mlx5-next 1/3] net/mlx5: Expose optimal performance scatter entries capability
Date: Sun, 6 Oct 2019 18:59:53 +0300
Message-Id: <20191006155955.31445-2-leon@kernel.org>
In-Reply-To: <20191006155955.31445-1-leon@kernel.org>
References: <20191006155955.31445-1-leon@kernel.org>

From: Yamin Friedman

Expose the maximum number of scatter entries per RDMA READ for optimal
performance.
Signed-off-by: Yamin Friedman
Reviewed-by: Or Gerlitz
Signed-off-by: Leon Romanovsky
---
 include/linux/mlx5/mlx5_ifc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index 138c50d5a353..c0bfb1d90dd2 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -1153,7 +1153,7 @@ struct mlx5_ifc_cmd_hca_cap_bits {
 	u8         log_max_srq[0x5];
 	u8         reserved_at_b0[0x10];
 
-	u8         reserved_at_c0[0x8];
+	u8         max_sgl_for_optimized_performance[0x8];
 	u8         log_max_cq_sz[0x8];
 	u8         reserved_at_d0[0xb];
 	u8         log_max_cq[0x5];
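The new field takes over the 8 bits previously named reserved_at_c0, i.e. it
starts at bit offset 0xc0 of the command HCA capability layout, and the driver
reads it with MLX5_CAP_GEN(mdev, max_sgl_for_optimized_performance) (see patch
2/3). As a rough, stand-alone illustration of how such a bit-packed,
big-endian capability field is extracted -- the helper name and the buffer
contents below are made up for the example and are not the kernel's
MLX5_GET() implementation:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * get_cap_field() is a made-up stand-in for the kernel's MLX5_GET():
 * mlx5_ifc layouts are sequences of big-endian 32-bit words, and the first
 * field declared in each word occupies its most significant bits.
 */
static uint32_t get_cap_field(const uint8_t *caps, unsigned int bit_off,
			      unsigned int bit_sz)
{
	uint32_t word;

	memcpy(&word, caps + (bit_off / 32) * 4, sizeof(word));
	word = ntohl(word);
	return (word >> (32 - bit_off % 32 - bit_sz)) & ((1U << bit_sz) - 1);
}

int main(void)
{
	uint8_t caps[0x100] = { 0 };

	/* pretend the firmware reported an optimal READ scatter list of 16 */
	caps[0xc0 / 8] = 16;

	printf("max_sgl_for_optimized_performance = %u\n",
	       get_cap_field(caps, 0xc0, 8));
	return 0;
}

Built and run, the snippet prints max_sgl_for_optimized_performance = 16,
regardless of host endianness.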
From patchwork Sun Oct 6 15:59:54 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 1172548
X-Patchwork-Delegate: davem@davemloft.net
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Or Gerlitz, Yamin Friedman, Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 2/3] RDMA/mlx5: Add capability for max sge to get optimized performance
Date: Sun, 6 Oct 2019 18:59:54 +0300
Message-Id: <20191006155955.31445-3-leon@kernel.org>
In-Reply-To: <20191006155955.31445-1-leon@kernel.org>
References: <20191006155955.31445-1-leon@kernel.org>

From: Yamin Friedman

Allow the IB device to advertise the maximum number of scatter-gather
entries per RDMA READ that still yields optimal performance. In certain
cases it may be preferable for a device to perform UMR memory
registration rather than post a single RDMA READ with many scatter
entries. Devices that can use different memory registration schemes
based on the number of scatter-gather entries gain a significant
performance increase from this hint. Exposing it as a generic capability
lets each vendor fine-tune the point at which memory registration
becomes the better choice.

Signed-off-by: Yamin Friedman
Reviewed-by: Or Gerlitz
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/main.c | 2 ++
 include/rdma/ib_verbs.h           | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index fa23c8e7043b..39d54e285ae9 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -1012,6 +1012,8 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
 		1 << MLX5_CAP_GEN(mdev, log_max_klm_list_size);
 	props->max_pi_fast_reg_page_list_len =
 		props->max_fast_reg_page_list_len / 2;
+	props->max_sgl_rd =
+		MLX5_CAP_GEN(mdev, max_sgl_for_optimized_performance);
 	get_atomic_caps_qp(dev, props);
 	props->masked_atomic_cap = IB_ATOMIC_NONE;
 	props->max_mcast_grp = 1 << MLX5_CAP_GEN(mdev, log_max_mcg);
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index 4f671378dbfc..60fd98a9b7e8 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -445,6 +445,8 @@ struct ib_device_attr {
 	struct ib_tm_caps	tm_caps;
 	struct ib_cq_caps	cq_caps;
 	u64			max_dm_size;
+	/* Max entries for sgl for optimized performance per READ */
+	u32			max_sgl_rd;
 };
 
 enum ib_mtu {
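Because the hint now lives in the generic struct ib_device_attr rather than in
an mlx5-specific query, any provider can report it. Below is a hypothetical
sketch -- the "foo" driver and foo_fw_optimal_read_sgl() are fictional, not
code in the tree -- of how another vendor might fill the same field from its
.query_device callback; leaving the field at 0 simply means "no preferred
limit", so the core keeps its existing behaviour (see patch 3/3):

#include <rdma/ib_verbs.h>

/*
 * foo_fw_optimal_read_sgl() stands in for a vendor-specific firmware query;
 * neither it nor the "foo" driver exist upstream.
 */
static u32 foo_fw_optimal_read_sgl(struct ib_device *ibdev)
{
	return 16;	/* e.g. 16 scatter entries per READ is the sweet spot */
}

static int foo_query_device(struct ib_device *ibdev,
			    struct ib_device_attr *props,
			    struct ib_udata *uhw)
{
	/* ... fill the remaining attributes from firmware ... */

	/* advertise the READ scatter/gather threshold, 0 if there is none */
	props->max_sgl_rd = foo_fw_optimal_read_sgl(ibdev);

	return 0;
}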
From patchwork Sun Oct 6 15:59:55 2019
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 1172549
X-Patchwork-Delegate: davem@davemloft.net
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Or Gerlitz, Yamin Friedman, Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next 3/3] RDMA/rw: Support threshold for registration vs scattering to local pages
Date: Sun, 6 Oct 2019 18:59:55 +0300
Message-Id: <20191006155955.31445-4-leon@kernel.org>
In-Reply-To: <20191006155955.31445-1-leon@kernel.org>
References: <20191006155955.31445-1-leon@kernel.org>

From: Yamin Friedman

If there are more scatter entries than the recommended limit provided by
the IB device, use UMR registration instead of scattering directly to
local pages. This provides optimal performance when performing large RDMA
READs over devices that advertise the threshold capability.

With ConnectX-5 running NVMe-oF RDMA with FIO, single QP, 128KB writes:

Without use of cap: 70 Gb/sec
With use of cap:    84 Gb/sec

Signed-off-by: Yamin Friedman
Reviewed-by: Or Gerlitz
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/rw.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index 5337393d4dfe..ecff40efcb88 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -20,9 +20,7 @@ module_param_named(force_mr, rdma_rw_force_mr, bool, 0);
 MODULE_PARM_DESC(force_mr, "Force usage of MRs for RDMA READ/WRITE operations");
 
 /*
- * Check if the device might use memory registration. This is currently only
- * true for iWarp devices. In the future we can hopefully fine tune this based
- * on HCA driver input.
+ * Check if the device might use memory registration.
  */
 static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
 {
@@ -30,6 +28,8 @@ static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
 		return true;
 	if (unlikely(rdma_rw_force_mr))
 		return true;
+	if (dev->attrs.max_sgl_rd)
+		return true;
 	return false;
 }
 
@@ -37,9 +37,6 @@ static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
  * Check if the device will use memory registration for this RW operation.
  * We currently always use memory registrations for iWarp RDMA READs, and
  * have a debug option to force usage of MRs.
- *
- * XXX: In the future we can hopefully fine tune this based on HCA driver
- * input.
  */
 static inline bool rdma_rw_io_needs_mr(struct ib_device *dev, u8 port_num,
 		enum dma_data_direction dir, int dma_nents)
@@ -48,6 +45,9 @@ static inline bool rdma_rw_io_needs_mr(struct ib_device *dev, u8 port_num,
 		return true;
 	if (unlikely(rdma_rw_force_mr))
 		return true;
+	if (dev->attrs.max_sgl_rd && dir == DMA_FROM_DEVICE
+	    && dma_nents > dev->attrs.max_sgl_rd)
+		return true;
 	return false;
 }
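To make the effect of the new check concrete: with 4 KB pages, a 128 KB READ
maps to 32 scatter entries, so a device that advertises, say, max_sgl_rd = 16
now goes through MR registration, while smaller I/Os keep posting the sgl
directly. Below is a stand-alone sketch of just that comparison; it
deliberately omits the iWarp and force_mr checks that rdma_rw_io_needs_mr()
also performs, and all of the numbers are made up for the example:

#include <stdbool.h>
#include <stdio.h>

/*
 * Mirrors the new condition added to rdma_rw_io_needs_mr(): a READ whose
 * scatter list is longer than the advertised max_sgl_rd should be backed by
 * an MR registration instead of being posted entry by entry.
 */
static bool io_needs_mr(unsigned int max_sgl_rd, bool read_from_device,
			unsigned int dma_nents)
{
	return max_sgl_rd && read_from_device && dma_nents > max_sgl_rd;
}

int main(void)
{
	unsigned int max_sgl_rd = 16;		  /* hypothetical device hint */
	unsigned int nents = (128 * 1024) / 4096; /* 128 KB I/O -> 32 pages */

	printf("READ with %u entries: %s\n", nents,
	       io_needs_mr(max_sgl_rd, true, nents) ?
	       "register an MR first" : "post the sgl directly");
	return 0;
}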