From patchwork Thu Aug 22 15:25:56 2019
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1151663
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [xenial:azure][PATCH] IB/mlx5: Fix MR registration flow to use UMR properly
Date: Thu, 22 Aug 2019 12:25:56 -0300
Message-Id: <20190822152557.3274-1-marcelo.cerri@canonical.com>
X-Mailer: git-send-email 2.20.1
From: Guy Levi

BugLink: https://bugs.launchpad.net/bugs/1840189

Driver shouldn't allow to use UMR to register a MR when
umr_modify_atomic_disabled is set. Otherwise it will always end up
with a failure in the post send flow which sets the UMR WQE to modify
atomic access right.

Fixes: c8d75a980fab ("IB/mlx5: Respect new UMR capabilities")
Signed-off-by: Guy Levi
Reviewed-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Link: https://lore.kernel.org/r/20190731081929.32559-1-leon@kernel.org
Signed-off-by: Doug Ledford
(cherry picked from commit e5366d309a772fef264ec85e858f9ea46f939848)
Signed-off-by: Marcelo Henrique Cerri
Acked-by: Connor Kuehl
Acked-by: Stefan Bader
---
 drivers/infiniband/hw/mlx5/mr.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 4bda44e5b602..e0a2262691fe 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -51,22 +51,12 @@ static int clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
 static int dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
 static int mr_cache_max_order(struct mlx5_ib_dev *dev);
 static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
-static bool umr_can_modify_entity_size(struct mlx5_ib_dev *dev)
-{
-	return !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled);
-}
 
 static bool umr_can_use_indirect_mkey(struct mlx5_ib_dev *dev)
 {
 	return !MLX5_CAP_GEN(dev->mdev, umr_indirect_mkey_disabled);
 }
 
-static bool use_umr(struct mlx5_ib_dev *dev, int order)
-{
-	return order <= mr_cache_max_order(dev) &&
-	       umr_can_modify_entity_size(dev);
-}
-
 static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
 {
 	int err = mlx5_core_destroy_mkey(dev->mdev, &mr->mmkey);
@@ -1214,7 +1204,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_ib_mr *mr = NULL;
-	bool populate_mtts = false;
+	bool use_umr;
 	struct ib_umem *umem;
 	int page_shift;
 	int npages;
@@ -1247,29 +1237,30 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (err < 0)
 		return ERR_PTR(err);
 
-	if (use_umr(dev, order)) {
+	use_umr = !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled) &&
+		  (!MLX5_CAP_GEN(dev->mdev, umr_modify_atomic_disabled) ||
+		   !MLX5_CAP_GEN(dev->mdev, atomic));
+
+	if (order <= mr_cache_max_order(dev) && use_umr) {
 		mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont,
 					 page_shift, order, access_flags);
 		if (PTR_ERR(mr) == -EAGAIN) {
 			mlx5_ib_dbg(dev, "cache empty for order %d\n", order);
 			mr = NULL;
 		}
-		populate_mtts = false;
 	} else if (!MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) {
 		if (access_flags & IB_ACCESS_ON_DEMAND) {
 			err = -EINVAL;
 			pr_err("Got MR registration for ODP MR > 512MB, not supported for Connect-IB\n");
 			goto error;
 		}
-		populate_mtts = true;
+		use_umr = false;
 	}
 
 	if (!mr) {
-		if (!umr_can_modify_entity_size(dev))
-			populate_mtts = true;
 		mutex_lock(&dev->slow_path_mutex);
 		mr = reg_create(NULL, pd, virt_addr, length, umem, ncont,
-				page_shift, access_flags, populate_mtts);
+				page_shift, access_flags, !use_umr);
 		mutex_unlock(&dev->slow_path_mutex);
 	}
 
@@ -1287,7 +1278,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	update_odp_mr(mr);
 #endif
 
-	if (!populate_mtts) {
+	if (use_umr) {
 		int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE;
 
 		if (access_flags & IB_ACCESS_ON_DEMAND)
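
Reviewer's illustration (not part of the submitted patch): the core of the change is the inline UMR-eligibility predicate in mlx5_ib_reg_user_mr(), replacing the old use_umr() helper. The standalone C sketch below models that decision; the fw_caps struct and can_use_umr() name are hypothetical stand-ins for the MLX5_CAP_GEN(dev->mdev, ...) capability reads in the real driver. When the predicate is false, registration takes the reg_create() slow path instead of issuing a UMR WQE that would later fail in the post-send flow.

/*
 * Illustrative sketch only, not driver code: models in plain userspace C
 * the UMR-eligibility check this backport adds to mlx5_ib_reg_user_mr().
 * struct fw_caps is a hypothetical stand-in for the firmware capability
 * bits read via MLX5_CAP_GEN() in the real code.
 */
#include <stdbool.h>
#include <stdio.h>

struct fw_caps {
	bool umr_modify_entity_size_disabled;
	bool umr_modify_atomic_disabled;
	bool atomic;
};

/*
 * UMR is only used when the device lets UMR modify the entity size, and
 * either UMR may modify atomic access rights or atomics are not supported
 * at all. Otherwise the UMR WQE would fail in the post-send flow, which is
 * the failure this patch avoids by falling back to the slow path.
 */
static bool can_use_umr(const struct fw_caps *caps)
{
	return !caps->umr_modify_entity_size_disabled &&
	       (!caps->umr_modify_atomic_disabled || !caps->atomic);
}

int main(void)
{
	/* The previously failing case: atomics are supported, but UMR is
	 * not allowed to modify atomic access rights. */
	struct fw_caps caps = {
		.umr_modify_entity_size_disabled = false,
		.umr_modify_atomic_disabled = true,
		.atomic = true,
	};

	printf("register via UMR: %s\n", can_use_umr(&caps) ? "yes" : "no");
	return 0;
}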