From patchwork Thu Aug 22 15:25:57 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1151664
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [disco:azure][PATCH] IB/mlx5: Fix MR registration flow to use UMR properly
Date: Thu, 22 Aug 2019 12:25:57 -0300
Message-Id: <20190822152557.3274-2-marcelo.cerri@canonical.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190822152557.3274-1-marcelo.cerri@canonical.com>
References: <20190822152557.3274-1-marcelo.cerri@canonical.com>
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

From: Guy Levi

BugLink: https://bugs.launchpad.net/bugs/1840189

Driver shouldn't allow use of UMR to register an MR when
umr_modify_atomic_disabled is set.
Otherwise it will always end up with a failure in the post send flow
which sets the UMR WQE to modify atomic access right.

Fixes: c8d75a980fab ("IB/mlx5: Respect new UMR capabilities")
Signed-off-by: Guy Levi
Reviewed-by: Moni Shoua
Signed-off-by: Leon Romanovsky
Link: https://lore.kernel.org/r/20190731081929.32559-1-leon@kernel.org
Signed-off-by: Doug Ledford
(cherry picked from commit e5366d309a772fef264ec85e858f9ea46f939848)
Signed-off-by: Marcelo Henrique Cerri
---
 drivers/infiniband/hw/mlx5/mr.c | 27 +++++++++------------------
 1 file changed, 9 insertions(+), 18 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index c2484cc9bc2f..c9ba5c9a5531 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -51,22 +51,12 @@ static void clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
 static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
 static int mr_cache_max_order(struct mlx5_ib_dev *dev);
 static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr);
-static bool umr_can_modify_entity_size(struct mlx5_ib_dev *dev)
-{
-	return !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled);
-}
 
 static bool umr_can_use_indirect_mkey(struct mlx5_ib_dev *dev)
 {
 	return !MLX5_CAP_GEN(dev->mdev, umr_indirect_mkey_disabled);
 }
 
-static bool use_umr(struct mlx5_ib_dev *dev, int order)
-{
-	return order <= mr_cache_max_order(dev) &&
-		umr_can_modify_entity_size(dev);
-}
-
 static int destroy_mkey(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
 {
 	int err = mlx5_core_destroy_mkey(dev->mdev, &mr->mmkey);
@@ -1321,7 +1311,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 {
 	struct mlx5_ib_dev *dev = to_mdev(pd->device);
 	struct mlx5_ib_mr *mr = NULL;
-	bool populate_mtts = false;
+	bool use_umr;
 	struct ib_umem *umem;
 	int page_shift;
 	int npages;
@@ -1354,29 +1344,30 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (err < 0)
 		return ERR_PTR(err);
 
-	if (use_umr(dev, order)) {
+	use_umr = !MLX5_CAP_GEN(dev->mdev, umr_modify_entity_size_disabled) &&
+		  (!MLX5_CAP_GEN(dev->mdev, umr_modify_atomic_disabled) ||
+		   !MLX5_CAP_GEN(dev->mdev, atomic));
+
+	if (order <= mr_cache_max_order(dev) && use_umr) {
 		mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont,
 					 page_shift, order, access_flags);
 		if (PTR_ERR(mr) == -EAGAIN) {
 			mlx5_ib_dbg(dev, "cache empty for order %d\n", order);
 			mr = NULL;
 		}
-		populate_mtts = false;
 	} else if (!MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) {
 		if (access_flags & IB_ACCESS_ON_DEMAND) {
 			err = -EINVAL;
 			pr_err("Got MR registration for ODP MR > 512MB, not supported for Connect-IB\n");
 			goto error;
 		}
-		populate_mtts = true;
+		use_umr = false;
 	}
 
 	if (!mr) {
-		if (!umr_can_modify_entity_size(dev))
-			populate_mtts = true;
 		mutex_lock(&dev->slow_path_mutex);
 		mr = reg_create(NULL, pd, virt_addr, length, umem, ncont,
-				page_shift, access_flags, populate_mtts);
+				page_shift, access_flags, !use_umr);
 		mutex_unlock(&dev->slow_path_mutex);
 	}
 
@@ -1394,7 +1385,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	update_odp_mr(mr);
 #endif
 
-	if (!populate_mtts) {
+	if (use_umr) {
 		int update_xlt_flags = MLX5_IB_UPD_XLT_ENABLE;
 
 		if (access_flags & IB_ACCESS_ON_DEMAND)