From patchwork Sun May 27 10:23:40 2018
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 921070
X-Patchwork-Delegate: davem@davemloft.net
From: Leon Romanovsky
To: Doug Ledford, Jason Gunthorpe
Cc: Leon Romanovsky, RDMA mailing list, Boris Pismenny, Matan Barak,
 Raed Salem, Yishai Hadas, Saeed Mahameed, linux-netdev
Subject: [PATCH rdma-next v1 07/13] IB/core: Support passing uhw for create_flow
Date: Sun, 27 May 2018 13:23:40 +0300
Message-Id: <20180527102346.15149-8-leon@kernel.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180527102346.15149-1-leon@kernel.org>
References: <20180527102346.15149-1-leon@kernel.org>

From: Matan Barak

This is required when user-space drivers need to pass extra information
regarding how to handle this flow steering specification.
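As an illustration only (this sketch is not part of the patch and the
provider name is made up), a driver that does not consume any
vendor-specific input can follow the same pattern mlx4/mlx5 use below
and reject a udata whose payload is not zeroed:

	static struct ib_flow *foo_ib_create_flow(struct ib_qp *qp,
						  struct ib_flow_attr *flow_attr,
						  int domain,
						  struct ib_udata *udata)
	{
		/* In-kernel callers (ib_create_flow) pass udata == NULL. */
		if (udata && udata->inlen &&
		    !ib_is_udata_cleared(udata, 0, udata->inlen))
			return ERR_PTR(-EOPNOTSUPP);

		/* ... provider-specific flow creation would go here ... */
		return ERR_PTR(-EOPNOTSUPP);
	}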
Reviewed-by: Yishai Hadas
Signed-off-by: Matan Barak
Signed-off-by: Boris Pismenny
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/uverbs_cmd.c | 7 ++++++-
 drivers/infiniband/core/verbs.c      | 2 +-
 drivers/infiniband/hw/mlx4/main.c    | 6 +++++-
 drivers/infiniband/hw/mlx5/main.c    | 7 ++++++-
 include/rdma/ib_verbs.h              | 3 ++-
 5 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index e74262ee104c..ddb9d79691be 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -3542,11 +3542,16 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
 		err = -EINVAL;
 		goto err_free;
 	}
-	flow_id = ib_create_flow(qp, flow_attr, IB_FLOW_DOMAIN_USER);
+
+	flow_id = qp->device->create_flow(qp, flow_attr,
+					  IB_FLOW_DOMAIN_USER, uhw);
+
 	if (IS_ERR(flow_id)) {
 		err = PTR_ERR(flow_id);
 		goto err_free;
 	}
+	atomic_inc(&qp->usecnt);
+	flow_id->qp = qp;
 	flow_id->uobject = uobj;
 	uobj->object = flow_id;
 	uflow = container_of(uobj, typeof(*uflow), uobject);
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index 6ddfb1fade79..0b56828c1319 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -1983,7 +1983,7 @@ struct ib_flow *ib_create_flow(struct ib_qp *qp,
 	if (!qp->device->create_flow)
 		return ERR_PTR(-EOPNOTSUPP);
 
-	flow_id = qp->device->create_flow(qp, flow_attr, domain);
+	flow_id = qp->device->create_flow(qp, flow_attr, domain, NULL);
 	if (!IS_ERR(flow_id)) {
 		atomic_inc(&qp->usecnt);
 		flow_id->qp = qp;
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index bf12394c13c1..6fe5d5d1d1d9 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1848,7 +1848,7 @@ static int mlx4_ib_add_dont_trap_rule(struct mlx4_dev *dev,
 
 static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
 				    struct ib_flow_attr *flow_attr,
-				    int domain)
+				    int domain, struct ib_udata *udata)
 {
 	int err = 0, i = 0, j = 0;
 	struct mlx4_ib_flow *mflow;
@@ -1866,6 +1866,10 @@ static struct ib_flow *mlx4_ib_create_flow(struct ib_qp *qp,
 	    (flow_attr->type != IB_FLOW_ATTR_NORMAL))
 		return ERR_PTR(-EOPNOTSUPP);
 
+	if (udata &&
+	    udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen))
+		return ERR_PTR(-EOPNOTSUPP);
+
 	memset(type, 0, sizeof(type));
 
 	mflow = kzalloc(sizeof(*mflow), GFP_KERNEL);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 25a271ef8374..59f86198eb3b 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -3363,7 +3363,8 @@ static struct mlx5_ib_flow_handler *create_sniffer_rule(struct mlx5_ib_dev *dev,
 
 static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
 					   struct ib_flow_attr *flow_attr,
-					   int domain)
+					   int domain,
+					   struct ib_udata *udata)
 {
 	struct mlx5_ib_dev *dev = to_mdev(qp->device);
 	struct mlx5_ib_qp *mqp = to_mqp(qp);
@@ -3375,6 +3376,10 @@ static struct ib_flow *mlx5_ib_create_flow(struct ib_qp *qp,
 	int err;
 	int underlay_qpn;
 
+	if (udata &&
+	    udata->inlen && !ib_is_udata_cleared(udata, 0, udata->inlen))
+		return ERR_PTR(-EOPNOTSUPP);
+
 	if (flow_attr->priority > MLX5_IB_FLOW_LAST_PRIO)
 		return ERR_PTR(-ENOMEM);
 
diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
index f6bd3b97b971..80956b1c9f4d 100644
--- a/include/rdma/ib_verbs.h
+++ b/include/rdma/ib_verbs.h
@@ -2459,7 +2459,8 @@ struct ib_device {
 	struct ib_flow *	   (*create_flow)(struct ib_qp *qp,
 						  struct ib_flow_attr
 						  *flow_attr,
-						  int domain);
+						  int domain,
+						  struct ib_udata *udata);
 	int			   (*destroy_flow)(struct ib_flow *flow_id);
 	int			   (*check_mr_status)(struct ib_mr *mr, u32 check_mask,
 						      struct ib_mr_status *mr_status);