From patchwork Tue Mar 26 05:52:51 2019
X-Patchwork-Submitter: Kamal Heib
X-Patchwork-Id: 1065192
From: Kamal Heib <kamalheib1@gmail.com>
To: qemu-devel@nongnu.org
Cc: Kamal Heib <kamalheib1@gmail.com>, Yuval Shaia
Subject: [Qemu-devel] [PATCH 3/4] hw/rdma: Modify create/destroy QP to support SRQ
Date: Tue, 26 Mar 2019 07:52:51 +0200
Message-Id: <20190326055252.15138-4-kamalheib1@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190326055252.15138-1-kamalheib1@gmail.com>
References: <20190326055252.15138-1-kamalheib1@gmail.com>

Modify the create/destroy QP path to support a shared receive queue
(SRQ): pass the SRQ handle from pvrdma through the resource manager
down to the backend so the QP is attached to the SRQ at creation time,
and skip allocating and freeing the per-QP receive ring when the QP
uses an SRQ.

Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
---
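Not part of the patch, just an aside for reviewers: the backend change
amounts to filling ibv_qp_init_attr.srq before calling ibv_create_qp(),
which is what rdma_backend_create_qp() now does via attr.srq =
srq->ibsrq when a valid srq_handle comes down from pvrdma. Below is a
minimal stand-alone sketch against plain libibverbs; the IBV_QPT_RC
type, the helper name and the sizing numbers are placeholders, not
values taken from this patch.

#include <infiniband/verbs.h>

/* Create a QP that posts its receive work requests to an SRQ when one
 * is supplied, and to its own receive queue otherwise. */
static struct ibv_qp *create_qp_maybe_srq(struct ibv_pd *pd,
                                          struct ibv_cq *scq,
                                          struct ibv_cq *rcq,
                                          struct ibv_srq *srq)
{
    struct ibv_qp_init_attr attr = {};

    attr.qp_type = IBV_QPT_RC;        /* placeholder QP type */
    attr.send_cq = scq;
    attr.recv_cq = rcq;
    attr.cap.max_send_wr = 64;        /* placeholder sizing */
    attr.cap.max_send_sge = 4;

    if (srq) {
        /* Receive WRs go to the SRQ, so the per-QP receive queue is
         * left unconfigured. */
        attr.srq = srq;
    } else {
        attr.cap.max_recv_wr = 64;
        attr.cap.max_recv_sge = 4;
    }

    return ibv_create_qp(pd, &attr);  /* NULL on failure */
}

The same idea shows up on the pvrdma side as the is_srq checks in the
hunks below: only the send ring is mapped for an SRQ-backed QP, and
destroy_qp_rings() frees only that ring.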
 hw/rdma/rdma_backend.c   |  9 ++++--
 hw/rdma/rdma_backend.h   |  6 ++--
 hw/rdma/rdma_rm.c        | 23 +++++++++++++--
 hw/rdma/rdma_rm.h        |  3 +-
 hw/rdma/rdma_rm_defs.h   |  1 +
 hw/rdma/vmw/pvrdma_cmd.c | 61 ++++++++++++++++++++++++++--------------
 6 files changed, 72 insertions(+), 31 deletions(-)

diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c
index d75872256207..60cf0ddb3c2a 100644
--- a/hw/rdma/rdma_backend.c
+++ b/hw/rdma/rdma_backend.c
@@ -793,9 +793,9 @@ void rdma_backend_destroy_cq(RdmaBackendCQ *cq)
 
 int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
                            RdmaBackendPD *pd, RdmaBackendCQ *scq,
-                           RdmaBackendCQ *rcq, uint32_t max_send_wr,
-                           uint32_t max_recv_wr, uint32_t max_send_sge,
-                           uint32_t max_recv_sge)
+                           RdmaBackendCQ *rcq, RdmaBackendSRQ *srq,
+                           uint32_t max_send_wr, uint32_t max_recv_wr,
+                           uint32_t max_send_sge, uint32_t max_recv_sge)
 {
     struct ibv_qp_init_attr attr = {};
 
@@ -823,6 +823,9 @@ int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
     attr.cap.max_recv_wr = max_recv_wr;
     attr.cap.max_send_sge = max_send_sge;
     attr.cap.max_recv_sge = max_recv_sge;
+    if (srq) {
+        attr.srq = srq->ibsrq;
+    }
 
     qp->ibqp = ibv_create_qp(pd->ibpd, &attr);
     if (!qp->ibqp) {
diff --git a/hw/rdma/rdma_backend.h b/hw/rdma/rdma_backend.h
index cad7956d98e8..7c1a19a2b5ff 100644
--- a/hw/rdma/rdma_backend.h
+++ b/hw/rdma/rdma_backend.h
@@ -89,9 +89,9 @@ void rdma_backend_poll_cq(RdmaDeviceResources *rdma_dev_res, RdmaBackendCQ *cq);
 
 int rdma_backend_create_qp(RdmaBackendQP *qp, uint8_t qp_type,
                            RdmaBackendPD *pd, RdmaBackendCQ *scq,
-                           RdmaBackendCQ *rcq, uint32_t max_send_wr,
-                           uint32_t max_recv_wr, uint32_t max_send_sge,
-                           uint32_t max_recv_sge);
+                           RdmaBackendCQ *rcq, RdmaBackendSRQ *srq,
+                           uint32_t max_send_wr, uint32_t max_recv_wr,
+                           uint32_t max_send_sge, uint32_t max_recv_sge);
 int rdma_backend_qp_state_init(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
                                uint8_t qp_type, uint32_t qkey);
 int rdma_backend_qp_state_rtr(RdmaBackendDev *backend_dev, RdmaBackendQP *qp,
diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c
index bc5873cb4c14..86d69779dda8 100644
--- a/hw/rdma/rdma_rm.c
+++ b/hw/rdma/rdma_rm.c
@@ -384,12 +384,14 @@ int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
                      uint8_t qp_type, uint32_t max_send_wr,
                      uint32_t max_send_sge, uint32_t send_cq_handle,
                      uint32_t max_recv_wr, uint32_t max_recv_sge,
-                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn)
+                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn,
+                     uint8_t is_srq, uint32_t srq_handle)
 {
     int rc;
     RdmaRmQP *qp;
     RdmaRmCQ *scq, *rcq;
     RdmaRmPD *pd;
+    RdmaRmSRQ *srq = NULL;
     uint32_t rm_qpn;
 
     pd = rdma_rm_get_pd(dev_res, pd_handle);
@@ -406,6 +408,14 @@ int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
         return -EINVAL;
     }
 
+    if (is_srq) {
+        srq = rdma_rm_get_srq(dev_res, srq_handle);
+        if (!srq) {
+            rdma_error_report("Invalid srqn %d", srq_handle);
+            return -EINVAL;
+        }
+    }
+
     if (qp_type == IBV_QPT_GSI) {
         scq->notify = CNT_SET;
         rcq->notify = CNT_SET;
@@ -422,10 +432,17 @@ int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
     qp->send_cq_handle = send_cq_handle;
     qp->recv_cq_handle = recv_cq_handle;
     qp->opaque = opaque;
+    if (is_srq) {
+        qp->is_srq = is_srq;
+        srq->recv_cq_handle = recv_cq_handle;
+    }
 
     rc = rdma_backend_create_qp(&qp->backend_qp, qp_type, &pd->backend_pd,
-                                &scq->backend_cq, &rcq->backend_cq, max_send_wr,
-                                max_recv_wr, max_send_sge, max_recv_sge);
+                                &scq->backend_cq, &rcq->backend_cq,
+                                is_srq ? &srq->backend_srq : NULL,
+                                max_send_wr, max_recv_wr, max_send_sge,
+                                max_recv_sge);
+
     if (rc) {
         rc = -EIO;
         goto out_dealloc_qp;
diff --git a/hw/rdma/rdma_rm.h b/hw/rdma/rdma_rm.h
index e88ab95e264b..e8639909cd34 100644
--- a/hw/rdma/rdma_rm.h
+++ b/hw/rdma/rdma_rm.h
@@ -53,7 +53,8 @@ int rdma_rm_alloc_qp(RdmaDeviceResources *dev_res, uint32_t pd_handle,
                      uint8_t qp_type, uint32_t max_send_wr,
                      uint32_t max_send_sge, uint32_t send_cq_handle,
                      uint32_t max_recv_wr, uint32_t max_recv_sge,
-                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn);
+                     uint32_t recv_cq_handle, void *opaque, uint32_t *qpn,
+                     uint8_t is_srq, uint32_t srq_handle);
 RdmaRmQP *rdma_rm_get_qp(RdmaDeviceResources *dev_res, uint32_t qpn);
 int rdma_rm_modify_qp(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
                       uint32_t qp_handle, uint32_t attr_mask, uint8_t sgid_idx,
diff --git a/hw/rdma/rdma_rm_defs.h b/hw/rdma/rdma_rm_defs.h
index 2a3a409d92a0..9e992f559a8f 100644
--- a/hw/rdma/rdma_rm_defs.h
+++ b/hw/rdma/rdma_rm_defs.h
@@ -88,6 +88,7 @@ typedef struct RdmaRmQP {
     uint32_t send_cq_handle;
     uint32_t recv_cq_handle;
     enum ibv_qp_state qp_state;
+    uint8_t is_srq;
 } RdmaRmQP;
 
 typedef struct RdmaRmSRQ {
diff --git a/hw/rdma/vmw/pvrdma_cmd.c b/hw/rdma/vmw/pvrdma_cmd.c
index 4afcd2037db2..510062f05476 100644
--- a/hw/rdma/vmw/pvrdma_cmd.c
+++ b/hw/rdma/vmw/pvrdma_cmd.c
@@ -357,7 +357,7 @@ static int destroy_cq(PVRDMADev *dev, union pvrdma_cmd_req *req,
 static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
                            PvrdmaRing **rings, uint32_t scqe, uint32_t smax_sge,
                            uint32_t spages, uint32_t rcqe, uint32_t rmax_sge,
-                           uint32_t rpages)
+                           uint32_t rpages, uint8_t is_srq)
 {
     uint64_t *dir = NULL, *tbl = NULL;
     PvrdmaRing *sr, *rr;
@@ -365,9 +365,14 @@ static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
     char ring_name[MAX_RING_NAME_SZ];
     uint32_t wqe_sz;
 
-    if (!spages || spages > PVRDMA_MAX_FAST_REG_PAGES
-        || !rpages || rpages > PVRDMA_MAX_FAST_REG_PAGES) {
-        rdma_error_report("Got invalid page count for QP ring: %d, %d", spages,
+    if (!spages || spages > PVRDMA_MAX_FAST_REG_PAGES) {
+        rdma_error_report("Got invalid send page count for QP ring: %d",
+                          spages);
+        return rc;
+    }
+
+    if (!is_srq && (!rpages || rpages > PVRDMA_MAX_FAST_REG_PAGES)) {
+        rdma_error_report("Got invalid recv page count for QP ring: %d",
                           rpages);
         return rc;
     }
@@ -384,8 +389,12 @@ static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
         goto out;
     }
 
-    sr = g_malloc(2 * sizeof(*rr));
-    rr = &sr[1];
+    if (!is_srq) {
+        sr = g_malloc(2 * sizeof(*rr));
+        rr = &sr[1];
+    } else {
+        sr = g_malloc(sizeof(*sr));
+    }
 
     *rings = sr;
 
@@ -407,15 +416,18 @@ static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
         goto out_unmap_ring_state;
     }
 
-    /* Create recv ring */
-    rr->ring_state = &sr->ring_state[1];
-    wqe_sz = pow2ceil(sizeof(struct pvrdma_rq_wqe_hdr) +
-                      sizeof(struct pvrdma_sge) * rmax_sge - 1);
-    sprintf(ring_name, "qp_rring_%" PRIx64, pdir_dma);
-    rc = pvrdma_ring_init(rr, ring_name, pci_dev, rr->ring_state,
-                          rcqe, wqe_sz, (dma_addr_t *)&tbl[1 + spages], rpages);
-    if (rc) {
-        goto out_free_sr;
+    if (!is_srq) {
+        /* Create recv ring */
+        rr->ring_state = &sr->ring_state[1];
+        wqe_sz = pow2ceil(sizeof(struct pvrdma_rq_wqe_hdr) +
+                          sizeof(struct pvrdma_sge) * rmax_sge - 1);
+        sprintf(ring_name, "qp_rring_%" PRIx64, pdir_dma);
+        rc = pvrdma_ring_init(rr, ring_name, pci_dev, rr->ring_state,
+                              rcqe, wqe_sz, (dma_addr_t *)&tbl[1 + spages],
+                              rpages);
+        if (rc) {
+            goto out_free_sr;
+        }
     }
 
     goto out;
@@ -436,10 +448,12 @@ out:
     return rc;
 }
 
-static void destroy_qp_rings(PvrdmaRing *ring)
+static void destroy_qp_rings(PvrdmaRing *ring, uint8_t is_srq)
 {
     pvrdma_ring_free(&ring[0]);
-    pvrdma_ring_free(&ring[1]);
+    if (!is_srq) {
+        pvrdma_ring_free(&ring[1]);
+    }
 
     rdma_pci_dma_unmap(ring->dev, ring->ring_state, TARGET_PAGE_SIZE);
     g_free(ring);
@@ -458,7 +472,7 @@ static int create_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
     rc = create_qp_rings(PCI_DEVICE(dev), cmd->pdir_dma, &rings,
                          cmd->max_send_wr, cmd->max_send_sge, cmd->send_chunks,
                          cmd->max_recv_wr, cmd->max_recv_sge,
-                         cmd->total_chunks - cmd->send_chunks - 1);
+                         cmd->total_chunks - cmd->send_chunks - 1, cmd->is_srq);
     if (rc) {
         return rc;
     }
@@ -467,9 +481,9 @@ static int create_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
                           cmd->max_send_wr, cmd->max_send_sge,
                           cmd->send_cq_handle, cmd->max_recv_wr,
                           cmd->max_recv_sge, cmd->recv_cq_handle, rings,
-                          &resp->qpn);
+                          &resp->qpn, cmd->is_srq, cmd->srq_handle);
     if (rc) {
-        destroy_qp_rings(rings);
+        destroy_qp_rings(rings, cmd->is_srq);
         return rc;
     }
 
@@ -525,16 +539,21 @@ static int destroy_qp(PVRDMADev *dev, union pvrdma_cmd_req *req,
     struct pvrdma_cmd_destroy_qp *cmd = &req->destroy_qp;
     RdmaRmQP *qp;
     PvrdmaRing *ring;
+    uint8_t is_srq = 0;
 
     qp = rdma_rm_get_qp(&dev->rdma_dev_res, cmd->qp_handle);
     if (!qp) {
         return -EINVAL;
    }
 
+    if (qp->is_srq) {
+        is_srq = 1;
+    }
+
     rdma_rm_dealloc_qp(&dev->rdma_dev_res, cmd->qp_handle);
 
     ring = (PvrdmaRing *)qp->opaque;
-    destroy_qp_rings(ring);
+    destroy_qp_rings(ring, is_srq);
 
     return 0;
 }