From patchwork Wed Aug 21 14:09:05 2019
X-Patchwork-Submitter: Aaron Williams
X-Patchwork-Id: 1150868
From: Aaron Williams
Date: Wed, 21 Aug 2019 07:09:05 -0700
Message-ID: <20190821140905.20856-1-awilliams@marvell.com>
Cc: Aaron Williams
Subject: [U-Boot] [PATCH] nvme: Fix PRP Offset Invalid
List-Id: U-Boot discussion

When large writes take place, I saw a Samsung EVO 970+ return a status
value of 0x13, PRP Offset Invalid. I tracked this down to improper
handling of PRP entries.
The blocks the PRP entries are placed in cannot cross a page boundary
and thus should be allocated on page boundaries. This is how the Linux
kernel driver works.

With this patch, the PRP pool is allocated on a page boundary and,
other than the very first allocation, the pool size is a multiple of
the page size. Each page can hold (4096 / 8) - 1 entries, since the
last entry must point to the next page in the pool.

Change-Id: I8df66c87d6a6105da556d327d4cc5148e444d20e
Signed-off-by: Aaron Williams
---
 drivers/nvme/nvme.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 7008a54a6d..71ea226820 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -74,6 +74,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	u64 *prp_pool;
 	int length = total_len;
 	int i, nprps;
+	u32 prps_per_page = (page_size >> 3) - 1;
+	u32 num_pages;
+
 	length -= (page_size - offset);
 
 	if (length <= 0) {
@@ -90,15 +93,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	}
 
 	nprps = DIV_ROUND_UP(length, page_size);
+	num_pages = DIV_ROUND_UP(nprps, prps_per_page);
 
 	if (nprps > dev->prp_entry_num) {
 		free(dev->prp_pool);
-		dev->prp_pool = malloc(nprps << 3);
+		dev->prp_pool = memalign(page_size, num_pages * page_size);
 		if (!dev->prp_pool) {
 			printf("Error: malloc prp_pool fail\n");
 			return -ENOMEM;
 		}
-		dev->prp_entry_num = nprps;
+		dev->prp_entry_num = ((page_size >> 3) - 1) * num_pages;
 	}
 
 	prp_pool = dev->prp_pool;
@@ -791,12 +795,6 @@ static int nvme_probe(struct udevice *udev)
 	}
 	memset(ndev->queues, 0, NVME_Q_NUM * sizeof(struct nvme_queue *));
 
-	ndev->prp_pool = malloc(MAX_PRP_POOL);
-	if (!ndev->prp_pool) {
-		ret = -ENOMEM;
-		printf("Error: %s: Out of memory!\n", udev->name);
-		goto free_nvme;
-	}
 	ndev->prp_entry_num = MAX_PRP_POOL >> 3;
 
 	ndev->cap = nvme_readq(&ndev->bar->cap);
@@ -808,6 +806,13 @@ static int nvme_probe(struct udevice *udev)
 	if (ret)
 		goto free_queue;
 
+	ndev->prp_pool = memalign(ndev->page_size, MAX_PRP_POOL);
+	if (!ndev->prp_pool) {
+		ret = -ENOMEM;
+		printf("Error: %s: Out of memory!\n", udev->name);
+		goto free_nvme;
+	}
+
 	ret = nvme_setup_io_queues(ndev);
 	if (ret)
 		goto free_queue;