From patchwork Thu Sep 23 23:20:35 2021
X-Patchwork-Submitter: Stefan Agner
X-Patchwork-Id: 1531966
X-Patchwork-Delegate: trini@ti.com
From: Stefan Agner
To: bmeng.cn@gmail.com
Cc: nsaenz@kernel.org, u-boot@lists.denx.de, mbrugger@suse.com, m.szyprowski@samsung.com, s.nawrocki@samsung.com, Stefan Agner
Subject: [RFC PATCH 4/4] nvme: translate virtual addresses into the bus's address space
Date: Fri, 24 Sep 2021 01:20:35 +0200
In-Reply-To: <10aa393df8062f14fbca0faff35e05efdcfffe96.1632439220.git.stefan@agner.ch>
References: <10aa393df8062f14fbca0faff35e05efdcfffe96.1632439220.git.stefan@agner.ch>

So far we've been content with passing physical/CPU addresses when configuring
memory addresses into NVMe controllers, but not all platforms have buses
with transparent mappings. Specifically, the Raspberry Pi 4 might introduce
an offset to memory accesses incoming from its PCIe port.

Introduce nvme_virt_to_bus() to cater for these limitations on devices
where PCIe's view of host memory doesn't match the memory as seen by the
CPU, and make sure we don't break non-DM users.

A similar change was introduced for the XHCI controller with commit
1a474559d90a ("xhci: translate virtual addresses into the bus's address
space").

Signed-off-by: Stefan Agner
---
 drivers/nvme/nvme.c | 32 ++++++++++++++++++--------------
 drivers/nvme/nvme.h | 15 +++++++++++++++
 2 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 4c4dc7cc4d..0b7082d71b 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -95,7 +95,7 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	buffer += (page_size - offset);
 
 	if (length <= page_size) {
-		*prp2 = (u64)buffer;
+		*prp2 = nvme_virt_to_bus(dev, buffer);
 		return 0;
 	}
@@ -120,16 +120,16 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 	i = 0;
 	while (nprps) {
 		if (i == prps_per_page) {
-			u64 next_prp_list = (u64)prp_pool + page_size;
-			*(prp_pool + i) = cpu_to_le64(next_prp_list);
+			u64 next = nvme_virt_to_bus(dev, prp_pool + page_size);
+			*(prp_pool + i) = cpu_to_le64(next);
 			i = 0;
 			prp_pool += page_size;
 		}
-		*(prp_pool + i++) = cpu_to_le64((u64)buffer);
+		*(prp_pool + i++) = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 		buffer += page_size;
 		nprps--;
 	}
-	*prp2 = (u64)dev->prp_pool;
+	*prp2 = nvme_virt_to_bus(dev, dev->prp_pool);
 
 	flush_dcache_range((ulong)dev->prp_pool,
 			   (ulong)dev->prp_pool + dev->prp_entry_num * sizeof(u64));
@@ -356,6 +356,7 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
 	int result;
 	u32 aqa;
 	u64 cap = dev->cap;
+	u64 dma_addr;
 	struct nvme_queue *nvmeq;
 	/* most architectures use 4KB as the page size */
 	unsigned page_shift = 12;
@@ -396,8 +397,10 @@ static int nvme_configure_admin_queue(struct nvme_dev *dev)
 	dev->ctrl_config |= NVME_CC_IOSQES | NVME_CC_IOCQES;
 
 	writel(aqa, &dev->bar->aqa);
-	nvme_writeq((ulong)nvmeq->sq_cmds, &dev->bar->asq);
-	nvme_writeq((ulong)nvmeq->cqes, &dev->bar->acq);
+	dma_addr = nvme_virt_to_bus(dev, nvmeq->sq_cmds);
+	nvme_writeq(dma_addr, &dev->bar->asq);
+	dma_addr = nvme_virt_to_bus(dev, nvmeq->cqes);
+	nvme_writeq(dma_addr, &dev->bar->acq);
 
 	result = nvme_enable_ctrl(dev);
 	if (result)
@@ -423,7 +426,7 @@ static int nvme_alloc_cq(struct nvme_dev *dev, u16 qid,
 	memset(&c, 0, sizeof(c));
 	c.create_cq.opcode = nvme_admin_create_cq;
-	c.create_cq.prp1 = cpu_to_le64((ulong)nvmeq->cqes);
+	c.create_cq.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, nvmeq->cqes));
 	c.create_cq.cqid = cpu_to_le16(qid);
 	c.create_cq.qsize = cpu_to_le16(nvmeq->q_depth - 1);
 	c.create_cq.cq_flags = cpu_to_le16(flags);
@@ -440,7 +443,7 @@ static int nvme_alloc_sq(struct nvme_dev *dev, u16 qid,
 	memset(&c, 0, sizeof(c));
 	c.create_sq.opcode = nvme_admin_create_sq;
-	c.create_sq.prp1 = cpu_to_le64((ulong)nvmeq->sq_cmds);
+	c.create_sq.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, nvmeq->sq_cmds));
 	c.create_sq.sqid = cpu_to_le16(qid);
 	c.create_sq.qsize = cpu_to_le16(nvmeq->q_depth - 1);
 	c.create_sq.sq_flags = cpu_to_le16(flags);
@@ -461,14 +464,14 @@ int nvme_identify(struct nvme_dev *dev, unsigned nsid,
 	memset(&c, 0, sizeof(c));
 	c.identify.opcode = nvme_admin_identify;
 	c.identify.nsid = cpu_to_le32(nsid);
-	c.identify.prp1 = cpu_to_le64((u64)buffer);
+	c.identify.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 
 	length -= (page_size - offset);
 	if (length <= 0) {
 		c.identify.prp2 = 0;
 	} else {
 		buffer += (page_size - offset);
-		c.identify.prp2 = cpu_to_le64((u64)buffer);
+		c.identify.prp2 = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 	}
 
 	c.identify.cns = cpu_to_le32(cns);
@@ -493,7 +496,7 @@ int nvme_get_features(struct nvme_dev *dev, unsigned fid, unsigned nsid,
 	memset(&c, 0, sizeof(c));
 	c.features.opcode = nvme_admin_get_features;
 	c.features.nsid = cpu_to_le32(nsid);
-	c.features.prp1 = cpu_to_le64((u64)buffer);
+	c.features.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 	c.features.fid = cpu_to_le32(fid);
 
 	ret = nvme_submit_admin_cmd(dev, &c, result);
@@ -519,7 +522,7 @@ int nvme_set_features(struct nvme_dev *dev, unsigned fid, unsigned dword11,
 	memset(&c, 0, sizeof(c));
 	c.features.opcode = nvme_admin_set_features;
-	c.features.prp1 = cpu_to_le64((u64)buffer);
+	c.features.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 	c.features.fid = cpu_to_le32(fid);
 	c.features.dword11 = cpu_to_le32(dword11);
@@ -775,7 +778,7 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t blknr,
 		c.rw.slba = cpu_to_le64(slba);
 		slba += lbas;
 		c.rw.length = cpu_to_le16(lbas - 1);
-		c.rw.prp1 = cpu_to_le64((ulong)buffer);
+		c.rw.prp1 = cpu_to_le64(nvme_virt_to_bus(dev, buffer));
 		c.rw.prp2 = cpu_to_le64(prp2);
 
 		status = nvme_submit_sync_cmd(dev->queues[NVME_IO_Q],
 					      &c, NULL, IO_TIMEOUT);
@@ -834,6 +837,7 @@ static int nvme_probe(struct udevice *udev)
 	struct nvme_id_ns *id;
 
 	ndev->instance = trailing_strtol(udev->name);
+	ndev->dev = udev->parent;
 	INIT_LIST_HEAD(&ndev->namespaces);
 	ndev->bar = dm_pci_map_bar(udev, PCI_BASE_ADDRESS_0,
diff --git a/drivers/nvme/nvme.h b/drivers/nvme/nvme.h
index c6aae4da5d..31e6899bca 100644
--- a/drivers/nvme/nvme.h
+++ b/drivers/nvme/nvme.h
@@ -7,8 +7,15 @@
 #ifndef __DRIVER_NVME_H__
 #define __DRIVER_NVME_H__
 
+#include
 #include
 
+#if CONFIG_IS_ENABLED(DM_PCI)
+#define nvme_to_dev(_dev)	_dev->dev
+#else
+#define nvme_to_dev(_dev)	NULL
+#endif
+
 struct nvme_id_power_state {
 	__le16	max_power;	/* centiwatts */
 	__u8	rsvd2;
@@ -596,6 +603,9 @@ enum {
 /* Represents an NVM Express device. Each nvme_dev is a PCI function. */
 struct nvme_dev {
+#if CONFIG_IS_ENABLED(DM_PCI)
+	struct udevice *dev;
+#endif
 	struct list_head node;
 	struct nvme_queue **queues;
 	u32 __iomem *dbs;
@@ -635,4 +645,9 @@ struct nvme_ns {
 	u8 flbas;
 };
 
+static inline dma_addr_t nvme_virt_to_bus(struct nvme_dev *dev, void *addr)
+{
+	return dev_phys_to_bus(nvme_to_dev(dev), virt_to_phys(addr));
+}
+
 #endif /* __DRIVER_NVME_H__ */