From patchwork Sun Oct 9 15:51:46 2022
From: Marek Vasut <marex@denx.de>
To: u-boot@lists.denx.de
Cc: Marek Vasut, Joe Hershberger, Patrice Chotard, Patrick Delaunay,
 Ramon Fried, Stephen Warren
Subject: [PATCH 2/2] net: dwc_eth_qos: Add support for bulk RX descriptor cleaning
Date: Sun, 9 Oct 2022 17:51:46 +0200
Message-Id: <20221009155146.18697-2-marex@denx.de>
In-Reply-To: <20221009155146.18697-1-marex@denx.de>
References: <20221009155146.18697-1-marex@denx.de>
Add a new desc_per_cacheline property which lets a platform run the RX
descriptor cleanup only once per cache line worth of received packets,
i.e. after a power-of-2 number of packets, instead of after every packet.
This is useful on platforms where (axi_bus_width EQOS_AXI_WIDTH_n *
DMA DSL inter-descriptor word skip count + DMA descriptor size) is less
than the cache line size, which necessitates packing multiple DMA
descriptors into a single cache line.

In the case of TX descriptors, this is not a problem, since the driver
always does synchronous TX, i.e. the TX descriptor is always written,
flushed and polled for completion in eqos_send().

In the case of RX descriptors, it is necessary to update their status in
bulk, i.e. only after the entire cache line worth of RX descriptors has
been used up to receive data.

Signed-off-by: Marek Vasut <marex@denx.de>
Reviewed-by: Patrice Chotard
Reviewed-by: Ramon Fried
---
Cc: Joe Hershberger
Cc: Patrice Chotard
Cc: Patrick Delaunay
Cc: Ramon Fried
Cc: Stephen Warren
---
 drivers/net/dwc_eth_qos.c | 67 +++++++++++++++++++++++++--------------
 drivers/net/dwc_eth_qos.h |  2 ++
 2 files changed, 46 insertions(+), 23 deletions(-)

diff --git a/drivers/net/dwc_eth_qos.c b/drivers/net/dwc_eth_qos.c
index dde2c183b06..afc47b56ff5 100644
--- a/drivers/net/dwc_eth_qos.c
+++ b/drivers/net/dwc_eth_qos.c
@@ -75,7 +75,7 @@
  */
 static void *eqos_alloc_descs(struct eqos_priv *eqos, unsigned int num)
 {
-	return memalign(eqos->desc_size, num * eqos->desc_size);
+	return memalign(ARCH_DMA_MINALIGN, num * eqos->desc_size);
 }
 
 static void eqos_free_descs(void *descs)
@@ -92,7 +92,7 @@ static struct eqos_desc *eqos_get_desc(struct eqos_priv *eqos,
 
 void eqos_inval_desc_generic(void *desc)
 {
-	unsigned long start = (unsigned long)desc;
+	unsigned long start = (unsigned long)desc & ~(ARCH_DMA_MINALIGN - 1);
 	unsigned long end = ALIGN(start + sizeof(struct eqos_desc),
 				  ARCH_DMA_MINALIGN);
 
@@ -101,7 +101,7 @@ void eqos_inval_desc_generic(void *desc)
 
 void eqos_flush_desc_generic(void *desc)
 {
-	unsigned long start = (unsigned long)desc;
+	unsigned long start = (unsigned long)desc & ~(ARCH_DMA_MINALIGN - 1);
 	unsigned long end = ALIGN(start + sizeof(struct eqos_desc),
 				  ARCH_DMA_MINALIGN);
 
@@ -1185,6 +1185,7 @@ static int eqos_recv(struct udevice *dev, int flags, uchar **packetp)
 static int eqos_free_pkt(struct udevice *dev, uchar *packet, int length)
 {
 	struct eqos_priv *eqos = dev_get_priv(dev);
+	u32 idx, idx_mask = eqos->desc_per_cacheline - 1;
 	uchar *packet_expected;
 	struct eqos_desc *rx_desc;
 
@@ -1200,24 +1201,30 @@ static int eqos_free_pkt(struct udevice *dev, uchar *packet, int length)
 
 	eqos->config->ops->eqos_inval_buffer(packet, length);
 
-	rx_desc = eqos_get_desc(eqos, eqos->rx_desc_idx, true);
-
-	rx_desc->des0 = 0;
-	mb();
-	eqos->config->ops->eqos_flush_desc(rx_desc);
-	eqos->config->ops->eqos_inval_buffer(packet, length);
-	rx_desc->des0 = (u32)(ulong)packet;
-	rx_desc->des1 = 0;
-	rx_desc->des2 = 0;
-	/*
-	 * Make sure that if HW sees the _OWN write below, it will see all the
-	 * writes to the rest of the descriptor too.
-	 */
-	mb();
-	rx_desc->des3 = EQOS_DESC3_OWN | EQOS_DESC3_BUF1V;
-	eqos->config->ops->eqos_flush_desc(rx_desc);
-
-	writel((ulong)rx_desc, &eqos->dma_regs->ch0_rxdesc_tail_pointer);
+	if ((eqos->rx_desc_idx & idx_mask) == idx_mask) {
+		for (idx = eqos->rx_desc_idx - idx_mask;
+		     idx <= eqos->rx_desc_idx;
+		     idx++) {
+			rx_desc = eqos_get_desc(eqos, idx, true);
+			rx_desc->des0 = 0;
+			mb();
+			eqos->config->ops->eqos_flush_desc(rx_desc);
+			eqos->config->ops->eqos_inval_buffer(packet, length);
+			rx_desc->des0 = (u32)(ulong)(eqos->rx_dma_buf +
+					     (idx * EQOS_MAX_PACKET_SIZE));
+			rx_desc->des1 = 0;
+			rx_desc->des2 = 0;
+			/*
+			 * Make sure that if HW sees the _OWN write below,
+			 * it will see all the writes to the rest of the
+			 * descriptor too.
+			 */
+			mb();
+			rx_desc->des3 = EQOS_DESC3_OWN | EQOS_DESC3_BUF1V;
+			eqos->config->ops->eqos_flush_desc(rx_desc);
+		}
+		writel((ulong)rx_desc, &eqos->dma_regs->ch0_rxdesc_tail_pointer);
+	}
 
 	eqos->rx_desc_idx++;
 	eqos->rx_desc_idx %= EQOS_DESCRIPTORS_RX;
@@ -1228,12 +1235,26 @@
 static int eqos_probe_resources_core(struct udevice *dev)
 {
 	struct eqos_priv *eqos = dev_get_priv(dev);
+	unsigned int desc_step;
 	int ret;
 
 	debug("%s(dev=%p):\n", __func__, dev);
 
-	eqos->desc_size = ALIGN(sizeof(struct eqos_desc),
-				(unsigned int)ARCH_DMA_MINALIGN);
+	/* Maximum distance between neighboring descriptors, in Bytes. */
+	desc_step = sizeof(struct eqos_desc) +
+		    EQOS_DMA_CH0_CONTROL_DSL_MASK * eqos->config->axi_bus_width;
+	if (desc_step < ARCH_DMA_MINALIGN) {
+		/*
+		 * The EQoS hardware implementation cannot place one descriptor
+		 * per cacheline, it is necessary to place multiple descriptors
+		 * per cacheline in memory and do cache management carefully.
+		 */
+		eqos->desc_size = BIT(fls(desc_step) - 1);
+	} else {
+		eqos->desc_size = ALIGN(sizeof(struct eqos_desc),
+					(unsigned int)ARCH_DMA_MINALIGN);
+	}
+	eqos->desc_per_cacheline = ARCH_DMA_MINALIGN / eqos->desc_size;
 
 	eqos->tx_descs = eqos_alloc_descs(eqos, EQOS_DESCRIPTORS_TX);
 	if (!eqos->tx_descs) {
diff --git a/drivers/net/dwc_eth_qos.h b/drivers/net/dwc_eth_qos.h
index e3e43c86d11..8fccd6f0572 100644
--- a/drivers/net/dwc_eth_qos.h
+++ b/drivers/net/dwc_eth_qos.h
@@ -162,6 +162,7 @@ struct eqos_dma_regs {
 #define EQOS_DMA_SYSBUS_MODE_BLEN4		BIT(1)
 
 #define EQOS_DMA_CH0_CONTROL_DSL_SHIFT		18
+#define EQOS_DMA_CH0_CONTROL_DSL_MASK		0x7
 #define EQOS_DMA_CH0_CONTROL_PBLX8		BIT(16)
 
 #define EQOS_DMA_CH0_TX_CONTROL_TXPBL_SHIFT	16
@@ -268,6 +269,7 @@ struct eqos_priv {
 	void *rx_descs;
 	int tx_desc_idx, rx_desc_idx;
 	unsigned int desc_size;
+	unsigned int desc_per_cacheline;
 	void *tx_dma_buf;
 	void *rx_dma_buf;
 	void *rx_pkt;
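
For review, below is a small standalone sketch of the descriptor sizing
arithmetic introduced in eqos_probe_resources_core() above. The concrete
numbers (64-byte cache line, 16-byte descriptor, 32-bit AXI bus) and the
fls_() helper are illustrative assumptions, not values taken from any
particular SoC:

#include <stdio.h>

#define ARCH_DMA_MINALIGN		64	/* assumed cache line size */
#define EQOS_DESC_SIZE			16	/* assumed: 4 x 32-bit descriptor words */
#define EQOS_DMA_CH0_CONTROL_DSL_MASK	0x7	/* max DSL inter-descriptor skip count */
#define AXI_BUS_WIDTH			4	/* assumed 32-bit AXI bus, in bytes */

/* 1-based index of the most significant set bit, like the kernel's fls(). */
static unsigned int fls_(unsigned int x)
{
	unsigned int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	/* Maximum achievable distance between neighboring descriptors. */
	unsigned int desc_step = EQOS_DESC_SIZE +
				 EQOS_DMA_CH0_CONTROL_DSL_MASK * AXI_BUS_WIDTH;
	unsigned int desc_size, desc_per_cacheline;

	if (desc_step < ARCH_DMA_MINALIGN) {
		/*
		 * Descriptors cannot be spread one per cache line; round the
		 * spacing down to a power of two so a whole number of
		 * descriptors packs into each cache line.
		 */
		desc_size = 1U << (fls_(desc_step) - 1);
	} else {
		/* One descriptor per cache line, as before this patch. */
		desc_size = (EQOS_DESC_SIZE + ARCH_DMA_MINALIGN - 1) &
			    ~(ARCH_DMA_MINALIGN - 1U);
	}
	desc_per_cacheline = ARCH_DMA_MINALIGN / desc_size;

	/* With the assumed values: desc_step = 16 + 7 * 4 = 44 < 64, so
	 * desc_size = 32 and two descriptors share each cache line. */
	printf("desc_step=%u desc_size=%u desc_per_cacheline=%u\n",
	       desc_step, desc_size, desc_per_cacheline);
	return 0;
}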
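
Likewise, a sketch of the bulk refill trigger added to eqos_free_pkt():
the refill loop runs only when the just-consumed descriptor is the last
one in its cache line, so descriptors still owned by the hardware are
never flushed back prematurely. The descriptor counts here are again
assumed values, chosen only to keep the trace short:

#include <stdio.h>

#define EQOS_DESCRIPTORS_RX	8	/* assumed ring size */
#define DESC_PER_CACHELINE	4	/* assumed packing factor */

int main(void)
{
	unsigned int idx_mask = DESC_PER_CACHELINE - 1;
	unsigned int rx_desc_idx = 0;
	unsigned int pkt;

	for (pkt = 0; pkt < 8; pkt++) {
		if ((rx_desc_idx & idx_mask) == idx_mask) {
			/* Last descriptor of its cache line: hand the whole
			 * line worth of descriptors back to the hardware and
			 * advance the ch0_rxdesc_tail_pointer. */
			printf("pkt %u: refill descriptors %u..%u, bump tail pointer\n",
			       pkt, rx_desc_idx - idx_mask, rx_desc_idx);
		} else {
			/* Descriptor shares a cache line with descriptors the
			 * hardware may still be writing: defer the refill. */
			printf("pkt %u: defer refill of descriptor %u\n",
			       pkt, rx_desc_idx);
		}
		rx_desc_idx = (rx_desc_idx + 1) % EQOS_DESCRIPTORS_RX;
	}
	return 0;
}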