From patchwork Tue Nov 6 21:27:29 2012
X-Patchwork-Submitter: Stephen Warren
X-Patchwork-Id: 197548
X-Patchwork-Delegate: afleming@freescale.com
From: Stephen Warren
To: u-boot@lists.denx.de, Tom Rini
Cc: Andy Fleming, Stephen Warren, Tom Warren
Date: Tue, 6 Nov 2012 14:27:29 -0700
Message-Id: <1352237250-8764-3-git-send-email-swarren@wwwdotorg.org>
In-Reply-To: <1352237250-8764-1-git-send-email-swarren@wwwdotorg.org>
References: <1352237250-8764-1-git-send-email-swarren@wwwdotorg.org>
Subject: [U-Boot] [PATCH V2 3/4] common: rework bouncebuf implementation

From: Stephen Warren

The current bouncebuf API requires all parameters to be passed to both
bounce_buffer_start() and bounce_buffer_stop(). Modify the bouncebuf start
function to accept a state structure as a parameter, and only require that
state struct to be passed to the stop function. This simplifies usage of
the bounce buffer by clients.

Don't modify the data pointer, but rather store the temporary buffer in
this state struct.
The bouncebuf code ensures that client code can always use a single buffer
pointer in the state structure, irrespective of whether a bounce buffer
actually had to be allocated.

Move cache management logic into the bounce buffer code, so that each
client doesn't have to duplicate this. I believe there's no need to
invalidate the buffer before a DMA operation, since flushing the cache
should prevent any write-backs.

Update the MXS MMC driver for this change.

Signed-off-by: Stephen Warren
Acked-by: Simon Glass
Tested-by: Simon Glass
---
v2:
* Move cache management into bounce buffer code.
* Simplify addr_aligned().
* free() any allocated bounce buffer.
* Added comments to struct bounce_buffer.
* Change flags parameter from uint8_t to unsigned int.
* s/bounce_buffer_state/bounce_buffer/.
* Include <linux/types.h> in bouncebuf.h so it's self-contained.
* Minor changes to rebase on top of earlier changed patches.
---
 common/bouncebuf.c   | 75 +++++++++++++++++++++++++++----------------------
 drivers/mmc/mxsmmc.c | 30 ++++---------------
 include/bouncebuf.h  | 33 ++++++++++++++++------
 3 files changed, 72 insertions(+), 66 deletions(-)

diff --git a/common/bouncebuf.c b/common/bouncebuf.c
index 4f827f8..1df12cd 100644
--- a/common/bouncebuf.c
+++ b/common/bouncebuf.c
@@ -27,21 +27,19 @@
 #include <errno.h>
 #include <bouncebuf.h>
 
-static int addr_aligned(void *data, size_t len)
+static int addr_aligned(struct bounce_buffer *state)
 {
        const ulong align_mask = ARCH_DMA_MINALIGN - 1;
 
        /* Check if start is aligned */
-       if ((ulong)data & align_mask) {
-               debug("Unaligned start address %p\n", data);
+       if ((ulong)state->user_buffer & align_mask) {
+               debug("Unaligned buffer address %p\n", state->user_buffer);
                return 0;
        }
 
-       data += len;
-
-       /* Check if end is aligned */
-       if ((ulong)data & align_mask) {
-               debug("Unaligned end address %p\n", data);
+       /* Check if length is aligned */
+       if (state->len != state->len_aligned) {
+               debug("Unaligned buffer length %d\n", state->len);
                return 0;
        }
 
@@ -49,44 +47,53 @@ static int addr_aligned(void *data, size_t len)
        return 1;
 }
 
-int bounce_buffer_start(void **data, size_t len, void **backup, uint8_t flags)
+int bounce_buffer_start(struct bounce_buffer *state, void *data,
+                       size_t len, unsigned int flags)
 {
-       void *tmp;
-       size_t alen;
-
-       if (addr_aligned(*data, len)) {
-               *backup = NULL;
-               return 0;
+       state->user_buffer = data;
+       state->bounce_buffer = data;
+       state->len = len;
+       state->len_aligned = roundup(len, ARCH_DMA_MINALIGN);
+       state->flags = flags;
+
+       if (!addr_aligned(state)) {
+               state->bounce_buffer = memalign(ARCH_DMA_MINALIGN,
+                                               state->len_aligned);
+               if (!state->bounce_buffer)
+                       return -ENOMEM;
+
+               if (state->flags & GEN_BB_READ)
+                       memcpy(state->bounce_buffer, state->user_buffer,
+                              state->len);
        }
 
-       alen = roundup(len, ARCH_DMA_MINALIGN);
-       tmp = memalign(ARCH_DMA_MINALIGN, alen);
-
-       if (!tmp)
-               return -ENOMEM;
-
-       if (flags & GEN_BB_READ)
-               memcpy(tmp, *data, len);
-
-       *backup = *data;
-       *data = tmp;
+       /*
+        * Flush data to RAM so DMA reads can pick it up,
+        * and any CPU writebacks don't race with DMA writes
+        */
+       flush_dcache_range((unsigned long)state->bounce_buffer,
+                          (unsigned long)(state->bounce_buffer) +
+                               state->len_aligned);
 
        return 0;
 }
 
-int bounce_buffer_stop(void **data, size_t len, void **backup, uint8_t flags)
+int bounce_buffer_stop(struct bounce_buffer *state)
 {
-       void *tmp = *data;
+       if (state->flags & GEN_BB_WRITE) {
+               /* Invalidate cache so that CPU can see any newly DMA'd data */
+               invalidate_dcache_range((unsigned long)state->bounce_buffer,
+                                       (unsigned long)(state->bounce_buffer) +
+                                       state->len_aligned);
+       }
 
-       /* The buffer was already aligned, since "backup" is NULL. */
-       if (!*backup)
+       if (state->bounce_buffer == state->user_buffer)
                return 0;
 
-       if (flags & GEN_BB_WRITE)
-               memcpy(*backup, *data, len);
+       if (state->flags & GEN_BB_WRITE)
+               memcpy(state->user_buffer, state->bounce_buffer, state->len);
 
-       *data = *backup;
-       free(tmp);
+       free(state->bounce_buffer);
 
        return 0;
 }
diff --git a/drivers/mmc/mxsmmc.c b/drivers/mmc/mxsmmc.c
index 109acbf..024df59 100644
--- a/drivers/mmc/mxsmmc.c
+++ b/drivers/mmc/mxsmmc.c
@@ -96,11 +96,11 @@ static int mxsmmc_send_cmd_pio(struct mxsmmc_priv *priv, struct mmc_data *data)
 static int mxsmmc_send_cmd_dma(struct mxsmmc_priv *priv, struct mmc_data *data)
 {
        uint32_t data_count = data->blocksize * data->blocks;
-       uint32_t cache_data_count = roundup(data_count, ARCH_DMA_MINALIGN);
        int dmach;
        struct mxs_dma_desc *desc = priv->desc;
-       void *addr, *backup;
-       uint8_t flags;
+       void *addr;
+       unsigned int flags;
+       struct bounce_buffer bbstate;
 
        memset(desc, 0, sizeof(struct mxs_dma_desc));
        desc->address = (dma_addr_t)desc;
@@ -115,19 +115,9 @@ static int mxsmmc_send_cmd_dma(struct mxsmmc_priv *priv, struct mmc_data *data)
                flags = GEN_BB_READ;
        }
 
-       bounce_buffer_start(&addr, data_count, &backup, flags);
+       bounce_buffer_start(&bbstate, addr, data_count, flags);
 
-       priv->desc->cmd.address = (dma_addr_t)addr;
-
-       if (data->flags & MMC_DATA_WRITE) {
-               /* Flush data to DRAM so DMA can pick them up */
-               flush_dcache_range((uint32_t)addr,
-                       (uint32_t)(addr) + cache_data_count);
-       }
-
-       /* Invalidate the area, so no writeback into the RAM races with DMA */
-       invalidate_dcache_range((uint32_t)priv->desc->cmd.address,
-               (uint32_t)(priv->desc->cmd.address + cache_data_count));
+       priv->desc->cmd.address = (dma_addr_t)bbstate.bounce_buffer;
 
        priv->desc->cmd.data |= MXS_DMA_DESC_IRQ | MXS_DMA_DESC_DEC_SEM |
                                (data_count << MXS_DMA_DESC_BYTES_OFFSET);
@@ -135,17 +125,11 @@ static int mxsmmc_send_cmd_dma(struct mxsmmc_priv *priv, struct mmc_data *data)
        dmach = MXS_DMA_CHANNEL_AHB_APBH_SSP0 + priv->id;
        mxs_dma_desc_append(dmach, priv->desc);
        if (mxs_dma_go(dmach)) {
-               bounce_buffer_stop(&addr, data_count, &backup, flags);
+               bounce_buffer_stop(&bbstate);
                return COMM_ERR;
        }
 
-       /* The data arrived into DRAM, invalidate cache over them */
-       if (data->flags & MMC_DATA_READ) {
-               invalidate_dcache_range((uint32_t)addr,
-                       (uint32_t)(addr) + cache_data_count);
-       }
-
-       bounce_buffer_stop(&addr, data_count, &backup, flags);
+       bounce_buffer_stop(&bbstate);
 
        return 0;
 }
diff --git a/include/bouncebuf.h b/include/bouncebuf.h
index aa2278c..707b588 100644
--- a/include/bouncebuf.h
+++ b/include/bouncebuf.h
@@ -25,6 +25,8 @@
 #ifndef __INCLUDE_BOUNCEBUF_H__
 #define __INCLUDE_BOUNCEBUF_H__
 
+#include <linux/types.h>
+
 /*
  * GEN_BB_READ -- Data are read from the buffer eg. by DMA hardware.
  * The source buffer is copied into the bounce buffer (if unaligned, otherwise
@@ -51,23 +53,36 @@
  */
 #define GEN_BB_RW (GEN_BB_READ | GEN_BB_WRITE)
 
+struct bounce_buffer {
+       /* Copy of data parameter passed to start() */
+       void *user_buffer;
+       /*
+        * DMA-aligned buffer. This field is always set to the value that
+        * should be used for DMA; either equal to .user_buffer, or to a
+        * freshly allocated aligned buffer.
+        */
+       void *bounce_buffer;
+       /* Copy of len parameter passed to start() */
+       size_t len;
+       /* DMA-aligned buffer length */
+       size_t len_aligned;
+       /* Copy of flags parameter passed to start() */
+       unsigned int flags;
+};
+
 /**
  * bounce_buffer_start() -- Start the bounce buffer session
+ * state: stores state passed between bounce_buffer_{start,stop}
  * data: pointer to buffer to be aligned
  * len: length of the buffer
- * backup: pointer to backup buffer (the original value is stored here if
- *         needed
  * flags: flags describing the transaction, see above.
  */
-int bounce_buffer_start(void **data, size_t len, void **backup, uint8_t flags);
+int bounce_buffer_start(struct bounce_buffer *state, void *data,
+                       size_t len, unsigned int flags);
 
 /**
  * bounce_buffer_stop() -- Finish the bounce buffer session
- * data: pointer to buffer that was aligned
- * len: length of the buffer
- * backup: pointer to backup buffer (the original value is stored here if
- *         needed
- * flags: flags describing the transaction, see above.
+ * state: stores state passed between bounce_buffer_{start,stop}
  */
-int bounce_buffer_stop(void **data, size_t len, void **backup, uint8_t flags);
+int bounce_buffer_stop(struct bounce_buffer *state);
 #endif
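
As a usage note on the API documented above, the following is a minimal
sketch of a driver read path using the reworked interface. It is not part of
the patch: my_read_blocks() and my_dma_read_into() are hypothetical stand-ins
for a real driver's transfer code, and error handling is reduced to the
minimum.

#include <bouncebuf.h>

/* Hypothetical stand-in for the driver's real DMA kick-off. */
static int my_dma_read_into(void *buf, size_t len)
{
        /* ... program the DMA engine to write 'len' bytes into 'buf' ... */
        return 0;
}

/* Hypothetical read path: the device DMAs 'len' bytes into 'dst'. */
static int my_read_blocks(void *dst, size_t len)
{
        struct bounce_buffer bbstate;
        int ret;

        /* GEN_BB_WRITE: the hardware writes the buffer, so stop() copies back */
        ret = bounce_buffer_start(&bbstate, dst, len, GEN_BB_WRITE);
        if (ret)
                return ret;     /* -ENOMEM if no aligned buffer could be allocated */

        /*
         * Always DMA to/from bbstate.bounce_buffer; it is either 'dst' itself
         * (if already aligned) or a freshly allocated aligned buffer, and
         * bounce_buffer_start() has already flushed it from the data cache.
         */
        ret = my_dma_read_into(bbstate.bounce_buffer, bbstate.len_aligned);

        /*
         * bounce_buffer_stop() invalidates the cache over the buffer, copies
         * the data back to 'dst' if a separate buffer was used, and frees it.
         * Call it on the error path too, as the mxsmmc conversion above does.
         */
        bounce_buffer_stop(&bbstate);

        return ret;
}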