From patchwork Thu Sep 20 15:44:10 2012
X-Patchwork-Submitter: Arnout Vandecappelle
X-Patchwork-Id: 185426
From: "Arnout Vandecappelle (Essensium/Mind)"
To: linux-mtd@lists.infradead.org, linux-omap@vger.kernel.org, David Woodhouse, Tony Lindgren
Subject: [PATCH] mtd: omap2-nand: avoid unaligned DMA accesses, fall back on prefetch method
Date: Thu, 20 Sep 2012 17:44:10 +0200
Message-Id: <1348155850-26174-1-git-send-email-arnout@mind.be>
Cc: "Arnout Vandecappelle (Essensium/Mind)", Sven Krauss
List-Id: Linux MTD discussion mailing list

The buffers given to the read_buf and write_buf methods are not necessarily u32-aligned, while the DMA engine is configured for 32-bit accesses. As a consequence, the DMA engine reports an error, which appears in the log as follows:

	DMA misaligned error with device 4

After this, no access to the NAND is possible anymore because the transfer never completes. This usually means the system hangs if the rootfs is in NAND. To avoid this, use the prefetch method if the buffer is not aligned. The error is difficult to reproduce, because the buffers are aligned most of the time.

This bug and a patch were originally reported by Sven Krauss in
http://article.gmane.org/gmane.linux.drivers.mtd/34548

Signed-off-by: Arnout Vandecappelle (Essensium/Mind)
Cc: Sven Krauss
---
Perhaps a better method is to fetch the first few unaligned bytes with the prefetch method, and then continue with DMA. However, since it is hard to force an unaligned buffer, it is also hard to test that this method works.
---
 drivers/mtd/nand/omap2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index c719b86..a313e83 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -441,7 +441,7 @@ out_copy:
  */
 static void omap_read_buf_dma_pref(struct mtd_info *mtd, u_char *buf, int len)
 {
-	if (len <= mtd->oobsize)
+	if (len <= mtd->oobsize || !IS_ALIGNED((unsigned long)buf, 4))
 		omap_read_buf_pref(mtd, buf, len);
 	else
 		/* start transfer in DMA mode */
@@ -457,7 +457,7 @@ static void omap_read_buf_dma_pref(struct mtd_info *mtd, u_char *buf, int len)
 static void omap_write_buf_dma_pref(struct mtd_info *mtd,
 					const u_char *buf, int len)
 {
-	if (len <= mtd->oobsize)
+	if (len <= mtd->oobsize || !IS_ALIGNED((unsigned long)buf, 4))
 		omap_write_buf_pref(mtd, buf, len);
 	else
 		/* start transfer in DMA mode */
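For illustration only (this is not kernel code): the alignment test the patch adds is the kernel's IS_ALIGNED(buf, 4), and the "better method" suggested above would service the unaligned head of the buffer with the prefetch (PIO) path before handing the aligned remainder to DMA. The sketch below reproduces both ideas in plain user-space C; is_aligned4 and copy_with_fallback are hypothetical names, and memcpy stands in for both the prefetch and the DMA transfer.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Equivalent of the kernel's IS_ALIGNED((unsigned long)buf, 4). */
static int is_aligned4(const void *p)
{
	return ((uintptr_t)p & 3) == 0;
}

/*
 * Sketch of the head-then-DMA idea: copy bytes one at a time (the
 * stand-in for the prefetch/PIO path) until the destination pointer
 * reaches a 4-byte boundary, then transfer the aligned remainder in
 * one go (the stand-in for the DMA path).
 */
static void copy_with_fallback(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t head = 0;

	/* "PIO" path: advance past the unaligned head. */
	while (head < len && !is_aligned4(dst + head))
		head++;
	memcpy(dst, src, head);

	/* "DMA" path: dst + head is now 4-byte aligned (or len is exhausted). */
	memcpy(dst + head, src + head, len - head);
}
```

The applied patch takes the simpler route of routing the whole transfer through the prefetch path whenever the buffer is unaligned, precisely because a split like this is hard to exercise when unaligned buffers almost never occur in practice.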