From patchwork Sun Dec 11 15:27:24 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jit Loon Lim
X-Patchwork-Id: 1714635
X-Patchwork-Delegate: trini@ti.com
From: Jit Loon Lim
To: u-boot@lists.denx.de
Cc: Jagan Teki , Vignesh R
 , Marek , Simon , Tien Fong , Kok Kiang , Siew Chin , Sin Hui , Raaj , Dinesh , Boon Khai , Alif , Teik Heng , Hazim , Jit Loon Lim , Sieu Mun Tang
Subject: [PATCH] spl: fit: nand: fix fit loading on bad blocks
Date: Sun, 11 Dec 2022 23:27:24 +0800
Message-Id: <20221211152724.5948-1-jit.loon.lim@intel.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
List-Id: U-Boot discussion
Sender: "U-Boot" <u-boot-bounces@lists.denx.de>

From: Tien Fong Chee

The offset at which the image is to be loaded from NAND is retrieved
from the itb header. Bad blocks in the NAND area holding the itb image
can invalidate that offset, which must therefore be adjusted to account
for the state of the erase blocks concerned.

Signed-off-by: Tien Fong Chee
Signed-off-by: Jit Loon Lim
---
 common/spl/spl_nand.c         |  3 +-
 drivers/mtd/nand/raw/denali.c | 91 ++++++++++++++++++++++++++++++++++-
 2 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/common/spl/spl_nand.c b/common/spl/spl_nand.c
index 7b7579a2df..d94148ec71 100644
--- a/common/spl/spl_nand.c
+++ b/common/spl/spl_nand.c
@@ -45,10 +45,11 @@ static ulong spl_nand_fit_read(struct spl_load_info *load, ulong offs,
 	int err;
 	ulong sector;
 
-	sector = *(int *)load->priv;
 	offs *= load->bl_len;
 	size *= load->bl_len;
+	sector = *(int *)load->priv;
 	offs = sector + nand_spl_adjust_offset(sector, offs - sector);
+
 	err = nand_spl_load_image(offs, size, dst);
 	if (err)
 		return 0;
diff --git a/drivers/mtd/nand/raw/denali.c b/drivers/mtd/nand/raw/denali.c
index c827f80281..290c12ac96 100644
--- a/drivers/mtd/nand/raw/denali.c
+++ b/drivers/mtd/nand/raw/denali.c
@@ -2,7 +2,7 @@
 /*
  * Copyright (C) 2014 Panasonic Corporation
  * Copyright (C) 2013-2014, Altera Corporation
- * Copyright (C) 2009-2010, Intel Corporation and its suppliers.
+ * Copyright (C) 2009-2022, Intel Corporation and its suppliers.
  */
 
 #include
@@ -1374,3 +1374,92 @@ free_buf:
 
 	return ret;
 }
+
+#ifdef CONFIG_SPL_BUILD
+struct mtd_info *nand_get_mtd(void)
+{
+	struct mtd_info *mtd;
+
+	mtd = get_nand_dev_by_index(nand_curr_device);
+	if (!mtd)
+		hang();
+
+	return mtd;
+}
+
+int nand_spl_load_image(u32 offset, u32 len, void *dst)
+{
+	size_t count = len, actual = 0, page_align_overhead = 0;
+	u32 page_align_offset = 0;
+	u8 *page_buffer;
+	int err = 0;
+	struct mtd_info *mtd;
+
+	if (!len || !dst)
+		return -EINVAL;
+
+	mtd = nand_get_mtd();
+
+	if ((offset & (mtd->writesize - 1)) != 0) {
+		/*
+		 * Unaligned start: read the first partial page through a
+		 * bounce buffer, then fall through to the aligned read.
+		 */
+		page_buffer = malloc_cache_aligned(mtd->writesize);
+		if (!page_buffer) {
+			debug("Error: allocating buffer\n");
+			return -ENOMEM;
+		}
+		page_align_overhead = offset % mtd->writesize;
+		page_align_offset = (offset / mtd->writesize) * mtd->writesize;
+		count = mtd->writesize;
+		err = nand_read_skip_bad(mtd, page_align_offset, &count,
+					 &actual, mtd->size, page_buffer);
+		if (err) {
+			free(page_buffer);
+			return err;
+		}
+		count -= page_align_overhead;
+		count = min((size_t)len, count);
+		memcpy(dst, page_buffer + page_align_overhead, count);
+		free(page_buffer);
+		len -= count;
+		if (!len)
+			return err;
+		offset += count;
+		dst += count;
+		count = len;
+	}
+
+	return nand_read_skip_bad(mtd, offset, &count, &actual, mtd->size, dst);
+}
+
+/*
+ * Adjust the load offset to skip bad blocks.
+ *
+ * The offset at which the image is to be loaded from NAND is retrieved
+ * from the itb header. Bad blocks in the NAND area holding the itb
+ * image could invalidate that offset, so walk the erase blocks covered
+ * by [sector, sector + offs] and push the offset out by one erase
+ * block for every bad block found.
+ */
+u32 nand_spl_adjust_offset(u32 sector, u32 offs)
+{
+	u32 sector_align_offset, sector_align_end_offset;
+	struct mtd_info *mtd;
+
+	mtd = nand_get_mtd();
+
+	sector_align_offset = sector & ~(mtd->erasesize - 1);
+	sector_align_end_offset = (sector + offs) & ~(mtd->erasesize - 1);
+
+	while (sector_align_offset <= sector_align_end_offset) {
+		if (nand_block_isbad(mtd, sector_align_offset)) {
+			offs += mtd->erasesize;
+			sector_align_end_offset += mtd->erasesize;
+		}
+		sector_align_offset += mtd->erasesize;
+	}
+
+	return offs;
+}
+
+void nand_deselect(void) {}
+#endif