From patchwork Wed May 24 07:07:07 2017
X-Patchwork-Submitter: Peter Pan (peterpandong)
X-Patchwork-Id: 766339
X-Patchwork-Delegate: boris.brezillon@free-electrons.com
From: Peter Pan
To: linux-mtd@lists.infradead.org
Subject: [PATCH v6 11/15] nand: spi: add basic operations support
Date: Wed, 24 May 2017 15:07:07 +0800
Message-ID: <1495609631-18880-12-git-send-email-peterpandong@micron.com>
In-Reply-To: <1495609631-18880-1-git-send-email-peterpandong@micron.com>
References: <1495609631-18880-1-git-send-email-peterpandong@micron.com>
Cc: peterpansjtu@gmail.com, linshunquan1@hisilicon.com, peterpandong@micron.com
List-Id: Linux MTD discussion mailing list

This commit adds support for the read, readoob, write, writeoob and
erase operations in the new SPI NAND framework. There is no ECC support
yet.

Signed-off-by: Peter Pan
---
 drivers/mtd/nand/spi/core.c | 638 ++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mtd/spinand.h |   3 +
 2 files changed, 641 insertions(+)

diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index 93ce212..6251469 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -108,6 +108,222 @@ static int spinand_read_status(struct spinand_device *spinand, u8 *status)
 }
 
 /**
+ * spinand_get_cfg - get configuration register value
+ * @spinand: SPI NAND device structure
+ * @cfg: buffer to store the value
+ * Description:
+ * Configuration register includes OTP config, Lock Tight enable/disable
+ * and Internal ECC enable/disable.
+ */
+static int spinand_get_cfg(struct spinand_device *spinand, u8 *cfg)
+{
+	return spinand_read_reg(spinand, REG_CFG, cfg);
+}
+
+/**
+ * spinand_set_cfg - set value to configuration register
+ * @spinand: SPI NAND device structure
+ * @cfg: value to set
+ * Description:
+ * Configuration register includes OTP config, Lock Tight enable/disable
+ * and Internal ECC enable/disable.
+ */
+static int spinand_set_cfg(struct spinand_device *spinand, u8 cfg)
+{
+	return spinand_write_reg(spinand, REG_CFG, cfg);
+}
+
+/**
+ * spinand_disable_ecc - disable internal ECC
+ * @spinand: SPI NAND device structure
+ */
+static void spinand_disable_ecc(struct spinand_device *spinand)
+{
+	u8 cfg = 0;
+
+	spinand_get_cfg(spinand, &cfg);
+
+	if ((cfg & CFG_ECC_MASK) == CFG_ECC_ENABLE) {
+		cfg &= ~CFG_ECC_ENABLE;
+		spinand_set_cfg(spinand, cfg);
+	}
+}
+
+/**
+ * spinand_write_enable - send command 06h to enable writing or erasing
+ * the NAND cells
+ * @spinand: SPI NAND device structure
+ */
+static int spinand_write_enable(struct spinand_device *spinand)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = SPINAND_CMD_WR_ENABLE;
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
+ * spinand_read_page_to_cache - send command 13h to read data from the NAND
+ * array to the cache
+ * @spinand: SPI NAND device structure
+ * @page_addr: page to read
+ */
+static int spinand_read_page_to_cache(struct spinand_device *spinand,
+				      u32 page_addr)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = SPINAND_CMD_PAGE_READ;
+	op.n_addr = 3;
+	op.addr[0] = (u8)(page_addr >> 16);
+	op.addr[1] = (u8)(page_addr >> 8);
+	op.addr[2] = (u8)page_addr;
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
+ * spinand_get_address_bits - return the number of bits used to transfer
+ * the address for a given opcode
+ * @opcode: command's operation code
+ */
+static int spinand_get_address_bits(u8 opcode)
+{
+	switch (opcode) {
+	case SPINAND_CMD_READ_FROM_CACHE_QUAD_IO:
+		return 4;
+	case SPINAND_CMD_READ_FROM_CACHE_DUAL_IO:
+		return 2;
+	default:
+		return 1;
+	}
+}
+
+/**
+ * spinand_get_data_bits - return the number of bits used to transfer
+ * the data for a given opcode
+ * @opcode: command's operation code
+ */
+static int spinand_get_data_bits(u8 opcode)
+{
+	switch (opcode) {
+	case SPINAND_CMD_READ_FROM_CACHE_QUAD_IO:
+	case SPINAND_CMD_READ_FROM_CACHE_X4:
+	case SPINAND_CMD_PROG_LOAD_X4:
+	case SPINAND_CMD_PROG_LOAD_RDM_DATA_X4:
+		return 4;
+	case SPINAND_CMD_READ_FROM_CACHE_DUAL_IO:
+	case SPINAND_CMD_READ_FROM_CACHE_X2:
+		return 2;
+	default:
+		return 1;
+	}
+}
+
+/**
+ * spinand_read_from_cache - read data out of the cache register
+ * @spinand: SPI NAND device structure
+ * @page_addr: page to read
+ * @column: the location to read from the cache
+ * @len: number of bytes to read
+ * @rbuf: buffer to hold @len bytes
+ */
+static int spinand_read_from_cache(struct spinand_device *spinand,
+				   u32 page_addr, u32 column,
+				   size_t len, u8 *rbuf)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = spinand->read_cache_op;
+	op.n_addr = 2;
+	op.addr[0] = (u8)(column >> 8);
+	op.addr[1] = (u8)column;
+	op.addr_nbits = spinand_get_address_bits(spinand->read_cache_op);
+	op.n_rx = len;
+	op.rx_buf = rbuf;
+	op.data_nbits = spinand_get_data_bits(spinand->read_cache_op);
+
+	if (spinand->manufacturer.manu->ops->prepare_op)
+		spinand->manufacturer.manu->ops->prepare_op(spinand, &op,
+							    page_addr, column);
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
+ * spinand_write_to_cache - write data to the cache register
+ * @spinand: SPI NAND device structure
+ * @page_addr: page to write
+ * @column: the location to write to in the cache
+ * @len: number of bytes to write
+ * @wbuf: buffer holding @len bytes
+ */
+static int spinand_write_to_cache(struct spinand_device *spinand, u32 page_addr,
+				  u32 column, size_t len, const u8 *wbuf)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = spinand->write_cache_op;
+	op.n_addr = 2;
+	op.addr[0] = (u8)(column >> 8);
+	op.addr[1] = (u8)column;
+	op.addr_nbits = spinand_get_address_bits(spinand->write_cache_op);
+	op.n_tx = len;
+	op.tx_buf = wbuf;
+	op.data_nbits = spinand_get_data_bits(spinand->write_cache_op);
+
+	if (spinand->manufacturer.manu->ops->prepare_op)
+		spinand->manufacturer.manu->ops->prepare_op(spinand, &op,
+							    page_addr, column);
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
+ * spinand_program_execute - send command 10h to write a page from the
+ * cache to the NAND array
+ * @spinand: SPI NAND device structure
+ * @page_addr: the physical page location to write the page to
+ */
+static int spinand_program_execute(struct spinand_device *spinand,
+				   u32 page_addr)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = SPINAND_CMD_PROG_EXC;
+	op.n_addr = 3;
+	op.addr[0] = (u8)(page_addr >> 16);
+	op.addr[1] = (u8)(page_addr >> 8);
+	op.addr[2] = (u8)page_addr;
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
+ * spinand_erase_block - send command D8h to erase a block
+ * @spinand: SPI NAND device structure
+ * @page_addr: the start page address of the block to be erased
+ */
+static int spinand_erase_block(struct spinand_device *spinand, u32 page_addr)
+{
+	struct spinand_op op;
+
+	spinand_init_op(&op);
+	op.cmd = SPINAND_CMD_BLK_ERASE;
+	op.n_addr = 3;
+	op.addr[0] = (u8)(page_addr >> 16);
+	op.addr[1] = (u8)(page_addr >> 8);
+	op.addr[2] = (u8)page_addr;
+
+	return spinand_exec_op(spinand, &op);
+}
+
+/**
  * spinand_wait - wait until the command is done
  * @spinand: SPI NAND device structure
  * @s: buffer to store status register value (can be NULL)
@@ -193,6 +409,415 @@ static int spinand_lock_block(struct spinand_device *spinand, u8 lock)
 }
 
 /**
+ * spinand_do_read_page - read a page from the device into a buffer
+ * @mtd: MTD device structure
+ * @page_addr: page address/raw address
+ * @oob_only: read OOB only or the whole page
+ */
+static int spinand_do_read_page(struct mtd_info *mtd, u32 page_addr,
+				bool oob_only)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	int ret;
+
+	spinand_read_page_to_cache(spinand, page_addr);
+
+	ret = spinand_wait(spinand, NULL);
+	if (ret < 0) {
+		dev_err(spinand->dev, "error %d waiting for page 0x%x to cache\n",
+			ret, page_addr);
+		return ret;
+	}
+
+	if (!oob_only)
+		spinand_read_from_cache(spinand, page_addr, 0,
+					nand_page_size(nand) +
+					nand_per_page_oobsize(nand),
+					spinand->buf);
+	else
+		spinand_read_from_cache(spinand, page_addr,
+					nand_page_size(nand),
+					nand_per_page_oobsize(nand),
+					spinand->oobbuf);
+
+	return 0;
+}
+
+/**
+ * spinand_do_write_page - write data from a buffer to the device
+ * @mtd: MTD device structure
+ * @page_addr: page address/raw address
+ * @oob_only: write OOB only or the whole page
+ */
+static int spinand_do_write_page(struct mtd_info *mtd, u32 page_addr,
+				 bool oob_only)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	u8 status;
+	int ret = 0;
+
+	spinand_write_enable(spinand);
+
+	if (!oob_only)
+		spinand_write_to_cache(spinand, page_addr, 0,
+				       nand_page_size(nand) +
+				       nand_per_page_oobsize(nand),
+				       spinand->buf);
+	else
+		spinand_write_to_cache(spinand, page_addr, nand_page_size(nand),
+				       nand_per_page_oobsize(nand),
+				       spinand->oobbuf);
+
+	spinand_program_execute(spinand, page_addr);
+
+	ret = spinand_wait(spinand, &status);
+	if (ret < 0) {
+		dev_err(spinand->dev, "error %d programming page 0x%x from cache\n",
+			ret, page_addr);
+		return ret;
+	}
+
+	if ((status & STATUS_P_FAIL_MASK) == STATUS_P_FAIL) {
+		dev_err(spinand->dev, "program page 0x%x failed\n", page_addr);
+		ret = -EIO;
+	}
+
+	return ret;
+}
+
+/**
+ * spinand_read_pages - read data from the device into a buffer
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @ops: oob operations description structure
+ */
+static int spinand_read_pages(struct mtd_info *mtd, loff_t from,
+			      struct mtd_oob_ops *ops)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	int size, ret;
+	int ooblen = mtd->oobsize;
+	bool oob_only = !ops->datbuf;
+	struct nand_page_iter iter;
+
+	ops->retlen = 0;
+	ops->oobretlen = 0;
+
+	nand_for_each_page(nand, from, ops->len, ops->ooboffs, ops->ooblen,
+			   ooblen, &iter) {
+		ret = spinand_do_read_page(mtd, iter.page, oob_only);
+		if (ret)
+			break;
+
+		if (ops->datbuf) {
+			size = min_t(int, iter.dataleft,
+				     nand_page_size(nand) - iter.pageoffs);
+			memcpy(ops->datbuf + ops->retlen,
+			       spinand->buf + iter.pageoffs, size);
+			ops->retlen += size;
+		}
+
+		if (ops->oobbuf) {
+			size = min_t(int, iter.oobleft, ooblen);
+			memcpy(ops->oobbuf + ops->oobretlen,
+			       spinand->oobbuf + ops->ooboffs, size);
+			ops->oobretlen += size;
+		}
+	}
+
+	return ret;
+}
+
+/**
+ * spinand_do_read_ops - read data from the device into a buffer
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @ops: oob operations description structure
+ */
+static int spinand_do_read_ops(struct mtd_info *mtd, loff_t from,
+			       struct mtd_oob_ops *ops)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	int ret;
+
+	ret = nand_check_address(nand, from);
+	if (ret) {
+		dev_err(spinand->dev, "%s: invalid read address\n", __func__);
+		return ret;
+	}
+
+	ret = nand_check_oob_ops(nand, from, ops);
+	if (ret) {
+		dev_err(spinand->dev,
+			"%s: invalid oob operation input\n", __func__);
+		return ret;
+	}
+
+	mutex_lock(&spinand->lock);
+	ret = spinand_read_pages(mtd, from, ops);
+	mutex_unlock(&spinand->lock);
+
+	return ret;
+}
+
+/**
+ * spinand_write_pages - write data from a buffer to the device
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @ops: oob operations description structure
+ */
+static int spinand_write_pages(struct mtd_info *mtd, loff_t to,
+			       struct mtd_oob_ops *ops)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	int ret = 0;
+	int size = 0;
+	int oob_size = 0;
+	int ooblen = mtd->oobsize;
+	bool oob_only = !ops->datbuf;
+	struct nand_page_iter iter;
+
+	ops->retlen = 0;
+	ops->oobretlen = 0;
+
+	nand_for_each_page(nand, to, ops->len, ops->ooboffs, ops->ooblen,
+			   ooblen, &iter) {
+		memset(spinand->buf, 0xff,
+		       nand_page_size(nand) + nand_per_page_oobsize(nand));
+
+		if (ops->oobbuf) {
+			oob_size = min_t(int, iter.oobleft, ooblen);
+			memcpy(spinand->oobbuf + ops->ooboffs,
+			       ops->oobbuf + ops->oobretlen, oob_size);
+		}
+
+		if (ops->datbuf) {
+			size = min_t(int, iter.dataleft,
+				     nand_page_size(nand) - iter.pageoffs);
+			memcpy(spinand->buf + iter.pageoffs,
+			       ops->datbuf + ops->retlen, size);
+		}
+
+		ret = spinand_do_write_page(mtd, iter.page, oob_only);
+		if (ret) {
+			dev_err(spinand->dev, "error %d writing page 0x%x\n",
+				ret, iter.page);
+			return ret;
+		}
+
+		if (ops->datbuf)
+			ops->retlen += size;
+
+		if (ops->oobbuf)
+			ops->oobretlen += oob_size;
+	}
+
+	return ret;
+}
+
+/**
+ * spinand_do_write_ops - write data from a buffer to the device
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @ops: oob operations description structure
+ */
+static int spinand_do_write_ops(struct mtd_info *mtd, loff_t to,
+				struct mtd_oob_ops *ops)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	int ret = 0;
+
+	ret = nand_check_address(nand, to);
+	if (ret) {
+		dev_err(spinand->dev, "%s: invalid write address\n", __func__);
+		return ret;
+	}
+
+	ret = nand_check_oob_ops(nand, to, ops);
+	if (ret) {
+		dev_err(spinand->dev,
+			"%s: invalid oob operation input\n", __func__);
+		return ret;
+	}
+
+	if (nand_oob_ops_across_page(mtd_to_nand(mtd), ops)) {
+		dev_err(spinand->dev,
+			"%s: write with OOB must not cross page boundary\n",
+			__func__);
+		return -EINVAL;
+	}
+
+	mutex_lock(&spinand->lock);
+	ret = spinand_write_pages(mtd, to, ops);
+	mutex_unlock(&spinand->lock);
+
+	return ret;
+}
+
+/**
+ * spinand_read - [MTD Interface] read page data
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ */
+static int spinand_read(struct mtd_info *mtd, loff_t from, size_t len,
+			size_t *retlen, u8 *buf)
+{
+	struct mtd_oob_ops ops;
+	int ret;
+
+	memset(&ops, 0, sizeof(ops));
+	ops.len = len;
+	ops.datbuf = buf;
+	ops.mode = MTD_OPS_PLACE_OOB;
+	ret = spinand_do_read_ops(mtd, from, &ops);
+	*retlen = ops.retlen;
+
+	return ret;
+}
+
+/**
+ * spinand_write - [MTD Interface] write page data
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ */
+static int spinand_write(struct mtd_info *mtd, loff_t to, size_t len,
+			 size_t *retlen, const u8 *buf)
+{
+	struct mtd_oob_ops ops;
+	int ret;
+
+	memset(&ops, 0, sizeof(ops));
+	ops.len = len;
+	ops.datbuf = (uint8_t *)buf;
+	ops.mode = MTD_OPS_PLACE_OOB;
+	ret = spinand_do_write_ops(mtd, to, &ops);
+	*retlen = ops.retlen;
+
+	return ret;
+}
+
+/**
+ * spinand_read_oob - [MTD Interface] read page data and/or out-of-band
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @ops: oob operation description structure
+ */
+static int spinand_read_oob(struct mtd_info *mtd, loff_t from,
+			    struct mtd_oob_ops *ops)
+{
+	int ret = -ENOTSUPP;
+
+	ops->retlen = 0;
+	switch (ops->mode) {
+	case MTD_OPS_PLACE_OOB:
+	case MTD_OPS_AUTO_OOB:
+	case MTD_OPS_RAW:
+		ret = spinand_do_read_ops(mtd, from, ops);
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * spinand_write_oob - [MTD Interface] write page data and/or out-of-band
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @ops: oob operation description structure
+ */
+static int spinand_write_oob(struct mtd_info *mtd, loff_t to,
+			     struct mtd_oob_ops *ops)
+{
+	int ret = -ENOTSUPP;
+
+	ops->retlen = 0;
+	switch (ops->mode) {
+	case MTD_OPS_PLACE_OOB:
+	case MTD_OPS_AUTO_OOB:
+	case MTD_OPS_RAW:
+		ret = spinand_do_write_ops(mtd, to, ops);
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * spinand_erase - [MTD Interface] erase block(s)
+ * @mtd: MTD device structure
+ * @einfo: erase instruction
+ */
+static int spinand_erase(struct mtd_info *mtd, struct erase_info *einfo)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nand(mtd);
+	loff_t offs = einfo->addr, len = einfo->len;
+	u8 status;
+	int ret;
+
+	ret = nand_check_erase_ops(nand, einfo);
+	if (ret) {
+		dev_err(spinand->dev, "invalid erase operation input\n");
+		return ret;
+	}
+
+	mutex_lock(&spinand->lock);
+	einfo->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+	einfo->state = MTD_ERASING;
+
+	while (len) {
+		spinand_write_enable(spinand);
+		spinand_erase_block(spinand, nand_offs_to_page(nand, offs));
+
+		ret = spinand_wait(spinand, &status);
+		if (ret < 0) {
+			dev_err(spinand->dev,
+				"block erase command wait failed\n");
+			einfo->state = MTD_ERASE_FAILED;
+			goto erase_exit;
+		}
+
+		if ((status & STATUS_E_FAIL_MASK) == STATUS_E_FAIL) {
+			dev_err(spinand->dev,
+				"erase block 0x%012llx failed\n", offs);
+			einfo->state = MTD_ERASE_FAILED;
+			einfo->fail_addr = offs;
+			goto erase_exit;
+		}
+
+		/* Increment page address and decrement length */
+		len -= nand_eraseblock_size(nand);
+		offs += nand_eraseblock_size(nand);
+	}
+
+	einfo->state = MTD_ERASE_DONE;
+
+erase_exit:
+
+	ret = einfo->state == MTD_ERASE_DONE ? 0 : -EIO;
+
+	mutex_unlock(&spinand->lock);
+
+	/* Invoke the erase callback on success */
+	if (!ret)
+		mtd_erase_callback(einfo);
+
+	return ret;
+}
+
+/**
  * spinand_set_rd_wr_op - choose the best read write command
  * @spinand: SPI NAND device structure
  * Description:
@@ -390,9 +1015,22 @@ int spinand_init(struct spinand_device *spinand)
 	 * area is available for user.
 	 */
 	mtd->oobavail = mtd->oobsize;
+	mtd->_erase = spinand_erase;
+	/*
+	 * Since there is no ECC support right now, spinand_read(),
+	 * spinand_write(), spinand_read_oob() and spinand_write_oob()
+	 * all treat MTD_OPS_PLACE_OOB and MTD_OPS_AUTO_OOB as
+	 * MTD_OPS_RAW.
+	 */
+	mtd->_read = spinand_read;
+	mtd->_write = spinand_write;
+	mtd->_read_oob = spinand_read_oob;
+	mtd->_write_oob = spinand_write_oob;
 
 	/* After power up, all blocks are locked, so unlock it here. */
 	spinand_lock_block(spinand, BL_ALL_UNLOCKED);
+	/* Right now, we don't support ECC, so disable on-die ECC. */
+	spinand_disable_ecc(spinand);
 
 	return 0;
 
diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
index dd9da71..04ad1dd 100644
--- a/include/linux/mtd/spinand.h
+++ b/include/linux/mtd/spinand.h
@@ -103,11 +103,14 @@ struct spinand_controller_ops {
  * return directly and let others to detect.
  * @init: initialize SPI NAND device.
  * @cleanup: clean SPI NAND device footprint.
+ * @prepare_op: prepare read/write operation.
  */
 struct spinand_manufacturer_ops {
 	bool (*detect)(struct spinand_device *spinand);
 	int (*init)(struct spinand_device *spinand);
 	void (*cleanup)(struct spinand_device *spinand);
+	void (*prepare_op)(struct spinand_device *spinand,
+			   struct spinand_op *op, u32 page, u32 column);
 };
 
 /**