
[v2,3/4] mtd: spinand: Add support for continuous read operation

Message ID 20230209120853.660564-4-jaimeliao.tw@gmail.com
State New
Delegated to: Miquel Raynal
Series mtd: spinand: Add continuous read mode support

Commit Message

liao jaime Feb. 9, 2023, 12:08 p.m. UTC
The continuous read operation consists of three phases.
First, it starts with the page read command; the first page of data
is read into the cache after the read latency tRD.
Second, Read From Cache commands (03h/0Bh/3Bh/6Bh/BBh/EBh) are issued
to read the data out of the cache continuously.
Finally, after all the data has been read out, the host pulls CS#
high to terminate the continuous read operation and waits tRST for
the NAND device to reset its read operation.
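
In spi-mem terms the first two phases map onto ops roughly as below
(a simplified sketch: single-bit I/O, 3-byte row address; "row",
"buf" and "len" are placeholders, and the real driver would use the
chip's selected op variants):

	/* Phase 1: load the first page into the device cache (13h). */
	struct spi_mem_op pload =
		SPI_MEM_OP(SPI_MEM_OP_CMD(0x13, 1),
			   SPI_MEM_OP_ADDR(3, row, 1),
			   SPI_MEM_OP_NO_DUMMY,
			   SPI_MEM_OP_NO_DATA);

	/*
	 * Phase 2: one long read from cache (0Bh). The device keeps
	 * feeding page after page for as long as CS# stays low;
	 * deasserting CS# at the end of the op is phase 3.
	 */
	struct spi_mem_op rdcache =
		SPI_MEM_OP(SPI_MEM_OP_CMD(0x0b, 1),
			   SPI_MEM_OP_ADDR(2, 0, 1),
			   SPI_MEM_OP_DUMMY(1, 1),
			   SPI_MEM_OP_DATA_IN(len, buf, 1));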

Continuous reads pay off when reading more than one page. The column
address is ignored in this operation: a full page of data is read
out for each page.

The performance of continuous read mode was measured as follows: the
flash is set to QUAD mode, the SPI bus runs at 25MHz in direct
mapping mode, and the MTD test module is used to benchmark the reads.

Below are the test results for the two cases. Continuous read
reduces the time spent when reading multiple pages; for example, the
eraseblock read speed goes from 2278 KiB/s to 11053 KiB/s in this
setup.
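
The numbers below come from the mtd_speedtest module and can be
reproduced with, for example:

	# assuming the SPI-NAND flash is registered as MTD device 0
	modprobe mtd_speedtest dev=0 count=100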

===============Continuous======================
mtd_speedtest: MTD device: 0    count: 100
mtd_speedtest: MTD device size 268435456, eraseblock size 131072,
               page size 2048, count of eraseblocks 2048, pages per
               eraseblock 64, OOB size 64
mtd_test: scanning for bad eraseblocks
mtd_test: scanned 100 eraseblocks, 0 are bad
mtd_speedtest: testing eraseblock write speed
mtd_speedtest: eraseblock write speed is 1298 KiB/s
mtd_speedtest: testing eraseblock read speed
mtd_speedtest: eraseblock read speed is 11053 KiB/s
mtd_speedtest: testing page write speed
mtd_speedtest: page write speed is 1291 KiB/s
mtd_speedtest: testing page read speed
mtd_speedtest: page read speed is 3240 KiB/s
mtd_speedtest: testing 2 page write speed
mtd_speedtest: 2 page write speed is 1289 KiB/s
mtd_speedtest: testing 2 page read speed
mtd_speedtest: 2 page read speed is 2909 KiB/s
mtd_speedtest: Testing erase speed
mtd_speedtest: erase speed is 45229 KiB/s
mtd_speedtest: Testing 2x multi-block erase speed
mtd_speedtest: 2x multi-block erase speed is 62135 KiB/s
mtd_speedtest: Testing 4x multi-block erase speed
mtd_speedtest: 4x multi-block erase speed is 60093 KiB/s
mtd_speedtest: Testing 8x multi-block erase speed
mtd_speedtest: 8x multi-block erase speed is 61244 KiB/s
mtd_speedtest: Testing 16x multi-block erase speed
mtd_speedtest: 16x multi-block erase speed is 61538 KiB/s
mtd_speedtest: Testing 32x multi-block erase speed
mtd_speedtest: 32x multi-block erase speed is 61835 KiB/s
mtd_speedtest: Testing 64x multi-block erase speed
mtd_speedtest: 64x multi-block erase speed is 60663 KiB/s
mtd_speedtest: finished
=================================================

===============Normal============================
mtd_speedtest: MTD device: 0    count: 100
mtd_speedtest: MTD device size 268435456, eraseblock size 131072,
	       page size 2048, count of eraseblocks 2048, pages per
	       eraseblock 64, OOB size 128
mtd_test: scanning for bad eraseblocks
mtd_test: scanned 100 eraseblocks, 0 are bad
mtd_speedtest: testing eraseblock write speed
mtd_speedtest: eraseblock write speed is 4467 KiB/s
mtd_speedtest: testing eraseblock read speed
mtd_speedtest: eraseblock read speed is 2278 KiB/s
mtd_speedtest: testing page write speed
mtd_speedtest: page write speed is 4447 KiB/s
mtd_speedtest: testing page read speed
mtd_speedtest: page read speed is 2204 KiB/s
mtd_speedtest: testing 2 page write speed
mtd_speedtest: 2 page write speed is 4479 KiB/s
mtd_speedtest: testing 2 page read speed
mtd_speedtest: 2 page read speed is 2274 KiB/s
mtd_speedtest: Testing erase speed
mtd_speedtest: erase speed is 44982 KiB/s
mtd_speedtest: Testing 2x multi-block erase speed
mtd_speedtest: 2x multi-block erase speed is 33766 KiB/s
mtd_speedtest: Testing 4x multi-block erase speed
mtd_speedtest: 4x multi-block erase speed is 66876 KiB/s
mtd_speedtest: Testing 8x multi-block erase speed
mtd_speedtest: 8x multi-block erase speed is 67518 KiB/s
mtd_speedtest: Testing 16x multi-block erase speed
mtd_speedtest: 16x multi-block erase speed is 67792 KiB/s
mtd_speedtest: Testing 32x multi-block erase speed
mtd_speedtest: 32x multi-block erase speed is 67964 KiB/s
mtd_speedtest: Testing 64x multi-block erase speed
mtd_speedtest: 64x multi-block erase speed is 68101 KiB/s
mtd_speedtest: finished
=================================================

Signed-off-by: Jaime Liao <jaimeliao.tw@gmail.com>
---
 drivers/mtd/nand/spi/core.c | 97 +++++++++++++++++++++++++++++++++++++
 include/linux/mtd/spinand.h |  1 +
 2 files changed, 98 insertions(+)

Comments

Chuanhong Guo Feb. 15, 2023, 6:51 a.m. UTC | #1
Hi!

On Thu, Feb 9, 2023 at 8:10 PM Jaime Liao <jaimeliao.tw@gmail.com> wrote:
> [...]
> +static int spinand_mtd_continuous_read(struct mtd_info *mtd, loff_t from,
> +                                      struct mtd_oob_ops *ops,
> +                                      struct nand_io_iter *iter)
> +{
> +       struct spinand_device *spinand = mtd_to_spinand(mtd);
> +       struct nand_device *nand = mtd_to_nanddev(mtd);
> +       int ret = 0;
> +
> +       /*
> +        * Continuous read mode could reduce some operation in On-die ECC free
> +        * flash when read page sequentially.
> +        */
> +       iter->req.type = NAND_PAGE_READ;
> +       iter->req.mode = MTD_OPS_RAW;
> +       iter->req.dataoffs = nanddev_offs_to_pos(nand, from, &iter->req.pos);
> +       iter->req.databuf.in = ops->datbuf;
> +       iter->req.datalen = ops->len;
> +
> +       if (from & (nanddev_page_size(nand) - 1)) {
> +               pr_debug("%s: unaligned address\n", __func__);
> +               return -EINVAL;
> +       }
> +
> +       ret = spinand_continuous_read_enable(spinand);
> +       if (ret)
> +               return ret;
> +
> +       spinand->use_continuous_read = true;
> +
> +       ret = spinand_select_target(spinand, iter->req.pos.target);
> +       if (ret)
> +               return ret;
> +
> +       /*
> +        * The continuous read operation including: firstly, starting with the
> +        * page read command and the 1 st page data will be read into the cache
> +        * after the read latency tRD. Secondly, Issuing the Read From Cache
> +        * commands (03h/0Bh/3Bh/6Bh/BBh/EBh) to read out the data from cache
> +        * continuously.
> +        *
> +        * The cache is divided into two halves, while one half of the cache is
> +        * outputting the data, the other half will be loaded for the new data;
> +        * therefore, the host can read out the data continuously from page to
> +        * page. Multiple of Read From Cache commands can be issued in one
> +        * continuous read operation, each Read From Cache command is required
> +        * to read multiple 4-byte data exactly; otherwise, the data output will
> +        * be out of sequence from one Read From Cache command to another Read
> +        * From Cache command.
> +        *
> +        * After all the data is read out, the host should pull CS# high to
> +        * terminate this continuous read operation and wait a 6us of tRST for
> +        * the NAND device resets read operation. The data output for each page
> +        * will always start from byte 0 and a full page data should be read out
> +        * for each page.
> +        */

This mode requires the entire read_from_cache op to finish in one command.
i.e. there can only be one read_from_cache command and the chip-select
can only be pulled low exactly once.
There's no guarantee that spinand_read_from_cache_op issues exactly one
command. spi-mem controllers may have a transfer size limit, requiring an
operation to be split into multiple read_from_cache requests with different
column addresses. spi-mem controllers with dirmap support can map a
memory space to a specific spi-mem read operation, and reading from this
memory-mapped space also isn't guaranteed to be completed in one
command. (Controllers may decide to respond to a bus read request using
multiple spi-mem ops with auto-incremented op addr.)
So, in order to use this mode, you can't reuse the spinand_read_from_cache_op
function. Instead, you should write a new function for read_from_cache
using spi_mem_adjust_op_size and spi_mem_exec_op: When calling
adjust_op_size, check whether the op size is truncated. If it is, the
current continuous_read request should be aborted, falling back to
the normal read mode.
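
Something along these lines could work (an untested sketch; the
helper name and the -EOPNOTSUPP fallback signal are illustrative):

static int spinand_cont_read_from_cache_op(struct spinand_device *spinand,
					   void *buf, size_t len)
{
	/* Start from the chip's selected read-from-cache op template. */
	struct spi_mem_op op = *spinand->op_templates.read_cache;
	int ret;

	op.addr.val = 0;
	op.data.buf.in = buf;
	op.data.nbytes = len;

	ret = spi_mem_adjust_op_size(spinand->spimem, &op);
	if (ret)
		return ret;

	/*
	 * Truncated op: the controller can't transfer everything in a
	 * single CS# assertion, so the caller must fall back to the
	 * normal page-by-page read path.
	 */
	if (op.data.nbytes < len)
		return -EOPNOTSUPP;

	return spi_mem_exec_op(spinand->spimem, &op);
}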

> +       ret = spinand_read_page(spinand, &iter->req);
> +       if (ret)
> +               goto continuous_read_error;
> +
> +       ret = spinand_reset_op(spinand);
> +       if (ret)
> +               goto continuous_read_error;
> +
liao jaime April 27, 2023, 9:58 a.m. UTC | #2
Hi Chuanhong

>
> Hi!
>
> On Thu, Feb 9, 2023 at 8:10 PM Jaime Liao <jaimeliao.tw@gmail.com> wrote:
> > [...]
> > +static int spinand_mtd_continuous_read(struct mtd_info *mtd, loff_t from,
> > +                                      struct mtd_oob_ops *ops,
> > +                                      struct nand_io_iter *iter)
> > +{
> > +       struct spinand_device *spinand = mtd_to_spinand(mtd);
> > +       struct nand_device *nand = mtd_to_nanddev(mtd);
> > +       int ret = 0;
> > +
> > +       /*
> > +        * Continuous read mode could reduce some operation in On-die ECC free
> > +        * flash when read page sequentially.
> > +        */
> > +       iter->req.type = NAND_PAGE_READ;
> > +       iter->req.mode = MTD_OPS_RAW;
> > +       iter->req.dataoffs = nanddev_offs_to_pos(nand, from, &iter->req.pos);
> > +       iter->req.databuf.in = ops->datbuf;
> > +       iter->req.datalen = ops->len;
> > +
> > +       if (from & (nanddev_page_size(nand) - 1)) {
> > +               pr_debug("%s: unaligned address\n", __func__);
> > +               return -EINVAL;
> > +       }
> > +
> > +       ret = spinand_continuous_read_enable(spinand);
> > +       if (ret)
> > +               return ret;
> > +
> > +       spinand->use_continuous_read = true;
> > +
> > +       ret = spinand_select_target(spinand, iter->req.pos.target);
> > +       if (ret)
> > +               return ret;
> > +
> > +       /*
> > +        * The continuous read operation including: firstly, starting with the
> > +        * page read command and the 1 st page data will be read into the cache
> > +        * after the read latency tRD. Secondly, Issuing the Read From Cache
> > +        * commands (03h/0Bh/3Bh/6Bh/BBh/EBh) to read out the data from cache
> > +        * continuously.
> > +        *
> > +        * The cache is divided into two halves, while one half of the cache is
> > +        * outputting the data, the other half will be loaded for the new data;
> > +        * therefore, the host can read out the data continuously from page to
> > +        * page. Multiple of Read From Cache commands can be issued in one
> > +        * continuous read operation, each Read From Cache command is required
> > +        * to read multiple 4-byte data exactly; otherwise, the data output will
> > +        * be out of sequence from one Read From Cache command to another Read
> > +        * From Cache command.
> > +        *
> > +        * After all the data is read out, the host should pull CS# high to
> > +        * terminate this continuous read operation and wait a 6us of tRST for
> > +        * the NAND device resets read operation. The data output for each page
> > +        * will always start from byte 0 and a full page data should be read out
> > +        * for each page.
> > +        */
>
> This mode requires the entire read_from_cache op to finish in one command.
> i.e. there can only be one read_from_cache command and the chip-select
> can only be pulled low exactly once.
> There's no guarantee that spinand_read_from_cache_op issues exactly one
> command. spi-mem controllers may have a transfer size limit, requiring an
> operation to be split into multiple read_from_cache requests with different
> column addresses. spi-mem controllers with dirmap support can map a
> memory space to a specific spi-mem read operation, and reading from this
> memory-mapped space also isn't guaranteed to be completed in one
> command. (Controllers may decide to respond to a bus read request using
> multiple spi-mem ops with auto-incremented op addr.)
> So, in order to use this mode, you can't reuse the spinand_read_from_cache_op
> function. Instead, you should write a new function for read_from_cache
> using spi_mem_adjust_op_size and spi_mem_exec_op: When calling
> adjust_op_size, check whether the op size is truncated. If it is, the
> current continuous_read request should be aborted, falling back to
> the normal read mode.
As far as I know, a data read can be handled by the DMA engine even
when the data length is greater than the controller limit.
In spi-mem.c, spi_mem_adjust_op_size() is only used on the
"no_dirmap" path, so I am not sure the check matters before a
continuous read.
Should I check for dirmap mode before enabling continuous read?

Thanks for your reply.
Jaime

>
> > +       ret = spinand_read_page(spinand, &iter->req);
> > +       if (ret)
> > +               goto continuous_read_error;
> > +
> > +       ret = spinand_reset_op(spinand);
> > +       if (ret)
> > +               goto continuous_read_error;
> > +
>
> --
> Regards,
> Chuanhong Guo

Patch

diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index aa57d5b0a3dc..1c9ec84e6361 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -386,6 +386,10 @@  static int spinand_read_from_cache_op(struct spinand_device *spinand,
 	if (req->datalen) {
 		buf = spinand->databuf;
 		nbytes = nanddev_page_size(nand);
+		if (spinand->use_continuous_read) {
+			buf = req->databuf.in;
+			nbytes = req->datalen;
+		}
 		column = 0;
 	}
 
@@ -415,6 +419,9 @@  static int spinand_read_from_cache_op(struct spinand_device *spinand,
 		buf += ret;
 	}
 
+	if (spinand->use_continuous_read)
+		goto finish;
+
 	if (req->datalen)
 		memcpy(req->databuf.in, spinand->databuf + req->dataoffs,
 		       req->datalen);
@@ -430,6 +437,7 @@  static int spinand_read_from_cache_op(struct spinand_device *spinand,
 			       req->ooblen);
 	}
 
+finish:
 	return 0;
 }
 
@@ -646,6 +654,77 @@  static int spinand_write_page(struct spinand_device *spinand,
 	return nand_ecc_finish_io_req(nand, (struct nand_page_io_req *)req);
 }
 
+static int spinand_mtd_continuous_read(struct mtd_info *mtd, loff_t from,
+				       struct mtd_oob_ops *ops,
+				       struct nand_io_iter *iter)
+{
+	struct spinand_device *spinand = mtd_to_spinand(mtd);
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	int ret = 0;
+
+	/*
+	 * Continuous read mode could reduce some operation in On-die ECC free
+	 * flash when read page sequentially.
+	 */
+	iter->req.type = NAND_PAGE_READ;
+	iter->req.mode = MTD_OPS_RAW;
+	iter->req.dataoffs = nanddev_offs_to_pos(nand, from, &iter->req.pos);
+	iter->req.databuf.in = ops->datbuf;
+	iter->req.datalen = ops->len;
+
+	if (from & (nanddev_page_size(nand) - 1)) {
+		pr_debug("%s: unaligned address\n", __func__);
+		return -EINVAL;
+	}
+
+	ret = spinand_continuous_read_enable(spinand);
+	if (ret)
+		return ret;
+
+	spinand->use_continuous_read = true;
+
+	ret = spinand_select_target(spinand, iter->req.pos.target);
+	if (ret)
+		return ret;
+
+	/*
+	 * The continuous read operation including: firstly, starting with the
+	 * page read command and the 1 st page data will be read into the cache
+	 * after the read latency tRD. Secondly, Issuing the Read From Cache
+	 * commands (03h/0Bh/3Bh/6Bh/BBh/EBh) to read out the data from cache
+	 * continuously.
+	 *
+	 * The cache is divided into two halves, while one half of the cache is
+	 * outputting the data, the other half will be loaded for the new data;
+	 * therefore, the host can read out the data continuously from page to
+	 * page. Multiple of Read From Cache commands can be issued in one
+	 * continuous read operation, each Read From Cache command is required
+	 * to read multiple 4-byte data exactly; otherwise, the data output will
+	 * be out of sequence from one Read From Cache command to another Read
+	 * From Cache command.
+	 *
+	 * After all the data is read out, the host should pull CS# high to
+	 * terminate this continuous read operation and wait a 6us of tRST for
+	 * the NAND device resets read operation. The data output for each page
+	 * will always start from byte 0 and a full page data should be read out
+	 * for each page.
+	 */
+	ret = spinand_read_page(spinand, &iter->req);
+	if (ret)
+		goto continuous_read_error;
+
+	ret = spinand_reset_op(spinand);
+	if (ret)
+		goto continuous_read_error;
+
+continuous_read_error:
+	spinand->use_continuous_read = false;
+	ops->retlen = iter->req.datalen;
+
+	ret = spinand_continuous_read_disable(spinand);
+	return ret;
+}
+
 static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
 			    struct mtd_oob_ops *ops)
 {
@@ -665,6 +744,24 @@  static int spinand_mtd_read(struct mtd_info *mtd, loff_t from,
 
 	old_stats = mtd->ecc_stats;
 
+	/*
+	 * If the device support continuous read mode and read length larger
+	 * than one page size will enter the continuous read mode. This mode
+	 * helps avoid issuing a page read command and read from cache command
+	 * again, and improves read performance for continuous addresses.
+	 */
+	if ((spinand->flags & SPINAND_HAS_CONT_READ_BIT) &&
+	    (ops->len > nanddev_page_size(nand))) {
+		ret = spinand_mtd_continuous_read(mtd, from, ops, &iter);
+
+		mutex_unlock(&spinand->lock);
+
+		if (ecc_failed && !ret)
+			ret = -EBADMSG;
+
+		return ret ? ret : max_bitflips;
+	}
+
 	nanddev_io_for_each_page(nand, NAND_PAGE_READ, from, ops, &iter) {
 		if (disable_ecc)
 			iter.req.mode = MTD_OPS_RAW;
diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
index f598a0c5a376..efbd9caf53bf 100644
--- a/include/linux/mtd/spinand.h
+++ b/include/linux/mtd/spinand.h
@@ -310,6 +310,7 @@  struct spinand_ecc_info {
 
 #define SPINAND_HAS_QE_BIT		BIT(0)
 #define SPINAND_HAS_CR_FEAT_BIT		BIT(1)
+#define SPINAND_HAS_CONT_READ_BIT	BIT(2)
 
 /**
  * struct spinand_ondie_ecc_conf - private SPI-NAND on-die ECC engine structure