From patchwork Wed Jul 28 05:58:07 2010
X-Patchwork-Submitter: Haojian Zhuang
X-Patchwork-Id: 60104
Date: Wed, 28 Jul 2010 13:58:07 +0800
Subject: [PATCH 20/29]
 pxa3xx_nand: use the buff passed from upper layer if not use dma
From: Haojian Zhuang
To: Eric Miao, linux-arm-kernel, David Woodhouse, Marc Kleine-Budde,
 linux-mtd@lists.infradead.org, Lei Wen
List-Id: Linux MTD discussion mailing list

From e5000f58ce7a83f0a370d4b93810a097e3884ea5 Mon Sep 17 00:00:00 2001
From: Lei Wen
Date: Tue, 22 Jun 2010 22:47:13 +0800
Subject: [PATCH 20/29] pxa3xx_nand: use the buff passed from upper layer if
 not use dma

In the DMA-enabled case the memcpy cannot be avoided, since the buffer
passed in from the upper layer may not be physically contiguous.
Signed-off-by: Lei Wen
---
 drivers/mtd/nand/pxa3xx_nand.c |  297 ++++++++++++++++++++++++----------------
 1 files changed, 181 insertions(+), 116 deletions(-)

diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
index 50f653b..048b576 100644
--- a/drivers/mtd/nand/pxa3xx_nand.c
+++ b/drivers/mtd/nand/pxa3xx_nand.c
@@ -158,6 +158,7 @@ struct pxa3xx_nand_info {
 	struct nand_chip	nand_chip;
 	struct pxa3xx_nand_cmdset *cmdset;
 	/* page size of attached chip */
+	int			page_addr;
 	uint16_t		page_size;
 	uint8_t			chip_select;
 	uint8_t			use_ecc;
@@ -186,8 +187,8 @@ struct pxa3xx_nand {
 	int			drcmr_dat;
 	int			drcmr_cmd;
 	int			data_dma_ch;
-	dma_addr_t		data_buff_phys;
-	dma_addr_t		data_desc_addr;
+	dma_addr_t		dma_buff_phys;
+	dma_addr_t		dma_desc_addr;
 	struct pxa_dma_desc	*data_desc;
 
 	struct pxa3xx_nand_info *info[NUM_CHIP_SELECT];
@@ -496,25 +497,25 @@ static void start_data_dma(struct pxa3xx_nand *nand, int dir_out)
 	desc_oob->ddadr = desc->ddadr = DDADR_STOP;
 	desc_oob->dcmd = desc->dcmd = DCMD_WIDTH4 | DCMD_BURST32;
 	if (dir_out) {
-		desc->dsadr = nand->data_buff_phys + nand->data_column;
+		desc->dsadr = nand->dma_buff_phys + nand->data_column;
 		desc->dtadr = nand->mmio_phys + NDDB;
 		desc->dcmd |= DCMD_ENDIRQEN | DCMD_INCSRCADDR
 			| DCMD_FLOWTRG | (data_len + oob_len);
 	} else {
 		if (nand->oob_size > 0) {
-			desc_oob->dtadr = nand->data_buff_phys
+			desc_oob->dtadr = nand->dma_buff_phys
 				+ info->page_size + nand->oob_column;
 			desc_oob->dcmd |= DCMD_ENDIRQEN | DCMD_INCTRGADDR
 				| DCMD_FLOWSRC | oob_len;
-			desc->ddadr = nand->data_desc_addr + sizeof(struct pxa_dma_desc);
+			desc->ddadr = nand->dma_desc_addr + sizeof(struct pxa_dma_desc);
 			desc->dcmd |= DCMD_INCTRGADDR | DCMD_FLOWSRC | data_len;
 		} else
 			desc->dcmd |= DCMD_ENDIRQEN | DCMD_INCTRGADDR
 				| DCMD_FLOWSRC | data_len;
-		desc->dtadr = nand->data_buff_phys + nand->data_column;
+		desc->dtadr = nand->dma_buff_phys + nand->data_column;
 		desc_oob->dsadr = desc->dsadr = nand->mmio_phys + NDDB;
 	}
 	DRCMR(nand->drcmr_dat) = DRCMR_MAPVLD | nand->data_dma_ch;
-	DDADR(nand->data_dma_ch) = nand->data_desc_addr;
+	DDADR(nand->data_dma_ch) = nand->dma_desc_addr;
 	DCSR(nand->data_dma_ch) |= DCSR_RUN;
 }
@@ -638,6 +639,8 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 	int addr_cycle, exec_cmd, ndcb0, i, chunks = 0;
 	struct mtd_info *mtd;
 	struct pxa3xx_nand_info *info = nand->info[nand->chip_select];
+	struct platform_device *pdev = nand->pdev;
+	struct pxa3xx_nand_platform_data *pdata = pdev->dev.platform_data;
 
 	mtd = get_mtd_by_info(info);
 	ndcb0 = (nand->chip_select) ? NDCB0_CSEL : 0;;
@@ -647,8 +650,6 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 	/* reset data and oob column point to handle data */
 	nand->data_column	= 0;
 	nand->oob_column	= 0;
-	nand->buf_start		= 0;
-	nand->buf_count		= 0;
 	nand->total_cmds	= 1;
 	nand->cmd_seqs		= 0;
 	nand->data_size		= 0;
@@ -658,26 +659,25 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 	nand->state		= 0;
 	nand->bad_count		= 0;
 	nand->retcode		= ERR_NONE;
-	nand->command		= command;
+	nand->buf_start		= column;
 
 	switch (command) {
-	case NAND_CMD_READ0:
 	case NAND_CMD_PAGEPROG:
-		nand->use_ecc = info->use_ecc;
-	case NAND_CMD_READOOB:
+	case NAND_CMD_RNDOUT:
 		pxa3xx_set_datasize(info);
-		nand->oob_buff = nand->data_buff + nand->data_size;
 		nand->use_dma = use_dma;
 		chunks = info->page_size / nand->data_size;
 		break;
-	case NAND_CMD_SEQIN:
-		exec_cmd = 0;
-		break;
 	default:
 		nand->ndcb1 = 0;
 		nand->ndcb2 = 0;
+		nand->use_ecc = ECC_NONE;
 		break;
 	}
+	if (nand->use_dma) {
+		nand->data_buff = nand->dma_buff;
+		nand->oob_buff = nand->dma_buff + mtd->writesize;
+	}
 
 	/* clear the command buffer */
 	for (i = 0; i < CMD_POOL_SIZE; i ++) {
@@ -688,68 +688,67 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 			+ info->col_addr_cycles);
 
 	switch (command) {
-	case NAND_CMD_READOOB:
 	case NAND_CMD_READ0:
-		cmd = info->cmdset->read1;
-		if (command == NAND_CMD_READOOB)
-			nand->buf_start = mtd->writesize + column;
-		else
-			nand->buf_start = column;
-
-		if (unlikely(info->page_size < PAGE_CHUNK_SIZE))
-			nand->ndcb0[0] |= NDCB0_CMD_TYPE(0)
-				| addr_cycle
-				| (cmd & NDCB0_CMD1_MASK);
-		else {
-			if (chunks == 1)
-				nand->ndcb0[0] |= NDCB0_CMD_TYPE(0)
-					| NDCB0_DBC
-					| addr_cycle
-					| cmd;
-			else {
-				nand->total_cmds = chunks + 1;
-				nand->ndcb0[0] |= NDCB0_CMD_XTYPE(0x6)
-					| NDCB0_CMD_TYPE(0)
-					| NDCB0_DBC
-					| NDCB0_NC
-					| addr_cycle
-					| cmd;
-
-				nand->ndcb0[1] |= NDCB0_CMD_XTYPE(0x5)
-					| NDCB0_NC
-					| addr_cycle;
-
-				for (i = 2; i <= chunks; i ++)
-					nand->ndcb0[i] = nand->ndcb0[1];
-
-				nand->ndcb0[chunks] &= ~NDCB0_NC;
-				/* we should wait RnB go high again
-				 * before read out data*/
-				nand->wait_ready[1] = 1;
-			}
-		}
 	case NAND_CMD_SEQIN:
+		nand->use_ecc = info->use_ecc;
+	case NAND_CMD_READOOB:
+		memset(nand->data_buff, 0xff, column);
+		nand->buf_count = mtd->writesize + mtd->oobsize;
+		exec_cmd = 0;
+		info->page_addr = page_addr;
 		/* small page addr setting */
-		if (unlikely(info->page_size < PAGE_CHUNK_SIZE)) {
+		if (unlikely(info->page_size < PAGE_CHUNK_SIZE))
 			nand->ndcb1 = ((page_addr & 0xFFFFFF) << 8)
 					| (column & 0xFF);
-
-			nand->ndcb2 = 0;
-		} else {
+		else {
 			nand->ndcb1 = ((page_addr & 0xFFFF) << 16)
 					| (column & 0xFFFF);
 
 			if (page_addr & 0xFF0000)
 				nand->ndcb2 = (page_addr & 0xFF0000) >> 16;
+		}
+		break;
+
+	case NAND_CMD_RNDOUT:
+		cmd = info->cmdset->read1;
+		if (nand->command == NAND_CMD_READOOB) {
+			nand->buf_start = mtd->writesize + column;
+			nand->buf_count = mtd->oobsize;
+		}
+
+		if (unlikely(info->page_size < PAGE_CHUNK_SIZE)
+			|| !(pdata->controller_attrs & PXA3XX_NAKED_CMD_EN)) {
+			if (unlikely(info->page_size < PAGE_CHUNK_SIZE))
+				nand->ndcb0[0] |= NDCB0_CMD_TYPE(0)
+					| addr_cycle
+					| (cmd & NDCB0_CMD1_MASK);
 			else
-				nand->ndcb2 = 0;
+				nand->ndcb0[0] |= NDCB0_CMD_TYPE(0)
+					| NDCB0_DBC
+					| addr_cycle
+					| cmd;
+			break;
 		}
 
-		nand->buf_count = mtd->writesize + mtd->oobsize;
-		memset(nand->data_buff, 0xFF, nand->buf_count);
+		nand->total_cmds = chunks + 1;
+		nand->ndcb0[0] |= NDCB0_CMD_XTYPE(0x6)
+			| NDCB0_CMD_TYPE(0)
+			| NDCB0_DBC
+			| NDCB0_NC
+			| addr_cycle
+			| cmd;
+
+		nand->ndcb0[1] |= NDCB0_CMD_XTYPE(0x5)
+			| NDCB0_NC
+			| addr_cycle;
+
+		for (i = 2; i <= chunks; i ++)
+			nand->ndcb0[i] = nand->ndcb0[1];
+		nand->ndcb0[chunks] &= ~NDCB0_NC;
+		/* we should wait RnB go high again
+		 * before read out data*/
+		nand->wait_ready[1] = 1;
 		break;
 
 	case NAND_CMD_PAGEPROG:
@@ -760,40 +759,42 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 		cmd = info->cmdset->program;
 		nand->state |= STATE_IS_WRITE;
-		if (chunks == 1)
+		if (unlikely(info->page_size < PAGE_CHUNK_SIZE)
+			|| !(pdata->controller_attrs & PXA3XX_NAKED_CMD_EN)) {
 			nand->ndcb0[0] |= NDCB0_CMD_TYPE(0x1)
 					| NDCB0_AUTO_RS
 					| NDCB0_ST_ROW_EN
 					| NDCB0_DBC
 					| cmd
 					| addr_cycle;
-		else {
-			nand->total_cmds = chunks + 1;
-			nand->ndcb0[0] |= NDCB0_CMD_XTYPE(0x4)
-				| NDCB0_CMD_TYPE(0x1)
+			break;
+		}
+
+		nand->total_cmds = chunks + 1;
+		nand->ndcb0[0] |= NDCB0_CMD_XTYPE(0x4)
+			| NDCB0_CMD_TYPE(0x1)
+			| NDCB0_NC
+			| NDCB0_AUTO_RS
+			| (cmd & NDCB0_CMD1_MASK)
+			| addr_cycle;
+
+		for (i = 1; i < chunks; i ++)
+			nand->ndcb0[i] |= NDCB0_CMD_XTYPE(0x5)
 				| NDCB0_NC
 				| NDCB0_AUTO_RS
-				| (cmd & NDCB0_CMD1_MASK)
+				| NDCB0_CMD_TYPE(0x1)
 				| addr_cycle;
 
-			for (i = 1; i < chunks; i ++)
-				nand->ndcb0[i] |= NDCB0_CMD_XTYPE(0x5)
-					| NDCB0_NC
-					| NDCB0_AUTO_RS
-					| NDCB0_CMD_TYPE(0x1)
-					| addr_cycle;
-
-			nand->ndcb0[chunks] |= NDCB0_CMD_XTYPE(0x3)
-				| NDCB0_CMD_TYPE(0x1)
-				| NDCB0_ST_ROW_EN
-				| NDCB0_DBC
-				| (cmd & NDCB0_CMD2_MASK)
-				| NDCB0_CMD1_MASK
-				| addr_cycle;
-			/* we should wait for RnB goes high which
-			 * indicate the data has been written succesfully*/
-			nand->wait_ready[nand->total_cmds] = 1;
-		}
+		nand->ndcb0[chunks] |= NDCB0_CMD_XTYPE(0x3)
+			| NDCB0_CMD_TYPE(0x1)
+			| NDCB0_ST_ROW_EN
+			| NDCB0_DBC
+			| (cmd & NDCB0_CMD2_MASK)
+			| NDCB0_CMD1_MASK
+			| addr_cycle;
+		/* we should wait for RnB goes high which
+		 * indicate the data has been written succesfully*/
+		nand->wait_ready[nand->total_cmds] = 1;
 		break;
 
 	case NAND_CMD_READID:
@@ -807,6 +808,7 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 		break;
 	case NAND_CMD_STATUS:
 		cmd = info->cmdset->read_status;
+		nand->data_buff = nand->dma_buff;
 		nand->buf_count = 1;
 		nand->ndcb0[0] |= NDCB0_CMD_TYPE(4)
 				| NDCB0_ADDR_CYC(1)
@@ -843,6 +845,7 @@ static int prepare_command_pool(struct pxa3xx_nand *nand, int command,
 		break;
 	}
 
+	nand->command = command;
 	return exec_cmd;
 }
@@ -1076,44 +1079,104 @@ static void free_cs_resource(struct pxa3xx_nand_info *info, int cs)
 	nand->info[cs] = NULL;
 }
 
-static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd,
-		struct nand_chip *chip, uint8_t *buf, int page)
+static void pxa3xx_read_page(struct mtd_info *mtd, uint8_t *buf)
 {
 	struct pxa3xx_nand_info *info = mtd->priv;
+	struct nand_chip *chip = mtd->priv;
 	struct pxa3xx_nand *nand = info->nand_data;
+	int buf_blank;
 
-	chip->read_buf(mtd, buf, mtd->writesize);
-	chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
-
-	if (nand->retcode == ERR_SBERR) {
+	nand->data_buff = buf;
+	nand->oob_buff = chip->oob_poi;
+	pxa3xx_nand_cmdfunc(mtd, NAND_CMD_RNDOUT, 0, info->page_addr);
+	switch (nand->retcode) {
+	case ERR_SBERR:
 		switch (nand->use_ecc) {
 		case ECC_BCH:
 			if (nand->bad_count > BCH_THRESHOLD)
 				mtd->ecc_stats.corrected +=
					(nand->bad_count - BCH_THRESHOLD);
 			break;
+		case ECC_HAMMIN:
 			mtd->ecc_stats.corrected ++;
+			break;
+		case ECC_NONE:
 		default:
 			break;
 		}
-	} else if (nand->retcode == ERR_DBERR) {
-		int buf_blank;
-
-		buf_blank = is_buf_blank(buf, mtd->writesize);
+		break;
+	case ERR_DBERR:
+		buf_blank = is_buf_blank(nand->data_buff, mtd->writesize);
 		if (!buf_blank)
 			mtd->ecc_stats.failed++;
+		break;
+	case ERR_NONE:
+		break;
+	default:
+		mtd->ecc_stats.failed++;
+		break;
+	}
+}
+
+static int pxa3xx_nand_read_page_hwecc(struct mtd_info *mtd,
+		struct nand_chip *chip, uint8_t *buf, int page)
+{
+	pxa3xx_read_page(mtd, buf);
+	if (use_dma) {
+		chip->read_buf(mtd, buf, mtd->writesize);
+		chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
 	}
 
 	return 0;
 }
 
+static int pxa3xx_nand_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
+		int page, int sndcmd)
+{
+	if (sndcmd) {
+		pxa3xx_nand_cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
+		pxa3xx_read_page(mtd, chip->oob_poi);
+	}
+	if (use_dma)
+		chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
+	return 0;
+}
+
 static void pxa3xx_nand_write_page_hwecc(struct mtd_info *mtd,
 		struct nand_chip *chip, const uint8_t *buf)
 {
-	chip->write_buf(mtd, buf, mtd->writesize);
-	chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
+	struct pxa3xx_nand_info *info = mtd->priv;
+	struct pxa3xx_nand *nand = info->nand_data;
+
+	if (use_dma) {
+		chip->write_buf(mtd, buf, mtd->writesize);
+		chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
+	}
+	else {
+		nand->data_buff = (uint8_t *)buf;
+		nand->oob_buff = chip->oob_poi;
+	}
+}
+
+static int pxa3xx_nand_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
+		int page)
+{
+	struct pxa3xx_nand_info *info = mtd->priv;
+	struct pxa3xx_nand *nand = info->nand_data;
+	int status = 0;
+
+	nand->data_buff = nand->dma_buff;
+	chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize, page);
+	if (use_dma)
+		chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
+	else
+		nand->oob_buff = chip->oob_poi;
+	/* Send command to program the OOB data */
+	chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+	status = chip->waitfunc(mtd, chip);
+
+	return status & NAND_STATUS_FAIL ? -EIO : 0;
 }
 
 static void pxa3xx_nand_erase_cmd(struct mtd_info *mtd, int page)
@@ -1162,8 +1225,8 @@ static int __devinit pxa3xx_nand_scan(struct mtd_info *mtd)
 		return -EINVAL;
 	}
 
+	nand->data_buff = (unsigned char *)&id;
 	chip->cmdfunc(mtd, NAND_CMD_READID, 0, 0);
-	id = *((uint16_t *)(nand->data_buff));
 	if (id != 0)
 		dev_info(&nand->pdev->dev, "Detect a flash id %x\n", id);
 	else {
@@ -1321,7 +1384,11 @@ static int alloc_nand_resource(struct platform_device *pdev)
 		chip = (struct nand_chip *)(&mtd[1]);
 		chip->controller	= &nand->controller;
 		chip->ecc.read_page	= pxa3xx_nand_read_page_hwecc;
+		chip->ecc.read_page_raw	= pxa3xx_nand_read_page_hwecc;
+		chip->ecc.read_oob	= pxa3xx_nand_read_oob;
 		chip->ecc.write_page	= pxa3xx_nand_write_page_hwecc;
+		chip->ecc.write_page_raw= pxa3xx_nand_write_page_hwecc;
+		chip->ecc.write_oob	= pxa3xx_nand_write_oob;
 		chip->waitfunc		= pxa3xx_nand_waitfunc;
 		chip->select_chip	= pxa3xx_nand_select_chip;
 		chip->cmdfunc		= pxa3xx_nand_cmdfunc;
@@ -1333,25 +1400,16 @@ static int alloc_nand_resource(struct platform_device *pdev)
 		chip->erase_cmd		= pxa3xx_nand_erase_cmd;
 	}
 
-	if (use_dma == 0) {
-		nand->data_buff = kmalloc(MAX_BUFF_SIZE, GFP_KERNEL);
-		if (nand->data_buff == NULL) {
-			ret = -ENOMEM;
-			goto fail_free_buf;
-		}
-		goto success_exit;
-	}
-
-	nand->data_buff = dma_alloc_coherent(&pdev->dev, MAX_BUFF_SIZE,
-			&nand->data_buff_phys, GFP_KERNEL);
-	if (nand->data_buff == NULL) {
+	nand->dma_buff = dma_alloc_coherent(&pdev->dev, MAX_BUFF_SIZE,
+			&nand->dma_buff_phys, GFP_KERNEL);
+	if (nand->dma_buff == NULL) {
 		dev_err(&pdev->dev, "failed to allocate dma buffer\n");
 		ret = -ENOMEM;
 		goto fail_free_buf;
 	}
 
-	nand->data_desc = (void *)nand->data_buff + data_desc_offset;
-	nand->data_desc_addr = nand->data_buff_phys + data_desc_offset;
+	nand->data_desc = (void *)nand->dma_buff + data_desc_offset;
+	nand->dma_desc_addr = nand->dma_buff_phys + data_desc_offset;
 	nand->data_dma_ch = pxa_request_dma("nand-data", DMA_PRIO_LOW,
 			pxa3xx_nand_data_dma_irq, nand);
@@ -1359,7 +1417,6 @@ static int alloc_nand_resource(struct platform_device *pdev)
 	if (nand->data_dma_ch < 0) {
 		ret = -ENXIO;
 		goto fail_free_buf;
 	}
-success_exit:
 	return 0;
 
 fail_free_buf:
@@ -1409,6 +1466,14 @@ static int pxa3xx_nand_remove(struct platform_device *pdev)
 		del_mtd_device(mtd);
 		free_cs_resource(info, cs);
 	}
+	if (nand->dma_buff_phys) {
+		if (nand->data_dma_ch >= 0)
+			pxa_free_dma(nand->data_dma_ch);
+		if (nand->dma_buff)
+			dma_free_coherent(&nand->pdev->dev, MAX_BUFF_SIZE,
+					nand->dma_buff, nand->dma_buff_phys);
+		nand->dma_buff_phys = 0;
+	}
 	return 0;
 }