From patchwork Sat Dec 31 03:32:04 2011
X-Patchwork-Submitter: Sonic Zhang
X-Patchwork-Id: 133715
X-Patchwork-Delegate: davem@davemloft.net
From: Sonic Zhang
To: David Miller
Cc: Sonic Zhang
Subject: [PATCH] ata: pata-bf54x: Support sg list in bmdma transfer.
Date: Sat, 31 Dec 2011 11:32:04 +0800
Message-ID: <1325302324-28635-1-git-send-email-sonic.adi@gmail.com>
X-Mailer: git-send-email 1.7.0.4
X-Mailing-List: linux-ide@vger.kernel.org

From: Sonic Zhang

The BF54x on-chip ATAPI controller allows a maximum of 0x1fffe bytes to be
transferred in one ATAPI transfer. Build the DMA transfer from a descriptor
array, one descriptor per scatter-gather segment, and set the maximum
sg_tablesize to 4.
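For readers who want the idea behind the conversion before reading the diff:
each scatter-gather segment becomes one Blackfin DMA descriptor, the
descriptors are chained in array mode, and the last one is switched to stop
mode so the engine halts at the end of the list. The snippet below is a
minimal, host-compilable sketch of that mapping and of the size arithmetic;
it is not driver code. fill_desc_array(), DESC_FLOW_MASK, the cfg value and
the example addresses are hypothetical stand-ins for the real
DMAFLOW_ARRAY/NDSIZE/WDSIZE_16 configuration and the bus addresses returned
by dma_map_sg(). Note that x_count is an unsigned short (a 16-bit count of
16-bit words), which is consistent with the 0x1fffe-byte per-transfer limit
quoted above (0xffff half-words).

/* Illustrative sketch only, not part of the patch. */
#include <stdint.h>
#include <stdio.h>

#define MAX_SG_SEGMENTS     4        /* mirrors BFIN_MAX_SG_SEGMENTS */
#define MAX_BYTES_PER_XFER  0x1FFFEu /* 0xFFFF 16-bit words * 2 bytes */
#define DESC_FLOW_MASK      0x7000u  /* hypothetical "keep chaining" bits */

struct desc {                        /* mirrors struct dma_desc_array */
	uint32_t start_addr;         /* bus address of the segment */
	uint16_t cfg;                /* DMA configuration word */
	uint16_t x_count;            /* number of 16-bit transfers */
	int16_t  x_modify;           /* byte increment per transfer */
};

/*
 * Build one descriptor per segment and clear the flow bits on the last
 * one so the engine stops instead of chaining past the end (the patch
 * does the same with ~(DMAFLOW | NDSIZE)).  Returns total bytes or -1.
 */
static long fill_desc_array(struct desc *d, const uint32_t *addr,
			    const uint32_t *len, unsigned int n, uint16_t cfg)
{
	long total = 0;
	unsigned int i;

	if (n == 0 || n > MAX_SG_SEGMENTS)
		return -1;
	for (i = 0; i < n; i++) {
		if (len[i] == 0 || (len[i] & 1) || len[i] > MAX_BYTES_PER_XFER)
			return -1;                         /* odd or oversized */
		d[i].start_addr = addr[i];
		d[i].cfg        = cfg;
		d[i].x_count    = (uint16_t)(len[i] >> 1); /* bytes -> words */
		d[i].x_modify   = 2;                       /* step 2 bytes */
		total          += len[i];
	}
	d[n - 1].cfg &= (uint16_t)~DESC_FLOW_MASK;         /* last one stops */
	return total;
}

int main(void)
{
	/* Hypothetical two-segment request: one maximal chunk, one small. */
	const uint32_t addr[] = { 0x10000000u, 0x10040000u };
	const uint32_t len[]  = { MAX_BYTES_PER_XFER, 0x1000u };
	struct desc d[MAX_SG_SEGMENTS];
	long total = fill_desc_array(d, addr, len, 2,
				     0x7100u /* arbitrary cfg with flow bits */);

	if (total > 0)
		printf("2 descriptors, %ld bytes total, last cfg = 0x%04x\n",
		       total, d[1].cfg);
	return 0;
}

Because sg_tablesize is capped at 4, the descriptor array has a small fixed
size, which is why the patch can allocate it once per port with
dma_alloc_coherent() in bfin_port_start() and reuse it for every command.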
Signed-off-by: Sonic Zhang
---
 drivers/ata/pata_bf54x.c | 174 +++++++++++++++++++++++++---------------------
 1 files changed, 95 insertions(+), 79 deletions(-)

diff --git a/drivers/ata/pata_bf54x.c b/drivers/ata/pata_bf54x.c
index bd987bb..06d333a 100644
--- a/drivers/ata/pata_bf54x.c
+++ b/drivers/ata/pata_bf54x.c
@@ -251,6 +251,15 @@ static const u32 udma_tenvmin = 20;
 static const u32 udma_tackmin = 20;
 static const u32 udma_tssmin = 50;
 
+#define BFIN_MAX_SG_SEGMENTS 4
+
+struct dma_desc_array {
+	unsigned long start_addr;
+	unsigned short cfg;
+	unsigned short x_count;
+	short x_modify;
+} __packed;
+
 /**
  *
  *	Function:       num_clocks_min
@@ -841,79 +850,61 @@ static void bfin_set_devctl(struct ata_port *ap, u8 ctl)
 
 static void bfin_bmdma_setup(struct ata_queued_cmd *qc)
 {
-	unsigned short config = WDSIZE_16;
+	struct ata_port *ap = qc->ap;
+	struct dma_desc_array *dma_desc_cpu = (struct dma_desc_array *)ap->bmdma_prd;
+	void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
+	unsigned short config = DMAFLOW_ARRAY | NDSIZE_5 | RESTART | WDSIZE_16 | DMAEN;
 	struct scatterlist *sg;
 	unsigned int si;
+	unsigned int channel;
+	unsigned int dir;
+	unsigned int size = 0;
 
 	dev_dbg(qc->ap->dev, "in atapi dma setup\n");
 	/* Program the ATA_CTRL register with dir */
 	if (qc->tf.flags & ATA_TFLAG_WRITE) {
-		/* fill the ATAPI DMA controller */
-		set_dma_config(CH_ATAPI_TX, config);
-		set_dma_x_modify(CH_ATAPI_TX, 2);
-		for_each_sg(qc->sg, sg, qc->n_elem, si) {
-			set_dma_start_addr(CH_ATAPI_TX, sg_dma_address(sg));
-			set_dma_x_count(CH_ATAPI_TX, sg_dma_len(sg) >> 1);
-		}
+		channel = CH_ATAPI_TX;
+		dir = DMA_TO_DEVICE;
 	} else {
+		channel = CH_ATAPI_RX;
+		dir = DMA_FROM_DEVICE;
 		config |= WNR;
-		/* fill the ATAPI DMA controller */
-		set_dma_config(CH_ATAPI_RX, config);
-		set_dma_x_modify(CH_ATAPI_RX, 2);
-		for_each_sg(qc->sg, sg, qc->n_elem, si) {
-			set_dma_start_addr(CH_ATAPI_RX, sg_dma_address(sg));
-			set_dma_x_count(CH_ATAPI_RX, sg_dma_len(sg) >> 1);
-		}
 	}
-}
 
-/**
- *	bfin_bmdma_start - Start an IDE DMA transaction
- *	@qc: Info associated with this ATA transaction.
- *
- *	Note: Original code is ata_bmdma_start().
- */
+	dma_map_sg(ap->dev, qc->sg, qc->n_elem, dir);
 
-static void bfin_bmdma_start(struct ata_queued_cmd *qc)
-{
-	struct ata_port *ap = qc->ap;
-	void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
-	struct scatterlist *sg;
-	unsigned int si;
+	/* fill the ATAPI DMA controller */
+	for_each_sg(qc->sg, sg, qc->n_elem, si) {
+		dma_desc_cpu[si].start_addr = sg_dma_address(sg);
+		dma_desc_cpu[si].cfg = config;
+		dma_desc_cpu[si].x_count = sg_dma_len(sg) >> 1;
+		dma_desc_cpu[si].x_modify = 2;
+		size += sg_dma_len(sg);
+	}
 
-	dev_dbg(qc->ap->dev, "in atapi dma start\n");
-	if (!(ap->udma_mask || ap->mwdma_mask))
-		return;
+	/* Set the last descriptor to stop mode */
+	dma_desc_cpu[qc->n_elem - 1].cfg &= ~(DMAFLOW | NDSIZE);
 
-	/* start ATAPI DMA controller*/
-	if (qc->tf.flags & ATA_TFLAG_WRITE) {
-		/*
-		 * On blackfin arch, uncacheable memory is not
-		 * allocated with flag GFP_DMA. DMA buffer from
-		 * common kenel code should be flushed if WB
-		 * data cache is enabled. Otherwise, this loop
-		 * is an empty loop and optimized out.
-		 */
-		for_each_sg(qc->sg, sg, qc->n_elem, si) {
-			flush_dcache_range(sg_dma_address(sg),
-				sg_dma_address(sg) + sg_dma_len(sg));
-		}
-		enable_dma(CH_ATAPI_TX);
-		dev_dbg(qc->ap->dev, "enable udma write\n");
+	flush_dcache_range((unsigned int)dma_desc_cpu,
+		(unsigned int)dma_desc_cpu +
+			qc->n_elem * sizeof(struct dma_desc_array));
+
+	/* Enable ATA DMA operation*/
+	set_dma_curr_desc_addr(channel, (unsigned long *)ap->bmdma_prd_dma);
+	set_dma_x_count(channel, 0);
+	set_dma_x_modify(channel, 0);
+	set_dma_config(channel, config);
+
+	SSYNC();
 
-		/* Send ATA DMA write command */
-		bfin_exec_command(ap, &qc->tf);
+	/* Send ATA DMA command */
+	bfin_exec_command(ap, &qc->tf);
 
+	if (qc->tf.flags & ATA_TFLAG_WRITE) {
 		/* set ATA DMA write direction */
 		ATAPI_SET_CONTROL(base, (ATAPI_GET_CONTROL(base)
 			| XFER_DIR));
 	} else {
-		enable_dma(CH_ATAPI_RX);
-		dev_dbg(qc->ap->dev, "enable udma read\n");
-
-		/* Send ATA DMA read command */
-		bfin_exec_command(ap, &qc->tf);
-
 		/* set ATA DMA read direction */
 		ATAPI_SET_CONTROL(base, (ATAPI_GET_CONTROL(base)
 			& ~XFER_DIR));
@@ -925,12 +916,28 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
 	/* Set ATAPI state machine contorl in terminate sequence */
 	ATAPI_SET_CONTROL(base, ATAPI_GET_CONTROL(base) | END_ON_TERM);
 
-	/* Set transfer length to buffer len */
-	for_each_sg(qc->sg, sg, qc->n_elem, si) {
-		ATAPI_SET_XFER_LEN(base, (sg_dma_len(sg) >> 1));
-	}
+	/* Set transfer length to the total size of sg buffers */
+	ATAPI_SET_XFER_LEN(base, size >> 1);
+}
 
-	/* Enable ATA DMA operation*/
+/**
+ *	bfin_bmdma_start - Start an IDE DMA transaction
+ *	@qc: Info associated with this ATA transaction.
+ *
+ *	Note: Original code is ata_bmdma_start().
+ */
+
+static void bfin_bmdma_start(struct ata_queued_cmd *qc)
+{
+	struct ata_port *ap = qc->ap;
+	void __iomem *base = (void __iomem *)ap->ioaddr.ctl_addr;
+
+	dev_dbg(qc->ap->dev, "in atapi dma start\n");
+
+	if (!(ap->udma_mask || ap->mwdma_mask))
+		return;
+
+	/* start ATAPI transfer*/
 	if (ap->udma_mask)
 		ATAPI_SET_CONTROL(base, ATAPI_GET_CONTROL(base)
 			| ULTRA_START);
@@ -947,34 +954,23 @@ static void bfin_bmdma_start(struct ata_queued_cmd *qc)
 static void bfin_bmdma_stop(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
-	struct scatterlist *sg;
-	unsigned int si;
+	unsigned int dir;
 
 	dev_dbg(qc->ap->dev, "in atapi dma stop\n");
+
 	if (!(ap->udma_mask || ap->mwdma_mask))
 		return;
 
 	/* stop ATAPI DMA controller*/
-	if (qc->tf.flags & ATA_TFLAG_WRITE)
+	if (qc->tf.flags & ATA_TFLAG_WRITE) {
+		dir = DMA_TO_DEVICE;
 		disable_dma(CH_ATAPI_TX);
-	else {
+	} else {
+		dir = DMA_FROM_DEVICE;
 		disable_dma(CH_ATAPI_RX);
-		if (ap->hsm_task_state & HSM_ST_LAST) {
-			/*
-			 * On blackfin arch, uncacheable memory is not
-			 * allocated with flag GFP_DMA. DMA buffer from
-			 * common kenel code should be invalidated if
-			 * data cache is enabled. Otherwise, this loop
-			 * is an empty loop and optimized out.
-			 */
-			for_each_sg(qc->sg, sg, qc->n_elem, si) {
-				invalidate_dcache_range(
-					sg_dma_address(sg),
-					sg_dma_address(sg)
-					+ sg_dma_len(sg));
-			}
-		}
 	}
+
+	dma_unmap_sg(ap->dev, qc->sg, qc->n_elem, dir);
 }
 
 /**
@@ -1276,6 +1272,11 @@ static void bfin_port_stop(struct ata_port *ap)
 {
 	dev_dbg(ap->dev, "in atapi port stop\n");
 	if (ap->udma_mask != 0 || ap->mwdma_mask != 0) {
+		dma_free_coherent(ap->dev,
+			BFIN_MAX_SG_SEGMENTS * sizeof(struct dma_desc_array),
+			ap->bmdma_prd,
+			ap->bmdma_prd_dma);
+
 		free_dma(CH_ATAPI_RX);
 		free_dma(CH_ATAPI_TX);
 	}
@@ -1287,14 +1288,29 @@ static int bfin_port_start(struct ata_port *ap)
 	if (!(ap->udma_mask || ap->mwdma_mask))
 		return 0;
 
+	ap->bmdma_prd = dma_alloc_coherent(ap->dev,
+			BFIN_MAX_SG_SEGMENTS * sizeof(struct dma_desc_array),
+			&ap->bmdma_prd_dma,
+			GFP_KERNEL);
+
+	if (ap->bmdma_prd == NULL) {
+		dev_info(ap->dev, "Unable to allocate DMA descriptor array.\n");
+		goto out;
+	}
+
	if (request_dma(CH_ATAPI_RX, "BFIN ATAPI RX DMA") >= 0) {
 		if (request_dma(CH_ATAPI_TX, "BFIN ATAPI TX DMA") >= 0)
 			return 0;
 
 		free_dma(CH_ATAPI_RX);
+		dma_free_coherent(ap->dev,
+			BFIN_MAX_SG_SEGMENTS * sizeof(struct dma_desc_array),
+			ap->bmdma_prd,
+			ap->bmdma_prd_dma);
 	}
 
+out:
 	ap->udma_mask = 0;
 	ap->mwdma_mask = 0;
 	dev_err(ap->dev, "Unable to request ATAPI DMA!"
@@ -1416,7 +1432,7 @@ static irqreturn_t bfin_ata_interrupt(int irq, void *dev_instance)
 
 static struct scsi_host_template bfin_sht = {
 	ATA_BASE_SHT(DRV_NAME),
-	.sg_tablesize		= SG_NONE,
+	.sg_tablesize		= BFIN_MAX_SG_SEGMENTS,
 	.dma_boundary		= ATA_DMA_BOUNDARY,
 };