
fix dmaengine_unmap failure.

Message ID 1395131520-30207-1-git-send-email-xuelin.shi@freescale.com (mailing list archive)
State Superseded
Headers show

Commit Message

xuelin.shi@freescale.com March 18, 2014, 8:32 a.m. UTC
From: Xuelin Shi <xuelin.shi@freescale.com>

The count passed to dmaengine_get_unmap_data() may not be the same as
the count recomputed in dmaengine_unmap(), which causes the unmap data
to be freed into the wrong mempool.

This patch fixes the issue by storing a pointer to the pool in the
unmap_data structure.

Signed-off-by: Xuelin Shi <xuelin.shi@freescale.com>
---
 drivers/dma/dmaengine.c   | 7 +++++--
 include/linux/dmaengine.h | 1 +
 2 files changed, 6 insertions(+), 2 deletions(-)
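
For readers without the tree at hand: before this patch, dmaengine_unmap() recomputed the element count from the per-direction counters and used that count to pick the mempool to free into. A minimal sketch of that path, with the dma_unmap_page() loops elided and helper/field names assumed to match the kernel of that era:

	/* Abridged sketch of the pre-patch dmaengine_unmap() (unmap loops elided). */
	static void dmaengine_unmap(struct kref *kref)
	{
		struct dmaengine_unmap_data *unmap =
			container_of(kref, typeof(*unmap), kref);
		int cnt;

		/* ... dma_unmap_page() loops over to_cnt, from_cnt and bidi_cnt ... */
		cnt = unmap->to_cnt + unmap->from_cnt + unmap->bidi_cnt;

		/* The recomputed cnt can differ from the nr passed to
		 * dmaengine_get_unmap_data(), so this may address the wrong pool. */
		mempool_free(unmap, __get_unmap_pool(cnt)->pool);
	}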

Comments

Dan Williams March 18, 2014, 5:13 p.m. UTC | #1
On Tue, Mar 18, 2014 at 1:32 AM,  <xuelin.shi@freescale.com> wrote:
> From: Xuelin Shi <xuelin.shi@freescale.com>
>
> The count passed to dmaengine_get_unmap_data() may not be the same as
> the count recomputed in dmaengine_unmap(), which causes the unmap data
> to be freed into the wrong mempool.
>
> This patch fixes the issue by storing a pointer to the pool in the
> unmap_data structure.

Won't this free the entire count anyways?  In what scenario is the
count different at unmap?
xuelin.shi@freescale.com March 19, 2014, 6:39 a.m. UTC | #2
Hi Dan,

In async_mult(...) of async_raid6_recov.c, a count of 3 is used to request the unmap data.
However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0.
Then, when unmapping, the recomputed count selects a different mempool, so the "unmap" structure is freed back into the wrong pool.

In this patch, the mempool is recorded in the unmap structure instead of being computed again.
This guarantees that the same pool is used when we get and put the unmap data.

BTW: the mempool is just used to manage the struct unmap, not the pages.

Thanks,
Xuelin Shi
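
For context, a sketch of the mismatch Xuelin describes. The pool thresholds below are approximate, assumed from the shape of __get_unmap_pool() in drivers/dma/dmaengine.c around that time, and are only meant to illustrate why counts of 3 and 2 can land in different pools:

	/* Approximate pool selection by requested map count. */
	static struct dmaengine_unmap_pool *__get_unmap_pool(int nr)
	{
		switch (get_count_order(nr)) {
		case 0 ... 1:		/* nr <= 2   */
			return &unmap_pool[0];
		case 2 ... 4:		/* nr <= 16  */
			return &unmap_pool[1];
		case 5 ... 7:		/* nr <= 128 */
			return &unmap_pool[2];
		default:		/* nr <= 256 */
			return &unmap_pool[3];
		}
	}

	/*
	 * async_mult() allocates with nr = 3                    -> unmap_pool[1]
	 * dmaengine_unmap() recomputes
	 *     cnt = to_cnt + from_cnt + bidi_cnt = 1 + 0 + 1 = 2 -> unmap_pool[0]
	 * so mempool_free() returns the object to a pool it was never
	 * allocated from.
	 */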


Dan Williams March 19, 2014, 3:43 p.m. UTC | #3
On Tue, Mar 18, 2014 at 11:39 PM, Xuelin Shi <xuelin.shi@freescale.com> wrote:
> Hi Dan,
>
> In async_mult(...) of async_raid6_recov.c, a count of 3 is used to request the unmap data.
> However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0.
> Then, when unmapping, the recomputed count selects a different mempool, so the "unmap" structure is freed back into the wrong pool.
>
> In this patch, the mempool is recorded in the unmap structure instead of being computed again.
> This guarantees that the same pool is used when we get and put the unmap data.
>
> BTW: the mempool is just used to manage the struct unmap, not the pages.
>

I see, what about just storing the map_cnt at allocation time?  It
could be another byte in struct dmaengine_unmap_data rather than an
8-byte pointer.
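
A sketch of the alternative Dan suggests: record the allocation-time count in the structure and reuse it at free time. The field name map_cnt and its placement are illustrative, and the existing u8 direction counters are shown abridged:

	struct dmaengine_unmap_data {
		u8 map_cnt;	/* count used to pick the pool at alloc time */
		u8 to_cnt;
		u8 from_cnt;
		u8 bidi_cnt;
		struct device *dev;
		struct kref kref;
		size_t len;
		dma_addr_t addr[0];
	};

	/* In dmaengine_get_unmap_data(): */
	unmap->map_cnt = nr;

	/* In dmaengine_unmap(), instead of recomputing the count: */
	mempool_free(unmap, __get_unmap_pool(unmap->map_cnt)->pool);
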
xuelin.shi@freescale.com March 20, 2014, 6:34 a.m. UTC | #4
Hi Dan,

I'm OK with saving memory here. I'll send a v2.

Thanks,
Xuelin Shi


Patch

diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index ed610b4..2977eee 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -1014,7 +1014,7 @@  static void dmaengine_unmap(struct kref *kref)
 		dma_unmap_page(dev, unmap->addr[i], unmap->len,
 			       DMA_BIDIRECTIONAL);
 	}
-	mempool_free(unmap, __get_unmap_pool(cnt)->pool);
+	mempool_free(unmap, unmap->unmap_pool->pool);
 }
 
 void dmaengine_unmap_put(struct dmaengine_unmap_data *unmap)
@@ -1071,14 +1071,17 @@  struct dmaengine_unmap_data *
 dmaengine_get_unmap_data(struct device *dev, int nr, gfp_t flags)
 {
 	struct dmaengine_unmap_data *unmap;
+	struct dmaengine_unmap_pool *unmap_pool;
 
-	unmap = mempool_alloc(__get_unmap_pool(nr)->pool, flags);
+	unmap_pool = __get_unmap_pool(nr);
+	unmap = mempool_alloc(unmap_pool->pool, flags);
 	if (!unmap)
 		return NULL;
 
 	memset(unmap, 0, sizeof(*unmap));
 	kref_init(&unmap->kref);
 	unmap->dev = dev;
+	unmap->unmap_pool = unmap_pool;
 
 	return unmap;
 }
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index c5c92d5..6a25635 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -439,6 +439,7 @@  struct dmaengine_unmap_data {
 	struct device *dev;
 	struct kref kref;
 	size_t len;
+	struct dmaengine_unmap_pool *unmap_pool;
 	dma_addr_t addr[0];
 };
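
For completeness, a hedged sketch of the caller pattern that exposes the mismatch, modeled loosely on async_mult() in crypto/async_tx/async_raid6_recov.c (variable names illustrative, descriptor submission elided):

	struct dmaengine_unmap_data *unmap;

	/* Room for three addresses is requested up front ... */
	unmap = dmaengine_get_unmap_data(dma_dev, 3, GFP_NOIO);
	if (unmap) {
		/* ... but only two slots end up used: one source and one
		 * bidirectional destination, so to_cnt + from_cnt + bidi_cnt
		 * sums to 2, not 3. */
		unmap->addr[0] = dma_map_page(dma_dev, src_page, 0, len,
					      DMA_TO_DEVICE);
		unmap->to_cnt = 1;
		unmap->addr[1] = dma_map_page(dma_dev, dest_page, 0, len,
					      DMA_BIDIRECTIONAL);
		unmap->bidi_cnt = 1;
		unmap->len = len;

		/* ... prepare and submit the DMA descriptor ... */

		dmaengine_unmap_put(unmap);	/* drops the ref, calls dmaengine_unmap() */
	}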