Patchwork dmaengine: async_xor, fix zero address issue when xor highmem page

Submitter b29237@freescale.com
Date Jan. 11, 2012, 7:51 a.m.
Message ID <DBB740589CE8814680DECFE34BE197AB166593@039-SN1MPN1-006.039d.mgd.msft.net>
Permalink /patch/135362/
State Not Applicable
Delegated to: Kumar Gala

Comments

b29237@freescale.com - Jan. 11, 2012, 7:51 a.m.
Hello Dan Williams,

Do you have any comment about this patch?

Thanks,
Forrest

-----Original Message-----
From: Shi Xuelin-B29237 
Sent: Dec. 27, 2011 14:31
To: vinod.koul@intel.com; dan.j.williams@intel.com; linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org; Li Yang-R58472
Cc: Shi Xuelin-B29237
Subject: [PATCH] dmaengine: async_xor, fix zero address issue when xor highmem page

From: Forrest shi <b29237@freescale.com>

	do_sync_xor() may operate on highmem pages; in that case,
	page_address() returns a NULL address, which causes a failure.

	This patch uses kmap_atomic() to map the pages before the xor
	and kunmap_atomic() to unmap them afterwards.

	Signed-off-by: b29237@freescale.com <xuelin.shi@freescale.com>
---
 crypto/async_tx/async_xor.c |   16 ++++++++++++----
 1 files changed, 12 insertions(+), 4 deletions(-)

 
 	/* convert to buffer pointers */
 	for (i = 0; i < src_cnt; i++)
-		if (src_list[i])
-			srcs[xor_src_cnt++] = page_address(src_list[i]) + offset;
+		if (src_list[i]) {
+			srcs[xor_src_cnt++] = kmap_atomic(src_list[i], KM_USER1) + offset;
+		}
+	kmap_cnt = xor_src_cnt;
 	src_cnt = xor_src_cnt;
 	/* set destination address */
-	dest_buf = page_address(dest) + offset;
+	dest_buf = kmap_atomic(dest, KM_USER0) + offset;
 
 	if (submit->flags & ASYNC_TX_XOR_ZERO_DST)
 		memset(dest_buf, 0, len);
@@ -157,6 +160,11 @@ do_sync_xor(struct page *dest, struct page **src_list, unsigned int offset,
 		src_off += xor_src_cnt;
 	}
 
+	kunmap_atomic(dest_buf, KM_USER0);
+	for (i = 0; i < kmap_cnt; i++) 
+		if (src_list[i])
+			kunmap_atomic(srcs[i], KM_USER1);
+
 	async_tx_sync_epilog(submit);
 }
 
--
1.7.0.4
Dan Williams - Jan. 26, 2012, 9:12 a.m.
2012/1/10 Shi Xuelin-B29237 <B29237@freescale.com>:
> Hello Dan Williams,
>
> Do you have any comment about this patch?

Hi, sorry for the delay.

>
> Thanks,
> Forrest
>
> -----Original Message-----
> From: Shi Xuelin-B29237
> Sent: Dec. 27, 2011 14:31
> To: vinod.koul@intel.com; dan.j.williams@intel.com; linuxppc-dev@lists.ozlabs.org; linux-kernel@vger.kernel.org; Li Yang-R58472
> Cc: Shi Xuelin-B29237
> Subject: [PATCH] dmaengine: async_xor, fix zero address issue when xor highmem page
>
> From: Forrest shi <b29237@freescale.com>
>
>        do_sync_xor() may operate on highmem pages; in that case,
>        page_address() returns a NULL address, which causes a failure.

In what scenarios do we xor highmem?

In the case of raid we currently always xor on kmalloc'd memory.

>
>        This patch uses kmap_atomic() to map the pages before the xor
>        and kunmap_atomic() to unmap them afterwards.
>
>        Signed-off-by: b29237@freescale.com <xuelin.shi@freescale.com>
> ---
>  crypto/async_tx/async_xor.c |   16 ++++++++++++----
>  1 files changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/crypto/async_tx/async_xor.c b/crypto/async_tx/async_xor.c index bc28337..5b416d1 100644
> --- a/crypto/async_tx/async_xor.c
> +++ b/crypto/async_tx/async_xor.c
> @@ -26,6 +26,7 @@
>  #include <linux/kernel.h>
>  #include <linux/interrupt.h>
>  #include <linux/mm.h>
> +#include <linux/highmem.h>
>  #include <linux/dma-mapping.h>
>  #include <linux/raid/xor.h>
>  #include <linux/async_tx.h>
> @@ -126,7 +127,7 @@ do_sync_xor(struct page *dest, struct page **src_list, unsigned int offset,
>            int src_cnt, size_t len, struct async_submit_ctl *submit)  {
>        int i;
> -       int xor_src_cnt = 0;
> +       int xor_src_cnt = 0, kmap_cnt=0;
>        int src_off = 0;
>        void *dest_buf;
>        void **srcs;
> @@ -138,11 +139,13 @@ do_sync_xor(struct page *dest, struct page **src_list, unsigned int offset,
>
>        /* convert to buffer pointers */
>        for (i = 0; i < src_cnt; i++)
> -               if (src_list[i])
> -                       srcs[xor_src_cnt++] = page_address(src_list[i]) + offset;
> +               if (src_list[i]) {
> +                       srcs[xor_src_cnt++] = kmap_atomic(src_list[i], KM_USER1) + offset;
> +               }
> +       kmap_cnt = xor_src_cnt;

I guess this works now that we have stack based kmap_atomic, but on
older kernels you could not simultaneously map that many buffers with
a single kmap slot.  So if you resend, drop the second parameter to
kmap_atomic.
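Applied to the hunk quoted above, that suggestion would look roughly like the following. This is only a sketch of the reviewed patch with the KM_* slot arguments dropped, not a tested change; the variables (src_list, srcs, xor_src_cnt, kmap_cnt, dest_buf, offset) come from do_sync_xor() as shown in the diff. Note two details the sketch assumes: stack-based kmap_atomic() mappings must be released in reverse (LIFO) order, and since srcs[] is a compacted array, the unmap loop can walk srcs[] directly rather than re-checking src_list[i].

```c
/* Map each source page with the stack-based kmap_atomic() API
 * (no KM_* slot argument), compacting non-NULL entries into srcs[].
 */
for (i = 0; i < src_cnt; i++)
	if (src_list[i])
		srcs[xor_src_cnt++] = kmap_atomic(src_list[i]) + offset;
kmap_cnt = xor_src_cnt;

dest_buf = kmap_atomic(dest) + offset;

/* ... xor_blocks() loop as in the original function ... */

/* Unmap in reverse order of mapping (LIFO), destination last-in first. */
kunmap_atomic(dest_buf);
for (i = kmap_cnt - 1; i >= 0; i--)
	kunmap_atomic(srcs[i]);
```

kunmap_atomic() tolerates the "+ offset" in the stored pointers because it resolves the mapping from the containing page of the address it is given.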

...but unless you have a non-md/raid456 use case in mind, or have
patches to convert md/raid to xor straight out of the incoming biovecs,
I don't think this patch is needed, right?
