
SPARC32: Fixed unaligned memory copying in function __csum_partial_copy_sparc_generic

Message ID 772061304970955@web57.yandex.ru
State Superseded
Delegated to: David Miller

Commit Message

Kirill Tkhai May 9, 2011, 7:55 p.m. UTC
When we are at label cc_dword_align, registers %o0 and %o1 have the same last 2 bits,
but it is not guaranteed that either of them is zero. So we can get an unaligned memory
access at label ccte. Example parameters that trigger this:
%o0=0x7ff183e9, %o1=0x8e709e7d, %g1=3

With these parameters I hit memory corruption: 5 extra bytes were overwritten.
This patch fixes the error.

One note on the patch: we don't care about the third bit of %o1, because cc_end_cruft
stores a word or less.
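To make the failure mode concrete, here is a minimal C sketch of the underlying technique (a hypothetical illustration, not the kernel's actual code: the function name copy_mixed is invented, and the sketch omits the checksum computation that __csum_partial_copy_sparc_generic also performs). When destination and source share their low two address bits but neither is word-aligned, any halfword or word store in the tail lands on an unaligned address; copying bytes until the destination is word-aligned avoids that.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: copy len bytes from src to dst.
 * dst and src are assumed to share their low two address bits
 * (as %o0/%o1 do at cc_dword_align), but neither is necessarily
 * word-aligned. */
static void copy_mixed(unsigned char *dst, const unsigned char *src, size_t len)
{
	/* Byte-copy prologue until dst (and therefore src) is 4-byte
	 * aligned; this is what the buggy path effectively skipped for
	 * short lengths. */
	while (len > 0 && ((uintptr_t)dst & 0x3) != 0) {
		*dst++ = *src++;
		len--;
	}
	/* Word stores are now safe: copy 4 bytes at a time. */
	while (len >= 4) {
		uint32_t w;
		memcpy(&w, src, 4);	/* memcpy keeps the sketch portable */
		memcpy(dst, &w, 4);
		src += 4;
		dst += 4;
		len -= 4;
	}
	/* Remaining 0-3 bytes: byte stores only, never a wider store. */
	while (len-- > 0)
		*dst++ = *src++;
}
```

With %o0=0x7ff183e9 the low two bits are 01, so without such a prologue a halfword or word store at that address traps; if the MNA trap handler is itself buggy, the store corrupts adjacent memory instead of merely being slow.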

Signed-off-by: Tkhai Kirill <tkhai@yandex.ru>
---

--
To unsubscribe from this list: send the line "unsubscribe sparclinux" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Daniel Hellstrom May 11, 2011, 12:53 p.m. UTC | #1
Tkhai Kirill wrote:

>When we are at label cc_dword_align, registers %o0 and %o1 have the same last 2 bits,
>but it is not guaranteed that either of them is zero. So we can get an unaligned memory
>access at label ccte. Example parameters that trigger this:
>%o0=0x7ff183e9, %o1=0x8e709e7d, %g1=3
>
I just wanted to add that I also got unaligned accesses in the checksum
calculation on SPARC32/LEON. Bad alignment hurts performance, of course,
but in my case the MNA trap handler itself was buggy and stored data
incorrectly, which is how I noticed the problem. I did not look into the
root cause of the unaligned access at the time, though.

Daniel


commit 2492218c63dca0fb4f041bdc366d243ae3426b40
Author: Daniel Hellstrom <daniel@gaisler.com>
Date:   Tue Feb 1 12:39:59 2011 -0800

    sparc32: unaligned memory access (MNA) trap handler bug

    Since commit f0e98c387e61de00646be31fab4c2fa0224e1efb ("[SPARC]: Fix
    link errors with gcc-4.3") the MNA trap handler does not emulate
    stores to unaligned addresses correctly. MNA operation from both
    kernel and user space are affected.

    A typical effect of this bug is that nr_frags in skbs is overwritten
    during buffer copying/checksum calculation, or up to 6 bytes of data
    in the network buffer are overwritten with garbage.

    Signed-off-by: Daniel Hellstrom <daniel@gaisler.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>

David Miller May 11, 2011, 5:10 p.m. UTC | #2
From: Daniel Hellstrom <daniel@gaisler.com>
Date: Wed, 11 May 2011 14:53:40 +0200

[ Warning, I'm testing a new email setup so if this looks strange
  or doesn't go out properly, my bad... ]

> I just wanted to add that I also got unaligned accesses in the
> checksum calculation on SPARC32/LEON. Bad alignment hurts performance,
> of course, but in my case the MNA trap handler itself was buggy and
> stored data incorrectly, which is how I noticed the problem. I did not
> look into the root cause of the unaligned access at the time, though.

I totally agree that, without question, we should fix this bug
in the 32-bit checksumming routines.  I just want to make sure
I absolutely understand the new logic before I apply the patch
:-)

>    sparc32: unaligned memory access (MNA) trap handler bug

And indeed, it's great that this bug was fixed in the meantime as well.

Thanks.

Patch

--- linux-2.6.38.5/arch/sparc/lib/checksum_32.S.orig	2011-05-06 22:54:25.000000000 +0400
+++ linux-2.6.38.5/arch/sparc/lib/checksum_32.S	2011-05-08 11:43:35.000000000 +0400
@@ -289,10 +289,16 @@  cc_end_cruft:
 
 	/* Also, handle the alignment code out of band. */
 cc_dword_align:
-	cmp	%g1, 6
-	bl,a	ccte
+	cmp	%g1, 16
+	bge,a	1f
+	 srl	%g1, 1, %o3
+2:	cmp	%o3, 0
+	be,a	ccte
 	 andcc	%g1, 0xf, %o3
-	andcc	%o0, 0x1, %g0
+	andcc	%o3, %o0, %g0	! Check %o0 only (%o1 has the same last 2 bits)
+	be,a	2b
+	 srl	%o3, 1, %o3
+1:	andcc	%o0, 0x1, %g0
 	bne	ccslow
 	 andcc	%o0, 0x2, %g0
 	be	1f