
SPARC32: Fixed unaligned memory copying in function __csum_partial_copy_sparc_generic

Message ID 549071304930964@web114.yandex.ru
State Superseded
Delegated to: David Miller
Headers show

Commit Message

Kirill Tkhai May 9, 2011, 8:49 a.m. UTC
> Your patch is corrupted, it has chopped up long lines.

> Please fix this up, and send a test patch email to yourself.

It's strange... I had tested it at least twice before I sent the first message. Also, my diff as archived at the http://spinics.net/lists/sparclinux mailing list applies without any warnings.

Ok, I'm sending the patch from another mail program:

--
To unsubscribe from this list: send the line "unsubscribe sparclinux" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

David Miller May 9, 2011, 6:33 p.m. UTC | #1
From: Tkhai Kirill <tkhai@yandex.ru>
Date: Mon, 09 May 2011 12:49:24 +0400

>> Your patch is corrupted, it has chopped up long lines.
> 
>> Please fix this up, and send a test patch email to yourself.
> 
> It's strange... I had tested it at least twice before I sent the first message. Also, my diff as archived at the http://spinics.net/lists/sparclinux mailing list applies without any warnings.
> 
> Ok, I'm sending the patch from another mail program:

Please don't just reply with a new patch and make it part of the
discussion thread.

Instead, make a fresh new posting with the full commit log message
and patch.

Thanks.
--

Patch

--- linux-2.6.38.5/arch/sparc/lib/checksum_32.S.orig	2011-05-06 22:54:25.000000000 +0400
+++ linux-2.6.38.5/arch/sparc/lib/checksum_32.S	2011-05-08 11:43:35.000000000 +0400
@@ -289,10 +289,16 @@  cc_end_cruft:
 
 	/* Also, handle the alignment code out of band. */
 cc_dword_align:
-	cmp	%g1, 6
-	bl,a	ccte
+	cmp	%g1, 16
+	bge,a	1f
+	 srl	%g1, 1, %o3
+2:	cmp	%o3, 0
+	be,a	ccte
 	 andcc	%g1, 0xf, %o3
-	andcc	%o0, 0x1, %g0
+	andcc	%o3, %o0, %g0	! Check %o0 only (%o1 has the same last 2 bits)
+	be,a	2b
+	 srl	%o3, 1, %o3
+1:	andcc	%o0, 0x1, %g0
 	bne	ccslow
 	 andcc	%o0, 0x2, %g0
 	be	1f