
[AArch64,AArch64-4.7] Fix AArch64 clear_cache

Message ID 50F7F839.50309@arm.com
State New

Commit Message

Yufeng Zhang Jan. 17, 2013, 1:10 p.m. UTC
Hi,

The attached patch fixes a bug in the AArch64 __clear_cache
implementation: the loop iterating over the cache lines to clear
started from the first (possibly unaligned) address and advanced in
cache-line-sized steps, so it could stop before reaching the last
cache line covering the range and leave it uncleaned.
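
For illustration only (the addresses and line size below are made up),
here is a standalone sketch of the address arithmetic: with 64-byte
cache lines, base 0x1010 and end 0x1090, the old loop visits 0x1010
and 0x1050 and then stops, never touching the line at 0x1080 even
though it still holds bytes below 'end'; aligning the start down to
0x1000 makes the stride land on every line.

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  const uintptr_t lsize = 64;      /* hypothetical cache-line size */
  const uintptr_t base = 0x1010;   /* hypothetical start of range */
  const uintptr_t end = 0x1090;    /* exclusive end of range */

  printf ("old loop cleans lines:");
  for (uintptr_t a = base; a < end; a += lsize)
    printf (" 0x%lx", (unsigned long) (a & ~(lsize - 1)));
  printf ("\n");

  printf ("new loop cleans lines:");
  for (uintptr_t a = base & ~(lsize - 1); a < end; a += lsize)
    printf (" 0x%lx", (unsigned long) (a & ~(lsize - 1)));
  printf ("\n");

  return 0;
}

The first loop prints 0x1000 and 0x1040; the second also prints
0x1080, which is the line the old code missed.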

The patch passes regression testing on aarch64-none-linux-gnu.  OK for
trunk and aarch64-4.7-branch?

Thanks,
Yufeng

libgcc/

2013-01-17  Sofiane Naci  <sofiane.naci@arm.com>
	    Yufeng Zhang  <yufeng.zhang@arm.com>

	* config/aarch64/sync-cache.c (__aarch64_sync_cache_range): Align
	the loop start address for cache clearing.

Comments

Marcus Shawcroft Jan. 17, 2013, 1:12 p.m. UTC | #1
On 17/01/13 13:10, Yufeng Zhang wrote:
> Hi,
>
> The attached patch fixes a bug in the AArch64 __clear_cache
> implementation: the loop iterating over the cache lines to clear
> started from the first (possibly unaligned) address and advanced in
> cache-line-sized steps, so it could stop before reaching the last
> cache line covering the range and leave it uncleaned.
>
> The patch passes regression testing on aarch64-none-linux-gnu.  OK for
> trunk and aarch64-4.7-branch?
>
> Thanks,
> Yufeng
>
> libgcc/
>
> 2013-01-17  Sofiane Naci  <sofiane.naci@arm.com>
> 	    Yufeng Zhang  <yufeng.zhang@arm.com>
>
> 	* config/aarch64/sync-cache.c (__aarch64_sync_cache_range): Align
> 	the loop start address for cache clearing.
>

OK

Patch

diff --git a/libgcc/config/aarch64/sync-cache.c b/libgcc/config/aarch64/sync-cache.c
index d7b621e..66b7afe 100644
--- a/libgcc/config/aarch64/sync-cache.c
+++ b/libgcc/config/aarch64/sync-cache.c
@@ -39,7 +39,11 @@ __aarch64_sync_cache_range (const void *base, const void *end)
      instruction cache fetches the updated data.  'end' is exclusive,
      as per the GNU definition of __clear_cache.  */
 
-  for (address = base; address < (const char *) end; address += dcache_lsize)
+  /* Align the loop's start address down to a cache-line boundary.  */
+  address = (const char *) ((__UINTPTR_TYPE__) base
+			    & ~ (__UINTPTR_TYPE__) (dcache_lsize - 1));
+
+  for (; address < (const char *) end; address += dcache_lsize)
     asm volatile ("dc\tcvau, %0"
 		  :
 		  : "r" (address)
@@ -47,7 +51,11 @@ __aarch64_sync_cache_range (const void *base, const void *end)
 
   asm volatile ("dsb\tish" : : : "memory");
 
-  for (address = base; address < (const char *) end; address += icache_lsize)
+  /* Align the loop's start address down to a cache-line boundary.  */
+  address = (const char *) ((__UINTPTR_TYPE__) base
+			    & ~ (__UINTPTR_TYPE__) (icache_lsize - 1));
+
+  for (; address < (const char *) end; address += icache_lsize)
     asm volatile ("ic\tivau, %0"
 		  :
 		  : "r" (address)