
AArch64: Improve A64FX memcpy

Message ID VE1PR08MB55998CA2C87D1278B9A067FD83019@VE1PR08MB5599.eurprd08.prod.outlook.com
State New
Series AArch64: Improve A64FX memcpy

Commit Message

Wilco Dijkstra June 30, 2021, 3:38 p.m. UTC
Hi Naohiro,

Since the A64FX memcpy is still quite large, I decided to do a quick cleanup
pass to simplify the code. I believe the code is better overall and a bit faster,
but please let me know what you think. I've left the structure as it was, but there
are likely more tweaks possible. Here it is:


Reduce the code size of the A64FX memcpy by avoiding code duplication
and removing redundant instructions. The size of memcpy and memmove
goes down from 1796 bytes to 1080 bytes. Performance is mostly unchanged
or slightly better, as the critical loops are identical but fewer instructions are
executed before entering the loop.

Passes GLIBC regress, OK for commit?

---

Comments

Naohiro Tamura July 1, 2021, 8:26 a.m. UTC | #1
Hi Wilco,

> From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
 
> Since the A64FX memcpy is still quite large, I decided to do a quick cleanup
> pass to simplify the code. I believe the code is better overall and a bit faster,
> but please let me know what you think. I've left the structure as it was, but there
> are likely more tweaks possible. Here it is:

Thank you for the update, I'll take a look; please give me some time.
BTW, are we going to try to merge this in this cycle? The code freeze is coming soon, right?
And how did you measure the performance? Did you use some kind of simulator?

Naohiro
Wilco Dijkstra July 9, 2021, 12:50 p.m. UTC | #2
Hi Naohiro,

> The following Google Sheet Graph [1] shows the most distinctive data
> without noise among the performance data I measured several times.

I do see a lot of noise (easily 10-20% from run to run), but it reduces a lot
if you execute a few iterations before the benchmark timing starts.
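
For illustration, the kind of warm-up I mean looks roughly like this (a standalone
sketch, not the glibc benchtests code; the buffer size and iteration counts are
arbitrary):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

static uint64_t
now_ns (void)
{
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return (uint64_t) ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int
main (void)
{
  static char buf[65536];

  /* Untimed warm-up iterations so caches, branch predictors and the CPU
     frequency settle before measurement starts.  */
  for (int i = 0; i < 64; i++)
    {
      memset (buf, 0, sizeof buf);
      __asm__ volatile ("" : : "r" (buf) : "memory");	/* keep the call */
    }

  uint64_t start = now_ns ();
  for (int i = 0; i < 100000; i++)
    {
      memset (buf, 0, sizeof buf);
      __asm__ volatile ("" : : "r" (buf) : "memory");	/* keep the call */
    }
  uint64_t end = now_ns ();

  printf ("%.1f ns per 64KB memset\n", (double) (end - start) / 100000);
  return 0;
}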

> - memset-default shows that the updated patch performance is better
>  from 64 byte to 1024 byte, but the master performance is better from
>  1024 byte to 8192 byte. 
>
> I think we need to do a lot of trial and error to keep the same
> performance while reducing the code size.
> Do you have any idea how to further update the patch?
> I believe that memset 1024 byte to 8192 byte performance has something
> to do with removing unroll32.

I don't see unroll32 making any difference. After you unroll by about 4 times,
any loop overhead is fully hidden by the unrolled code and thus further unrolling
cannot make any difference. I do sometimes see a small difference between 4x
and 8x unrolling - for smaller sizes up to about 4KB using unroll of 4x is faster,
but for 32-64KB unroll of 8x wins by 5-10%. I've left it at 8x for now.
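
As a rough C analogue, purely illustrative - the real loop uses eight predicated
SVE st1b stores per iteration rather than byte stores:

#include <stddef.h>

/* Illustrative only: with 8 stores per iteration a single
   subtract-compare-branch covers eight stores, so the loop overhead is
   hidden behind the store work and further unrolling gains little.  */
static void
fill_unroll8 (unsigned char *dst, unsigned char c, size_t n)
{
  size_t i = 0;
  for (; i + 8 <= n; i += 8)
    {
      dst[i + 0] = c; dst[i + 1] = c; dst[i + 2] = c; dst[i + 3] = c;
      dst[i + 4] = c; dst[i + 5] = c; dst[i + 6] = c; dst[i + 7] = c;
    }
  for (; i < n; i++)		/* leftover tail */
    dst[i] = c;
}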

The difference is due to handling of the last 512 bytes - for copies around 1KB
it can be a measurable overhead to always do an extra 512 bytes from the end.
So I've added some branches to do this in smaller chunks, but this also adds
extra branch mispredictions if the size varies from call to call (which is what
these microbenchmarks don't simulate).
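
In C-like form the tail handling is roughly the following (a hypothetical sketch;
each memset call stands in for a group of predicated SVE stores):

#include <stddef.h>
#include <string.h>

/* Finish the remaining (< 512) bytes in halving chunks instead of always
   processing a full 512-byte block anchored at the end.  The extra
   data-dependent branches mispredict when the size varies per call.  */
static void
set_tail (unsigned char *dst, int c, size_t rest)
{
  if (rest >= 256) { memset (dst, c, 256); dst += 256; rest -= 256; }
  if (rest >= 128) { memset (dst, c, 128); dst += 128; rest -= 128; }
  if (rest >= 64)  { memset (dst, c, 64);  dst += 64;  rest -= 64;  }
  if (rest != 0)
    memset (dst, c, rest);
}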

I also fixed the zero size issue with memset - it's almost twice as fast to zero
memory with DC ZVA, so that is used when possible for large copies.
For non-zero copies it turns out to be a bad idea to use DC ZVA plus a second
store. The unroll8 loop is about 10% faster, so the new version just uses that.
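
For reference, zeroing with DC ZVA looks roughly like this in C with inline
assembly (a sketch that assumes the destination is aligned to the ZVA block
size, the length is a multiple of it, and DC ZVA is not prohibited):

#include <stddef.h>
#include <stdint.h>

/* AArch64-only sketch: zero a region one ZVA block at a time.  The real
   memset covers the unaligned head and tail with ordinary stores and
   checks DCZID_EL0.DZP before using DC ZVA.  */
static void
zero_with_dc_zva (void *dst, size_t n)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  size_t block = 4u << (dczid & 0xf);	/* ZVA block size; 256 bytes on A64FX.  */

  char *p = dst;
  for (size_t i = 0; i < n; i += block)
    __asm__ volatile ("dc zva, %0" : : "r" (p + i) : "memory");
}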

Cheers,
Wilco
develop--- via Libc-alpha July 13, 2021, 8:33 a.m. UTC | #3
Hi Wilco,

Thanks for the V2 patch.

> From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
> Sent: Friday, July 9, 2021 9:51 PM

> > The following Google Sheet Graph [1] shows the most distinctive data
> > without noise among the performance data I measured several times.
> 
> I do see a lot of noise (easily 10-20% from run to run), but it reduces a lot if you
> execute a few iterations before the benchmark timing starts.

I measured the V1 patch and the V2 patch in the way you mentioned;
please find the Google Sheet Graph [1].
What I meant by noise is any disturbance from other tasks. I believe
that a numactl command such as "numactl --membind=7 --physcpubind=48
make bench" can minimize that.

[1] https://docs.google.com/spreadsheets/d/1YcCIQGJUG5TMSvckp6tk0jntbgQjHg8Kb-_XXCt-2Ew/edit?usp=sharing

> > - memset-default shows that the updated patch performance is better
> >  from 64 byte to 1024 byte, but the master performance is better from
> >  1024 byte to 8192 byte.
> >
> > I think we need to do a lot of trial and error to keep the same
> > performance while reducing the code size.
> > Do you have any idea how to further update the patch?
> > I believe that memset 1024 byte to 8192 byte performance has something
> > to do with removing unroll32.
> 
> I don't see unroll32 making any difference. After you unroll by about 4 times, any
> loop overhead is fully hidden by the unrolled code and thus further unrolling
> cannot make any difference. I do sometimes see a small difference between 4x
> and 8x unrolling - for smaller sizes up to about 4KB using unroll of 4x is faster, but
> for 32-64KB unroll of 8x wins by 5-10%. I've left it at 8x for now.
> 
> The difference is due to handling of the last 512 bytes - for copies around 1KB it
> can be a measurable overhead to always do an extra 512 bytes from the end.
> So I've added some branches to do this in smaller chunks, but this also adds extra
> branch mispredictions if the size varies from call to call (which is what these
> microbenchmarks don't simulate).

I see. Patch V2 improves slightly over V1 from 1KB to 8KB, but
degrades from 8KB to 64KB.
However, the master is still better from 1KB to 64KB.
I'm just wondering if it is possible to merge only the code for less
than 1024B and more than 64KB.
Is that possible?

> I also fixed the zero size issue with memset - it's almost twice as fast to zero
> memory with DC ZVA, so that is used when possible for large copies.
> For non-zero copies it turns out to be a bad idea to use DC ZVA plus a second store.
> The unroll8 loop is about 10% faster, so the new version just uses that.

Patch V2 is better than the master in the memset-large case [1].
I notice that memset-large always calls memset with 65 as the second
parameter, so DC ZVA is not called.
I confirmed that the Patch V2 code is faster when the second parameter
is 0, using the patch [2] I submitted just before this mail.
Theoretically, DC ZVA should let a store omit the data fill from memory
into the L2 cache, so non-zero data stores should become slower without
DC ZVA.
Do you have any idea why Patch V2 is faster in the range above 8MB of
memset-large without calling DC ZVA?

[2] https://sourceware.org/pipermail/libc-alpha/2021-July/128982.html

Best regards,
Naohiro
Wilco Dijkstra July 14, 2021, 5:49 p.m. UTC | #4
Hi Naohiro,

> What I meant by noise is any disturbance from other tasks. I believe
> that a numactl command such as "numactl --membind=7 --physcpubind=48
> make bench" can minimize that.

I managed to reduce the variations a bit further by increasing the iteration
count 8x in addition to doing a few iterations before the timing starts.
You can run benchmarks separately as well, I used 
"time taskset -c N build/benchtests/bench-memset > out".

> I see. Patch V2 improves slightly over V1 from 1KB to 8KB, but
> degrades from 8KB to 64KB.

The 64KB case might need some retuning - previously it would enter
the L1_prefetch loop, do just one iteration before jumping to unroll32...
That doesn't make any sense. Either L1_SIZE may need to be larger or
the size where it falls back to the non-prefetch loop has to be different.
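
What needs retuning is essentially a size-based dispatch like the sketch below
(the threshold values and names are illustrative, not the constants in the source):

#include <stddef.h>

/* Illustrative thresholds only.  */
#define L1_SIZE (64 * 1024)
#define L2_SIZE (8 * 1024 * 1024)

enum path { UNROLL8, L1_PREFETCH, L2_DC_ZVA };

/* A size that only allows one or two iterations of the prefetch loop is
   better served by the plain unrolled loop.  */
static enum path
choose_path (size_t n)
{
  if (n < L1_SIZE)
    return UNROLL8;
  if (n < L2_SIZE)
    return L1_PREFETCH;
  return L2_DC_ZVA;
}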

> However, the master is still better from 1KB to 64KB.
> I'm just wondering if it is possible to merge only the code for less
> than 1024B and more than 64KB.
> Is that possible?

I benchmarked various options, here are the bench-memset results:

memset        v2  unroll32   newlast
all        4.72%     4.07%     4.16%
0-1024    28.80%    23.48%    27.91%
1k-4k     -2.65%    -4.35%    -3.68%
4k-8k     -2.77%    -2.32%    -2.71%

These are the overall speedups (from the sum of reported times) for the
v2 patch, for the v2 patch using the original unroll32, unroll8 and last
code, and finally for the same but using the v2 code for the last 512
bytes.

Overall the new code gives a huge speedup for the smaller sizes,
while the larger cases become slightly slower. The old unroll32 code
makes the 1KB-4KB range worse, and while it is slightly faster than v2
in the 4KB-8KB range, it remains >2% slower than the original.

It's not clear why the larger cases become slower - a small overhead
should quickly become unnoticeable with larger sizes. However here
the slowdown seems constant, so it may be caused by something else.

> Theoretically, DC ZVA should let a store omit the data fill from memory
> into the L2 cache, so non-zero data stores should become slower without
> DC ZVA.
> Do you have any idea why Patch V2 is faster in the range above 8MB of
> memset-large without calling DC ZVA?

I have no idea, it's possible a different ZVA distance might work out better.
However having lots of different loop variations creates more opportunities
for performance anomalies...
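
For reference, the ZVA distance scheme is roughly the sketch below (ZF_DIST here
is an illustrative value, not the real constant; it assumes the destination is
cache-line aligned and that the ZVA block size equals the 256-byte cache line,
as on A64FX):

#include <stddef.h>
#include <string.h>

#define CACHE_LINE_SIZE 256
#define ZF_DIST (CACHE_LINE_SIZE * 80)	/* illustrative distance only */

/* While storing at dst + i, pre-zero the line ZF_DIST bytes ahead with
   DC ZVA so it is already allocated in the cache when the stores reach
   it, avoiding the fill from memory.  The memset call stands in for the
   ordinary store block (for memcpy it would store the copied data).  */
static void
zero_fill_ahead (char *dst, size_t n)
{
  size_t i = 0;
  for (; i + ZF_DIST + CACHE_LINE_SIZE <= n; i += CACHE_LINE_SIZE)
    {
      __asm__ volatile ("dc zva, %0" : : "r" (dst + i + ZF_DIST) : "memory");
      memset (dst + i, 0, CACHE_LINE_SIZE);
    }
  memset (dst + i, 0, n - i);	/* finish the remaining bytes */
}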

Cheers,
Wilco
develop--- via Libc-alpha July 15, 2021, 8:16 a.m. UTC | #5
Hi Wilco,

> From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
> Sent: Thursday, July 15, 2021 2:49 AM
 
> > What I meant by noise is any disturbance from other tasks. I believe
> > that a numactl command such as "numactl --membind=7 --physcpubind=48
> > make bench" can minimize that.
> 
> I managed to reduce the variations a bit further by increasing the iteration
> count 8x in addition to doing a few iterations before the timing starts.
> You can run benchmarks separately as well, I used
> "time taskset -c N build/benchtests/bench-memset > out".

Thanks. Actually I run a benchtest like the command below. I execute it
a few times, then take the last result.

$ numactl --membind=7 --physcpubind=48 make bench BENCHSET="string-benchset" string-benchset="memset"

> > I see. Patch V2 improves slightly over V1 from 1KB to 8KB, but
> > degrades from 8KB to 64KB.
> 
> The 64KB case might need some retuning - previously it would enter
> the L1_prefetch loop, do just one iteration before jumping to unroll32...
> That doesn't make any sense. Either L1_SIZE may need to be larger or
> the size where it falls back to the non-prefetch loop has to be different.
> 
> > However, the master is still better from 1KB to 64KB.
> > I'm just wondering if it is possible to merge only the code for less
> > than 1024B and more than 64KB.
> > Is that possible?
> 
> I benchmarked various options, here are the bench-memset results:
> 
> memset        v2  unroll32   newlast
> all        4.72%     4.07%     4.16%
> 0-1024    28.80%    23.48%    27.91%
> 1k-4k     -2.65%    -4.35%    -3.68%
> 4k-8k     -2.77%    -2.32%    -2.71%
> 
> These are the overall speedups (from the sum of reported times) for the
> v2 patch, for the v2 patch using the original unroll32, unroll8 and last
> code, and finally for the same but using the v2 code for the last 512
> bytes.
> 
> Overall the new code gives a huge speedup for the smaller sizes,
> while the larger cases become slightly slower. The old unroll32 code
> makes the 1KB-4KB range worse, and while it is slightly faster than v2
> in the 4KB-8KB range, it remains >2% slower than the original.
> 
> It's not clear why the larger cases become slower - a small overhead
> should quickly become unnoticeable with larger sizes. However here
> the slowdown seems constant, so it may be caused by something else.

Changing all areas at once makes the problem complicated and hard to
solve.
So why don't we divide the changes into several patches?

1. Update for 0B-1KB.
   We can agree on this at once, because the master's performance is
   clearly slower than Patch V1 and V2.

2. Update for 1KB-8KB.
   This part might be the last patch, because the master is faster than
   Patch V1 and V2.

3. Update for 8KB-64KB.
   This part should be relatively easy, because the Patch V1 performance
   is almost the same as the master's.

4. Update for 64KB-64MB.
   This part should also be relatively easy, because the Patch V2
   performance is faster than the master even without DC ZVA.

5. Update for zero fill larger than the 256-byte cache line size.
   If DC ZVA doesn't contribute to non-zero performance above 8MB, we
   should apply DC ZVA to zero data larger than the 256-byte cache line
   size, as memset_generic does.

> > Theoretically, DC ZVA should let a store omit the data fill from memory
> > into the L2 cache, so non-zero data stores should become slower without
> > DC ZVA.
> > Do you have any idea why Patch V2 is faster in the range above 8MB of
> > memset-large without calling DC ZVA?
> 
> I have no idea, it's possible a different ZVA distance might work out better.
> However having lots of different loop variations creates more opportunities
> for performance anomalies...

I tested this before and reported it in the mail [1], comparing the
performance with and without DC ZVA for non-zero fills larger than 8MB.
See graphs [21] and [24] in the mail [1].
This is the expected behavior.

[1] https://sourceware.org/pipermail/libc-alpha/2021-April/125002.html
[21] https://drive.google.com/file/d/19cHvU2lxF28DW9_Z5_5O6gOOdUmVz_ps/view
[24] https://drive.google.com/file/d/1l6pDhuPWDLy5egQ6BhRIYRvshvDeIrGl/view

Thanks.
Naohiro
Wilco Dijkstra July 22, 2021, 5:06 p.m. UTC | #6
Hi Naohiro,

I tweaked memset a bit further to get v3, which looks better overall:

memset        v2        v3
all        4.72%    9.27%
0-1024    28.80%   44.03%
1k-4k     -2.65%    2.57%
4k-8k     -2.77%   -0.58%

> Changing all areas at once makes the problem complicated and hard to
> solve. So why don't we divide the changes into several patches?

I've split the patch similarly to what you suggested.

> I tested this before and reported it in the mail [1], comparing the
> performance with and without DC ZVA for non-zero fills larger than 8MB.
> See graphs [21] and [24] in the mail [1].
> This is the expected behavior.
>
> [1] https://sourceware.org/pipermail/libc-alpha/2021-April/125002.html
> [21] https://drive.google.com/file/d/19cHvU2lxF28DW9_Z5_5O6gOOdUmVz_ps/view
> [24] https://drive.google.com/file/d/1l6pDhuPWDLy5egQ6BhRIYRvshvDeIrGl/view

The v2 results appear much faster (e.g. at 16MB v2 does 22.21 rather than the 15 in [21]),
so while the DC ZVA version of the original loop was faster, the new loop without
DC ZVA is even faster.

Cheers,
Wilco

Patch

diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
index 65528405bb12373731e895c7030ccef23b88c17f..425148300913aadd8b144d17e7ee2b496f65008e 100644
--- a/sysdeps/aarch64/multiarch/memcpy_a64fx.S
+++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
@@ -38,7 +38,6 @@ 
 #define dest_ptr	x7
 #define src_ptr		x8
 #define vector_length	x9
-#define cl_remainder	x10	// CACHE_LINE_SIZE remainder
 
 #if HAVE_AARCH64_SVE_ASM
 # if IS_IN (libc)
@@ -47,14 +46,6 @@ 
 
 	.arch armv8.2-a+sve
 
-	.macro dc_zva times
-	dc	zva, tmp1
-	add	tmp1, tmp1, CACHE_LINE_SIZE
-	.if \times-1
-	dc_zva "(\times-1)"
-	.endif
-	.endm
-
 	.macro ld1b_unroll8
 	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
 	ld1b	z1.b, p0/z, [src_ptr, #1, mul vl]
@@ -106,69 +97,49 @@ 
 
 	.macro shortcut_for_small_size exit
 	// if rest <= vector_length * 2
-	whilelo	p0.b, xzr, n
+	whilelo p0.b, xzr, n
 	whilelo	p1.b, vector_length, n
-	b.last	1f
 	ld1b	z0.b, p0/z, [src, #0, mul vl]
 	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	b.last	1f
 	st1b	z0.b, p0, [dest, #0, mul vl]
 	st1b	z1.b, p1, [dest, #1, mul vl]
 	ret
+
 1:	// if rest > vector_length * 8
 	cmp	n, vector_length, lsl 3 // vector_length * 8
 	b.hi	\exit
+
 	// if rest <= vector_length * 4
 	lsl	tmp1, vector_length, 1  // vector_length * 2
-	whilelo	p2.b, tmp1, n
-	incb	tmp1
-	whilelo	p3.b, tmp1, n
-	b.last	1f
-	ld1b	z0.b, p0/z, [src, #0, mul vl]
-	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	sub	n, n, tmp1
+	whilelo p2.b, xzr, n
+	whilelo p3.b, vector_length, n
 	ld1b	z2.b, p2/z, [src, #2, mul vl]
 	ld1b	z3.b, p3/z, [src, #3, mul vl]
-	st1b	z0.b, p0, [dest, #0, mul vl]
-	st1b	z1.b, p1, [dest, #1, mul vl]
-	st1b	z2.b, p2, [dest, #2, mul vl]
-	st1b	z3.b, p3, [dest, #3, mul vl]
-	ret
-1:	// if rest <= vector_length * 8
-	lsl	tmp1, vector_length, 2  // vector_length * 4
-	whilelo	p4.b, tmp1, n
-	incb	tmp1
-	whilelo	p5.b, tmp1, n
 	b.last	1f
-	ld1b	z0.b, p0/z, [src, #0, mul vl]
-	ld1b	z1.b, p1/z, [src, #1, mul vl]
-	ld1b	z2.b, p2/z, [src, #2, mul vl]
-	ld1b	z3.b, p3/z, [src, #3, mul vl]
-	ld1b	z4.b, p4/z, [src, #4, mul vl]
-	ld1b	z5.b, p5/z, [src, #5, mul vl]
 	st1b	z0.b, p0, [dest, #0, mul vl]
-	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z1.b, p0, [dest, #1, mul vl]
 	st1b	z2.b, p2, [dest, #2, mul vl]
 	st1b	z3.b, p3, [dest, #3, mul vl]
-	st1b	z4.b, p4, [dest, #4, mul vl]
-	st1b	z5.b, p5, [dest, #5, mul vl]
 	ret
-1:	lsl	tmp1, vector_length, 2	// vector_length * 4
-	incb	tmp1			// vector_length * 5
-	incb	tmp1			// vector_length * 6
-	whilelo	p6.b, tmp1, n
-	incb	tmp1
-	whilelo	p7.b, tmp1, n
-	ld1b	z0.b, p0/z, [src, #0, mul vl]
-	ld1b	z1.b, p1/z, [src, #1, mul vl]
-	ld1b	z2.b, p2/z, [src, #2, mul vl]
-	ld1b	z3.b, p3/z, [src, #3, mul vl]
+
+1:	// if rest <= vector_length * 8
+	sub	n, n, tmp1
+	add	tmp2, tmp1, vector_length
+	whilelo p4.b, xzr, n
+	whilelo p5.b, vector_length, n
+	whilelo p6.b, tmp1, n
+	whilelo p7.b, tmp2, n
+
 	ld1b	z4.b, p4/z, [src, #4, mul vl]
 	ld1b	z5.b, p5/z, [src, #5, mul vl]
 	ld1b	z6.b, p6/z, [src, #6, mul vl]
 	ld1b	z7.b, p7/z, [src, #7, mul vl]
 	st1b	z0.b, p0, [dest, #0, mul vl]
-	st1b	z1.b, p1, [dest, #1, mul vl]
-	st1b	z2.b, p2, [dest, #2, mul vl]
-	st1b	z3.b, p3, [dest, #3, mul vl]
+	st1b	z1.b, p0, [dest, #1, mul vl]
+	st1b	z2.b, p0, [dest, #2, mul vl]
+	st1b	z3.b, p0, [dest, #3, mul vl]
 	st1b	z4.b, p4, [dest, #4, mul vl]
 	st1b	z5.b, p5, [dest, #5, mul vl]
 	st1b	z6.b, p6, [dest, #6, mul vl]
@@ -182,8 +153,8 @@  ENTRY (MEMCPY)
 	PTR_ARG (1)
 	SIZE_ARG (2)
 
-L(memcpy):
 	cntb	vector_length
+L(memmove_small):
 	// shortcut for less than vector_length * 8
 	// gives a free ptrue to p0.b for n >= vector_length
 	shortcut_for_small_size L(vl_agnostic)
@@ -201,135 +172,107 @@  L(vl_agnostic): // VL Agnostic
 
 L(unroll8): // unrolling and software pipeline
 	lsl	tmp1, vector_length, 3	// vector_length * 8
-	.p2align 3
-	cmp	 rest, tmp1
-	b.cc	L(last)
+	sub	rest, rest, tmp1
 	ld1b_unroll8
 	add	src_ptr, src_ptr, tmp1
-	sub	rest, rest, tmp1
-	cmp	rest, tmp1
+	subs	rest, rest, tmp1
 	b.cc	2f
-	.p2align 3
+	.p2align 4
 1:	stld1b_unroll8
 	add	dest_ptr, dest_ptr, tmp1
 	add	src_ptr, src_ptr, tmp1
-	sub	rest, rest, tmp1
-	cmp	rest, tmp1
-	b.ge	1b
+	subs	rest, rest, tmp1
+	b.hs	1b
 2:	st1b_unroll8
 	add	dest_ptr, dest_ptr, tmp1
+	add	rest, rest, tmp1
 
 	.p2align 3
 L(last):
-	whilelo	p0.b, xzr, rest
+	whilelo p0.b, xzr, rest
 	whilelo	p1.b, vector_length, rest
-	b.last	1f
-	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
-	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
-	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
-	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
-	ret
-1:	lsl	tmp1, vector_length, 1	// vector_length * 2
-	whilelo	p2.b, tmp1, rest
-	incb	tmp1
-	whilelo	p3.b, tmp1, rest
-	b.last	1f
-	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
-	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
-	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
-	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
-	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
-	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
-	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
-	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
-	ret
-1:	lsl	tmp1, vector_length, 2	// vector_length * 4
-	whilelo	p4.b, tmp1, rest
-	incb	tmp1
-	whilelo	p5.b, tmp1, rest
-	incb	tmp1
-	whilelo	p6.b, tmp1, rest
-	incb	tmp1
-	whilelo	p7.b, tmp1, rest
 	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
 	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	b.nlast	1f
+
+	lsl	tmp1, vector_length, 1  // vector_length * 2
+	sub	rest, rest, tmp1
+	whilelo p2.b, xzr, rest
+	whilelo p3.b, vector_length, rest
 	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
 	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
+        b.nlast  2f
+
+	sub	rest, rest, tmp1
+	add	tmp2, tmp1, vector_length // vector_length * 3
+	whilelo p4.b, xzr, rest
+	whilelo p5.b, vector_length, rest
+	whilelo p6.b, tmp1, rest
+	whilelo p7.b, tmp2, rest
+
 	ld1b	z4.b, p4/z, [src_ptr, #4, mul vl]
 	ld1b	z5.b, p5/z, [src_ptr, #5, mul vl]
 	ld1b	z6.b, p6/z, [src_ptr, #6, mul vl]
 	ld1b	z7.b, p7/z, [src_ptr, #7, mul vl]
-	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
-	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
-	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
-	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
 	st1b	z4.b, p4, [dest_ptr, #4, mul vl]
 	st1b	z5.b, p5, [dest_ptr, #5, mul vl]
 	st1b	z6.b, p6, [dest_ptr, #6, mul vl]
 	st1b	z7.b, p7, [dest_ptr, #7, mul vl]
+2:	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
+1:	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
 	ret
 
 L(L2):
 	// align dest address at CACHE_LINE_SIZE byte boundary
-	mov	tmp1, CACHE_LINE_SIZE
-	ands	tmp2, dest_ptr, CACHE_LINE_SIZE - 1
-	// if cl_remainder == 0
-	b.eq	L(L2_dc_zva)
-	sub	cl_remainder, tmp1, tmp2
-	// process remainder until the first CACHE_LINE_SIZE boundary
-	whilelo	p1.b, xzr, cl_remainder	// keep p0.b all true
-	whilelo	p2.b, vector_length, cl_remainder
-	b.last	1f
-	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
-	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
-	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
-	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
-	b	2f
-1:	lsl	tmp1, vector_length, 1	// vector_length * 2
-	whilelo	p3.b, tmp1, cl_remainder
-	incb	tmp1
-	whilelo	p4.b, tmp1, cl_remainder
-	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
-	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
-	ld1b	z3.b, p3/z, [src_ptr, #2, mul vl]
-	ld1b	z4.b, p4/z, [src_ptr, #3, mul vl]
-	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
-	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
-	st1b	z3.b, p3, [dest_ptr, #2, mul vl]
-	st1b	z4.b, p4, [dest_ptr, #3, mul vl]
-2:	add	dest_ptr, dest_ptr, cl_remainder
-	add	src_ptr, src_ptr, cl_remainder
-	sub	rest, rest, cl_remainder
+	and	tmp1, dest_ptr, CACHE_LINE_SIZE - 1
+	sub	tmp1, tmp1, CACHE_LINE_SIZE
+	ld1b	z1.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr, #1, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr, #2, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr, #3, mul vl]
+	st1b	z1.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z2.b, p0, [dest_ptr, #1, mul vl]
+	st1b	z3.b, p0, [dest_ptr, #2, mul vl]
+	st1b	z4.b, p0, [dest_ptr, #3, mul vl]
+	sub	dest_ptr, dest_ptr, tmp1
+	sub	src_ptr, src_ptr, tmp1
+	add	rest, rest, tmp1
 
 L(L2_dc_zva):
-	// zero fill
-	and	tmp1, dest, 0xffffffffffffff
-	and	tmp2, src, 0xffffffffffffff
-	subs	tmp1, tmp1, tmp2	// diff
-	b.ge	1f
-	neg	tmp1, tmp1
-1:	mov	tmp3, ZF_DIST + CACHE_LINE_SIZE * 2
-	cmp	tmp1, tmp3
+	// check for overlap
+	sub	tmp1, src_ptr, dest_ptr
+	and	tmp1, tmp1, 0xffffffffffffff	// clear tag bits
+	mov	tmp2, ZF_DIST
+	cmp	tmp1, tmp2
 	b.lo	L(unroll8)
+
+	// zero fill loop
 	mov	tmp1, dest_ptr
-	dc_zva	(ZF_DIST / CACHE_LINE_SIZE) - 1
+	mov	tmp3, ZF_DIST / CACHE_LINE_SIZE
+1:	dc	zva, tmp1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	subs	tmp3, tmp3, 1
+	b.ne	1b
+
+	mov	tmp3, ZF_DIST + CACHE_LINE_SIZE * 2
 	// unroll
-	ld1b_unroll8	// this line has to be after "b.lo L(unroll8)"
-	add	 src_ptr, src_ptr, CACHE_LINE_SIZE * 2
-	sub	 rest, rest, CACHE_LINE_SIZE * 2
-	mov	 tmp1, ZF_DIST
-	.p2align 3
-1:	stld1b_unroll4a
-	add	tmp2, dest_ptr, tmp1	// dest_ptr + ZF_DIST
-	dc	zva, tmp2
+	ld1b_unroll8
+	add	src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	.p2align 4
+2:	stld1b_unroll4a
+	add	tmp1, dest_ptr, tmp2	// dest_ptr + ZF_DIST
+	dc	zva, tmp1
 	stld1b_unroll4b
-	add	tmp2, tmp2, CACHE_LINE_SIZE
-	dc	zva, tmp2
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	dc	zva, tmp1
 	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
 	add	src_ptr, src_ptr, CACHE_LINE_SIZE * 2
 	sub	rest, rest, CACHE_LINE_SIZE * 2
-	cmp	rest, tmp3	// ZF_DIST + CACHE_LINE_SIZE * 2
-	b.ge	1b
+	cmp	rest, tmp3
+	b.hs	2b
 	st1b_unroll8
 	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
 	b	L(unroll8)
@@ -338,68 +281,50 @@  END (MEMCPY)
 libc_hidden_builtin_def (MEMCPY)
 
 
-ENTRY (MEMMOVE)
+ENTRY_ALIGN (MEMMOVE, 4)
 
 	PTR_ARG (0)
 	PTR_ARG (1)
 	SIZE_ARG (2)
 
-	// remove tag address
-	// dest has to be immutable because it is the return value
-	// src has to be immutable because it is used in L(bwd_last)
-	and	tmp2, dest, 0xffffffffffffff	// save dest_notag into tmp2
-	and	tmp3, src, 0xffffffffffffff	// save src_notag intp tmp3
-	cmp	n, 0
-	ccmp	tmp2, tmp3, 4, ne
-	b.ne	1f
-	ret
-1:	cntb	vector_length
-	// shortcut for less than vector_length * 8
-	// gives a free ptrue to p0.b for n >= vector_length
-	// tmp2 and tmp3 should not be used in this macro to keep
-	// notag addresses
-	shortcut_for_small_size L(dispatch)
-	// end of shortcut
-
-L(dispatch):
-	// tmp2 = dest_notag, tmp3 = src_notag
-	// diff = dest_notag - src_notag
-	sub	tmp1, tmp2, tmp3
-	// if diff <= 0 || diff >= n then memcpy
-	cmp	tmp1, 0
-	ccmp	tmp1, n, 2, gt
-	b.cs	L(vl_agnostic)
-
-L(bwd_start):
-	mov	rest, n
-	add	dest_ptr, dest, n	// dest_end
-	add	src_ptr, src, n		// src_end
+	cntb	vector_length
+        // diff = dest - src
+	sub	tmp1, dest, src
+	ands	tmp1, tmp1, 0xffffffffffffff    // clear tag bits
+	b.eq	L(full_overlap)
 
-L(bwd_unroll8): // unrolling and software pipeline
-	lsl	tmp1, vector_length, 3	// vector_length * 8
-	.p2align 3
-	cmp	rest, tmp1
-	b.cc	L(bwd_last)
-	sub	src_ptr, src_ptr, tmp1
+	cmp	n, vector_length, lsl 3 // vector_length * 8
+	b.ls	L(memmove_small)
+
+	ptrue	p0.b
+	// if diff < 0 || diff >= n then memcpy
+	cmp	tmp1, n
+	b.hs	L(vl_agnostic)
+
+	// unrolling and software pipeline
+	lsl	tmp1, vector_length, 3  // vector_length * 8
+	add	dest_ptr, dest, n       // dest_end
+	sub	rest, n, tmp1
+	add	src_ptr, src, rest	// src_end
 	ld1b_unroll8
-	sub	rest, rest, tmp1
-	cmp	rest, tmp1
+	subs	rest, rest, tmp1
 	b.cc	2f
-	.p2align 3
+	.p2align 4
 1:	sub	src_ptr, src_ptr, tmp1
 	sub	dest_ptr, dest_ptr, tmp1
 	stld1b_unroll8
-	sub	rest, rest, tmp1
-	cmp	rest, tmp1
-	b.ge	1b
+	subs	rest, rest, tmp1
+	b.hs	1b
 2:	sub	dest_ptr, dest_ptr, tmp1
 	st1b_unroll8
-
-L(bwd_last):
+	add	rest, rest, tmp1
 	mov	dest_ptr, dest
 	mov	src_ptr, src
 	b	L(last)
 
+L(full_overlap):
+	ret
+
 END (MEMMOVE)
 libc_hidden_builtin_def (MEMMOVE)
 # endif /* IS_IN (libc) */