[AArch64] Fix invalid assembler in scalar_intrinsics.c test

Message ID 000001ce56ff$adb6f160$0924d420$@bolton@arm.com
State New

Commit Message

Ian Bolton May 22, 2013, 3:18 p.m. UTC
The test file scalar_intrinsics.c (in gcc.target/aarch64)
is currently compile-only.

If you attempt to make it run, rather than just generate
assembler, it fails because the output won't assemble.

There are two issues causing trouble here:

1) Use of invalid instruction "mov d0, d1".
   It should be "mov d0, v1.d[0]".

2) The vdupd_lane_s64 and vdupd_lane_u64 calls are being given
   a lane that is out of range, which causes invalid assembler
   output.

This patch fixes both, so that we can build on this to make
executable test cases for scalar intrinsics.

OK for trunk?

Cheers,
Ian


2013-05-22  Ian Bolton  <ian.bolton@arm.com>

testsuite/
	* gcc.target/aarch64/scalar_intrinsics.c (force_simd):
	Use a valid instruction.
	(test_vdupd_lane_s64): Pass a valid lane argument.
	(test_vdupd_lane_u64): Likewise.
Patch

diff --git a/gcc/testsuite/gcc.target/aarch64/scalar_intrinsics.c b/gcc/testsuite/gcc.target/aarch64/scalar_intrinsics.c
index 7427c62..16537ce 100644
--- a/gcc/testsuite/gcc.target/aarch64/scalar_intrinsics.c
+++ b/gcc/testsuite/gcc.target/aarch64/scalar_intrinsics.c
@@ -4,7 +4,7 @@ 
 #include <arm_neon.h>
 
 /* Used to force a variable to a SIMD register.  */
-#define force_simd(V1)   asm volatile ("mov %d0, %d1"		\
+#define force_simd(V1)   asm volatile ("mov %d0, %1.d[0]"	\
 	   : "=w"(V1)						\
 	   : "w"(V1)						\
 	   : /* No clobbers */);
@@ -228,13 +228,13 @@  test_vdups_lane_u32 (uint32x4_t a)
 int64x1_t
 test_vdupd_lane_s64 (int64x2_t a)
 {
-  return vdupd_lane_s64 (a, 2);
+  return vdupd_lane_s64 (a, 1);
 }
 
 uint64x1_t
 test_vdupd_lane_u64 (uint64x2_t a)
 {
-  return vdupd_lane_u64 (a, 2);
+  return vdupd_lane_u64 (a, 1);
 }
 
 /* { dg-final { scan-assembler-times "\\tcmtst\\td\[0-9\]+, d\[0-9\]+, d\[0-9\]+" 2 } } */