| Message ID | 3b319753-5b00-8cf6-5a8a-804117902774@arm.com |
|---|---|
| State | New |
| Series | [Ping,Arm] ACLE 8-bit integer matrix multiply-accumulate intrinsics |
Hi Dennis,

On 2/11/20 12:03 PM, Dennis Zhang wrote:
> Hi all,
>
> On 16/12/2019 13:45, Dennis Zhang wrote:
> > Hi all,
> >
> > This patch is part of a series adding support for Armv8.6-A features.
> > It depends on the Arm Armv8.6-A CLI patch,
> > https://gcc.gnu.org/ml/gcc-patches/2019-11/msg02195.html.
> > It also depends on the Armv8.6-A effective target checking patch,
> > https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00857.html.
> > It also depends on the ARMv8.6-A I8MM dot product patch for using the
> > same builtin qualifier,
> > https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00945.html.
> >
> > This patch adds intrinsics for matrix multiply-accumulate operations
> > including vmmlaq_s32, vmmlaq_u32, and vusmmlaq_s32.
> >
> > ACLE documents are at https://developer.arm.com/docs/101028/latest
> > ISA documents are at https://developer.arm.com/docs/ddi0596/latest
> >
> > Regtested for arm-none-linux-gnueabi-armv8.2-a.
> >
> > Is it OK for trunk please?

This is ok.

Thanks,
Kyrill

> > Thanks,
> > Dennis
> >
> > gcc/ChangeLog:
> >
> > 2019-12-10  Dennis Zhang  <dennis.zhang@arm.com>
> >
> > 	* config/arm/arm_neon.h (vmmlaq_s32, vmmlaq_u32, vusmmlaq_s32): New.
> > 	* config/arm/arm_neon_builtins.def (smmla, ummla, usmmla): New.
> > 	* config/arm/iterators.md (MATMUL): New.
> > 	(sup): Add UNSPEC_MATMUL_S, UNSPEC_MATMUL_U, and UNSPEC_MATMUL_US.
> > 	(mmla_sfx): New.
> > 	* config/arm/neon.md (neon_<sup>mmlav16qi): New.
> > 	* config/arm/unspecs.md (UNSPEC_MATMUL_S): New.
> > 	(UNSPEC_MATMUL_U, UNSPEC_MATMUL_US): New.
> >
> > gcc/testsuite/ChangeLog:
> >
> > 2019-12-10  Dennis Zhang  <dennis.zhang@arm.com>
> >
> > 	* gcc.target/arm/simd/vmmla_1.c: New test.
>
> This patch has been updated according to the feedback on the related
> AArch64 version at https://gcc.gnu.org/ml/gcc-patches/2020-01/msg01591.html
>
> Regtested. OK to commit please?
>
> Many thanks
> Dennis
>
> gcc/ChangeLog:
>
> 2020-02-11  Dennis Zhang  <dennis.zhang@arm.com>
>
> 	* config/arm/arm-builtins.c (USTERNOP_QUALIFIERS): New macro.
> 	* config/arm/arm_neon.h (vmmlaq_s32, vmmlaq_u32, vusmmlaq_s32): New.
> 	* config/arm/arm_neon_builtins.def (smmla, ummla, usmmla): New.
> 	* config/arm/iterators.md (MATMUL): New iterator.
> 	(sup): Add UNSPEC_MATMUL_S, UNSPEC_MATMUL_U, and UNSPEC_MATMUL_US.
> 	(mmla_sfx): New attribute.
> 	* config/arm/neon.md (neon_<sup>mmlav16qi): New.
> 	* config/arm/unspecs.md (UNSPEC_MATMUL_S, UNSPEC_MATMUL_U): New.
> 	(UNSPEC_MATMUL_US): New.
>
> gcc/testsuite/ChangeLog:
>
> 2020-02-11  Dennis Zhang  <dennis.zhang@arm.com>
>
> 	* gcc.target/arm/simd/vmmla_1.c: New test.
Hi Kyrill,

On 21/02/2020 11:47, Kyrill Tkachov wrote:
> Hi Dennis,
>
> On 2/11/20 12:03 PM, Dennis Zhang wrote:
>> Hi all,
>>
>> On 16/12/2019 13:45, Dennis Zhang wrote:
>> > Hi all,
>> >
>> > This patch is part of a series adding support for Armv8.6-A features.
>> > It depends on the Arm Armv8.6-A CLI patch,
>> > https://gcc.gnu.org/ml/gcc-patches/2019-11/msg02195.html.
>> > It also depends on the Armv8.6-A effective target checking patch,
>> > https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00857.html.
>> > It also depends on the ARMv8.6-A I8MM dot product patch for using the
>> > same builtin qualifier
>> > https://gcc.gnu.org/ml/gcc-patches/2019-12/msg00945.html.
>> >
>> > This patch adds intrinsics for matrix multiply-accumulate operations
>> > including vmmlaq_s32, vmmlaq_u32, and vusmmlaq_s32.
>> >
>> > ACLE documents are at https://developer.arm.com/docs/101028/latest
>> > ISA documents are at https://developer.arm.com/docs/ddi0596/latest
>> >
>> > Regtested for arm-none-linux-gnueabi-armv8.2-a.
>> >
>> > Is it OK for trunk please?
>
> This is ok.
>
> Thanks,
> Kyrill

Thanks a lot for the approval. The patch has been pushed as
436016f45694c7236e2e9f9db2adb0b4d9bf6b94.

Bests
Dennis
```diff
diff --git a/gcc/config/arm/arm-builtins.c b/gcc/config/arm/arm-builtins.c
index 7f279cca668..60c65c1772f 100644
--- a/gcc/config/arm/arm-builtins.c
+++ b/gcc/config/arm/arm-builtins.c
@@ -122,6 +122,11 @@ arm_unsigned_uternop_qualifiers[SIMD_MAX_BUILTIN_ARGS]
       qualifier_unsigned };
 #define UTERNOP_QUALIFIERS (arm_unsigned_uternop_qualifiers)
 
+static enum arm_type_qualifiers
+arm_usternop_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_none, qualifier_none, qualifier_unsigned, qualifier_none };
+#define USTERNOP_QUALIFIERS (arm_usternop_qualifiers)
+
 /* T (T, immediate).  */
 static enum arm_type_qualifiers
 arm_binop_imm_qualifiers[SIMD_MAX_BUILTIN_ARGS]
diff --git a/gcc/config/arm/arm_neon.h b/gcc/config/arm/arm_neon.h
index 3c78f435009..7461c90e3fe 100644
--- a/gcc/config/arm/arm_neon.h
+++ b/gcc/config/arm/arm_neon.h
@@ -18745,6 +18745,34 @@ vcmlaq_rot270_laneq_f32 (float32x4_t __r, float32x4_t __a, float32x4_t __b,
 #pragma GCC pop_options
 #endif
 
+/* AdvSIMD 8-bit Integer Matrix Multiply (I8MM) intrinsics.  */
+
+#pragma GCC push_options
+#pragma GCC target ("arch=armv8.2-a+i8mm")
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmmlaq_s32 (int32x4_t __r, int8x16_t __a, int8x16_t __b)
+{
+  return __builtin_neon_smmlav16qi (__r, __a, __b);
+}
+
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmmlaq_u32 (uint32x4_t __r, uint8x16_t __a, uint8x16_t __b)
+{
+  return __builtin_neon_ummlav16qi_uuuu (__r, __a, __b);
+}
+
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vusmmlaq_s32 (int32x4_t __r, uint8x16_t __a, int8x16_t __b)
+{
+  return __builtin_neon_usmmlav16qi_ssus (__r, __a, __b);
+}
+
+#pragma GCC pop_options
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/gcc/config/arm/arm_neon_builtins.def b/gcc/config/arm/arm_neon_builtins.def
index e9ff4e501cb..d304cdb33cc 100644
--- a/gcc/config/arm/arm_neon_builtins.def
+++ b/gcc/config/arm/arm_neon_builtins.def
@@ -373,3 +373,7 @@ VAR2 (MAC_LANE_PAIR, vcmlaq_lane0, v4sf, v8hf)
 VAR2 (MAC_LANE_PAIR, vcmlaq_lane90, v4sf, v8hf)
 VAR2 (MAC_LANE_PAIR, vcmlaq_lane180, v4sf, v8hf)
 VAR2 (MAC_LANE_PAIR, vcmlaq_lane270, v4sf, v8hf)
+
+VAR1 (TERNOP, smmla, v16qi)
+VAR1 (UTERNOP, ummla, v16qi)
+VAR1 (USTERNOP, usmmla, v16qi)
diff --git a/gcc/config/arm/iterators.md b/gcc/config/arm/iterators.md
index 33e29509f00..141ad96d6db 100644
--- a/gcc/config/arm/iterators.md
+++ b/gcc/config/arm/iterators.md
@@ -485,6 +485,8 @@
 (define_int_iterator VCADD [UNSPEC_VCADD90 UNSPEC_VCADD270])
 (define_int_iterator VCMLA [UNSPEC_VCMLA UNSPEC_VCMLA90 UNSPEC_VCMLA180 UNSPEC_VCMLA270])
 
+(define_int_iterator MATMUL [UNSPEC_MATMUL_S UNSPEC_MATMUL_U UNSPEC_MATMUL_US])
+
 ;;----------------------------------------------------------------------------
 ;; Mode attributes
 ;;----------------------------------------------------------------------------
@@ -939,6 +941,7 @@
 		       (UNSPEC_VCVTH_S "s") (UNSPEC_VCVTH_U "u")
 		       (UNSPEC_DOT_S "s") (UNSPEC_DOT_U "u")
 		       (UNSPEC_SSAT16 "s") (UNSPEC_USAT16 "u")
+		       (UNSPEC_MATMUL_S "s") (UNSPEC_MATMUL_U "u") (UNSPEC_MATMUL_US "us")
 ])
 
 (define_int_attr vfml_half
@@ -1107,6 +1110,9 @@
 			(UNSPEC_SMUADX "smuadx") (UNSPEC_SSAT16 "ssat16")
 			(UNSPEC_USAT16 "usat16")])
 
+(define_int_attr mmla_sfx [(UNSPEC_MATMUL_S "s8") (UNSPEC_MATMUL_U "u8")
+			   (UNSPEC_MATMUL_US "s8")])
+
 ;; Both kinds of return insn.
 (define_code_iterator RETURNS [return simple_return])
 (define_code_attr return_str [(return "") (simple_return "simple_")])
diff --git a/gcc/config/arm/neon.md b/gcc/config/arm/neon.md
index 6087ca6f2ba..f9f6176a596 100644
--- a/gcc/config/arm/neon.md
+++ b/gcc/config/arm/neon.md
@@ -6552,3 +6552,14 @@ if (BYTES_BIG_ENDIAN)
   "vabd.<V_if_elem> %<V_reg>0, %<V_reg>1, %<V_reg>2"
   [(set_attr "type" "neon_fp_abd_s<q>")]
 )
+
+(define_insn "neon_<sup>mmlav16qi"
+  [(set (match_operand:V4SI 0 "register_operand" "=w")
+	(plus:V4SI
+	 (unspec:V4SI [(match_operand:V16QI 2 "register_operand" "w")
+		       (match_operand:V16QI 3 "register_operand" "w")] MATMUL)
+	 (match_operand:V4SI 1 "register_operand" "0")))]
+  "TARGET_I8MM"
+  "v<sup>mmla.<mmla_sfx>\t%q0, %q2, %q3"
+  [(set_attr "type" "neon_mla_s_q")]
+)
diff --git a/gcc/config/arm/unspecs.md b/gcc/config/arm/unspecs.md
index 8f4a705f43e..782c319a169 100644
--- a/gcc/config/arm/unspecs.md
+++ b/gcc/config/arm/unspecs.md
@@ -501,4 +501,7 @@
   UNSPEC_VCMLA90
   UNSPEC_VCMLA180
   UNSPEC_VCMLA270
+  UNSPEC_MATMUL_S
+  UNSPEC_MATMUL_U
+  UNSPEC_MATMUL_US
 ])
diff --git a/gcc/testsuite/gcc.target/arm/simd/vmmla_1.c b/gcc/testsuite/gcc.target/arm/simd/vmmla_1.c
new file mode 100644
index 00000000000..b766a9141ce
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/simd/vmmla_1.c
@@ -0,0 +1,28 @@
+/* { dg-do assemble } */
+/* { dg-require-effective-target arm_v8_2a_i8mm_ok } */
+/* { dg-options "-save-temps -O2" } */
+/* { dg-additional-options "-march=armv8.2-a+i8mm" } */
+
+#include "arm_neon.h"
+
+int32x4_t
+test_vmmlaq_s32 (int32x4_t r, int8x16_t a, int8x16_t b)
+{
+  return vmmlaq_s32 (r, a, b);
+}
+
+uint32x4_t
+test_vmmlaq_u32 (uint32x4_t r, uint8x16_t a, uint8x16_t b)
+{
+  return vmmlaq_u32 (r, a, b);
+}
+
+int32x4_t
+test_vusmmlaq_s32 (int32x4_t r, uint8x16_t a, int8x16_t b)
+{
+  return vusmmlaq_s32 (r, a, b);
+}
+
+/* { dg-final { scan-assembler-times {\tvsmmla.s8\tq[0-9]+, q[0-9]+, q[0-9]+} 1 } } */
+/* { dg-final { scan-assembler-times {\tvummla.u8\tq[0-9]+, q[0-9]+, q[0-9]+} 1 } } */
+/* { dg-final { scan-assembler-times {\tvusmmla.s8\tq[0-9]+, q[0-9]+, q[0-9]+} 1 } } */
```