
[AArch64] Use intrinsics for widening multiplies (PR91598)

Message ID AM5PR0801MB203547524838AC3BA4B3D3B483E30@AM5PR0801MB2035.eurprd08.prod.outlook.com
State New
Series [AArch64] Use intrinsics for widening multiplies (PR91598)

Commit Message

Wilco Dijkstra March 6, 2020, 3:03 p.m. UTC
Inline assembler instructions don't have latency info and the scheduler does
not attempt to schedule them at all - it does not even honor latencies of
asm source operands. As a result, SIMD intrinsics which are implemented using
inline assembler perform very poorly, particularly on in-order cores.
Fix this by adding new patterns and intrinsics for widening multiplies, which
results in a 63% speedup for the example in the PR. This fixes the performance
regression.

Passes regress&bootstrap.
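
As a quick illustration (a hypothetical snippet, not the PR testcase) of the intrinsics involved - with the old inline-asm definitions the smlal below is opaque to the scheduler, whereas with the builtin-based definitions it is scheduled like any other insn:

  #include <arm_neon.h>

  int32x4_t
  mla_lane0 (int32x4_t acc, const int16_t *p, int16x4_t coeffs)
  {
    int16x4_t v = vld1_s16 (p);                /* load 4 x s16 */
    return vmlal_lane_s16 (acc, v, coeffs, 0); /* smlal acc.4s, v.4h, coeffs.h[0] */
  }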

ChangeLog:
2020-03-06  Wilco Dijkstra  <wdijkstr@arm.com>

	PR target/91598
	* config/aarch64/aarch64-builtins.c (TYPES_TERNOPU_LANE): Add define.
	* config/aarch64/aarch64-simd.md
	(aarch64_vec_<su>mult_lane<Qlane>): Add new insn for widening lane mul.
	(aarch64_vec_<su>mlal_lane<Qlane>): Likewise.
	* config/aarch64/aarch64-simd-builtins.def: Add intrinsics.
	* config/aarch64/arm_neon.h:
	(vmlal_lane_s16): Expand using intrinsics rather than inline asm.
	(vmlal_lane_u16): Likewise.
	(vmlal_lane_s32): Likewise.
	(vmlal_lane_u32): Likewise.
	(vmlal_laneq_s16): Likewise.
	(vmlal_laneq_u16): Likewise.
	(vmlal_laneq_s32): Likewise.
	(vmlal_laneq_u32): Likewise.
	(vmull_lane_s16): Likewise.
	(vmull_lane_u16): Likewise.
	(vmull_lane_s32): Likewise.
	(vmull_lane_u32): Likewise.
	(vmull_laneq_s16): Likewise.
	(vmull_laneq_u16): Likewise.
	(vmull_laneq_s32): Likewise.
	(vmull_laneq_u32): Likewise.
	* config/aarch64/iterators.md (Vtype2): Add new iterator for lane mul.
	(Qlane): Likewise.

---

Comments

Richard Sandiford March 6, 2020, 5:29 p.m. UTC | #1
> +;; vmlal_lane_s16 intrinsics
> +(define_insn "aarch64_vec_<su>mlal_lane<Qlane>"
> +  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
> +	(plus:<VWIDE> (match_operand:<VWIDE> 1 "register_operand" "0")
> +	  (mult:<VWIDE>
> +	    (ANY_EXTEND:<VWIDE>
> +	      (match_operand:<VCOND> 2 "register_operand" "w"))
> +	    (ANY_EXTEND:<VWIDE>
> +	      (vec_duplicate:<VCOND>
> +		(vec_select:<VEL>
> +		  (match_operand:VDQHS 3 "register_operand" "<vwx>")
> +		  (parallel [(match_operand:SI 4 "immediate_operand" "i")])))))))]
> +  "TARGET_SIMD"
> +  {
> +    operands[4] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[4]));
> +    return "<su>mlal\\t%0.<Vwtype>, %2.<Vtype2>, %3.<Vetype>[%4]";
> +  }
> +  [(set_attr "type" "neon_mla_<Vetype>_scalar_long")]
> +)
> +

The canonical order is to have the (mult ...) first and the register
operand second.  (No need to change the operand numbering though,
just swapping them as-is should be fine.)
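
Concretely (just a sketch, operand numbering unchanged), the swapped form of the mlal pattern body would be:

  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
	(plus:<VWIDE>
	  (mult:<VWIDE>
	    (ANY_EXTEND:<VWIDE>
	      (match_operand:<VCOND> 2 "register_operand" "w"))
	    (ANY_EXTEND:<VWIDE>
	      (vec_duplicate:<VCOND>
		(vec_select:<VEL>
		  (match_operand:VDQHS 3 "register_operand" "<vwx>")
		  (parallel [(match_operand:SI 4 "immediate_operand" "i")])))))
	  (match_operand:<VWIDE> 1 "register_operand" "0")))]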

> diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
> index ec1b92c5379f7c33446d0ac3556f6358fb7433d3..2f4b553a9a433773b222ce9f0bede3630ff0624c 100644
> --- a/gcc/config/aarch64/iterators.md
> +++ b/gcc/config/aarch64/iterators.md
> @@ -980,6 +980,13 @@ (define_mode_attr Vtype [(V8QI "8b") (V16QI "16b")
>  			 (V4SF "4s") (V2DF "2d")
>  			 (V4HF "4h") (V8HF "8h")])
> 
> +;; Map mode to type used in widening multiplies.
> +(define_mode_attr Vtype2 [(V4HI "4h") (V8HI "4h") (V2SI "2s") (V4SI "2s")])

How about Vcondtype, to make it clearer that it's the Vtype associated
with VCOND?
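
Something like the following (only a sketch of the suggested rename; the uses in the new patterns would become <Vcondtype> as well):

  ;; Map a mode to the Vtype of the corresponding VCOND mode, as used in
  ;; the widening multiply patterns.
  (define_mode_attr Vcondtype [(V4HI "4h") (V8HI "4h") (V2SI "2s") (V4SI "2s")])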

OK with those changes, thanks.

Richard
Christophe Lyon March 9, 2020, 3:42 p.m. UTC | #2
On Fri, 6 Mar 2020 at 16:03, Wilco Dijkstra <Wilco.Dijkstra@arm.com> wrote:
>
> Inline assembler instructions don't have latency info and the scheduler does
> not attempt to schedule them at all - it does not even honor latencies of
> asm source operands. As a result, SIMD intrinsics which are implemented using
> inline assembler perform very poorly, particularly on in-order cores.
> Fix this by adding new patterns and intrinsics for widening multiplies, which
> results in a 63% speedup for the example in the PR. This fixes the performance
> regression.
>
> Passes regress&bootstrap.
>
> ChangeLog:
> 2020-03-06  Wilco Dijkstra  <wdijkstr@arm.com>
>
>         PR target/91598
>         * config/aarch64/aarch64-builtins.c (TYPES_TERNOPU_LANE): Add define.
>         * config/aarch64/aarch64-simd.md
>         (aarch64_vec_<su>mult_lane<Qlane>): Add new insn for widening lane mul.
>         (aarch64_vec_<su>mlal_lane<Qlane>): Likewise.
>         * config/aarch64/aarch64-simd-builtins.def: Add intrinsics.
>         * config/aarch64/arm_neon.h:
>         (vmlal_lane_s16): Expand using intrinsics rather than inline asm.
>         (vmlal_lane_u16): Likewise.
>         (vmlal_lane_s32): Likewise.
>         (vmlal_lane_u32): Likewise.
>         (vmlal_laneq_s16): Likewise.
>         (vmlal_laneq_u16): Likewise.
>         (vmlal_laneq_s32): Likewise.
>         (vmlal_laneq_u32): Likewise.
>         (vmull_lane_s16): Likewise.
>         (vmull_lane_u16): Likewise.
>         (vmull_lane_s32): Likewise.
>         (vmull_lane_u32): Likewise.
>         (vmull_laneq_s16): Likewise.
>         (vmull_laneq_u16): Likewise.
>         (vmull_laneq_s32): Likewise.
>         (vmull_laneq_u32): Likewise.
>         * config/aarch64/iterators.md (Vtype2): Add new iterator for lane mul.
>         (Qlane): Likewise.
>


Hi Wilco,

I noticed a regression introduced by Delia's patch "aarch64: ACLE
intrinsics for BFCVTN, BFCVTN2 and BFCVT":
(on aarch64-linux-gnu)
FAIL: g++.dg/cpp0x/variadic-sizeof4.C  -std=c++14 (internal compiler error)

I couldn't reproduce it with current ToT until I realized that your
patch fixes it. However, I'm wondering whether that's expected given
the context of both patches...

Christophe


Wilco Dijkstra March 9, 2020, 5:25 p.m. UTC | #3
Hi Christophe,

> I noticed a regression introduced by Delia's patch "aarch64: ACLE
> intrinsics for BFCVTN, BFCVTN2 and BFCVT":
> (on aarch64-linux-gnu)
> FAIL: g++.dg/cpp0x/variadic-sizeof4.C  -std=c++14 (internal compiler error)
>
> I couldn't reproduce it with current ToT, until I realized that your
> patch fixes it. However, I'm wondering whether that's expected given
> the context of both patches....

It sounds like this is memory corruption. Neither patch should have changed
anything in the C++ frontend.

Cheers,
Wilco
Andrew Pinski March 9, 2020, 5:30 p.m. UTC | #4
On Mon, Mar 9, 2020 at 10:26 AM Wilco Dijkstra <Wilco.Dijkstra@arm.com> wrote:
>
> Hi Christophe,
>
> > I noticed a regression introduced by Delia's patch "aarch64: ACLE
> > intrinsics for BFCVTN, BFCVTN2 and BFCVT":
> > (on aarch64-linux-gnu)
> > FAIL: g++.dg/cpp0x/variadic-sizeof4.C  -std=c++14 (internal compiler error)
> >
> > I couldn't reproduce it with current ToT, until I realized that your
> > patch fixes it. However, I'm wondering whether that's expected given
> > the context of both patches....
>
> It sounds like this is memory corruption. Neither patch should have changed
> anything in the C++ frontend.

It sounds like some GC issue.  The patch would have changed a few
things related to the front end, though.  Mainly, the decl UIDs do
increase due to the new builtins.  Note that most likely Delia's patch
did the same too.

Thanks,
Andrew Pinski

>
> Cheers,
> Wilco
>

Patch

diff --git a/gcc/config/aarch64/aarch64-builtins.c b/gcc/config/aarch64/aarch64-builtins.c
index 9c9c6d86ae29fcbcf42e84408c5e94990fed8348..5744e68ea08722dcc387254f44408eb0fd3ffe6e 100644
--- a/gcc/config/aarch64/aarch64-builtins.c
+++ b/gcc/config/aarch64/aarch64-builtins.c
@@ -175,6 +175,11 @@  aarch64_types_ternopu_qualifiers[SIMD_MAX_BUILTIN_ARGS]
       qualifier_unsigned, qualifier_unsigned };
 #define TYPES_TERNOPU (aarch64_types_ternopu_qualifiers)
 static enum aarch64_type_qualifiers
+aarch64_types_ternopu_lane_qualifiers[SIMD_MAX_BUILTIN_ARGS]
+  = { qualifier_unsigned, qualifier_unsigned,
+      qualifier_unsigned, qualifier_lane_index };
+#define TYPES_TERNOPU_LANE (aarch64_types_ternopu_lane_qualifiers)
+static enum aarch64_type_qualifiers
 aarch64_types_ternopu_imm_qualifiers[SIMD_MAX_BUILTIN_ARGS]
   = { qualifier_unsigned, qualifier_unsigned,
       qualifier_unsigned, qualifier_immediate };
diff --git a/gcc/config/aarch64/aarch64-simd-builtins.def b/gcc/config/aarch64/aarch64-simd-builtins.def
index d8bb96f8ed60648477f952ea6b88eae67cc9c921..e256e9c2086b48dfb1d95ce8391651ec9e86b696 100644
--- a/gcc/config/aarch64/aarch64-simd-builtins.def
+++ b/gcc/config/aarch64/aarch64-simd-builtins.def
@@ -191,6 +191,15 @@ 
   BUILTIN_VQW (BINOP, vec_widen_smult_hi_, 10)
   BUILTIN_VQW (BINOPU, vec_widen_umult_hi_, 10)
 
+  BUILTIN_VD_HSI (TERNOP_LANE, vec_smult_lane_, 0)
+  BUILTIN_VD_HSI (QUADOP_LANE, vec_smlal_lane_, 0)
+  BUILTIN_VD_HSI (TERNOP_LANE, vec_smult_laneq_, 0)
+  BUILTIN_VD_HSI (QUADOP_LANE, vec_smlal_laneq_, 0)
+  BUILTIN_VD_HSI (TERNOPU_LANE, vec_umult_lane_, 0)
+  BUILTIN_VD_HSI (QUADOPU_LANE, vec_umlal_lane_, 0)
+  BUILTIN_VD_HSI (TERNOPU_LANE, vec_umult_laneq_, 0)
+  BUILTIN_VD_HSI (QUADOPU_LANE, vec_umlal_laneq_, 0)
+
   BUILTIN_VSD_HSI (BINOP, sqdmull, 0)
   BUILTIN_VSD_HSI (TERNOP_LANE, sqdmull_lane, 0)
   BUILTIN_VSD_HSI (TERNOP_LANE, sqdmull_laneq, 0)
diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
index 999d80667b7cf06040515958c747d8bca0728acc..ccf4e394c1f6aa7d0adb23cfcd8da1b6d40d7ebf 100644
--- a/gcc/config/aarch64/aarch64-simd.md
+++ b/gcc/config/aarch64/aarch64-simd.md
@@ -1892,6 +1892,45 @@  (define_expand "vec_widen_<su>mult_hi_<mode>"
  }
 )
 
+;; vmull_lane_s16 intrinsics
+(define_insn "aarch64_vec_<su>mult_lane<Qlane>"
+  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
+	(mult:<VWIDE>
+	  (ANY_EXTEND:<VWIDE>
+	    (match_operand:<VCOND> 1 "register_operand" "w"))
+	  (ANY_EXTEND:<VWIDE>
+	    (vec_duplicate:<VCOND>
+	      (vec_select:<VEL>
+		(match_operand:VDQHS 2 "register_operand" "<vwx>")
+		(parallel [(match_operand:SI 3 "immediate_operand" "i")]))))))]
+  "TARGET_SIMD"
+  {
+    operands[3] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[3]));
+    return "<su>mull\\t%0.<Vwtype>, %1.<Vtype2>, %2.<Vetype>[%3]";
+  }
+  [(set_attr "type" "neon_mul_<Vetype>_scalar_long")]
+)
+
+;; vmlal_lane_s16 intrinsics
+(define_insn "aarch64_vec_<su>mlal_lane<Qlane>"
+  [(set (match_operand:<VWIDE> 0 "register_operand" "=w")
+	(plus:<VWIDE> (match_operand:<VWIDE> 1 "register_operand" "0")
+	  (mult:<VWIDE>
+	    (ANY_EXTEND:<VWIDE>
+	      (match_operand:<VCOND> 2 "register_operand" "w"))
+	    (ANY_EXTEND:<VWIDE>
+	      (vec_duplicate:<VCOND>
+		(vec_select:<VEL>
+		  (match_operand:VDQHS 3 "register_operand" "<vwx>")
+		  (parallel [(match_operand:SI 4 "immediate_operand" "i")])))))))]
+  "TARGET_SIMD"
+  {
+    operands[4] = aarch64_endian_lane_rtx (<MODE>mode, INTVAL (operands[4]));
+    return "<su>mlal\\t%0.<Vwtype>, %2.<Vtype2>, %3.<Vetype>[%4]";
+  }
+  [(set_attr "type" "neon_mla_<Vetype>_scalar_long")]
+)
+
 ;; FP vector operations.
 ;; AArch64 AdvSIMD supports single-precision (32-bit) and 
 ;; double-precision (64-bit) floating-point data types and arithmetic as
diff --git a/gcc/config/aarch64/arm_neon.h b/gcc/config/aarch64/arm_neon.h
index b6f42ac630295d9b827e2763cf487ccfb5bfe64b..700dd57ccd1b7ced731a92e43bc71911ad1c93cb 100644
--- a/gcc/config/aarch64/arm_neon.h
+++ b/gcc/config/aarch64/arm_neon.h
@@ -7700,117 +7700,61 @@  vmlal_high_u32 (uint64x2_t __a, uint32x4_t __b, uint32x4_t __c)
   return __result;
 }
 
-#define vmlal_lane_s16(a, b, c, d)                                      \
-  __extension__                                                         \
-    ({                                                                  \
-       int16x4_t c_ = (c);                                              \
-       int16x4_t b_ = (b);                                              \
-       int32x4_t a_ = (a);                                              \
-       int32x4_t result;                                                \
-       __asm__ ("smlal %0.4s,%2.4h,%3.h[%4]"                            \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_s16 (int32x4_t __acc, int16x4_t __a, int16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_lane_v4hi (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_s32(a, b, c, d)                                      \
-  __extension__                                                         \
-    ({                                                                  \
-       int32x2_t c_ = (c);                                              \
-       int32x2_t b_ = (b);                                              \
-       int64x2_t a_ = (a);                                              \
-       int64x2_t result;                                                \
-       __asm__ ("smlal %0.2d,%2.2s,%3.s[%4]"                            \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_s32 (int64x2_t __acc, int32x2_t __a, int32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_lane_v2si (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_u16(a, b, c, d)                                      \
-  __extension__                                                         \
-    ({                                                                  \
-       uint16x4_t c_ = (c);                                             \
-       uint16x4_t b_ = (b);                                             \
-       uint32x4_t a_ = (a);                                             \
-       uint32x4_t result;                                               \
-       __asm__ ("umlal %0.4s,%2.4h,%3.h[%4]"                            \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_u16 (uint32x4_t __acc, uint16x4_t __a, uint16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_lane_v4hi_uuuus (__acc, __a, __b, __c);
+}
 
-#define vmlal_lane_u32(a, b, c, d)                                      \
-  __extension__                                                         \
-    ({                                                                  \
-       uint32x2_t c_ = (c);                                             \
-       uint32x2_t b_ = (b);                                             \
-       uint64x2_t a_ = (a);                                             \
-       uint64x2_t result;                                               \
-       __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]"                          \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_lane_u32 (uint64x2_t __acc, uint32x2_t __a, uint32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_lane_v2si_uuuus (__acc, __a, __b, __c);
+}
 
-#define vmlal_laneq_s16(a, b, c, d)                                     \
-  __extension__                                                         \
-    ({                                                                  \
-       int16x8_t c_ = (c);                                              \
-       int16x4_t b_ = (b);                                              \
-       int32x4_t a_ = (a);                                              \
-       int32x4_t result;                                                \
-       __asm__ ("smlal %0.4s, %2.4h, %3.h[%4]"                          \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_laneq_s16 (int32x4_t __acc, int16x4_t __a, int16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_laneq_v4hi (__acc, __a, __b, __c);
+}
 
-#define vmlal_laneq_s32(a, b, c, d)                                     \
-  __extension__                                                         \
-    ({                                                                  \
-       int32x4_t c_ = (c);                                              \
-       int32x2_t b_ = (b);                                              \
-       int64x2_t a_ = (a);                                              \
-       int64x2_t result;                                                \
-       __asm__ ("smlal %0.2d, %2.2s, %3.s[%4]"                          \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_laneq_s32 (int64x2_t __acc, int32x2_t __a, int32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smlal_laneq_v2si (__acc, __a, __b, __c);
+}
 
-#define vmlal_laneq_u16(a, b, c, d)                                     \
-  __extension__                                                         \
-    ({                                                                  \
-       uint16x8_t c_ = (c);                                             \
-       uint16x4_t b_ = (b);                                             \
-       uint32x4_t a_ = (a);                                             \
-       uint32x4_t result;                                               \
-       __asm__ ("umlal %0.4s, %2.4h, %3.h[%4]"                          \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "x"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_laneq_u16 (uint32x4_t __acc, uint16x4_t __a, uint16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_laneq_v4hi_uuuus (__acc, __a, __b, __c);
+}
 
-#define vmlal_laneq_u32(a, b, c, d)                                     \
-  __extension__                                                         \
-    ({                                                                  \
-       uint32x4_t c_ = (c);                                             \
-       uint32x2_t b_ = (b);                                             \
-       uint64x2_t a_ = (a);                                             \
-       uint64x2_t result;                                               \
-       __asm__ ("umlal %0.2d, %2.2s, %3.s[%4]"                          \
-                : "=w"(result)                                          \
-                : "0"(a_), "w"(b_), "w"(c_), "i"(d)                     \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmlal_laneq_u32 (uint64x2_t __acc, uint32x2_t __a, uint32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umlal_laneq_v2si_uuuus (__acc, __a, __b, __c);
+}
 
 __extension__ extern __inline int32x4_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
@@ -9289,109 +9233,61 @@  vmull_high_u32 (uint32x4_t __a, uint32x4_t __b)
   return __builtin_aarch64_vec_widen_umult_hi_v4si_uuu (__a, __b);
 }
 
-#define vmull_lane_s16(a, b, c)                                         \
-  __extension__                                                         \
-    ({                                                                  \
-       int16x4_t b_ = (b);                                              \
-       int16x4_t a_ = (a);                                              \
-       int32x4_t result;                                                \
-       __asm__ ("smull %0.4s,%1.4h,%2.h[%3]"                            \
-                : "=w"(result)                                          \
-                : "w"(a_), "x"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_s16 (int16x4_t __a, int16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_lane_v4hi (__a, __b, __c);
+}
 
-#define vmull_lane_s32(a, b, c)                                         \
-  __extension__                                                         \
-    ({                                                                  \
-       int32x2_t b_ = (b);                                              \
-       int32x2_t a_ = (a);                                              \
-       int64x2_t result;                                                \
-       __asm__ ("smull %0.2d,%1.2s,%2.s[%3]"                            \
-                : "=w"(result)                                          \
-                : "w"(a_), "w"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_s32 (int32x2_t __a, int32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_lane_v2si (__a, __b, __c);
+}
 
-#define vmull_lane_u16(a, b, c)                                         \
-  __extension__                                                         \
-    ({                                                                  \
-       uint16x4_t b_ = (b);                                             \
-       uint16x4_t a_ = (a);                                             \
-       uint32x4_t result;                                               \
-       __asm__ ("umull %0.4s,%1.4h,%2.h[%3]"                            \
-                : "=w"(result)                                          \
-                : "w"(a_), "x"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_u16 (uint16x4_t __a, uint16x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_lane_v4hi_uuus (__a, __b, __c);
+}
 
-#define vmull_lane_u32(a, b, c)                                         \
-  __extension__                                                         \
-    ({                                                                  \
-       uint32x2_t b_ = (b);                                             \
-       uint32x2_t a_ = (a);                                             \
-       uint64x2_t result;                                               \
-       __asm__ ("umull %0.2d, %1.2s, %2.s[%3]"                          \
-                : "=w"(result)                                          \
-                : "w"(a_), "w"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_lane_u32 (uint32x2_t __a, uint32x2_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_lane_v2si_uuus (__a, __b, __c);
+}
 
-#define vmull_laneq_s16(a, b, c)                                        \
-  __extension__                                                         \
-    ({                                                                  \
-       int16x8_t b_ = (b);                                              \
-       int16x4_t a_ = (a);                                              \
-       int32x4_t result;                                                \
-       __asm__ ("smull %0.4s, %1.4h, %2.h[%3]"                          \
-                : "=w"(result)                                          \
-                : "w"(a_), "x"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_s16 (int16x4_t __a, int16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_laneq_v4hi (__a, __b, __c);
+}
 
-#define vmull_laneq_s32(a, b, c)                                        \
-  __extension__                                                         \
-    ({                                                                  \
-       int32x4_t b_ = (b);                                              \
-       int32x2_t a_ = (a);                                              \
-       int64x2_t result;                                                \
-       __asm__ ("smull %0.2d, %1.2s, %2.s[%3]"                          \
-                : "=w"(result)                                          \
-                : "w"(a_), "w"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline int64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_s32 (int32x2_t __a, int32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_smult_laneq_v2si (__a, __b, __c);
+}
 
-#define vmull_laneq_u16(a, b, c)                                        \
-  __extension__                                                         \
-    ({                                                                  \
-       uint16x8_t b_ = (b);                                             \
-       uint16x4_t a_ = (a);                                             \
-       uint32x4_t result;                                               \
-       __asm__ ("umull %0.4s, %1.4h, %2.h[%3]"                          \
-                : "=w"(result)                                          \
-                : "w"(a_), "x"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint32x4_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_u16 (uint16x4_t __a, uint16x8_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_laneq_v4hi_uuus (__a, __b, __c);
+}
 
-#define vmull_laneq_u32(a, b, c)                                        \
-  __extension__                                                         \
-    ({                                                                  \
-       uint32x4_t b_ = (b);                                             \
-       uint32x2_t a_ = (a);                                             \
-       uint64x2_t result;                                               \
-       __asm__ ("umull %0.2d, %1.2s, %2.s[%3]"                          \
-                : "=w"(result)                                          \
-                : "w"(a_), "w"(b_), "i"(c)                              \
-                : /* No clobbers */);                                   \
-       result;                                                          \
-     })
+__extension__ extern __inline uint64x2_t
+__attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
+vmull_laneq_u32 (uint32x2_t __a, uint32x4_t __b, const int __c)
+{
+  return __builtin_aarch64_vec_umult_laneq_v2si_uuus (__a, __b, __c);
+}
 
 __extension__ extern __inline int32x4_t
 __attribute__ ((__always_inline__, __gnu_inline__, __artificial__))
diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md
index ec1b92c5379f7c33446d0ac3556f6358fb7433d3..2f4b553a9a433773b222ce9f0bede3630ff0624c 100644
--- a/gcc/config/aarch64/iterators.md
+++ b/gcc/config/aarch64/iterators.md
@@ -980,6 +980,13 @@  (define_mode_attr Vtype [(V8QI "8b") (V16QI "16b")
 			 (V4SF "4s") (V2DF "2d")
 			 (V4HF "4h") (V8HF "8h")])
 
+;; Map mode to type used in widening multiplies.
+(define_mode_attr Vtype2 [(V4HI "4h") (V8HI "4h") (V2SI "2s") (V4SI "2s")])
+
+;; Map lane mode to name
+(define_mode_attr Qlane [(V4HI "_v4hi") (V8HI  "q_v4hi")
+			 (V2SI "_v2si") (V4SI  "q_v2si")])
+
 (define_mode_attr Vrevsuff [(V4HI "16") (V8HI "16") (V2SI "32")
                             (V4SI "32") (V2DI "64")])