[{"id":3188632,"web_url":"http://patchwork.ozlabs.org/comment/3188632/","msgid":"<mpt8r8rx5wm.fsf@arm.com>","list_archive_url":null,"date":"2023-09-27T10:32:57","subject":"Re: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","submitter":{"id":64746,"url":"http://patchwork.ozlabs.org/api/people/64746/","name":"Richard Sandiford","email":"richard.sandiford@arm.com"},"content":"Tamar Christina <tamar.christina@arm.com> writes:\n> Hi All,\n>\n> This adds a way to generate special sequences for creation of constants for\n> which we don't have single instruction sequences, which would have normally\n> led to a GP -> FP transfer or a literal load.\n>\n> The patch starts out by adding support for creating 1 << 63 using fneg (mov 0).\n>\n> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.\n>\n> Ok for master?\n>\n> Thanks,\n> Tamar\n>\n> gcc/ChangeLog:\n>\n> \tPR tree-optimization/109154\n> \t* config/aarch64/aarch64-protos.h (aarch64_simd_special_constant_p):\n> \tNew.\n> \t* config/aarch64/aarch64-simd.md (*aarch64_simd_mov<VQMOV:mode>): Add\n> \tnew codegen for special constants.\n> \t* config/aarch64/aarch64.cc (aarch64_extract_vec_duplicate_wide_int):\n> \tTake optional mode.\n> \t(aarch64_simd_special_constant_p): New.\n> \t* config/aarch64/aarch64.md (*movdi_aarch64): Add new codegen for\n> \tspecial constants.\n> \t* config/aarch64/constraints.md (Dx): New.\n>\n> gcc/testsuite/ChangeLog:\n>\n> \tPR tree-optimization/109154\n> \t* gcc.target/aarch64/fneg-abs_1.c: Updated.\n> \t* gcc.target/aarch64/fneg-abs_2.c: Updated.\n> \t* gcc.target/aarch64/fneg-abs_4.c: Updated.\n>\n> --- inline copy of patch -- \n> diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h\n> index 70303d6fd953e0c397b9138ede8858c2db2e53db..2af9f6a774c20268bf90756c17064bbff8f8ff87 100644\n> --- a/gcc/config/aarch64/aarch64-protos.h\n> +++ b/gcc/config/aarch64/aarch64-protos.h\n> @@ -827,6 +827,7 @@ bool 
aarch64_sve_ptrue_svpattern_p (rtx, struct simd_immediate_info *);\n>  bool aarch64_simd_valid_immediate (rtx, struct simd_immediate_info *,\n>  \t\t\tenum simd_immediate_check w = AARCH64_CHECK_MOV);\n>  rtx aarch64_check_zero_based_sve_index_immediate (rtx);\n> +bool aarch64_simd_special_constant_p (rtx, rtx, machine_mode);\n>  bool aarch64_sve_index_immediate_p (rtx);\n>  bool aarch64_sve_arith_immediate_p (machine_mode, rtx, bool);\n>  bool aarch64_sve_sqadd_sqsub_immediate_p (machine_mode, rtx, bool);\n> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md\n> index 7b4d5a37a9795fefda785aaacc246918826ed0a2..63c802d942a186b5a94c66d2e83828a82a88ffa8 100644\n> --- a/gcc/config/aarch64/aarch64-simd.md\n> +++ b/gcc/config/aarch64/aarch64-simd.md\n> @@ -181,17 +181,28 @@ (define_insn_and_split \"*aarch64_simd_mov<VQMOV:mode>\"\n>       [?r , r ; multiple           , *   , 8] #\n>       [w  , Dn; neon_move<q>       , simd, 4] << aarch64_output_simd_mov_immediate (operands[1], 128);\n>       [w  , Dz; fmov               , *   , 4] fmov\\t%d0, xzr\n> +     [w  , Dx; neon_move          , simd, 8] #\n>    }\n>    \"&& reload_completed\n> -   && !(FP_REGNUM_P (REGNO (operands[0]))\n> -\t&& FP_REGNUM_P (REGNO (operands[1])))\"\n> +   && (!(FP_REGNUM_P (REGNO (operands[0]))\n> +\t && FP_REGNUM_P (REGNO (operands[1])))\n> +       || (aarch64_simd_special_constant_p (operands[1], NULL_RTX, <MODE>mode)\n> +\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n\nUnless I'm missing something, the new test is already covered by the:\n\n  !(FP_REGNUM_P (REGNO (operands[0]))\n    && FP_REGNUM_P (REGNO (operands[1]))\n\n>    [(const_int 0)]\n>    {\n>      if (GP_REGNUM_P (REGNO (operands[0]))\n>  \t&& GP_REGNUM_P (REGNO (operands[1])))\n>        aarch64_simd_emit_reg_reg_move (operands, DImode, 2);\n>      else\n> -      aarch64_split_simd_move (operands[0], operands[1]);\n> +      {\n> +\tif (FP_REGNUM_P (REGNO (operands[0]))\n> +\t    && <MODE>mode == 
V2DImode\n> +\t    && aarch64_simd_special_constant_p (operands[1], operands[0],\n> +\t\t\t\t\t\t<MODE>mode))\n> +\t  ;\n\nThis looked odd at first, since _p functions don't normally have\nside effects.  So it looked like this case was expanding to nothing.\n\nHow about renaming aarch64_simd_special_constant_p to\naarch64_maybe_generate_simd_constant, and then making\naarch64_simd_special_constant_p a wrapper that passes the NULL_RTX?\nMinor nit, but most other functions put the destination first.\n\n> +\telse\n> +\t  aarch64_split_simd_move (operands[0], operands[1]);\n> +      }\n>      DONE;\n>    }\n>  )\n> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc\n> index 3739a44bfd909b69a76529cc6b0ae2f01d6fb36e..6e7ee446f1b31ee8bcf121c97c1c6fa87725bf42 100644\n> --- a/gcc/config/aarch64/aarch64.cc\n> +++ b/gcc/config/aarch64/aarch64.cc\n> @@ -11799,16 +11799,18 @@ aarch64_get_condition_code_1 (machine_mode mode, enum rtx_code comp_code)\n>  /* Return true if X is a CONST_INT, CONST_WIDE_INT or a constant vector\n>     duplicate of such constants.  If so, store in RET_WI the wide_int\n>     representation of the constant paired with the inner mode of the vector mode\n> -   or TImode for scalar X constants.  */\n> +   or SMODE for scalar X constants.  If SMODE is not provided then TImode is\n> +   used.  */\n\ns/SMODE/MODE/, based on the code.\n\n>  \n>  static bool\n> -aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi)\n> +aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi,\n> +\t\t\t\t\tscalar_mode mode = TImode)\n>  {\n>    rtx elt = unwrap_const_vec_duplicate (x);\n>    if (!CONST_SCALAR_INT_P (elt))\n>      return false;\n>    scalar_mode smode\n> -    = CONST_SCALAR_INT_P (x) ? TImode : GET_MODE_INNER (GET_MODE (x));\n> +    = CONST_SCALAR_INT_P (x) ? 
mode : GET_MODE_INNER (GET_MODE (x));\n>  *ret_wi = rtx_mode_t (elt, smode);\n>  return true;\n>  }\n> @@ -11857,6 +11859,43 @@ aarch64_const_vec_all_same_in_range_p (rtx x,\n>  \t  && IN_RANGE (INTVAL (elt), minval, maxval));\n>  }\n>  \n> +/* Some constants can't be made using normal mov instructions in Advanced SIMD\n> +   but we can still create them in various ways.  If the constant in VAL can be\n> +   created using alternate methods then if TARGET then return true and set\n> +   TARGET to the rtx for the sequence, otherwise return false if sequence is\n> +   not possible.  */\n\nThe return true bit applies regardless of TARGET.\n\n> +\n> +bool\n> +aarch64_simd_special_constant_p (rtx val, rtx target, machine_mode mode)\n> +{\n> +  wide_int wval;\n> +  machine_mode tmode = GET_MODE (val);\n> +  auto smode = GET_MODE_INNER (tmode != VOIDmode ? tmode : mode);\n\nCan we not use \"mode\" unconditionally?\n\n> +  if (!aarch64_extract_vec_duplicate_wide_int (val, &wval, smode))\n> +    return false;\n> +\n> +  /* For Advanced SIMD we can create an integer with only the top bit set\n> +     using fneg (0.0f).  */\n> +  if (TARGET_SIMD\n> +      && !TARGET_SVE\n> +      && smode == DImode\n> +      && wi::only_sign_bit_p (wval))\n> +    {\n> +      if (!target)\n> +\treturn true;\n> +\n> +      /* Use the same base type as aarch64_gen_shareable_zero.  */\n> +      rtx zero = CONST0_RTX (V4SImode);\n> +      emit_move_insn (target, lowpart_subreg (mode, zero, V4SImode));\n\nThe lowpart_subreg should simplify this back into CONST0_RTX (mode),\nmaking it no different from:\n\n    emit_move_insn (target, CONST0_RTX (mode));\n\nIf the intention is to share zeros between modes (sounds good!),\nthen I think the subreg needs to be on the lhs instead.\n\n> +      rtx neg = lowpart_subreg (V2DFmode, target, mode);\n> +      emit_insn (gen_negv2df2 (neg, lowpart_subreg (V2DFmode, target, mode)));\n\nThe rhs seems simpler as copy_rtx (neg).  
(Even the copy_rtx shouldn't\nbe needed after RA, but it's probably more future-proof to keep it.)\n\n> +      emit_move_insn (target, lowpart_subreg (mode, neg, V2DFmode));\n\nThis shouldn't be needed, since neg is already a reference to target.\n\nOverall, looks like a nice change/framework.\n\nThanks,\nRichard\n\n> +      return true;\n> +    }\n> +\n> +  return false;\n> +}\n> +\n>  bool\n>  aarch64_const_vec_all_same_int_p (rtx x, HOST_WIDE_INT val)\n>  {\n> diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md\n> index 634cfd33b41d0f945ca00d8efc9eff1ede490544..b51f979dba12b726bff0c1109b75c6d2c7ae41ab 100644\n> --- a/gcc/config/aarch64/aarch64.md\n> +++ b/gcc/config/aarch64/aarch64.md\n> @@ -1340,13 +1340,21 @@ (define_insn_and_split \"*movdi_aarch64\"\n>       [r, w  ; f_mrc    , fp  , 4] fmov\\t%x0, %d1\n>       [w, w  ; fmov     , fp  , 4] fmov\\t%d0, %d1\n>       [w, Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);\n> +     [w, Dx ; neon_move, simd, 8] #\n>    }\n> -  \"CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)\n> -   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))\"\n> +  \"CONST_INT_P (operands[1])\n> +   && REG_P (operands[0])\n> +   && ((!aarch64_move_imm (INTVAL (operands[1]), DImode)\n> +\t&& GP_REGNUM_P (REGNO (operands[0])))\n> +       || (aarch64_simd_special_constant_p (operands[1], NULL_RTX, DImode)\n> +\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n>    [(const_int 0)]\n>    {\n> +    if (GP_REGNUM_P (REGNO (operands[0])))\n>        aarch64_expand_mov_immediate (operands[0], operands[1]);\n> -      DONE;\n> +    else\n> +      aarch64_simd_special_constant_p (operands[1], operands[0], DImode);\n> +    DONE;\n>    }\n>  )\n>  \n> diff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md\n> index 371a00827d84d8ea4a06ba2b00a761d3b179ae90..11cf5a0d16b3364a7a4d0b2a2e5bb33063151479 100644\n> --- 
a/gcc/config/aarch64/constraints.md\n> +++ b/gcc/config/aarch64/constraints.md\n> @@ -488,6 +488,14 @@ (define_constraint \"Dr\"\n>   (and (match_code \"const,const_vector\")\n>        (match_test \"aarch64_simd_shift_imm_p (op, GET_MODE (op),\n>  \t\t\t\t\t\t false)\")))\n> +\n> +(define_constraint \"Dx\"\n> +  \"@internal\n> + A constraint that matches a vector of 64-bit immediates which we don't have a\n> + single instruction to create but that we can create in creative ways.\"\n> + (and (match_code \"const_int,const,const_vector\")\n> +      (match_test \"aarch64_simd_special_constant_p (op, NULL_RTX, DImode)\")))\n> +\n>  (define_constraint \"Dz\"\n>    \"@internal\n>   A constraint that matches a vector of immediate zero.\"\n> diff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> index f823013c3ddf6b3a266c3abfcbf2642fc2a75fa6..43c37e21b50e13c09b8d6850686e88465cd8482a 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> @@ -28,8 +28,8 @@ float32x4_t t2 (float32x4_t a)\n>  \n>  /*\n>  ** t3:\n> -**\tadrp\tx0, .LC[0-9]+\n> -**\tldr\tq[0-9]+, \\[x0, #:lo12:.LC0\\]\n> +**\tmovi\tv[0-9]+.4s, 0\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b\n>  **\tret\n>  */\n> diff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> index 141121176b309e4b2aa413dc55271a6e3c93d5e1..fb14ec3e2210e0feeff80f2410d777d3046a9f78 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> @@ -20,8 +20,8 @@ float32_t f1 (float32_t a)\n>  \n>  /*\n>  ** f2:\n> -**\tmov\tx0, -9223372036854775808\n> -**\tfmov\td[0-9]+, x0\n> +**\tfmov\td[0-9]+, xzr\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n>  **\tret\n>  */\n> @@ -29,3 +29,4 @@ float64_t f2 (float64_t a)\n>  {\n>    return -fabs 
(a);\n>  }\n> +\n> diff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> index 10879dea74462d34b26160eeb0bd54ead063166b..4ea0105f6c0a9756070bcc60d34f142f53d8242c 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> @@ -8,8 +8,8 @@\n>  \n>  /*\n>  ** negabs:\n> -**\tmov\tx0, -9223372036854775808\n> -**\tfmov\td[0-9]+, x0\n> +**\tfmov\td[0-9]+, xzr\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n>  **\tret\n>  */","headers":{"Return-Path":"<gcc-patches-bounces+incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=2620:52:3:1:0:246e:9693:128c; helo=server2.sourceware.org;\n envelope-from=gcc-patches-bounces+incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org;\n dmarc=pass (p=none dis=none) header.from=arm.com","sourceware.org; spf=pass smtp.mailfrom=arm.com"],"Received":["from server2.sourceware.org (server2.sourceware.org\n [IPv6:2620:52:3:1:0:246e:9693:128c])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4RwXy23hhJz1yp0\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 27 Sep 2023 20:33:14 +1000 (AEST)","from server2.sourceware.org (localhost [IPv6:::1])\n\tby sourceware.org (Postfix) with ESMTP id 495D83861849\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 27 Sep 2023 10:33:12 +0000 (GMT)","from foss.arm.com (foss.arm.com [217.140.110.172])\n by sourceware.org (Postfix) with ESMTP id 3C59F3858436\n for 
<gcc-patches@gcc.gnu.org>; Wed, 27 Sep 2023 10:33:00 +0000 (GMT)","from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])\n by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0FCF31FB;\n Wed, 27 Sep 2023 03:33:38 -0700 (PDT)","from localhost (e121540-lin.manchester.arm.com [10.32.110.72])\n by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E6EBE3F59C;\n Wed, 27 Sep 2023 03:32:58 -0700 (PDT)"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 3C59F3858436","From":"Richard Sandiford <richard.sandiford@arm.com>","To":"Tamar Christina <tamar.christina@arm.com>","Mail-Followup-To":"Tamar Christina <tamar.christina@arm.com>,\n gcc-patches@gcc.gnu.org, nd@arm.com, Richard.Earnshaw@arm.com,\n Marcus.Shawcroft@arm.com, Kyrylo.Tkachov@arm.com, richard.sandiford@arm.com","Cc":"gcc-patches@gcc.gnu.org, nd@arm.com, Richard.Earnshaw@arm.com,\n Marcus.Shawcroft@arm.com, Kyrylo.Tkachov@arm.com","Subject":"Re: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","References":"<patch-17722-tamar@arm.com>","Date":"Wed, 27 Sep 2023 11:32:57 +0100","In-Reply-To":"<patch-17722-tamar@arm.com> (Tamar Christina's message of \"Wed,\n 27 Sep 2023 01:52:29 +0100\")","Message-ID":"<mpt8r8rx5wm.fsf@arm.com>","User-Agent":"Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)","MIME-Version":"1.0","Content-Type":"text/plain","X-Spam-Status":"No, score=-24.5 required=5.0 tests=BAYES_00, GIT_PATCH_0,\n KAM_DMARC_NONE, KAM_DMARC_STATUS, KAM_LAZY_DOMAIN_SECURITY, KAM_LOTSOFHASH,\n KAM_SHORT, SPF_HELO_NONE, SPF_NONE,\n TXREP autolearn=ham autolearn_force=no version=3.4.6","X-Spam-Checker-Version":"SpamAssassin 3.4.6 (2021-04-09) on\n server2.sourceware.org","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n 
<mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Errors-To":"gcc-patches-bounces+incoming=patchwork.ozlabs.org@gcc.gnu.org"}},{"id":3193901,"web_url":"http://patchwork.ozlabs.org/comment/3193901/","msgid":"<VI1PR08MB53251FDBCDC1C958595E9F9CFFCAA@VI1PR08MB5325.eurprd08.prod.outlook.com>","list_archive_url":null,"date":"2023-10-05T18:18:15","subject":"RE: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","submitter":{"id":69689,"url":"http://patchwork.ozlabs.org/api/people/69689/","name":"Tamar Christina","email":"Tamar.Christina@arm.com"},"content":"Hi,\n\n> The lowpart_subreg should simplify this back into CONST0_RTX (mode),\n> making it no different from:\n> \n>     emit_move_insn (target, CONST0_RTX (mode));\n> \n> If the intention is to share zeros between modes (sounds good!), then I think\n> the subreg needs to be on the lhs instead.\n> \n> > +      rtx neg = lowpart_subreg (V2DFmode, target, mode);\n> > +      emit_insn (gen_negv2df2 (neg, lowpart_subreg (V2DFmode, target,\n> > + mode)));\n> \n> The rhs seems simpler as copy_rtx (neg).  
(Even the copy_rtx shouldn't be\n> needed after RA, but it's probably more future-proof to keep it.)\n> \n> > +      emit_move_insn (target, lowpart_subreg (mode, neg, V2DFmode));\n> \n> This shouldn't be needed, since neg is already a reference to target.\n> \n> Overall, looks like a nice change/framework.\n\nUpdated the patch, and in the process also realized this can be used for the\nvector variants:\n\nHi All,\n\nThis adds a way to generate special sequences for creation of constants for\nwhich we don't have single instruction sequences, which would have normally\nled to a GP -> FP transfer or a literal load.\n\nThe patch starts out by adding support for creating 1 << 63 using fneg (mov 0).\n\nBootstrapped and regtested on aarch64-none-linux-gnu with no issues.\n\nOk for master?\n\nThanks,\nTamar\n\ngcc/ChangeLog:\n\n\tPR tree-optimization/109154\n\t* config/aarch64/aarch64-protos.h (aarch64_simd_special_constant_p,\n\taarch64_maybe_generate_simd_constant): New.\n\t* config/aarch64/aarch64-simd.md (*aarch64_simd_mov<VQMOV:mode>,\n\t*aarch64_simd_mov<VDMOV:mode>): Add new codegen for special constants.\n\t* config/aarch64/aarch64.cc (aarch64_extract_vec_duplicate_wide_int):\n\tTake optional mode.\n\t(aarch64_simd_special_constant_p,\n\taarch64_maybe_generate_simd_constant): New.\n\t* config/aarch64/aarch64.md (*movdi_aarch64): Add new codegen for\n\tspecial constants.\n\t* config/aarch64/constraints.md (Dx): New.\n\ngcc/testsuite/ChangeLog:\n\n\tPR tree-optimization/109154\n\t* gcc.target/aarch64/fneg-abs_1.c: Updated.\n\t* gcc.target/aarch64/fneg-abs_2.c: Updated.\n\t* gcc.target/aarch64/fneg-abs_4.c: Updated.\n\t* gcc.target/aarch64/dbl_mov_immediate_1.c: Updated.\n\n--- inline copy of patch ---\n\ndiff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h\nindex 60a55f4bc1956786ea687fc7cad7ec9e4a84e1f0..36d6c688bc888a51a9de174bd3665aebe891b8b1 100644\n--- a/gcc/config/aarch64/aarch64-protos.h\n+++ b/gcc/config/aarch64/aarch64-protos.h\n@@ 
-831,6 +831,8 @@ bool aarch64_sve_ptrue_svpattern_p (rtx, struct simd_immediate_info *);\n bool aarch64_simd_valid_immediate (rtx, struct simd_immediate_info *,\n \t\t\tenum simd_immediate_check w = AARCH64_CHECK_MOV);\n rtx aarch64_check_zero_based_sve_index_immediate (rtx);\n+bool aarch64_maybe_generate_simd_constant (rtx, rtx, machine_mode);\n+bool aarch64_simd_special_constant_p (rtx, machine_mode);\n bool aarch64_sve_index_immediate_p (rtx);\n bool aarch64_sve_arith_immediate_p (machine_mode, rtx, bool);\n bool aarch64_sve_sqadd_sqsub_immediate_p (machine_mode, rtx, bool);\ndiff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md\nindex 81ff5bad03d598fa0d48df93d172a28bc0d1d92e..33eceb436584ff73c7271f93639f2246d1af19e0 100644\n--- a/gcc/config/aarch64/aarch64-simd.md\n+++ b/gcc/config/aarch64/aarch64-simd.md\n@@ -142,26 +142,35 @@ (define_insn \"aarch64_dup_lane_<vswap_width_name><mode>\"\n   [(set_attr \"type\" \"neon_dup<q>\")]\n )\n \n-(define_insn \"*aarch64_simd_mov<VDMOV:mode>\"\n+(define_insn_and_split \"*aarch64_simd_mov<VDMOV:mode>\"\n   [(set (match_operand:VDMOV 0 \"nonimmediate_operand\")\n \t(match_operand:VDMOV 1 \"general_operand\"))]\n   \"TARGET_FLOAT\n    && (register_operand (operands[0], <MODE>mode)\n        || aarch64_simd_reg_or_zero (operands[1], <MODE>mode))\"\n-  {@ [cons: =0, 1; attrs: type, arch]\n-     [w , m ; neon_load1_1reg<q> , *   ] ldr\\t%d0, %1\n-     [r , m ; load_8             , *   ] ldr\\t%x0, %1\n-     [m , Dz; store_8            , *   ] str\\txzr, %0\n-     [m , w ; neon_store1_1reg<q>, *   ] str\\t%d1, %0\n-     [m , r ; store_8            , *   ] str\\t%x1, %0\n-     [w , w ; neon_logic<q>      , simd] mov\\t%0.<Vbtype>, %1.<Vbtype>\n-     [w , w ; neon_logic<q>      , *   ] fmov\\t%d0, %d1\n-     [?r, w ; neon_to_gp<q>      , simd] umov\\t%0, %1.d[0]\n-     [?r, w ; neon_to_gp<q>      , *   ] fmov\\t%x0, %d1\n-     [?w, r ; f_mcr              , *   ] fmov\\t%d0, %1\n-     [?r, r ; mov_reg   
         , *   ] mov\\t%0, %1\n-     [w , Dn; neon_move<q>       , simd] << aarch64_output_simd_mov_immediate (operands[1], 64);\n-     [w , Dz; f_mcr              , *   ] fmov\\t%d0, xzr\n+  {@ [cons: =0, 1; attrs: type, arch, length]\n+     [w , m ; neon_load1_1reg<q> , *   , *] ldr\\t%d0, %1\n+     [r , m ; load_8             , *   , *] ldr\\t%x0, %1\n+     [m , Dz; store_8            , *   , *] str\\txzr, %0\n+     [m , w ; neon_store1_1reg<q>, *   , *] str\\t%d1, %0\n+     [m , r ; store_8            , *   , *] str\\t%x1, %0\n+     [w , w ; neon_logic<q>      , simd, *] mov\\t%0.<Vbtype>, %1.<Vbtype>\n+     [w , w ; neon_logic<q>      , *   , *] fmov\\t%d0, %d1\n+     [?r, w ; neon_to_gp<q>      , simd, *] umov\\t%0, %1.d[0]\n+     [?r, w ; neon_to_gp<q>      , *   , *] fmov\\t%x0, %d1\n+     [?w, r ; f_mcr              , *   , *] fmov\\t%d0, %1\n+     [?r, r ; mov_reg            , *   , *] mov\\t%0, %1\n+     [w , Dn; neon_move<q>       , simd, *] << aarch64_output_simd_mov_immediate (operands[1], 64);\n+     [w , Dz; f_mcr              , *   , *] fmov\\t%d0, xzr\n+     [w , Dx; neon_move          , simd, 8] #\n+  }\n+  \"CONST_INT_P (operands[1])\n+   && aarch64_simd_special_constant_p (operands[1], <MODE>mode)\n+   && FP_REGNUM_P (REGNO (operands[0]))\"\n+  [(const_int 0)]\n+  {\n+    aarch64_maybe_generate_simd_constant (operands[0], operands[1], <MODE>mode);\n+    DONE;\n   }\n )\n \n@@ -181,19 +190,30 @@ (define_insn_and_split \"*aarch64_simd_mov<VQMOV:mode>\"\n      [?r , r ; multiple           , *   , 8] #\n      [w  , Dn; neon_move<q>       , simd, 4] << aarch64_output_simd_mov_immediate (operands[1], 128);\n      [w  , Dz; fmov               , *   , 4] fmov\\t%d0, xzr\n+     [w  , Dx; neon_move          , simd, 8] #\n   }\n   \"&& reload_completed\n-   && (REG_P (operands[0])\n+   && ((REG_P (operands[0])\n \t&& REG_P (operands[1])\n \t&& !(FP_REGNUM_P (REGNO (operands[0]))\n-\t     && FP_REGNUM_P (REGNO (operands[1]))))\"\n+\t     && FP_REGNUM_P 
(REGNO (operands[1]))))\n+       || (aarch64_simd_special_constant_p (operands[1], <MODE>mode)\n+\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n   [(const_int 0)]\n   {\n     if (GP_REGNUM_P (REGNO (operands[0]))\n \t&& GP_REGNUM_P (REGNO (operands[1])))\n       aarch64_simd_emit_reg_reg_move (operands, DImode, 2);\n     else\n-      aarch64_split_simd_move (operands[0], operands[1]);\n+      {\n+\tif (FP_REGNUM_P (REGNO (operands[0]))\n+\t    && <MODE>mode == V2DImode\n+\t    && aarch64_maybe_generate_simd_constant (operands[0], operands[1],\n+\t\t\t\t\t\t     <MODE>mode))\n+\t  ;\n+\telse\n+\t  aarch64_split_simd_move (operands[0], operands[1]);\n+      }\n     DONE;\n   }\n )\ndiff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc\nindex 9fbfc548a891f5d11940c6fd3c49a14bfbdec886..c5cf42f7801b291754840dcc5b304577e8e0d391 100644\n--- a/gcc/config/aarch64/aarch64.cc\n+++ b/gcc/config/aarch64/aarch64.cc\n@@ -11873,16 +11873,18 @@ aarch64_get_condition_code_1 (machine_mode mode, enum rtx_code comp_code)\n /* Return true if X is a CONST_INT, CONST_WIDE_INT or a constant vector\n    duplicate of such constants.  If so, store in RET_WI the wide_int\n    representation of the constant paired with the inner mode of the vector mode\n-   or TImode for scalar X constants.  */\n+   or MODE for scalar X constants.  If MODE is not provided then TImode is\n+   used.  */\n \n static bool\n-aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi)\n+aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi,\n+\t\t\t\t\tscalar_mode mode = TImode)\n {\n   rtx elt = unwrap_const_vec_duplicate (x);\n   if (!CONST_SCALAR_INT_P (elt))\n     return false;\n   scalar_mode smode\n-    = CONST_SCALAR_INT_P (x) ? TImode : GET_MODE_INNER (GET_MODE (x));\n+    = CONST_SCALAR_INT_P (x) ? 
mode : GET_MODE_INNER (GET_MODE (x));\n   *ret_wi = rtx_mode_t (elt, smode);\n   return true;\n }\n@@ -11931,6 +11933,49 @@ aarch64_const_vec_all_same_in_range_p (rtx x,\n \t  && IN_RANGE (INTVAL (elt), minval, maxval));\n }\n \n+/* Some constants can't be made using normal mov instructions in Advanced SIMD\n+   but we can still create them in various ways.  If the constant in VAL can be\n+   created using alternate methods then if possible then return true and\n+   additionally set TARGET to the rtx for the sequence if TARGET is not NULL.\n+   Otherwise return false if sequence is not possible.  */\n+\n+bool\n+aarch64_maybe_generate_simd_constant (rtx target, rtx val, machine_mode mode)\n+{\n+  wide_int wval;\n+  auto smode = GET_MODE_INNER (mode);\n+  if (!aarch64_extract_vec_duplicate_wide_int (val, &wval, smode))\n+    return false;\n+\n+  /* For Advanced SIMD we can create an integer with only the top bit set\n+     using fneg (0.0f).  */\n+  if (TARGET_SIMD\n+      && !TARGET_SVE\n+      && smode == DImode\n+      && wi::only_sign_bit_p (wval))\n+    {\n+      if (!target)\n+\treturn true;\n+\n+      /* Use the same base type as aarch64_gen_shareable_zero.  */\n+      rtx zero = CONST0_RTX (V4SImode);\n+      emit_move_insn (lowpart_subreg (V4SImode, target, mode), zero);\n+      rtx neg = lowpart_subreg (V2DFmode, target, mode);\n+      emit_insn (gen_negv2df2 (neg, copy_rtx (neg)));\n+      return true;\n+    }\n+\n+  return false;\n+}\n+\n+/* Check if the value in VAL with mode MODE can be created using special\n+   instruction sequences.  
*/\n+\n+bool aarch64_simd_special_constant_p (rtx val, machine_mode mode)\n+{\n+  return aarch64_maybe_generate_simd_constant (NULL_RTX, val, mode);\n+}\n+\n bool\n aarch64_const_vec_all_same_int_p (rtx x, HOST_WIDE_INT val)\n {\ndiff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md\nindex 32c7adc89281b249b52ecedf2f1678749c289d18..6f7a6cd1830e5b7cdb3eab76f3143964278a8561 100644\n--- a/gcc/config/aarch64/aarch64.md\n+++ b/gcc/config/aarch64/aarch64.md\n@@ -1341,13 +1341,21 @@ (define_insn_and_split \"*movdi_aarch64\"\n      [r, w  ; f_mrc    , fp  , 4] fmov\\t%x0, %d1\n      [w, w  ; fmov     , fp  , 4] fmov\\t%d0, %d1\n      [w, Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);\n-  }\n-  \"CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)\n-   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))\"\n+     [w, Dx ; neon_move, simd, 8] #\n+  }\n+  \"CONST_INT_P (operands[1])\n+   && REG_P (operands[0])\n+   && ((!aarch64_move_imm (INTVAL (operands[1]), DImode)\n+\t&& GP_REGNUM_P (REGNO (operands[0])))\n+       || (aarch64_simd_special_constant_p (operands[1], DImode)\n+\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n   [(const_int 0)]\n   {\n+    if (GP_REGNUM_P (REGNO (operands[0])))\n       aarch64_expand_mov_immediate (operands[0], operands[1]);\n-      DONE;\n+    else\n+      aarch64_maybe_generate_simd_constant (operands[0], operands[1], DImode);\n+    DONE;\n   }\n )\n \ndiff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md\nindex 371a00827d84d8ea4a06ba2b00a761d3b179ae90..b3922bcb9a8b362c995c96c6d1c6eef034990251 100644\n--- a/gcc/config/aarch64/constraints.md\n+++ b/gcc/config/aarch64/constraints.md\n@@ -488,6 +488,14 @@ (define_constraint \"Dr\"\n  (and (match_code \"const,const_vector\")\n       (match_test \"aarch64_simd_shift_imm_p (op, GET_MODE (op),\n \t\t\t\t\t\t false)\")))\n+\n+(define_constraint \"Dx\"\n+  \"@internal\n+ A 
constraint that matches a vector of 64-bit immediates which we don't have a\n+ single instruction to create but that we can create in creative ways.\"\n+ (and (match_code \"const_int,const,const_vector\")\n+      (match_test \"aarch64_simd_special_constant_p (op, DImode)\")))\n+\n (define_constraint \"Dz\"\n   \"@internal\n  A constraint that matches a vector of immediate zero.\"\ndiff --git a/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c b/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\nindex ba6a230457ba7a86f1939665fe9177ecdb45f935..fb9088e9d2849c0ea10a8741795181a0543c3cb2 100644\n--- a/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\n+++ b/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\n@@ -48,6 +48,8 @@ double d4(void)\n \n /* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, 25838523252736\"       1 } } */\n /* { dg-final { scan-assembler-times \"movk\\tx\\[0-9\\]+, 0x40fe, lsl 48\"      1 } } */\n-/* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, -9223372036854775808\" 1 } } */\n-/* { dg-final { scan-assembler-times \"fmov\\td\\[0-9\\]+, x\\[0-9\\]+\"           2 } } */\n+/* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, -9223372036854775808\" 0 } } */\n+/* { dg-final { scan-assembler-times {movi\\tv[0-9]+.2d, #0} 1 } } */\n+/* { dg-final { scan-assembler-times {fneg\\tv[0-9]+.2d, v[0-9]+.2d} 1 } } */\n+/* { dg-final { scan-assembler-times \"fmov\\td\\[0-9\\]+, x\\[0-9\\]+\"           1 } } */\n \ndiff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\nindex f823013c3ddf6b3a266c3abfcbf2642fc2a75fa6..43c37e21b50e13c09b8d6850686e88465cd8482a 100644\n--- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n+++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n@@ -28,8 +28,8 @@ float32x4_t t2 (float32x4_t a)\n \n /*\n ** t3:\n-**\tadrp\tx0, .LC[0-9]+\n-**\tldr\tq[0-9]+, \\[x0, #:lo12:.LC0\\]\n+**\tmovi\tv[0-9]+.4s, 0\n+**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n 
**\torr\tv[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b\n **\tret\n */\ndiff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\nindex 141121176b309e4b2aa413dc55271a6e3c93d5e1..fb14ec3e2210e0feeff80f2410d777d3046a9f78 100644\n--- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n+++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n@@ -20,8 +20,8 @@ float32_t f1 (float32_t a)\n \n /*\n ** f2:\n-**\tmov\tx0, -9223372036854775808\n-**\tfmov\td[0-9]+, x0\n+**\tfmov\td[0-9]+, xzr\n+**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n **\tret\n */\n@@ -29,3 +29,4 @@ float64_t f2 (float64_t a)\n {\n   return -fabs (a);\n }\n+\ndiff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\nindex 10879dea74462d34b26160eeb0bd54ead063166b..4ea0105f6c0a9756070bcc60d34f142f53d8242c 100644\n--- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n+++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n@@ -8,8 +8,8 @@\n \n /*\n ** negabs:\n-**\tmov\tx0, -9223372036854775808\n-**\tfmov\td[0-9]+, x0\n+**\tfmov\td[0-9]+, xzr\n+**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n **\tret\n */","headers":{"From":"Tamar Christina <Tamar.Christina@arm.com>","To":"Richard Sandiford <Richard.Sandiford@arm.com>","CC":"\"gcc-patches@gcc.gnu.org\" <gcc-patches@gcc.gnu.org>, nd <nd@arm.com>,\n Richard Earnshaw <Richard.Earnshaw@arm.com>, Marcus Shawcroft <Marcus.Shawcroft@arm.com>, Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>","Subject":"RE: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","Date":"Thu, 5 Oct 2023 18:18:15 +0000","Message-ID":"\n <VI1PR08MB53251FDBCDC1C958595E9F9CFFCAA@VI1PR08MB5325.eurprd08.prod.outlook.com>","References":"<patch-17722-tamar@arm.com> <mpt8r8rx5wm.fsf@arm.com>","In-Reply-To":"<mpt8r8rx5wm.fsf@arm.com>"}},
{"id":3193963,"web_url":"http://patchwork.ozlabs.org/comment/3193963/","msgid":"<mpth6n4n96y.fsf@arm.com>","list_archive_url":null,"date":"2023-10-05T19:47:01","subject":"Re: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","submitter":{"id":64746,"url":"http://patchwork.ozlabs.org/api/people/64746/","name":"Richard Sandiford","email":"richard.sandiford@arm.com"},"content":"Tamar Christina <Tamar.Christina@arm.com> writes:\n> Hi,\n>\n>> The lowpart_subreg should simplify this back into CONST0_RTX (mode),\n>> making it no different from:\n>> \n>>     emit_move_insn (target, CONST0_RTX (mode));\n>> \n>> If the 
intention is to share zeros between modes (sounds good!), then I think\n>> the subreg needs to be on the lhs instead.\n>> \n>> > +      rtx neg = lowpart_subreg (V2DFmode, target, mode);\n>> > +      emit_insn (gen_negv2df2 (neg, lowpart_subreg (V2DFmode, target,\n>> > + mode)));\n>> \n>> The rhs seems simpler as copy_rtx (neg).  (Even the copy_rtx shouldn't be\n>> needed after RA, but it's probably more future-proof to keep it.)\n>> \n>> > +      emit_move_insn (target, lowpart_subreg (mode, neg, V2DFmode));\n>> \n>> This shouldn't be needed, since neg is already a reference to target.\n>> \n>> Overall, looks like a nice change/framework.\n>\n> Updated the patch, and in the process also realized this can be used for the\n> vector variants:\n>\n> Hi All,\n>\n> This adds a way to generate special sequences for creation of constants for\n> which we don't have single-instruction sequences, which would normally have\n> led to a GP -> FP transfer or a literal load.\n>\n> The patch starts out by adding support for creating 1 << 63 using fneg (mov 0).\n>\n> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.\n>\n> Ok for master?\n>\n> Thanks,\n> Tamar\n>\n> gcc/ChangeLog:\n>\n> \tPR tree-optimization/109154\n> \t* config/aarch64/aarch64-protos.h (aarch64_simd_special_constant_p,\n> \taarch64_maybe_generate_simd_constant): New.\n> \t* config/aarch64/aarch64-simd.md (*aarch64_simd_mov<VQMOV:mode>,\n> \t*aarch64_simd_mov<VDMOV:mode>): Add new codegen for special constants.\n> \t* config/aarch64/aarch64.cc (aarch64_extract_vec_duplicate_wide_int):\n> \tTake optional mode.\n> \t(aarch64_simd_special_constant_p,\n> \taarch64_maybe_generate_simd_constant): New.\n> \t* config/aarch64/aarch64.md (*movdi_aarch64): Add new codegen for\n> \tspecial constants.\n> \t* config/aarch64/constraints.md (Dx): New.\n>\n> gcc/testsuite/ChangeLog:\n>\n> \tPR tree-optimization/109154\n> \t* gcc.target/aarch64/fneg-abs_1.c: Updated.\n> \t* gcc.target/aarch64/fneg-abs_2.c: Updated.\n> 
\t* gcc.target/aarch64/fneg-abs_4.c: Updated.\n> \t* gcc.target/aarch64/dbl_mov_immediate_1.c: Updated.\n>\n> --- inline copy of patch ---\n>\n> diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h\n> index 60a55f4bc1956786ea687fc7cad7ec9e4a84e1f0..36d6c688bc888a51a9de174bd3665aebe891b8b1 100644\n> --- a/gcc/config/aarch64/aarch64-protos.h\n> +++ b/gcc/config/aarch64/aarch64-protos.h\n> @@ -831,6 +831,8 @@ bool aarch64_sve_ptrue_svpattern_p (rtx, struct simd_immediate_info *);\n>  bool aarch64_simd_valid_immediate (rtx, struct simd_immediate_info *,\n>  \t\t\tenum simd_immediate_check w = AARCH64_CHECK_MOV);\n>  rtx aarch64_check_zero_based_sve_index_immediate (rtx);\n> +bool aarch64_maybe_generate_simd_constant (rtx, rtx, machine_mode);\n> +bool aarch64_simd_special_constant_p (rtx, machine_mode);\n>  bool aarch64_sve_index_immediate_p (rtx);\n>  bool aarch64_sve_arith_immediate_p (machine_mode, rtx, bool);\n>  bool aarch64_sve_sqadd_sqsub_immediate_p (machine_mode, rtx, bool);\n> diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md\n> index 81ff5bad03d598fa0d48df93d172a28bc0d1d92e..33eceb436584ff73c7271f93639f2246d1af19e0 100644\n> --- a/gcc/config/aarch64/aarch64-simd.md\n> +++ b/gcc/config/aarch64/aarch64-simd.md\n> @@ -142,26 +142,35 @@ (define_insn \"aarch64_dup_lane_<vswap_width_name><mode>\"\n>    [(set_attr \"type\" \"neon_dup<q>\")]\n>  )\n>  \n> -(define_insn \"*aarch64_simd_mov<VDMOV:mode>\"\n> +(define_insn_and_split \"*aarch64_simd_mov<VDMOV:mode>\"\n>    [(set (match_operand:VDMOV 0 \"nonimmediate_operand\")\n>  \t(match_operand:VDMOV 1 \"general_operand\"))]\n>    \"TARGET_FLOAT\n>     && (register_operand (operands[0], <MODE>mode)\n>         || aarch64_simd_reg_or_zero (operands[1], <MODE>mode))\"\n> -  {@ [cons: =0, 1; attrs: type, arch]\n> -     [w , m ; neon_load1_1reg<q> , *   ] ldr\\t%d0, %1\n> -     [r , m ; load_8             , *   ] ldr\\t%x0, %1\n> -     [m , Dz; store_8     
       , *   ] str\\txzr, %0\n> -     [m , w ; neon_store1_1reg<q>, *   ] str\\t%d1, %0\n> -     [m , r ; store_8            , *   ] str\\t%x1, %0\n> -     [w , w ; neon_logic<q>      , simd] mov\\t%0.<Vbtype>, %1.<Vbtype>\n> -     [w , w ; neon_logic<q>      , *   ] fmov\\t%d0, %d1\n> -     [?r, w ; neon_to_gp<q>      , simd] umov\\t%0, %1.d[0]\n> -     [?r, w ; neon_to_gp<q>      , *   ] fmov\\t%x0, %d1\n> -     [?w, r ; f_mcr              , *   ] fmov\\t%d0, %1\n> -     [?r, r ; mov_reg            , *   ] mov\\t%0, %1\n> -     [w , Dn; neon_move<q>       , simd] << aarch64_output_simd_mov_immediate (operands[1], 64);\n> -     [w , Dz; f_mcr              , *   ] fmov\\t%d0, xzr\n> +  {@ [cons: =0, 1; attrs: type, arch, length]\n> +     [w , m ; neon_load1_1reg<q> , *   , *] ldr\\t%d0, %1\n> +     [r , m ; load_8             , *   , *] ldr\\t%x0, %1\n> +     [m , Dz; store_8            , *   , *] str\\txzr, %0\n> +     [m , w ; neon_store1_1reg<q>, *   , *] str\\t%d1, %0\n> +     [m , r ; store_8            , *   , *] str\\t%x1, %0\n> +     [w , w ; neon_logic<q>      , simd, *] mov\\t%0.<Vbtype>, %1.<Vbtype>\n> +     [w , w ; neon_logic<q>      , *   , *] fmov\\t%d0, %d1\n> +     [?r, w ; neon_to_gp<q>      , simd, *] umov\\t%0, %1.d[0]\n> +     [?r, w ; neon_to_gp<q>      , *   , *] fmov\\t%x0, %d1\n> +     [?w, r ; f_mcr              , *   , *] fmov\\t%d0, %1\n> +     [?r, r ; mov_reg            , *   , *] mov\\t%0, %1\n> +     [w , Dn; neon_move<q>       , simd, *] << aarch64_output_simd_mov_immediate (operands[1], 64);\n> +     [w , Dz; f_mcr              , *   , *] fmov\\t%d0, xzr\n> +     [w , Dx; neon_move          , simd, 8] #\n> +  }\n> +  \"CONST_INT_P (operands[1])\n> +   && aarch64_simd_special_constant_p (operands[1], <MODE>mode)\n> +   && FP_REGNUM_P (REGNO (operands[0]))\"\n> +  [(const_int 0)]\n> +  {\n> +    aarch64_maybe_generate_simd_constant (operands[0], operands[1], <MODE>mode);\n> +    DONE;\n>    }\n>  )\n>  \n> @@ -181,19 +190,30 @@ 
(define_insn_and_split \"*aarch64_simd_mov<VQMOV:mode>\"\n>       [?r , r ; multiple           , *   , 8] #\n>       [w  , Dn; neon_move<q>       , simd, 4] << aarch64_output_simd_mov_immediate (operands[1], 128);\n>       [w  , Dz; fmov               , *   , 4] fmov\\t%d0, xzr\n> +     [w  , Dx; neon_move          , simd, 8] #\n>    }\n>    \"&& reload_completed\n> -   && (REG_P (operands[0])\n> +   && ((REG_P (operands[0])\n>  \t&& REG_P (operands[1])\n>  \t&& !(FP_REGNUM_P (REGNO (operands[0]))\n> -\t     && FP_REGNUM_P (REGNO (operands[1]))))\"\n> +\t     && FP_REGNUM_P (REGNO (operands[1]))))\n> +       || (aarch64_simd_special_constant_p (operands[1], <MODE>mode)\n> +\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n>    [(const_int 0)]\n>    {\n>      if (GP_REGNUM_P (REGNO (operands[0]))\n>  \t&& GP_REGNUM_P (REGNO (operands[1])))\n>        aarch64_simd_emit_reg_reg_move (operands, DImode, 2);\n>      else\n> -      aarch64_split_simd_move (operands[0], operands[1]);\n> +      {\n> +\tif (FP_REGNUM_P (REGNO (operands[0]))\n> +\t    && <MODE>mode == V2DImode\n> +\t    && aarch64_maybe_generate_simd_constant (operands[0], operands[1],\n> +\t\t\t\t\t\t     <MODE>mode))\n> +\t  ;\n> +\telse\n> +\t  aarch64_split_simd_move (operands[0], operands[1]);\n> +      }\n>      DONE;\n>    }\n>  )\n> diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc\n> index 9fbfc548a891f5d11940c6fd3c49a14bfbdec886..c5cf42f7801b291754840dcc5b304577e8e0d391 100644\n> --- a/gcc/config/aarch64/aarch64.cc\n> +++ b/gcc/config/aarch64/aarch64.cc\n> @@ -11873,16 +11873,18 @@ aarch64_get_condition_code_1 (machine_mode mode, enum rtx_code comp_code)\n>  /* Return true if X is a CONST_INT, CONST_WIDE_INT or a constant vector\n>     duplicate of such constants.  If so, store in RET_WI the wide_int\n>     representation of the constant paired with the inner mode of the vector mode\n> -   or TImode for scalar X constants.  */\n> +   or MODE for scalar X constants.  
If MODE is not provided then TImode is\n> +   used.  */\n>  \n>  static bool\n> -aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi)\n> +aarch64_extract_vec_duplicate_wide_int (rtx x, wide_int *ret_wi,\n> +\t\t\t\t\tscalar_mode mode = TImode)\n>  {\n>    rtx elt = unwrap_const_vec_duplicate (x);\n>    if (!CONST_SCALAR_INT_P (elt))\n>      return false;\n>    scalar_mode smode\n> -    = CONST_SCALAR_INT_P (x) ? TImode : GET_MODE_INNER (GET_MODE (x));\n> +    = CONST_SCALAR_INT_P (x) ? mode : GET_MODE_INNER (GET_MODE (x));\n>    *ret_wi = rtx_mode_t (elt, smode);\n>    return true;\n>  }\n> @@ -11931,6 +11933,49 @@ aarch64_const_vec_all_same_in_range_p (rtx x,\n>  \t  && IN_RANGE (INTVAL (elt), minval, maxval));\n>  }\n>  \n> +/* Some constants can't be made using normal mov instructions in Advanced SIMD\n> +   but we can still create them in various ways.  If the constant in VAL can be\n> +   created using such alternative methods, return true and, if TARGET is not\n> +   NULL, additionally emit a sequence that stores the constant in TARGET.\n> +   Otherwise return false.  */\n> +\n> +bool\n> +aarch64_maybe_generate_simd_constant (rtx target, rtx val, machine_mode mode)\n> +{\n> +  wide_int wval;\n> +  auto smode = GET_MODE_INNER (mode);\n> +  if (!aarch64_extract_vec_duplicate_wide_int (val, &wval, smode))\n> +    return false;\n> +\n> +  /* For Advanced SIMD we can create an integer with only the top bit set\n> +     using fneg (0.0f).  */\n> +  if (TARGET_SIMD\n> +      && !TARGET_SVE\n> +      && smode == DImode\n> +      && wi::only_sign_bit_p (wval))\n> +    {\n> +      if (!target)\n> +\treturn true;\n> +\n> +      /* Use the same base type as aarch64_gen_shareable_zero.  
*/\n> +      rtx zero = CONST0_RTX (V4SImode);\n> +      emit_move_insn (lowpart_subreg (V4SImode, target, mode), zero);\n> +      rtx neg = lowpart_subreg (V2DFmode, target, mode);\n> +      emit_insn (gen_negv2df2 (neg, copy_rtx (neg)));\n> +      return true;\n> +    }\n> +\n> +  return false;\n> +}\n> +\n> +/* Check if the value in VAL with mode MODE can be created using special\n> +   instruction sequences.  */\n> +\n> +bool aarch64_simd_special_constant_p (rtx val, machine_mode mode)\n\nNit: should be line break after \"bool\".\n\n> +{\n> +  return aarch64_maybe_generate_simd_constant (NULL_RTX, val, mode);\n> +}\n> +\n>  bool\n>  aarch64_const_vec_all_same_int_p (rtx x, HOST_WIDE_INT val)\n>  {\n> diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md\n> index 32c7adc89281b249b52ecedf2f1678749c289d18..6f7a6cd1830e5b7cdb3eab76f3143964278a8561 100644\n> --- a/gcc/config/aarch64/aarch64.md\n> +++ b/gcc/config/aarch64/aarch64.md\n> @@ -1341,13 +1341,21 @@ (define_insn_and_split \"*movdi_aarch64\"\n>       [r, w  ; f_mrc    , fp  , 4] fmov\\t%x0, %d1\n>       [w, w  ; fmov     , fp  , 4] fmov\\t%d0, %d1\n>       [w, Dd ; neon_move, simd, 4] << aarch64_output_scalar_simd_mov_immediate (operands[1], DImode);\n> -  }\n> -  \"CONST_INT_P (operands[1]) && !aarch64_move_imm (INTVAL (operands[1]), DImode)\n> -   && REG_P (operands[0]) && GP_REGNUM_P (REGNO (operands[0]))\"\n> +     [w, Dx ; neon_move, simd, 8] #\n> +  }\n> +  \"CONST_INT_P (operands[1])\n> +   && REG_P (operands[0])\n> +   && ((!aarch64_move_imm (INTVAL (operands[1]), DImode)\n> +\t&& GP_REGNUM_P (REGNO (operands[0])))\n> +       || (aarch64_simd_special_constant_p (operands[1], DImode)\n> +\t   && FP_REGNUM_P (REGNO (operands[0]))))\"\n>    [(const_int 0)]\n>    {\n> +    if (GP_REGNUM_P (REGNO (operands[0])))\n>        aarch64_expand_mov_immediate (operands[0], operands[1]);\n> -      DONE;\n> +    else\n> +      aarch64_maybe_generate_simd_constant (operands[0], operands[1], 
DImode);\n> +    DONE;\n>    }\n>  )\n>  \n> diff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md\n> index 371a00827d84d8ea4a06ba2b00a761d3b179ae90..b3922bcb9a8b362c995c96c6d1c6eef034990251 100644\n> --- a/gcc/config/aarch64/constraints.md\n> +++ b/gcc/config/aarch64/constraints.md\n> @@ -488,6 +488,14 @@ (define_constraint \"Dr\"\n>   (and (match_code \"const,const_vector\")\n>        (match_test \"aarch64_simd_shift_imm_p (op, GET_MODE (op),\n>  \t\t\t\t\t\t false)\")))\n> +\n> +(define_constraint \"Dx\"\n> +  \"@internal\n> + A constraint that matches a vector of 64-bit immediates which we don't have a\n> + single instruction to create but that we can create in creative ways.\"\n> + (and (match_code \"const_int,const,const_vector\")\n> +      (match_test \"aarch64_simd_special_constant_p (op, DImode)\")))\n\nSince this is used by vector iterators that span multiple modes,\nI suppose we should test the mode too:\n\n (and (match_code \"const_int,const,const_vector\")\n      (match_test \"GET_MODE_INNER (GET_MODE (op)) == DImode\n\t\t   || GET_MODE (op) == VOIDmode\")\n      (match_test \"aarch64_simd_special_constant_p (op, DImode)\")))\n\nAlternatively, and I think this is probably better than adding the\ntest above, we could have a separate constraint for vectors and\nkeep Dx for DImode scalars:\n\n(define_constraint \"Dx\"\n  \"@internal\n A constraint that matches a scalar 64-bit immediate which we don't have a\n single instruction to create but that we can create in creative ways.\"\n (and (match_code \"const_int\")\n      (match_test \"aarch64_simd_special_constant_p (op, DImode)\")))\n\n(define_constraint \"Dy\"\n  \"@internal\n Like Dx, but for a vector of immediates (of any mode).\"\n (and (match_code \"const_vector\")\n      (match_test \"aarch64_simd_special_constant_p\n                    (op, GET_MODE_INNER (GET_MODE (op)))\")))\n\n(No need for the \"const\", that's legacy from before the VLA const_vector\nencoding.)\n\nI 
think we would need a split like that if we ever wanted to extend\nthis to SImode scalars.\n\nOK with those changes, thanks.\n\nRichard\n\n> +\n>  (define_constraint \"Dz\"\n>    \"@internal\n>   A constraint that matches a vector of immediate zero.\"\n> diff --git a/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c b/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\n> index ba6a230457ba7a86f1939665fe9177ecdb45f935..fb9088e9d2849c0ea10a8741795181a0543c3cb2 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/dbl_mov_immediate_1.c\n> @@ -48,6 +48,8 @@ double d4(void)\n>  \n>  /* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, 25838523252736\"       1 } } */\n>  /* { dg-final { scan-assembler-times \"movk\\tx\\[0-9\\]+, 0x40fe, lsl 48\"      1 } } */\n> -/* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, -9223372036854775808\" 1 } } */\n> -/* { dg-final { scan-assembler-times \"fmov\\td\\[0-9\\]+, x\\[0-9\\]+\"           2 } } */\n> +/* { dg-final { scan-assembler-times \"mov\\tx\\[0-9\\]+, -9223372036854775808\" 0 } } */\n> +/* { dg-final { scan-assembler-times {movi\\tv[0-9]+.2d, #0} 1 } } */\n> +/* { dg-final { scan-assembler-times {fneg\\tv[0-9]+.2d, v[0-9]+.2d} 1 } } */\n> +/* { dg-final { scan-assembler-times \"fmov\\td\\[0-9\\]+, x\\[0-9\\]+\"           1 } } */\n>  \n> diff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> index f823013c3ddf6b3a266c3abfcbf2642fc2a75fa6..43c37e21b50e13c09b8d6850686e88465cd8482a 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_1.c\n> @@ -28,8 +28,8 @@ float32x4_t t2 (float32x4_t a)\n>  \n>  /*\n>  ** t3:\n> -**\tadrp\tx0, .LC[0-9]+\n> -**\tldr\tq[0-9]+, \\[x0, #:lo12:.LC0\\]\n> +**\tmovi\tv[0-9]+.4s, 0\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.16b, v[0-9]+.16b, v[0-9]+.16b\n>  **\tret\n>  */\n> diff --git 
a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> index 141121176b309e4b2aa413dc55271a6e3c93d5e1..fb14ec3e2210e0feeff80f2410d777d3046a9f78 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_2.c\n> @@ -20,8 +20,8 @@ float32_t f1 (float32_t a)\n>  \n>  /*\n>  ** f2:\n> -**\tmov\tx0, -9223372036854775808\n> -**\tfmov\td[0-9]+, x0\n> +**\tfmov\td[0-9]+, xzr\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n>  **\tret\n>  */\n> @@ -29,3 +29,4 @@ float64_t f2 (float64_t a)\n>  {\n>    return -fabs (a);\n>  }\n> +\n> diff --git a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> index 10879dea74462d34b26160eeb0bd54ead063166b..4ea0105f6c0a9756070bcc60d34f142f53d8242c 100644\n> --- a/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> +++ b/gcc/testsuite/gcc.target/aarch64/fneg-abs_4.c\n> @@ -8,8 +8,8 @@\n>  \n>  /*\n>  ** negabs:\n> -**\tmov\tx0, -9223372036854775808\n> -**\tfmov\td[0-9]+, x0\n> +**\tfmov\td[0-9]+, xzr\n> +**\tfneg\tv[0-9]+.2d, v[0-9]+.2d\n>  **\torr\tv[0-9]+.8b, v[0-9]+.8b, v[0-9]+.8b\n>  **\tret\n>  */","headers":{"From":"Richard Sandiford <richard.sandiford@arm.com>","To":"Tamar Christina <Tamar.Christina@arm.com>","Cc":"\"gcc-patches\\@gcc.gnu.org\" <gcc-patches@gcc.gnu.org>, nd <nd@arm.com>,\n Richard Earnshaw <Richard.Earnshaw@arm.com>,\n Marcus Shawcroft <Marcus.Shawcroft@arm.com>,\n Kyrylo Tkachov <Kyrylo.Tkachov@arm.com>","Subject":"Re: [PATCH]AArch64 Add special patterns for creating DI scalar and\n vector constant 1 << 63 [PR109154]","References":"<patch-17722-tamar@arm.com> <mpt8r8rx5wm.fsf@arm.com>\n <VI1PR08MB53251FDBCDC1C958595E9F9CFFCAA@VI1PR08MB5325.eurprd08.prod.outlook.com>","Date":"Thu, 05 Oct 2023 20:47:01 +0100","In-Reply-To":"\n <VI1PR08MB53251FDBCDC1C958595E9F9CFFCAA@VI1PR08MB5325.eurprd08.prod.outlook.com>\n (Tamar Christina's message of \"Thu, 5 Oct 2023 18:18:15 +0000\")","Message-ID":"<mpth6n4n96y.fsf@arm.com>"}}]