From patchwork Thu Aug 25 11:39:42 2011
X-Patchwork-Submitter: Uros Bizjak
X-Patchwork-Id: 111551
Subject: [PATCH, i386]: Remove Y2, Y3 and Y4 register constraints
From: Uros Bizjak
To: gcc-patches@gcc.gnu.org
Cc: Richard Henderson, Jakub Jelinek
Date: Thu, 25 Aug 2011 13:39:42 +0200

Hello!

This patch modernizes the i386 md files by using the "enabled" attribute
instead of the Y2, Y3 and Y4 conditional register constraints.  I will
investigate the other conditional register constraints as well.

2011-08-25  Uros Bizjak

	* config/i386/i386.md (isa): Add sse2, sse2_noavx, sse3, sse4
	and sse4_noavx.
	(enabled): Handle sse2, sse2_noavx, sse3, sse4 and sse4_noavx.
	(*pushdf_rex64): Change Y2 register constraint to x.
	(*movdf_internal_rex64): Ditto.
	(*zero_extendsidi2_rex64): Ditto.
	(*movdi_internal): Change Y2 register constraint to x and add
	"isa" attribute.
	(*pushdf): Ditto.
	(*movdf_internal): Ditto.
	(zero_extendsidi2_1): Ditto.
	(*truncdfsf_mixed): Ditto.
	(*truncxfdf2_mixed): Ditto.
	* config/i386/mmx.md (*mov<mode>_internal_rex64): Change Y2
	register constraint to x.
	(*movv2sf_internal_rex64): Ditto.
	(*mov<mode>_internal): Change Y2 register constraint to x and
	add "isa" attribute.
	(*movv2sf_internal): Ditto.
	(*vec_extractv2si_1): Ditto.
	* config/i386/sse.md (vec_set<mode>_0): Change Y2 and Y4
	register constraints to x and update "isa" attribute.
	(*vec_interleave_highv2df): Change Y3 register constraint to x
	and update "isa" attribute.
	(*vec_interleave_lowv2df): Ditto.
	(*vec_concatv2df): Change Y2 register constraint to x and
	update "isa" attribute.
	(sse2_loadld): Ditto.
	(*vec_extractv2di_1): Ditto.
	(*vec_dupv4si): Ditto.
	(*vec_dupv2di): Ditto.
	(*vec_concatv4si): Ditto.
	(vec_concatv2di): Ditto.
	* config/i386/constraints.md (Y2): Remove.
	(Y3): Ditto.
	(Y4): Ditto.

Tested on x86_64-pc-linux-gnu {,-m32}.  I will wait for any comments
before committing the patch to mainline SVN.

Uros.

Index: i386.md
===================================================================
--- i386.md	(revision 178053)
+++ i386.md	(working copy)
@@ -711,11 +711,17 @@
 (define_attr "movu" "0,1" (const_string "0"))
 
 ;; Used to control the "enabled" attribute on a per-instruction basis.
-(define_attr "isa" "base,noavx,avx,bmi2"
+(define_attr "isa" "base,sse2,sse2_noavx,sse3,sse4,sse4_noavx,noavx,avx,bmi2"
   (const_string "base"))
 
 (define_attr "enabled" ""
-  (cond [(eq_attr "isa" "noavx") (symbol_ref "!TARGET_AVX")
+  (cond [(eq_attr "isa" "sse2") (symbol_ref "TARGET_SSE2")
+	 (eq_attr "isa" "sse2_noavx")
+	   (symbol_ref "TARGET_SSE2 && !TARGET_AVX")
+	 (eq_attr "isa" "sse3") (symbol_ref "TARGET_SSE3")
+	 (eq_attr "isa" "sse4") (symbol_ref "TARGET_SSE4_1")
+	 (eq_attr "isa" "sse4_noavx")
+	   (symbol_ref "TARGET_SSE4_1 && !TARGET_AVX")
 	 (eq_attr "isa" "avx") (symbol_ref "TARGET_AVX")
 	 (eq_attr "isa" "bmi2") (symbol_ref "TARGET_BMI2")
 	]
@@ -2153,9 +2159,9 @@
 
 (define_insn "*movdi_internal"
   [(set (match_operand:DI 0 "nonimmediate_operand"
-	  "=r ,o ,*y,m*y,*y,*Y2,m ,*Y2,*Y2,*x,m ,*x,*x,?*Y2,?*Ym")
+	  "=r ,o ,*y,m*y,*y,*x,m ,*x,*x,*x,m ,*x,*x,?*x,?*Ym")
 	(match_operand:DI 1 "general_operand"
-	  "riFo,riF,C ,*y ,m ,C ,*Y2,*Y2,m ,C ,*x,*x,m ,*Ym ,*Y2"))]
+	  "riFo,riF,C ,*y ,m ,C ,*x,*x,m ,C ,*x,*x,m ,*Ym,*x"))]
   "!TARGET_64BIT && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
 {
   switch (get_attr_type (insn))
@@ -2198,9 +2204,12 @@
     }
 }
   [(set (attr "isa")
-     (if_then_else (eq_attr "alternative" "9,10,11,12")
-       (const_string "noavx")
-       (const_string "*")))
+     (cond [(eq_attr "alternative" "5,6,7,8,13,14")
+	      (const_string "sse2")
+	    (eq_attr "alternative" "9,10,11,12")
+	      (const_string "noavx")
+	   ]
+	   (const_string "*")))
   (set (attr "type")
     (cond [(eq_attr "alternative" "0,1")
	      (const_string "multi")
@@ -2770,7 +2779,7 @@
 
 (define_insn "*pushdf_rex64"
   [(set (match_operand:DF 0 "push_operand" "=<,<,<")
-	(match_operand:DF 1 "general_no_elim_operand" "f,Yd*rFm,Y2"))]
+	(match_operand:DF 1 "general_no_elim_operand" "f,Yd*rFm,x"))]
   "TARGET_64BIT"
 {
   /* This insn should be already split before reg-stack.  */
@@ -2786,13 +2795,14 @@
 
 (define_insn "*pushdf"
   [(set (match_operand:DF 0 "push_operand" "=<,<,<")
-	(match_operand:DF 1 "general_no_elim_operand" "f,Yd*rFo,Y2"))]
+	(match_operand:DF 1 "general_no_elim_operand" "f,Yd*rFo,x"))]
   "!TARGET_64BIT"
 {
   /* This insn should be already split before reg-stack.  */
   gcc_unreachable ();
 }
-  [(set_attr "type" "multi")
+  [(set_attr "isa" "*,*,sse2")
+   (set_attr "type" "multi")
   (set_attr "unit" "i387,*,*")
   (set_attr "mode" "DF,DI,DF")])
@@ -2976,9 +2986,9 @@
 
 (define_insn "*movdf_internal_rex64"
   [(set (match_operand:DF 0 "nonimmediate_operand"
-	  "=f,m,f,?r,?m,?r,!o,Y2*x,Y2*x,Y2*x,m ,Yi,r ")
+	  "=f,m,f,?r,?m,?r,!o,x,x,x,m,Yi,r ")
 	(match_operand:DF 1 "general_operand"
-	  "fm,f,G,rm,r ,F ,F ,C ,Y2*x,m ,Y2*x,r ,Yi"))]
+	  "fm,f,G,rm,r ,F ,F ,C,x,m,x,r ,Yi"))]
   "TARGET_64BIT && !(MEM_P (operands[0]) && MEM_P (operands[1]))
    && (!can_create_pseudo_p ()
        || (ix86_cmodel == CM_MEDIUM || ix86_cmodel == CM_LARGE)
@@ -3112,9 +3122,9 @@
 ;; Possible store forwarding (partial memory) stall in alternative 4.
 (define_insn "*movdf_internal"
   [(set (match_operand:DF 0 "nonimmediate_operand"
-	  "=f,m,f,?Yd*r ,!o ,Y2*x,Y2*x,Y2*x,m ")
+	  "=f,m,f,?Yd*r ,!o ,x,x,x,m,*x,*x,*x,m")
 	(match_operand:DF 1 "general_operand"
-	  "fm,f,G,Yd*roF,FYd*r,C ,Y2*x,m ,Y2*x"))]
+	  "fm,f,G,Yd*roF,FYd*r,C,x,m,x,C ,*x,m ,*x"))]
   "!TARGET_64BIT && !(MEM_P (operands[0]) && MEM_P (operands[1]))
    && (!can_create_pseudo_p ()
       || (ix86_cmodel == CM_MEDIUM || ix86_cmodel == CM_LARGE)
@@ -3142,11 +3152,15 @@
       return "#";
 
     case 5:
+    case 9:
       return standard_sse_constant_opcode (insn, operands[1]);
 
     case 6:
     case 7:
     case 8:
+    case 10:
+    case 11:
+    case 12:
       switch (get_attr_mode (insn))
	{
	case MODE_V2DF:
@@ -3173,7 +3187,11 @@
       gcc_unreachable ();
     }
 }
-  [(set_attr "type" "fmov,fmov,fmov,multi,multi,sselog1,ssemov,ssemov,ssemov")
+  [(set (attr "isa")
+     (if_then_else (eq_attr "alternative" "5,6,7,8")
+       (const_string "sse2")
+       (const_string "*")))
+   (set_attr "type" "fmov,fmov,fmov,multi,multi,sselog1,ssemov,ssemov,ssemov,sselog1,ssemov,ssemov,ssemov")
   (set (attr "prefix")
     (if_then_else (eq_attr "alternative" "0,1,2,3,4")
       (const_string "orig")
@@ -3191,12 +3209,12 @@
	       /* For SSE1, we have many fewer alternatives.  */
	       (eq (symbol_ref "TARGET_SSE2") (const_int 0))
		 (if_then_else
-		   (eq_attr "alternative" "5,6")
+		   (eq_attr "alternative" "5,6,9,10")
		   (const_string "V4SF")
		   (const_string "V2SF"))
 
	       /* xorps is one byte shorter.  */
-	       (eq_attr "alternative" "5")
+	       (eq_attr "alternative" "5,9")
		 (cond [(ne (symbol_ref "optimize_function_for_size_p (cfun)")
			    (const_int 0))
			  (const_string "V4SF")
@@ -3211,7 +3229,7 @@
		  chains, otherwise use short move to avoid extra work.
 
		  movaps encodes one byte shorter.  */
-	       (eq_attr "alternative" "6")
+	       (eq_attr "alternative" "6,10")
		 (cond [(ne (symbol_ref "optimize_function_for_size_p (cfun)")
			    (const_int 0))
@@ -3224,7 +3242,7 @@
		  /* For architectures resolving dependencies on register
		     parts we may avoid extra work to zero out upper part
		     of register.  */
-	       (eq_attr "alternative" "7")
+	       (eq_attr "alternative" "7,11")
		 (if_then_else
		   (ne (symbol_ref "TARGET_SSE_SPLIT_REGS")
		       (const_int 0))
@@ -3445,7 +3463,7 @@
 })
 
 (define_insn "*zero_extendsidi2_rex64"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,o,?*Ym,?*y,?*Yi,*Y2")
+  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,o,?*Ym,?*y,?*Yi,*x")
	(zero_extend:DI
	  (match_operand:SI 1 "nonimmediate_operand" "rm,0,r ,m ,r ,m")))]
   "TARGET_64BIT"
@@ -3470,7 +3488,7 @@
 ;; %%% Kill me once multi-word ops are sane.
 (define_insn "zero_extendsidi2_1"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,?r,?o,?*Ym,?*y,?*Yi,*Y2")
+  [(set (match_operand:DI 0 "nonimmediate_operand" "=r,?r,?o,?*Ym,?*y,?*Yi,*x")
	(zero_extend:DI
	  (match_operand:SI 1 "nonimmediate_operand" "0,rm,r ,r ,m ,r ,m")))
   (clobber (reg:CC FLAGS_REG))]
@@ -3483,7 +3501,8 @@
    movd\t{%1, %0|%0, %1}
    %vmovd\t{%1, %0|%0, %1}
    %vmovd\t{%1, %0|%0, %1}"
-  [(set_attr "type" "multi,multi,multi,mmxmov,mmxmov,ssemov,ssemov")
+  [(set_attr "isa" "*,*,*,*,*,*,sse2")
+   (set_attr "type" "multi,multi,multi,mmxmov,mmxmov,ssemov,ssemov")
   (set_attr "prefix" "*,*,*,orig,orig,maybe_vex,maybe_vex")
   (set_attr "mode" "SI,SI,SI,DI,DI,TI,TI")])
@@ -4115,10 +4134,10 @@
   (set_attr "mode" "SF")])
 
 (define_insn "*truncdfsf_mixed"
-  [(set (match_operand:SF 0 "nonimmediate_operand" "=m,Y2 ,?f,?x,?*r")
+  [(set (match_operand:SF 0 "nonimmediate_operand" "=m,x ,?f,?x,?*r")
	(float_truncate:SF
-	  (match_operand:DF 1 "nonimmediate_operand" "f ,Y2m,f ,f ,f")))
-   (clobber (match_operand:SF 2 "memory_operand" "=X,X ,m ,m ,m"))]
+	  (match_operand:DF 1 "nonimmediate_operand" "f ,xm,f ,f ,f")))
+   (clobber (match_operand:SF 2 "memory_operand" "=X,X ,m ,m ,m"))]
   "TARGET_MIX_SSE_I387"
 {
   switch (which_alternative)
@@ -4132,7 +4151,8 @@
       return "#";
     }
 }
-  [(set_attr "type" "fmov,ssecvt,multi,multi,multi")
+  [(set_attr "isa" "*,sse2,*,*,*")
+   (set_attr "type" "fmov,ssecvt,multi,multi,multi")
   (set_attr "unit" "*,*,i387,i387,i387")
   (set_attr "prefix" "orig,maybe_vex,orig,orig,orig")
   (set_attr "mode" "SF")])
@@ -4219,7 +4239,7 @@
   (set_attr "mode" "SF")])
 
 (define_insn "*truncxfdf2_mixed"
-  [(set (match_operand:DF 0 "nonimmediate_operand" "=m,?f,?Y2,?*r")
+  [(set (match_operand:DF 0 "nonimmediate_operand" "=m,?f,?x,?*r")
	(float_truncate:DF
	  (match_operand:XF 1 "register_operand" "f ,f ,f ,f")))
   (clobber (match_operand:DF 2 "memory_operand" "=X,m ,m ,m"))]
@@ -4228,7 +4248,8 @@
   gcc_assert (!which_alternative);
   return output_387_reg_move (insn, operands);
 }
-  [(set_attr "type" "fmov,multi,multi,multi")
+  [(set_attr "isa" "*,*,sse2,*")
+   (set_attr "type" "fmov,multi,multi,multi")
   (set_attr "unit" "*,i387,i387,i387")
   (set_attr "mode" "DF")])
@@ -4453,10 +4474,10 @@
 
 ;; Avoid vector decoded forms of the instruction.
 (define_peephole2
-  [(match_scratch:DF 2 "Y2")
+  [(match_scratch:DF 2 "x")
   (set (match_operand:SWI48x 0 "register_operand" "")
	(fix:SWI48x (match_operand:DF 1 "memory_operand" "")))]
-  "TARGET_AVOID_VECTOR_DECODE && optimize_insn_for_speed_p ()"
+  "TARGET_SSE2 && TARGET_AVOID_VECTOR_DECODE && optimize_insn_for_speed_p ()"
   [(set (match_dup 2) (match_dup 1))
   (set (match_dup 0) (fix:SWI48x (match_dup 2)))])
Index: mmx.md
===================================================================
--- mmx.md	(revision 178053)
+++ mmx.md	(working copy)
@@ -66,9 +66,9 @@
 ;; movd instead of movq is required to handle broken assemblers.
 (define_insn "*mov<mode>_internal_rex64"
   [(set (match_operand:MMXMODEI8 0 "nonimmediate_operand"
-	  "=rm,r,!?y,!y,!?y,m ,!y ,*Y2,x,x ,m,r ,Yi")
+	  "=rm,r,!?y,!y,!?y,m ,!y ,*x,x,x ,m,r ,Yi")
	(match_operand:MMXMODEI8 1 "vector_move_operand"
-	  "Cr ,m,C ,!y,m ,!?y,*Y2,!y ,C,xm,x,Yi,r"))]
+	  "Cr ,m,C ,!y,m ,!?y,*x,!y ,C,xm,x,Yi,r"))]
   "TARGET_64BIT && TARGET_MMX
    && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -113,9 +113,9 @@
 (define_insn "*mov<mode>_internal"
   [(set (match_operand:MMXMODEI8 0 "nonimmediate_operand"
-	  "=!?y,!y,!?y,m ,!y ,*Y2,*Y2,*Y2 ,m ,*x,*x,*x,m ,r ,m")
+	  "=!?y,!y,!?y,m ,!y,*x,*x,*x ,m ,*x,*x,*x,m ,r ,m")
	(match_operand:MMXMODEI8 1 "vector_move_operand"
-	  "C ,!y,m ,!?y,*Y2,!y ,C ,*Y2m,*Y2,C ,*x,m ,*x,irm,r"))]
+	  "C ,!y,m ,!?y,*x,!y,C ,*xm,*x,C ,*x,m ,*x,irm,r"))]
   "!TARGET_64BIT && TARGET_MMX
    && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -135,9 +135,12 @@
     #
     #"
   [(set (attr "isa")
-     (if_then_else (eq_attr "alternative" "9,10,11,12")
-       (const_string "noavx")
-       (const_string "*")))
+     (cond [(eq_attr "alternative" "4,5,6,7,8")
+	      (const_string "sse2")
+	    (eq_attr "alternative" "9,10,11,12")
+	      (const_string "noavx")
+	   ]
+	   (const_string "*")))
   (set (attr "type")
     (cond [(eq_attr "alternative" "0")
	      (const_string "mmx")
@@ -183,9 +186,9 @@
 ;; movd instead of movq is required to handle broken assemblers.
 (define_insn "*movv2sf_internal_rex64"
   [(set (match_operand:V2SF 0 "nonimmediate_operand"
-	  "=rm,r,!?y,!y,!?y,m ,!y ,*Y2,x,x,x,m,r ,Yi")
+	  "=rm,r,!?y,!y,!?y,m ,!y,*x,x,x,x,m,r ,Yi")
	(match_operand:V2SF 1 "vector_move_operand"
-	  "Cr ,m,C ,!y,m ,!?y,*Y2,!y ,C,x,m,x,Yi,r"))]
+	  "Cr ,m,C ,!y,m ,!?y,*x,!y,C,x,m,x,Yi,r"))]
   "TARGET_64BIT && TARGET_MMX
    && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -232,9 +235,9 @@
 (define_insn "*movv2sf_internal"
   [(set (match_operand:V2SF 0 "nonimmediate_operand"
-	  "=!?y,!y,!?y,m ,!y ,*Y2,*x,*x,*x,m ,r ,m")
+	  "=!?y,!y,!?y,m ,!y,*x,*x,*x,*x,m ,r ,m")
	(match_operand:V2SF 1 "vector_move_operand"
-	  "C ,!y,m ,!?y,*Y2,!y ,C ,*x,m ,*x,irm,r"))]
+	  "C ,!y,m ,!?y,*x,!y,C ,*x,m ,*x,irm,r"))]
   "!TARGET_64BIT && TARGET_MMX
    && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -250,7 +253,11 @@
    %vmovlps\t{%1, %0|%0, %1}
    #
    #"
-  [(set (attr "type")
+  [(set (attr "isa")
+     (if_then_else (eq_attr "alternative" "4,5")
+       (const_string "sse2")
+       (const_string "*")))
+   (set (attr "type")
     (cond [(eq_attr "alternative" "0")
	      (const_string "mmx")
	    (eq_attr "alternative" "1,2,3")
@@ -1388,9 +1395,9 @@
 ;; Avoid combining registers from different units in a single alternative,
 ;; see comment above inline_secondary_memory_needed function in i386.c
 (define_insn "*vec_extractv2si_1"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=y,Y2,Y2,x,y,x,r")
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=y,x,x,x,y,x,r")
	(vec_select:SI
-	  (match_operand:V2SI 1 "nonimmediate_operand" " 0,0 ,Y2,0,o,o,o")
+	  (match_operand:V2SI 1 "nonimmediate_operand" " 0,0,x,0,o,o,o")
	  (parallel [(const_int 1)])))]
   "TARGET_MMX && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -1401,7 +1408,11 @@
    #
    #
    #"
-  [(set_attr "type" "mmxcvt,sselog1,sselog1,sselog1,mmxmov,ssemov,imov")
+  [(set (attr "isa")
+     (if_then_else (eq_attr "alternative" "1,2")
+       (const_string "sse2")
+       (const_string "*")))
+   (set_attr "type" "mmxcvt,sselog1,sselog1,sselog1,mmxmov,ssemov,imov")
   (set_attr "length_immediate" "*,*,1,*,*,*,*")
   (set_attr "mode" "DI,TI,TI,V4SF,SI,SI,SI")])
Index: constraints.md
===================================================================
--- constraints.md	(revision 178053)
+++ constraints.md	(working copy)
@@ -87,9 +87,6 @@
 ;; We use the Y prefix to denote any number of conditional register sets:
 ;;  z	First SSE register.
-;;  2	SSE2 enabled
-;;  3	SSE3 enabled
-;;  4	SSE4_1 enabled
 ;;  i	SSE2 inter-unit moves enabled
 ;;  m	MMX inter-unit moves enabled
 ;;  p	Integer register when TARGET_PARTIAL_REG_STALL is disabled
@@ -99,15 +96,6 @@
 (define_register_constraint "Yz" "TARGET_SSE ? SSE_FIRST_REG : NO_REGS"
 "First SSE register (@code{%xmm0}).")
 
-(define_register_constraint "Y2" "TARGET_SSE2 ? SSE_REGS : NO_REGS"
- "@internal Any SSE register, when SSE2 is enabled.")
-
-(define_register_constraint "Y3" "TARGET_SSE3 ? SSE_REGS : NO_REGS"
- "@internal Any SSE register, when SSE3 is enabled.")
-
-(define_register_constraint "Y4" "TARGET_SSE4_1 ? SSE_REGS : NO_REGS"
- "@internal Any SSE register, when SSE4_1 is enabled.")
-
 (define_register_constraint "Yi"
 "TARGET_SSE2 && TARGET_INTER_UNIT_MOVES ? SSE_REGS : NO_REGS"
 "@internal Any SSE register, when SSE2 and inter-unit moves are enabled.")
Index: sse.md
===================================================================
--- sse.md	(revision 178053)
+++ sse.md	(working copy)
@@ -3534,13 +3534,13 @@
 ;; see comment above inline_secondary_memory_needed function in i386.c
 (define_insn "vec_set<mode>_0"
   [(set (match_operand:VI4F_128 0 "nonimmediate_operand"
-	  "=Y4,Y2,Y2,x,x,x,Y4 ,x ,m,m ,m")
+	  "=x,x,x ,x,x,x,x ,x ,m,m ,m")
	(vec_merge:VI4F_128
	  (vec_duplicate:VI4F_128
	    (match_operand:<ssescalarmode> 2 "general_operand"
-	      " Y4,m ,*r,m,x,x,*rm,*rm,x,fF,*r"))
+	      " x,m,*r,m,x,x,*rm,*rm,x,fF,*r"))
	  (match_operand:VI4F_128 1 "vector_move_operand"
-	    " C ,C ,C ,C,0,x,0 ,x ,0,0 ,0")
+	    " C,C,C ,C,0,x,0 ,x ,0,0 ,0")
	  (const_int 1)))]
   "TARGET_SSE"
   "@
@@ -3555,7 +3555,7 @@
    #
    #
    #"
-  [(set_attr "isa" "*,*,*,noavx,noavx,avx,noavx,avx,*,*,*")
+  [(set_attr "isa" "sse4,sse2,sse2,noavx,noavx,avx,sse4_noavx,avx,*,*,*")
   (set (attr "type")
     (cond [(eq_attr "alternative" "0,6,7")
	      (const_string "sselog")
@@ -3969,11 +3969,11 @@
 })
 
 (define_insn "*vec_interleave_highv2df"
-  [(set (match_operand:V2DF 0 "nonimmediate_operand"     "=x,x,Y3,x,x,m")
+  [(set (match_operand:V2DF 0 "nonimmediate_operand"     "=x,x,x,x,x,m")
	(vec_select:V2DF
	  (vec_concat:V4DF
-	    (match_operand:V2DF 1 "nonimmediate_operand" " 0,x,o ,o,o,x")
-	    (match_operand:V2DF 2 "nonimmediate_operand" " x,x,1 ,0,x,0"))
+	    (match_operand:V2DF 1 "nonimmediate_operand" " 0,x,o,o,o,x")
+	    (match_operand:V2DF 2 "nonimmediate_operand" " x,x,1,0,x,0"))
	  (parallel [(const_int 1)
		     (const_int 3)])))]
   "TARGET_SSE2 && ix86_vec_interleave_v2df_operator_ok (operands, 1)"
   "@
@@ -3984,7 +3984,7 @@
    movlpd\t{%H1, %0|%0, %H1}
    vmovlpd\t{%H1, %2, %0|%0, %2, %H1}
    %vmovhpd\t{%1, %0|%0, %1}"
-  [(set_attr "isa" "noavx,avx,*,noavx,avx,*")
+  [(set_attr "isa" "noavx,avx,sse3,noavx,avx,*")
   (set_attr "type" "sselog,sselog,sselog,ssemov,ssemov,ssemov")
   (set_attr "prefix_data16" "*,*,*,1,*,1")
   (set_attr "prefix" "orig,vex,maybe_vex,orig,vex,maybe_vex")
@@ -4071,11 +4071,11 @@
 })
 
 (define_insn "*vec_interleave_lowv2df"
-  [(set (match_operand:V2DF 0 "nonimmediate_operand"     "=x,x,Y3,x,x,o")
+  [(set (match_operand:V2DF 0 "nonimmediate_operand"     "=x,x,x,x,x,o")
	(vec_select:V2DF
	  (vec_concat:V4DF
-	    (match_operand:V2DF 1 "nonimmediate_operand" " 0,x,m ,0,x,0")
-	    (match_operand:V2DF 2 "nonimmediate_operand" " x,x,1 ,m,m,x"))
+	    (match_operand:V2DF 1 "nonimmediate_operand" " 0,x,m,0,x,0")
+	    (match_operand:V2DF 2 "nonimmediate_operand" " x,x,1,m,m,x"))
	  (parallel [(const_int 0)
		     (const_int 2)])))]
   "TARGET_SSE2 && ix86_vec_interleave_v2df_operator_ok (operands, 0)"
   "@
@@ -4086,7 +4086,7 @@
    movhpd\t{%2, %0|%0, %2}
    vmovhpd\t{%2, %1, %0|%0, %1, %2}
    %vmovlpd\t{%2, %H0|%H0, %2}"
-  [(set_attr "isa" "noavx,avx,*,noavx,avx,*")
+  [(set_attr "isa" "noavx,avx,sse3,noavx,avx,*")
   (set_attr "type" "sselog,sselog,sselog,ssemov,ssemov,ssemov")
   (set_attr "prefix_data16" "*,*,*,1,*,1")
   (set_attr "prefix" "orig,vex,maybe_vex,orig,vex,maybe_vex")
@@ -4606,10 +4606,10 @@
   (set_attr "mode" "DF")])
 
 (define_insn "*vec_concatv2df"
-  [(set (match_operand:V2DF 0 "register_operand"     "=Y2,x,Y2,x,Y2,x,x")
+  [(set (match_operand:V2DF 0 "register_operand"     "=x,x,x,x,x,x,x")
	(vec_concat:V2DF
-	  (match_operand:DF 1 "nonimmediate_operand" " 0 ,x,0 ,x,m ,0,0")
-	  (match_operand:DF 2 "vector_move_operand"  " Y2,x,m ,m,C ,x,m")))]
+	  (match_operand:DF 1 "nonimmediate_operand" " 0,x,0,x,m,0,0")
+	  (match_operand:DF 2 "vector_move_operand"  " x,x,m,m,C,x,m")))]
   "TARGET_SSE"
   "@
    unpcklpd\t{%2, %0|%0, %2}
@@ -4619,7 +4619,7 @@
    %vmovsd\t{%1, %0|%0, %1}
    movlhps\t{%2, %0|%0, %2}
    movhps\t{%2, %0|%0, %2}"
-  [(set_attr "isa" "noavx,avx,noavx,avx,*,noavx,noavx")
+  [(set_attr "isa" "sse2_noavx,avx,sse2_noavx,avx,sse2,noavx,noavx")
   (set (attr "type")
     (if_then_else
       (eq_attr "alternative" "0,1")
@@ -7123,11 +7123,11 @@
   "operands[2] = CONST0_RTX (V4SImode);")
 
 (define_insn "sse2_loadld"
-  [(set (match_operand:V4SI 0 "register_operand"       "=Y2,Yi,x,x,x")
+  [(set (match_operand:V4SI 0 "register_operand" "=x,Yi,x,x,x")
	(vec_merge:V4SI
	  (vec_duplicate:V4SI
-	    (match_operand:SI 2 "nonimmediate_operand" "m ,r ,m,x,x"))
-	  (match_operand:V4SI 1 "reg_or_0_operand"     "C ,C ,C,0,x")
+	    (match_operand:SI 2 "nonimmediate_operand" "m ,r ,m,x,x"))
+	  (match_operand:V4SI 1 "reg_or_0_operand" "C ,C ,C,0,x")
	  (const_int 1)))]
   "TARGET_SSE"
   "@
@@ -7136,7 +7136,7 @@
    movss\t{%2, %0|%0, %2}
    movss\t{%2, %0|%0, %2}
    vmovss\t{%2, %1, %0|%0, %1, %2}"
-  [(set_attr "isa" "*,*,noavx,noavx,avx")
+  [(set_attr "isa" "sse2,*,noavx,noavx,avx")
   (set_attr "type" "ssemov")
   (set_attr "prefix" "maybe_vex,maybe_vex,orig,orig,vex")
   (set_attr "mode" "TI,TI,V4SF,SF,SF")])
@@ -7232,9 +7232,9 @@
   (set_attr "mode" "V2SF,TI,TI,TI,DI")])
 
 (define_insn "*vec_extractv2di_1"
-  [(set (match_operand:DI 0 "nonimmediate_operand" "=m,Y2,Y2,Y2,x,x")
+  [(set (match_operand:DI 0 "nonimmediate_operand" "=m,x,x,x,x,x")
	(vec_select:DI
-	  (match_operand:V2DI 1 "nonimmediate_operand" " x,0 ,Y2,o ,x,o")
+	  (match_operand:V2DI 1 "nonimmediate_operand" " x,0,x,o,x,o")
	  (parallel [(const_int 1)])))]
   "!TARGET_64BIT && TARGET_SSE
    && !(MEM_P (operands[0]) && MEM_P (operands[1]))"
   "@
@@ -7245,7 +7245,7 @@
    %vmovq\t{%H1, %0|%0, %H1}
    movhlps\t{%1, %0|%0, %1}
    movlps\t{%H1, %0|%0, %H1}"
-  [(set_attr "isa" "*,noavx,avx,*,noavx,noavx")
+  [(set_attr "isa" "*,sse2_noavx,avx,sse2,noavx,noavx")
   (set_attr "type" "ssemov,sseishft1,sseishft1,ssemov,ssemov,ssemov")
   (set_attr "length_immediate" "*,1,1,*,*,*")
   (set_attr "memory" "*,none,none,*,*,*")
@@ -7267,14 +7267,15 @@
   (set_attr "mode" "TI,V4SF")])
 
 (define_insn "*vec_dupv4si"
-  [(set (match_operand:V4SI 0 "register_operand" "=Y2,x")
+  [(set (match_operand:V4SI 0 "register_operand" "=x,x")
	(vec_duplicate:V4SI
-	  (match_operand:SI 1 "register_operand" " Y2,0")))]
+	  (match_operand:SI 1 "register_operand" " x,0")))]
   "TARGET_SSE"
   "@
    pshufd\t{$0, %1, %0|%0, %1, 0}
    shufps\t{$0, %0, %0|%0, %0, 0}"
-  [(set_attr "type" "sselog1")
+  [(set_attr "isa" "sse2,*")
+   (set_attr "type" "sselog1")
   (set_attr "length_immediate" "1")
   (set_attr "mode" "TI,V4SF")])
@@ -7293,14 +7294,15 @@
   (set_attr "mode" "TI,TI,DF")])
 
 (define_insn "*vec_dupv2di"
-  [(set (match_operand:V2DI 0 "register_operand" "=Y2,x")
+  [(set (match_operand:V2DI 0 "register_operand" "=x,x")
	(vec_duplicate:V2DI
-	  (match_operand:DI 1 "register_operand" " 0 ,0")))]
+	  (match_operand:DI 1 "register_operand" " 0,0")))]
   "TARGET_SSE"
   "@
    punpcklqdq\t%0, %0
    movlhps\t%0, %0"
-  [(set_attr "type" "sselog1,ssemov")
+  [(set_attr "isa" "sse2,*")
+   (set_attr "type" "sselog1,ssemov")
   (set_attr "mode" "TI,V4SF")])
 
 (define_insn "*vec_concatv2si_sse4_1"
@@ -7356,10 +7358,10 @@
   (set_attr "mode" "V4SF,V4SF,DI,DI")])
 
 (define_insn "*vec_concatv4si"
-  [(set (match_operand:V4SI 0 "register_operand"       "=Y2,x,x,x,x")
+  [(set (match_operand:V4SI 0 "register_operand" "=x,x,x,x,x")
	(vec_concat:V4SI
-	  (match_operand:V2SI 1 "register_operand"     " 0 ,x,0,0,x")
-	  (match_operand:V2SI 2 "nonimmediate_operand" " Y2,x,x,m,m")))]
+	  (match_operand:V2SI 1 "register_operand" " 0,x,0,0,x")
+	  (match_operand:V2SI 2 "nonimmediate_operand" " x,x,x,m,m")))]
   "TARGET_SSE"
   "@
    punpcklqdq\t{%2, %0|%0, %2}
@@ -7367,7 +7369,7 @@
    movlhps\t{%2, %0|%0, %2}
    movhps\t{%2, %0|%0, %2}
    vmovhps\t{%2, %1, %0|%0, %1, %2}"
-  [(set_attr "isa" "noavx,avx,noavx,noavx,avx")
+  [(set_attr "isa" "sse2_noavx,avx,noavx,noavx,avx")
   (set_attr "type" "sselog,sselog,ssemov,ssemov,ssemov")
   (set_attr "prefix" "orig,vex,orig,orig,vex")
   (set_attr "mode" "TI,TI,V4SF,V2SF,V2SF")])
@@ -7375,12 +7377,12 @@
 ;; movd instead of movq is required to handle broken assemblers.
 (define_insn "*vec_concatv2di_rex64"
   [(set (match_operand:V2DI 0 "register_operand"
-	  "=Y4,x ,x ,Yi,!x,x,x,x,x")
+	  "=x,x ,x ,Yi,!x,x,x,x,x")
	(vec_concat:V2DI
	  (match_operand:DI 1 "nonimmediate_operand"
-	  " 0 ,x ,xm,r ,*y,0,x,0,x")
+	  " 0,x ,xm,r ,*y,0,x,0,x")
	  (match_operand:DI 2 "vector_move_operand"
-	  " rm,rm,C ,C ,C ,x,x,m,m")))]
+	  "rm,rm,C ,C ,C ,x,x,m,m")))]
   "TARGET_64BIT"
   "@
    pinsrq\t{$1, %2, %0|%0, %2, 1}
@@ -7392,7 +7394,7 @@
    vpunpcklqdq\t{%2, %1, %0|%0, %1, %2}
    movhps\t{%2, %0|%0, %2}
    vmovhps\t{%2, %1, %0|%0, %1, %2}"
-  [(set_attr "isa" "noavx,avx,*,*,*,noavx,avx,noavx,avx")
+  [(set_attr "isa" "sse4_noavx,avx,*,*,*,noavx,avx,noavx,avx")
   (set (attr "type")
     (if_then_else
       (eq_attr "alternative" "0,1,5,6")
@@ -7410,10 +7412,10 @@
   (set_attr "mode" "TI,TI,TI,TI,TI,TI,TI,V2SF,V2SF")])
 
 (define_insn "vec_concatv2di"
-  [(set (match_operand:V2DI 0 "register_operand"     "=Y2,?Y2,Y2,x,x,x,x")
+  [(set (match_operand:V2DI 0 "register_operand" "=x,?x,x,x,x,x,x")
	(vec_concat:V2DI
-	  (match_operand:DI 1 "nonimmediate_operand" "Y2m,*y , 0,x,0,0,x")
-	  (match_operand:DI 2 "vector_move_operand"  " C , C ,Y2,x,x,m,m")))]
+	  (match_operand:DI 1 "nonimmediate_operand" "xm,*y,0,x,0,0,x")
+	  (match_operand:DI 2 "vector_move_operand"  " C, C,x,x,x,m,m")))]
   "!TARGET_64BIT && TARGET_SSE"
   "@
    %vmovq\t{%1, %0|%0, %1}
@@ -7423,7 +7425,7 @@
    movlhps\t{%2, %0|%0, %2}
    movhps\t{%2, %0|%0, %2}
    vmovhps\t{%2, %1, %0|%0, %1, %2}"
-  [(set_attr "isa" "*,*,noavx,avx,noavx,noavx,avx")
+  [(set_attr "isa" "sse2,sse2,sse2_noavx,avx,noavx,noavx,avx")
   (set_attr "type" "ssemov,ssemov,sselog,sselog,ssemov,ssemov,ssemov")
   (set_attr "prefix" "maybe_vex,orig,orig,vex,orig,orig,vex")
   (set_attr "mode" "TI,TI,TI,TI,V4SF,V2SF,V2SF")])
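
P.S. For readers less familiar with the mechanism being swapped in: a
conditional register constraint such as "Y2" collapses to NO_REGS when
SSE2 is unavailable, which silently disables the whole alternative; the
"enabled" attribute expresses the same per-alternative gating
declaratively through the "isa" attribute.  A minimal before/after
sketch (hypothetical patterns invented for illustration, not part of
the patch):

```lisp
;; Old style: the second alternative disappears when !TARGET_SSE2,
;; because the conditional constraint Y2 then expands to NO_REGS.
(define_insn "*example_old"
  [(set (match_operand:DI 0 "register_operand" "=r,Y2")
	(match_operand:DI 1 "register_operand" " r,Y2"))]
  ""
  "@
   mov{q}\t{%1, %0|%0, %1}
   movdqa\t{%1, %0|%0, %1}")

;; New style: a plain "x" (any SSE register) constraint; the second
;; alternative is instead switched off by the "enabled" machinery,
;; which maps (eq_attr "isa" "sse2") to the TARGET_SSE2 test.
(define_insn "*example_new"
  [(set (match_operand:DI 0 "register_operand" "=r,x")
	(match_operand:DI 1 "register_operand" " r,x"))]
  ""
  "@
   mov{q}\t{%1, %0|%0, %1}
   movdqa\t{%1, %0|%0, %1}"
  [(set_attr "isa" "*,sse2")])
```

The net effect is the same set of usable alternatives, but the ISA
condition lives in one place (the "enabled"/"isa" definitions) instead
of being encoded in per-constraint definitions like Y2, Y3 and Y4.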