From patchwork Thu Jun 17 18:43:58 2010
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 56077
Date: Thu, 17 Jun 2010 11:43:58 -0700
From: "H.J. Lu"
To: gcc-patches@gcc.gnu.org
Cc: Uros Bizjak
Subject: PATCH: Simplify mnemonic suffix iterators in sse.md
Message-ID: <20100617184358.GA13038@intel.com>

Hi,

There are 6 suffix iterators for instruction mnemonics in sse.md which
all do essentially the same thing.  This patch replaces those 6
iterators with 2.  OK for trunk if there are no regressions?

Thanks.

H.J.
---
2010-06-17  H.J. Lu

        * config/i386/sse.md (fma4modesuffixf4): Removed.
        (ssemodesuffixf2s): Likewise.
        (ssemodesuffixf4): Likewise.
        (ssemodesuffixf2c): Likewise.
        (ssescalarmodesuffix2s): Likewise.
        (avxmodesuffixf2c): Likewise.
        (sse_mnemonic_suffix): New.
        (sse_mnemonic_scalar_suffix): Likewise.
        Update patterns with sse_mnemonic_suffix and
        sse_mnemonic_scalar_suffix.
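For readers less familiar with the machine description: a define_mode_attr
maps each mode of a mode iterator to a text fragment, and that fragment is
substituted into an insn template wherever the attribute name appears in
angle brackets.  A minimal sketch of how the new attribute is meant to be
consumed (illustrative only -- the pattern name, operands and constraints
below are made up, not taken from sse.md, and the attribute is abridged to
two of its modes; SSEMODEF2P and SSE_VEC_FLOAT_MODE_P are the existing
names from the file):

;; One copy of the pattern is generated per mode in SSEMODEF2P, and the
;; attribute supplies the mnemonic suffix for that copy.
(define_mode_attr sse_mnemonic_suffix [(V4SF "ps") (V2DF "pd")])

(define_insn "example_add<mode>3"
  [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x")
        (plus:SSEMODEF2P
          (match_operand:SSEMODEF2P 1 "register_operand" "0")
          (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))]
  "SSE_VEC_FLOAT_MODE_P (<MODE>mode)"
  ;; Emits "addps" for V4SF and "addpd" for V2DF.
  "add<sse_mnemonic_suffix>\t{%2, %0|%0, %2}")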
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md index 7625906..de8405d 100644 --- a/gcc/config/i386/sse.md +++ b/gcc/config/i386/sse.md @@ -89,18 +89,13 @@ ;; Mapping from integer vector mode to mnemonic suffix (define_mode_attr ssevecsize [(V16QI "b") (V8HI "w") (V4SI "d") (V2DI "q")]) -;; Mapping of the fma4 suffix -(define_mode_attr fma4modesuffixf4 [(V8SF "ps") (V4DF "pd")]) -(define_mode_attr ssemodesuffixf2s [(SF "ss") (DF "sd") - (V4SF "ss") (V2DF "sd")]) - -;; Mapping of the avx suffix -(define_mode_attr ssemodesuffixf4 [(SF "ss") (DF "sd") - (V4SF "ps") (V2DF "pd")]) - -(define_mode_attr ssemodesuffixf2c [(V4SF "s") (V2DF "d")]) - -(define_mode_attr ssescalarmodesuffix2s [(V4SF "ss") (V4SI "d")]) +;; Mapping of the insn mnemonic suffix +(define_mode_attr sse_mnemonic_suffix + [(SF "ss") (DF "sd") (V4SF "ps") (V2DF "pd") (V8SF "ps") (V4DF "pd") + (V8SI "ps") (V4DI "pd")]) +(define_mode_attr sse_mnemonic_scalar_suffix + [(SF "ss") (DF "sd") (V4SF "ss") (V2DF "sd") (V8SF "ss") (V4DF "sd") + (V4SI "d")]) ;; Mapping of the max integer size for xop rotate immediate constraint (define_mode_attr sserotatemax [(V16QI "7") (V8HI "15") (V4SI "31") (V2DI "63")]) @@ -141,8 +136,6 @@ [(V4SF "V4SI") (V8SF "V8SI") (V4SI "V4SF") (V8SI "V8SF")]) (define_mode_attr avxpermvecmode [(V2DF "V2DI") (V4SF "V4SI") (V4DF "V4DI") (V8SF "V8SI")]) -(define_mode_attr avxmodesuffixf2c - [(V4SF "s") (V2DF "d") (V8SI "s") (V8SF "s") (V4DI "d") (V4DF "d")]) (define_mode_attr avxmodesuffixp [(V2DF "pd") (V4SI "si") (V4SF "ps") (V8SF "ps") (V8SI "si") (V4DF "pd")]) @@ -366,14 +359,14 @@ DONE; }) -(define_insn "avx_movup" +(define_insn "avx_movu" [(set (match_operand:AVXMODEF2P 0 "nonimmediate_operand" "=x,m") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "nonimmediate_operand" "xm,x")] UNSPEC_MOVU))] "AVX_VEC_FLOAT_MODE_P (mode) && !(MEM_P (operands[0]) && MEM_P (operands[1]))" - "vmovup\t{%1, %0|%0, %1}" + "vmovu\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "movu" "1") (set_attr "prefix" "vex") @@ -392,14 +385,14 @@ (set_attr "prefix" "maybe_vex") (set_attr "mode" "TI")]) -(define_insn "_movup" +(define_insn "_movu" [(set (match_operand:SSEMODEF2P 0 "nonimmediate_operand" "=x,m") (unspec:SSEMODEF2P [(match_operand:SSEMODEF2P 1 "nonimmediate_operand" "xm,x")] UNSPEC_MOVU))] "SSE_VEC_FLOAT_MODE_P (mode) && !(MEM_P (operands[0]) && MEM_P (operands[1]))" - "movup\t{%1, %0|%0, %1}" + "movu\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "movu" "1") (set_attr "mode" "")]) @@ -433,7 +426,7 @@ [(match_operand:AVXMODEF2P 1 "register_operand" "x")] UNSPEC_MOVNT))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vmovntp\t{%1, %0|%0, %1}" + "vmovnt\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -444,7 +437,7 @@ [(match_operand:SSEMODEF2P 1 "register_operand" "x")] UNSPEC_MOVNT))] "SSE_VEC_FLOAT_MODE_P (mode)" - "movntp\t{%1, %0|%0, %1}" + "movnt\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "mode" "")]) @@ -565,7 +558,7 @@ (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode) && ix86_binary_operator_ok (, mode, operands)" - "vp\t{%2, %1, %0|%0, %1, %2}" + "v\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -585,7 +578,7 @@ (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode) && ix86_binary_operator_ok (, mode, operands)" - "p\t{%2, %0|%0, %2}" + "\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ 
-598,7 +591,7 @@ (match_dup 1) (const_int 1)))] "AVX128_VEC_FLOAT_MODE_P (mode)" - "vs\t{%2, %1, %0|%0, %1, %2}" + "v\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -612,7 +605,7 @@ (match_dup 1) (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "s\t{%2, %0|%0, %2}" + "\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -631,7 +624,7 @@ (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode) && ix86_binary_operator_ok (MULT, mode, operands)" - "vmulp\t{%2, %1, %0|%0, %1, %2}" + "vmul\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssemul") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -651,7 +644,7 @@ (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode) && ix86_binary_operator_ok (MULT, mode, operands)" - "mulp\t{%2, %0|%0, %2}" + "mul\t{%2, %0|%0, %2}" [(set_attr "type" "ssemul") (set_attr "mode" "")]) @@ -664,7 +657,7 @@ (match_dup 1) (const_int 1)))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vmuls\t{%2, %1, %0|%0, %1, %2}" + "vmul\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssemul") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -678,7 +671,7 @@ (match_dup 1) (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "muls\t{%2, %0|%0, %2}" + "mul\t{%2, %0|%0, %2}" [(set_attr "type" "ssemul") (set_attr "mode" "")]) @@ -713,7 +706,7 @@ (match_operand:AVXMODEF2P 1 "register_operand" "x") (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vdivp\t{%2, %1, %0|%0, %1, %2}" + "vdiv\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssediv") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -747,7 +740,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "x") (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX128_VEC_FLOAT_MODE_P (mode)" - "vdivp\t{%2, %1, %0|%0, %1, %2}" + "vdiv\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssediv") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -758,7 +751,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "0") (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode)" - "divp\t{%2, %0|%0, %2}" + "div\t{%2, %0|%0, %2}" [(set_attr "type" "ssediv") (set_attr "mode" "")]) @@ -771,7 +764,7 @@ (match_dup 1) (const_int 1)))] "AVX128_VEC_FLOAT_MODE_P (mode)" - "vdivs\t{%2, %1, %0|%0, %1, %2}" + "vdiv\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssediv") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -785,7 +778,7 @@ (match_dup 1) (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "divs\t{%2, %0|%0, %2}" + "div\t{%2, %0|%0, %2}" [(set_attr "type" "ssediv") (set_attr "mode" "")]) @@ -909,7 +902,7 @@ (match_operand:SSEMODEF2P 2 "register_operand" "x") (const_int 1)))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vsqrts\t{%1, %2, %0|%0, %2, %1}" + "vsqrt\t{%1, %2, %0|%0, %2, %1}" [(set_attr "type" "sse") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -922,7 +915,7 @@ (match_operand:SSEMODEF2P 2 "register_operand" "0") (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "sqrts\t{%1, %0|%0, %1}" + "sqrt\t{%1, %0|%0, %1}" [(set_attr "type" "sse") (set_attr "atom_sse_attr" "sqrt") (set_attr "mode" "")]) @@ -1027,7 +1020,7 @@ (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode) && flag_finite_math_only && ix86_binary_operator_ok (, mode, operands)" - "vp\t{%2, %1, %0|%0, %1, %2}" + "v\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1039,7 +1032,7 @@ 
(match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode) && flag_finite_math_only && ix86_binary_operator_ok (, mode, operands)" - "p\t{%2, %0|%0, %2}" + "\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -1049,7 +1042,7 @@ (match_operand:AVXMODEF2P 1 "nonimmediate_operand" "%x") (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vp\t{%2, %1, %0|%0, %1, %2}" + "v\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1060,7 +1053,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "0") (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode)" - "p\t{%2, %0|%0, %2}" + "\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -1073,7 +1066,7 @@ (match_dup 1) (const_int 1)))] "AVX128_VEC_FLOAT_MODE_P (mode)" - "vs\t{%2, %1, %0|%0, %1, %2}" + "v\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sse") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1087,7 +1080,7 @@ (match_dup 1) (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "s\t{%2, %0|%0, %2}" + "\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -1104,7 +1097,7 @@ (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")] UNSPEC_IEEE_MIN))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vminp\t{%2, %1, %0|%0, %1, %2}" + "vmin\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1116,7 +1109,7 @@ (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")] UNSPEC_IEEE_MAX))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vmaxp\t{%2, %1, %0|%0, %1, %2}" + "vmax\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sseadd") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1128,7 +1121,7 @@ (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")] UNSPEC_IEEE_MIN))] "SSE_VEC_FLOAT_MODE_P (mode)" - "minp\t{%2, %0|%0, %2}" + "min\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -1139,7 +1132,7 @@ (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")] UNSPEC_IEEE_MAX))] "SSE_VEC_FLOAT_MODE_P (mode)" - "maxp\t{%2, %0|%0, %2}" + "max\t{%2, %0|%0, %2}" [(set_attr "type" "sseadd") (set_attr "mode" "")]) @@ -1438,7 +1431,7 @@ ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; -(define_insn "avx_cmpp3" +(define_insn "avx_cmp3" [(set (match_operand:AVXMODEF2P 0 "register_operand" "=x") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "register_operand" "x") @@ -1446,13 +1439,13 @@ (match_operand:SI 3 "const_0_to_31_operand" "n")] UNSPEC_PCMP))] "TARGET_AVX" - "vcmpp\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vcmp\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssecmp") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "avx_cmps3" +(define_insn "avx_cmp3" [(set (match_operand:SSEMODEF2P 0 "register_operand" "") (vec_merge:SSEMODEF2P (unspec:SSEMODEF2P @@ -1463,7 +1456,7 @@ (match_dup 1) (const_int 1)))] "TARGET_AVX" - "vcmps\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vcmp\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssecmp") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") @@ -1477,7 +1470,7 @@ [(match_operand:AVXMODEF2P 1 "register_operand" "x") (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")]))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vcmp%D3p\t{%2, %1, %0|%0, %1, %2}" + "vcmp%D3\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssecmp") (set_attr "prefix" "vex") (set_attr "length_immediate" "1") @@ -1490,7 +1483,7 @@ 
(match_operand:SSEMODEF4 2 "nonimmediate_operand" "xm")]))] "!TARGET_XOP && (SSE_FLOAT_MODE_P (mode) || SSE_VEC_FLOAT_MODE_P (mode))" - "cmp%D3\t{%2, %0|%0, %2}" + "cmp%D3\t{%2, %0|%0, %2}" [(set_attr "type" "ssecmp") (set_attr "length_immediate" "1") (set_attr "mode" "")]) @@ -1504,7 +1497,7 @@ (match_dup 1) (const_int 1)))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vcmp%D3s\t{%2, %1, %0|%0, %1, %2}" + "vcmp%D3\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssecmp") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1518,7 +1511,7 @@ (match_dup 1) (const_int 1)))] "SSE_VEC_FLOAT_MODE_P (mode)" - "cmp%D3s\t{%2, %0|%0, %2}" + "cmp%D3\t{%2, %0|%0, %2}" [(set_attr "type" "ssecmp") (set_attr "length_immediate" "1") (set_attr "mode" "")]) @@ -1592,7 +1585,7 @@ (match_operand:AVXMODEF2P 1 "register_operand" "x")) (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm")))] "AVX_VEC_FLOAT_MODE_P (mode)" - "vandnp\t{%2, %1, %0|%0, %1, %2}" + "vandn\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sselog") (set_attr "prefix" "vex") (set_attr "mode" "")]) @@ -1604,7 +1597,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "0")) (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm")))] "SSE_VEC_FLOAT_MODE_P (mode)" - "andnp\t{%2, %0|%0, %2}" + "andn\t{%2, %0|%0, %2}" [(set_attr "type" "sselog") (set_attr "mode" "")]) @@ -1627,7 +1620,7 @@ if (TARGET_SSE_PACKED_SINGLE_INSN_OPTIMAL) return "vps\t{%2, %1, %0|%0, %1, %2}"; else - return "vp\t{%2, %1, %0|%0, %1, %2}"; + return "v\t{%2, %1, %0|%0, %1, %2}"; } [(set_attr "type" "sselog") (set_attr "prefix" "vex") @@ -1652,7 +1645,7 @@ if (TARGET_SSE_PACKED_SINGLE_INSN_OPTIMAL) return "ps\t{%2, %0|%0, %2}"; else - return "p\t{%2, %0|%0, %2}"; + return "\t{%2, %0|%0, %2}"; } [(set_attr "type" "sselog") (set_attr "mode" "")]) @@ -1761,7 +1754,7 @@ (match_operand:FMA4MODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1774,7 +1767,7 @@ (match_operand:FMA4MODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1788,7 +1781,7 @@ (match_operand:FMA4MODEF4 1 "nonimmediate_operand" "%x,x") (match_operand:FMA4MODEF4 2 "nonimmediate_operand" "x,m"))))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1802,7 +1795,7 @@ (match_operand:FMA4MODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1814,7 +1807,7 @@ (match_operand:SSEMODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:SSEMODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1832,7 +1825,7 @@ (match_dup 0) (const_int 1)))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" 
[(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1846,7 +1839,7 @@ (match_operand:SSEMODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:SSEMODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1864,7 +1857,7 @@ (match_dup 0) (const_int 1)))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1878,7 +1871,7 @@ (match_operand:SSEMODEF4 1 "nonimmediate_operand" "%x,x") (match_operand:SSEMODEF4 2 "nonimmediate_operand" "x,m"))))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1896,7 +1889,7 @@ (match_dup 0) (const_int 1)))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1911,7 +1904,7 @@ (match_operand:SSEMODEF4 2 "nonimmediate_operand" "x,m")) (match_operand:SSEMODEF4 3 "nonimmediate_operand" "xm,x")))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1930,7 +1923,7 @@ (match_dup 0) (const_int 1)))] "TARGET_FMA4 && TARGET_FUSED_MADD" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1944,7 +1937,7 @@ (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1958,7 +1951,7 @@ (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1972,7 +1965,7 @@ (match_operand:FMA4MODEF4 2 "nonimmediate_operand" "x,m")))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -1987,7 +1980,7 @@ (match_operand:FMA4MODEF4 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2001,7 +1994,7 @@ (match_operand:SSEMODEF2P 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2015,7 +2008,7 @@ (match_operand:SSEMODEF2P 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2029,7 +2022,7 @@ (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "x,m")))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" 
"")]) @@ -2044,7 +2037,7 @@ (match_operand:SSEMODEF2P 3 "nonimmediate_operand" "xm,x"))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2063,7 +2056,7 @@ (const_int 1))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2080,7 +2073,7 @@ (const_int 1))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2097,7 +2090,7 @@ (const_int 1))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmadd\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -2115,7 +2108,7 @@ (const_int 1))] UNSPEC_FMA4_INTRINSIC))] "TARGET_FMA4" - "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vfnmsub\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemuladd") (set_attr "mode" "")]) @@ -3984,7 +3977,7 @@ "TARGET_AVX" "@ vinsertps\t{$0xe, %2, %2, %0|%0, %2, %2, 0xe} - vmov\t{%2, %0|%0, %2} + vmov\t{%2, %0|%0, %2} vmovd\t{%2, %0|%0, %2} vmovss\t{%2, %1, %0|%0, %1, %2} vpinsrd\t{$0, %2, %1, %0|%0, %1, %2, 0} @@ -4006,7 +3999,7 @@ "TARGET_SSE4_1" "@ insertps\t{$0xe, %2, %0|%0, %2, 0xe} - mov\t{%2, %0|%0, %2} + mov\t{%2, %0|%0, %2} movd\t{%2, %0|%0, %2} movss\t{%2, %0|%0, %2} pinsrd\t{$0, %2, %0|%0, %2, 0} @@ -4026,7 +4019,7 @@ (const_int 1)))] "TARGET_SSE2" "@ - mov\t{%2, %0|%0, %2} + mov\t{%2, %0|%0, %2} movd\t{%2, %0|%0, %2} movss\t{%2, %0|%0, %2} #" @@ -8031,24 +8024,24 @@ (set_attr "prefix_data16" "1") (set_attr "mode" "TI")]) -(define_insn "avx_movmskp256" +(define_insn "avx_movmsk256" [(set (match_operand:SI 0 "register_operand" "=r") (unspec:SI [(match_operand:AVX256MODEF2P 1 "register_operand" "x")] UNSPEC_MOVMSK))] "AVX256_VEC_FLOAT_MODE_P (mode)" - "vmovmskp\t{%1, %0|%0, %1}" + "vmovmsk\t{%1, %0|%0, %1}" [(set_attr "type" "ssecvt") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "_movmskp" +(define_insn "_movmsk" [(set (match_operand:SI 0 "register_operand" "=r") (unspec:SI [(match_operand:SSEMODEF2P 1 "register_operand" "x")] UNSPEC_MOVMSK))] "SSE_VEC_FLOAT_MODE_P (mode)" - "%vmovmskp\t{%1, %0|%0, %1}" + "%vmovmsk\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "prefix" "maybe_vex") (set_attr "mode" "")]) @@ -9288,7 +9281,7 @@ (parallel [(const_int 0)]))] UNSPEC_MOVNT))] "TARGET_SSE4A" - "movnts\t{%1, %0|%0, %1}" + "movnt\t{%1, %0|%0, %1}" [(set_attr "type" "ssemov") (set_attr "mode" "")]) @@ -9349,21 +9342,21 @@ ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; -(define_insn "avx_blendp" +(define_insn "avx_blend" [(set (match_operand:AVXMODEF2P 0 "register_operand" "=x") (vec_merge:AVXMODEF2P (match_operand:AVXMODEF2P 2 "nonimmediate_operand" "xm") (match_operand:AVXMODEF2P 1 "register_operand" "x") (match_operand:SI 3 "const_0_to__operand" "n")))] "TARGET_AVX" - "vblendp\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vblend\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemov") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "avx_blendvp" +(define_insn "avx_blendv" [(set (match_operand:AVXMODEF2P 0 "register_operand" "=x") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "register_operand" "x") @@ -9371,28 
+9364,28 @@ (match_operand:AVXMODEF2P 3 "register_operand" "x")] UNSPEC_BLENDV))] "TARGET_AVX" - "vblendvp\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vblendv\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemov") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "sse4_1_blendp" +(define_insn "sse4_1_blend" [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x") (vec_merge:SSEMODEF2P (match_operand:SSEMODEF2P 2 "nonimmediate_operand" "xm") (match_operand:SSEMODEF2P 1 "register_operand" "0") (match_operand:SI 3 "const_0_to__operand" "n")))] "TARGET_SSE4_1" - "blendp\t{%3, %2, %0|%0, %2, %3}" + "blend\t{%3, %2, %0|%0, %2, %3}" [(set_attr "type" "ssemov") (set_attr "prefix_data16" "1") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "mode" "")]) -(define_insn "sse4_1_blendvp" +(define_insn "sse4_1_blendv" [(set (match_operand:SSEMODEF2P 0 "reg_not_xmm0_operand" "=x") (unspec:SSEMODEF2P [(match_operand:SSEMODEF2P 1 "reg_not_xmm0_operand" "0") @@ -9400,13 +9393,13 @@ (match_operand:SSEMODEF2P 3 "register_operand" "Yz")] UNSPEC_BLENDV))] "TARGET_SSE4_1" - "blendvp\t{%3, %2, %0|%0, %2, %3}" + "blendv\t{%3, %2, %0|%0, %2, %3}" [(set_attr "type" "ssemov") (set_attr "prefix_data16" "1") (set_attr "prefix_extra" "1") (set_attr "mode" "")]) -(define_insn "avx_dpp" +(define_insn "avx_dp" [(set (match_operand:AVXMODEF2P 0 "register_operand" "=x") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "nonimmediate_operand" "%x") @@ -9414,14 +9407,14 @@ (match_operand:SI 3 "const_0_to_255_operand" "n")] UNSPEC_DP))] "TARGET_AVX" - "vdpp\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vdp\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssemul") (set_attr "prefix" "vex") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "mode" "")]) -(define_insn "sse4_1_dpp" +(define_insn "sse4_1_dp" [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x") (unspec:SSEMODEF2P [(match_operand:SSEMODEF2P 1 "nonimmediate_operand" "%0") @@ -9429,7 +9422,7 @@ (match_operand:SI 3 "const_0_to_255_operand" "n")] UNSPEC_DP))] "TARGET_SSE4_1" - "dpp\t{%3, %2, %0|%0, %2, %3}" + "dp\t{%3, %2, %0|%0, %2, %3}" [(set_attr "type" "ssemul") (set_attr "prefix_data16" "1") (set_attr "prefix_extra" "1") @@ -9955,13 +9948,13 @@ ;; ptestps/ptestpd are very similar to comiss and ucomiss when ;; setting FLAGS_REG. But it is not a really compare instruction. 
-(define_insn "avx_vtestp" +(define_insn "avx_vtest" [(set (reg:CC FLAGS_REG) (unspec:CC [(match_operand:AVXMODEF2P 0 "register_operand" "x") (match_operand:AVXMODEF2P 1 "nonimmediate_operand" "xm")] UNSPEC_VTESTP))] "TARGET_AVX" - "vtestp\t{%1, %0|%0, %1}" + "vtest\t{%1, %0|%0, %1}" [(set_attr "type" "ssecomi") (set_attr "prefix_extra" "1") (set_attr "prefix" "vex") @@ -9993,28 +9986,28 @@ (set_attr "prefix" "maybe_vex") (set_attr "mode" "TI")]) -(define_insn "avx_roundp256" +(define_insn "avx_round256" [(set (match_operand:AVX256MODEF2P 0 "register_operand" "=x") (unspec:AVX256MODEF2P [(match_operand:AVX256MODEF2P 1 "nonimmediate_operand" "xm") (match_operand:SI 2 "const_0_to_15_operand" "n")] UNSPEC_ROUND))] "TARGET_AVX" - "vroundp\t{%2, %1, %0|%0, %1, %2}" + "vround\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssecvt") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "sse4_1_roundp" +(define_insn "sse4_1_round" [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x") (unspec:SSEMODEF2P [(match_operand:SSEMODEF2P 1 "nonimmediate_operand" "xm") (match_operand:SI 2 "const_0_to_15_operand" "n")] UNSPEC_ROUND))] "TARGET_ROUND" - "%vroundp\t{%2, %1, %0|%0, %1, %2}" + "%vround\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "ssecvt") (set_attr "prefix_data16" "1") (set_attr "prefix_extra" "1") @@ -10022,7 +10015,7 @@ (set_attr "prefix" "maybe_vex") (set_attr "mode" "")]) -(define_insn "*avx_rounds" +(define_insn "*avx_round" [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x") (vec_merge:SSEMODEF2P (unspec:SSEMODEF2P @@ -10032,14 +10025,14 @@ (match_operand:SSEMODEF2P 1 "register_operand" "x") (const_int 1)))] "TARGET_AVX" - "vrounds\t{%3, %2, %1, %0|%0, %1, %2, %3}" + "vround\t{%3, %2, %1, %0|%0, %1, %2, %3}" [(set_attr "type" "ssecvt") (set_attr "prefix_extra" "1") (set_attr "length_immediate" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "sse4_1_rounds" +(define_insn "sse4_1_round" [(set (match_operand:SSEMODEF2P 0 "register_operand" "=x") (vec_merge:SSEMODEF2P (unspec:SSEMODEF2P @@ -10049,7 +10042,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "0") (const_int 1)))] "TARGET_ROUND" - "rounds\t{%3, %2, %0|%0, %2, %3}" + "round\t{%3, %2, %0|%0, %2, %3}" [(set_attr "type" "ssecvt") (set_attr "prefix_data16" "1") (set_attr "prefix_extra" "1") @@ -11491,7 +11484,7 @@ [(match_operand:SSEMODEF2P 1 "nonimmediate_operand" "xm")] UNSPEC_FRCZ))] "TARGET_XOP" - "vfrcz\t{%1, %0|%0, %1}" + "vfrcz\t{%1, %0|%0, %1}" [(set_attr "type" "ssecvt1") (set_attr "mode" "")]) @@ -11505,7 +11498,7 @@ (match_operand:SSEMODEF2P 1 "register_operand" "0") (const_int 1)))] "TARGET_XOP" - "vfrcz\t{%2, %0|%0, %2}" + "vfrcz\t{%2, %0|%0, %2}" [(set_attr "type" "ssecvt1") (set_attr "mode" "")]) @@ -11515,7 +11508,7 @@ [(match_operand:FMA4MODEF4 1 "nonimmediate_operand" "xm")] UNSPEC_FRCZ))] "TARGET_XOP" - "vfrcz\t{%1, %0|%0, %1}" + "vfrcz\t{%1, %0|%0, %1}" [(set_attr "type" "ssecvt1") (set_attr "mode" "")]) @@ -11595,7 +11588,7 @@ (match_operand:SI 4 "const_0_to_3_operand" "n")] UNSPEC_VPERMIL2))] "TARGET_XOP" - "vpermil2p\t{%4, %3, %2, %1, %0|%0, %1, %2, %3, %4}" + "vpermil2\t{%4, %3, %2, %1, %0|%0, %1, %2, %3, %4}" [(set_attr "type" "sse4arg") (set_attr "length_immediate" "1") (set_attr "mode" "")]) @@ -11812,7 +11805,7 @@ (match_operand: 1 "nonimmediate_operand" "m,?x")))] "TARGET_AVX" "@ - vbroadcasts\t{%1, %0|%0, %1} + vbroadcast\t{%1, %0|%0, %1} #" "&& reload_completed && REG_P (operands[1])" [(set (match_dup 2) 
(vec_duplicate: (match_dup 1))) @@ -11966,7 +11959,7 @@ { int mask = avx_vpermilp_parallel (operands[2], mode) - 1; operands[2] = GEN_INT (mask); - return "vpermilp\t{%2, %1, %0|%0, %1, %2}"; + return "vpermil\t{%2, %1, %0|%0, %1, %2}"; } [(set_attr "type" "sselog") (set_attr "prefix_extra" "1") @@ -11981,7 +11974,7 @@ (match_operand: 2 "nonimmediate_operand" "xm")] UNSPEC_VPERMIL))] "TARGET_AVX" - "vpermilp\t{%2, %1, %0|%0, %1, %2}" + "vpermil\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sselog") (set_attr "prefix_extra" "1") (set_attr "prefix" "vex") @@ -12224,7 +12217,7 @@ (set_attr "prefix" "vex") (set_attr "mode" "V8SF")]) -(define_insn "avx_maskloadp" +(define_insn "avx_maskload" [(set (match_operand:AVXMODEF2P 0 "register_operand" "=x") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "memory_operand" "m") @@ -12232,13 +12225,13 @@ (match_dup 0)] UNSPEC_MASKLOAD))] "TARGET_AVX" - "vmaskmovp\t{%1, %2, %0|%0, %2, %1}" + "vmaskmov\t{%1, %2, %0|%0, %2, %1}" [(set_attr "type" "sselog1") (set_attr "prefix_extra" "1") (set_attr "prefix" "vex") (set_attr "mode" "")]) -(define_insn "avx_maskstorep" +(define_insn "avx_maskstore" [(set (match_operand:AVXMODEF2P 0 "memory_operand" "=m") (unspec:AVXMODEF2P [(match_operand:AVXMODEF2P 1 "register_operand" "x") @@ -12246,7 +12239,7 @@ (match_dup 0)] UNSPEC_MASKSTORE))] "TARGET_AVX" - "vmaskmovp\t{%2, %1, %0|%0, %1, %2}" + "vmaskmov\t{%2, %1, %0|%0, %1, %2}" [(set_attr "type" "sselog1") (set_attr "prefix_extra" "1") (set_attr "prefix" "vex")
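To spell out why two attributes are still needed rather than one (worked
from the mappings at the top of the patch; the "vmul" templates are only
examples, not patterns copied from the file):

;; With mode V4SF:
;;   "vmul<sse_mnemonic_suffix>"        -> "vmulps"   (packed)
;;   "vmul<sse_mnemonic_scalar_suffix>" -> "vmulss"   (scalar)
;; With mode V2DF:
;;   "vmul<sse_mnemonic_suffix>"        -> "vmulpd"
;;   "vmul<sse_mnemonic_scalar_suffix>" -> "vmulsd"

The packed and scalar patterns iterate over the same vector modes but need
different mnemonic suffixes, so they cannot share a single attribute.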