From patchwork Tue Aug 12 12:58:06 2014
X-Patchwork-Submitter: Kirill Yukhin
X-Patchwork-Id: 379325
Date: Tue, 12 Aug 2014 16:58:06 +0400
From: Kirill Yukhin
To: Uros Bizjak
Cc: Jakub Jelinek, Richard Henderson, GCC Patches, kirill.yukhin@gmail.com
Subject: [PATCH i386 AVX512] [8/n] Extend substs for new patterns.
Message-ID: <20140812125804.GB916@msticlxl57.ims.intel.com>

Hello,
This patch extends the substs and subst_attrs in subst.md so that they can be
used with the new patterns.

Bootstrapped.

Is it ok for trunk?

gcc/
	* config/i386/sse.md: Allow V64QI, V32QI, V32HI, V4HI modes.
	* config/i386/subst.md (define_mode_iterator SUBST_V): Update.
	(define_mode_iterator SUBST_A): Ditto.
	(define_subst_attr "mask_operand7"): New.
	(define_subst_attr "mask_operand10"): New.
	(define_subst_attr "mask_operand_arg34"): New.
	(define_subst_attr "mask_expand_op3"): New.
	(define_subst_attr "mask_mode512bit_condition"): Handle
	TARGET_AVX512VL.
	(define_subst_attr "sd_mask_mode512bit_condition"): Ditto.
	(define_subst_attr "round_mask_operand4"): New.
	(define_subst_attr "round_mask_scalar_op3"): Delete.
	(define_subst_attr "round_mask_op4"): New.
	(define_subst_attr "round_mode512bit_condition"): Allow V8DImode,
	V16SImode.
	(define_subst_attr "round_modev8sf_condition"): New.
	(define_subst_attr "round_modev4sf_condition"): Use
	GET_MODE (operands[0]) instead of <MODE>mode.
	(define_subst_attr "round_saeonly_mask_operand4"): New.
	(define_subst_attr "round_saeonly_mask_op4"): New.
	(define_subst_attr "round_saeonly_mode512bit_condition"): Allow
	V8DImode, V16SImode.
	(define_subst_attr "round_saeonly_modev8sf_condition"): New.
	(define_subst_attr "mask_expand4_name" "mask_expand4"): New.
	(define_subst_attr "mask_expand4_args"): New.
	(define_subst "mask_expand4"): New.

---
Thanks, K

diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index 3337104..ebe38f3 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -471,8 +471,8 @@
 ;; Mapping of vector modes to corresponding mask size
 (define_mode_attr avx512fmaskmode
-  [(V16QI "HI")
-   (V16HI "HI") (V8HI "QI")
+  [(V64QI "DI") (V32QI "SI") (V16QI "HI")
+   (V32HI "SI") (V16HI "HI") (V8HI "QI") (V4HI "QI")
   (V16SI "HI") (V8SI "QI") (V4SI "QI")
   (V8DI "QI") (V4DI "QI") (V2DI "QI")
   (V16SF "HI") (V8SF "QI") (V4SF "QI")
diff --git a/gcc/config/i386/subst.md b/gcc/config/i386/subst.md
index 1654cba..8826533 100644
--- a/gcc/config/i386/subst.md
+++ b/gcc/config/i386/subst.md
@@ -20,8 +20,8 @@
 ;; Some iterators for extending subst as much as possible
 ;; All vectors (Use it for destination)
 (define_mode_iterator SUBST_V
-  [V16QI
-   V16HI V8HI
+  [V64QI V32QI V16QI
+   V32HI V16HI V8HI
   V16SI V8SI V4SI
   V8DI V4DI V2DI
   V16SF V8SF V4SF
@@ -31,8 +31,8 @@
   [QI HI SI DI])
 
 (define_mode_iterator SUBST_A
-  [V16QI
-   V16HI V8HI
+  [V64QI V32QI V16QI
+   V32HI V16HI V8HI
   V16SI V8SI V4SI
   V8DI V4DI V2DI
   V16SF V8SF V4SF
@@ -47,16 +47,20 @@
 (define_subst_attr "mask_operand3_1" "mask" "" "%%{%%4%%}%%N3") ;; for sprintf
 (define_subst_attr "mask_operand4" "mask" "" "%{%5%}%N4")
 (define_subst_attr "mask_operand6" "mask" "" "%{%7%}%N6")
+(define_subst_attr "mask_operand7" "mask" "" "%{%8%}%N7")
+(define_subst_attr "mask_operand10" "mask" "" "%{%11%}%N10")
 (define_subst_attr "mask_operand11" "mask" "" "%{%12%}%N11")
 (define_subst_attr "mask_operand18" "mask" "" "%{%19%}%N18")
 (define_subst_attr "mask_operand19" "mask" "" "%{%20%}%N19")
 (define_subst_attr "mask_codefor" "mask" "*" "")
-(define_subst_attr "mask_mode512bit_condition" "mask" "1" "(<MODE_SIZE> == 64)")
+(define_subst_attr "mask_operand_arg34" "mask" "" ", operands[3], operands[4]")
+(define_subst_attr "mask_mode512bit_condition" "mask" "1" "(GET_MODE_SIZE (GET_MODE (operands[0])) == 64 || TARGET_AVX512VL)")
 (define_subst_attr "store_mask_constraint" "mask" "vm" "v")
 (define_subst_attr "store_mask_predicate" "mask" "nonimmediate_operand" "register_operand")
 (define_subst_attr "mask_prefix" "mask" "vex" "evex")
 (define_subst_attr "mask_prefix2" "mask" "maybe_vex" "evex")
 (define_subst_attr "mask_prefix3" "mask" "orig,vex" "evex")
+(define_subst_attr "mask_expand_op3" "mask" "3" "5")
 
 (define_subst "mask"
   [(set (match_operand:SUBST_V 0)
@@ -85,7 +89,7 @@
 (define_subst_attr "sd_mask_op4" "sd" "" "%{%5%}%N4")
 (define_subst_attr "sd_mask_op5" "sd" "" "%{%6%}%N5")
 (define_subst_attr "sd_mask_codefor" "sd" "*" "")
-(define_subst_attr "sd_mask_mode512bit_condition" "sd" "1" "(<MODE_SIZE> == 64)")
+(define_subst_attr "sd_mask_mode512bit_condition" "sd" "1" "(<MODE_SIZE> == 64 || TARGET_AVX512VL)")
 
 (define_subst "sd"
  [(set (match_operand:SUBST_V 0)
@@ -101,6 +105,7 @@
 (define_subst_attr "round_name" "round" "" "_round")
 (define_subst_attr "round_mask_operand2" "mask" "%R2" "%R4")
 (define_subst_attr "round_mask_operand3" "mask" "%R3" "%R5")
+(define_subst_attr "round_mask_operand4" "mask" "%R4" "%R6")
 (define_subst_attr "round_sd_mask_operand4" "sd" "%R4" "%R6")
 (define_subst_attr "round_op2" "round" "" "%R2")
 (define_subst_attr "round_op3" "round" "" "%R3")
@@ -109,15 +114,19 @@
 (define_subst_attr "round_op6" "round" "" "%R6")
 (define_subst_attr "round_mask_op2" "round" "" "<round_mask_operand2>")
 (define_subst_attr "round_mask_op3" "round" "" "<round_mask_operand3>")
-(define_subst_attr "round_mask_scalar_op3" "round" "" "<round_mask_scalar_operand3>")
+(define_subst_attr "round_mask_op4" "round" "" "<round_mask_operand4>")
 (define_subst_attr "round_sd_mask_op4" "round" "" "<round_sd_mask_operand4>")
 (define_subst_attr "round_constraint" "round" "vm" "v")
 (define_subst_attr "round_constraint2" "round" "m" "v")
 (define_subst_attr "round_constraint3" "round" "rm" "r")
 (define_subst_attr "round_nimm_predicate" "round" "nonimmediate_operand" "register_operand")
 (define_subst_attr "round_prefix" "round" "vex" "evex")
-(define_subst_attr "round_mode512bit_condition" "round" "1" "(<MODE>mode == V16SFmode || <MODE>mode == V8DFmode)")
-(define_subst_attr "round_modev4sf_condition" "round" "1" "(<MODE>mode == V4SFmode)")
+(define_subst_attr "round_mode512bit_condition" "round" "1" "(GET_MODE (operands[0]) == V16SFmode
+							      || GET_MODE (operands[0]) == V8DFmode
+							      || GET_MODE (operands[0]) == V8DImode
+							      || GET_MODE (operands[0]) == V16SImode)")
+(define_subst_attr "round_modev8sf_condition" "round" "1" "(GET_MODE (operands[0]) == V8SFmode)")
+(define_subst_attr "round_modev4sf_condition" "round" "1" "(GET_MODE (operands[0]) == V4SFmode)")
 (define_subst_attr "round_codefor" "round" "*" "")
 (define_subst_attr "round_opnum" "round" "5" "6")
 
@@ -133,6 +142,7 @@
 (define_subst_attr "round_saeonly_name" "round_saeonly" "" "_round")
 (define_subst_attr "round_saeonly_mask_operand2" "mask" "%r2" "%r4")
 (define_subst_attr "round_saeonly_mask_operand3" "mask" "%r3" "%r5")
+(define_subst_attr "round_saeonly_mask_operand4" "mask" "%r4" "%r6")
 (define_subst_attr "round_saeonly_mask_scalar_merge_operand4" "mask_scalar_merge" "%r4" "%r5")
 (define_subst_attr "round_saeonly_sd_mask_operand5" "sd" "%r5" "%r7")
 (define_subst_attr "round_saeonly_op2" "round_saeonly" "" "%r2")
@@ -143,12 +153,17 @@
 (define_subst_attr "round_saeonly_prefix" "round_saeonly" "vex" "evex")
 (define_subst_attr "round_saeonly_mask_op2" "round_saeonly" "" "<round_saeonly_mask_operand2>")
 (define_subst_attr "round_saeonly_mask_op3" "round_saeonly" "" "<round_saeonly_mask_operand3>")
+(define_subst_attr "round_saeonly_mask_op4" "round_saeonly" "" "<round_saeonly_mask_operand4>")
 (define_subst_attr "round_saeonly_mask_scalar_merge_op4" "round_saeonly" "" "<round_saeonly_mask_scalar_merge_operand4>")
 (define_subst_attr "round_saeonly_sd_mask_op5" "round_saeonly" "" "<round_saeonly_sd_mask_operand5>")
 (define_subst_attr "round_saeonly_constraint" "round_saeonly" "vm" "v")
 (define_subst_attr "round_saeonly_constraint2" "round_saeonly" "m" "v")
 (define_subst_attr "round_saeonly_nimm_predicate" "round_saeonly" "nonimmediate_operand" "register_operand")
-(define_subst_attr "round_saeonly_mode512bit_condition" "round_saeonly" "1" "(<MODE>mode == V16SFmode || <MODE>mode == V8DFmode)")
+(define_subst_attr "round_saeonly_mode512bit_condition" "round_saeonly" "1" "(<MODE>mode == V16SFmode
+									      || <MODE>mode == V8DFmode
+									      || <MODE>mode == V8DImode
+									      || <MODE>mode == V16SImode)")
+(define_subst_attr "round_saeonly_modev8sf_condition" "round_saeonly" "1" "(<MODE>mode == V8SFmode)")
 
 (define_subst "round_saeonly"
   [(set (match_operand:SUBST_A 0)
@@ -196,3 +211,19 @@
 	     (match_dup 4)
 	     (match_dup 5)
 	     (unspec [(match_operand:SI 6 "const48_operand")] UNSPEC_EMBEDDED_ROUNDING)])
+
+(define_subst_attr "mask_expand4_name" "mask_expand4" "" "_mask")
+(define_subst_attr "mask_expand4_args" "mask_expand4" "" ", operands[4], operands[5]")
+
+(define_subst "mask_expand4"
+  [(match_operand:SUBST_V 0)
+   (match_operand:SUBST_V 1)
+   (match_operand:SUBST_V 2)
+   (match_operand:SI 3)]
+  "TARGET_AVX512VL"
+  [(match_dup 0)
+   (match_dup 1)
+   (match_dup 2)
+   (match_dup 3)
+   (match_operand:SUBST_V 4 "vector_move_operand")
+   (match_operand:<avx512fmaskmode> 5 "register_operand")])
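
P.S. For readers less familiar with the subst machinery, here is an
illustrative sketch (not part of the patch; "foo" and its operands are
hypothetical) of how the new "mask_expand4" subst is meant to be used: an
expander named with <mask_expand4_name> gets a generated "_mask" twin that
takes two extra operands, the merge source and a mask whose mode comes from
the <avx512fmaskmode> attribute extended above.

```
;; Hypothetical expander; "foo" is a placeholder name.
(define_expand "foo<mask_expand4_name>"
  [(match_operand:V8SI 0 "register_operand")
   (match_operand:V8SI 1 "register_operand")
   (match_operand:V8SI 2 "nonimmediate_operand")
   (match_operand:SI 3 "const_0_to_255_operand")]
  "TARGET_AVX2"
{
  ;; <mask_expand4_args> is "" in the unmasked expander and
  ;; ", operands[4], operands[5]" in the generated "foo_mask" twin,
  ;; forwarding the merge source and the mask to the masked insn.
  emit_insn (gen_foo_internal (operands[0], operands[1], operands[2],
			       operands[3]<mask_expand4_args>));
  DONE;
})
```

The generated "foo_mask" expander additionally matches operand 4
(vector_move_operand, the merge source) and operand 5 (an <avx512fmaskmode>
mask register), exactly as listed in the "mask_expand4" subst at the end of
the patch.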