From patchwork Wed Aug 14 11:01:52 2019
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 1146964
From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [committed][AArch64] Use SVE UXT[BHW] as a form of predicated AND
Date: Wed, 14 Aug 2019 12:01:52 +0100

UXTB, UXTH and UXTW are equivalent to predicated ANDs with the
constants 0xff, 0xffff and 0xffffffff respectively.  This patch
uses them in the patterns for IFN_COND_AND.

Tested on aarch64-linux-gnu (with and without SVE) and aarch64_be-elf.
Applied as r274479.

Richard


2019-08-14  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* config/aarch64/aarch64.c (aarch64_print_operand): Allow %e to
	take the equivalent mask, as well as a bit count.
	* config/aarch64/predicates.md (aarch64_sve_uxtb_immediate)
	(aarch64_sve_uxth_immediate, aarch64_sve_uxt_immediate)
	(aarch64_sve_pred_and_operand): New predicates.
	* config/aarch64/iterators.md (sve_pred_int_rhs2_operand): New
	code attribute.
	* config/aarch64/aarch64-sve.md (cond_<optab><mode>): Use it.
	(*cond_uxt<mode>_2, *cond_uxt<mode>_any): New patterns.

gcc/testsuite/
	* gcc.target/aarch64/sve/cond_uxt_1.c: New test.
	* gcc.target/aarch64/sve/cond_uxt_1_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_2.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_2_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_3.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_3_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_4.c: Likewise.
	* gcc.target/aarch64/sve/cond_uxt_4_run.c: Likewise.

Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2019-08-14 10:18:10.642319210 +0100
+++ gcc/config/aarch64/aarch64.c	2019-08-14 12:00:03.209840337 +0100
@@ -8328,7 +8328,8 @@ sizetochar (int size)
      'D':	Take the duplicated element in a vector constant
		and print it as an unsigned integer, in decimal.
      'e':	Print the sign/zero-extend size as a character 8->b,
-		16->h, 32->w.
+		16->h, 32->w.  Can also be used for masks:
+		0xff->b, 0xffff->h, 0xffffffff->w.
      'I':	If the operand is a duplicated vector constant, replace it
		with the duplicated scalar.  If the operand is then a
		floating-point constant, replace
@@ -8399,27 +8400,22 @@ aarch64_print_operand (FILE *f, rtx x, i
     case 'e':
       {
-	int n;
-
-	if (!CONST_INT_P (x)
-	    || (n = exact_log2 (INTVAL (x) & ~7)) <= 0)
+	x = unwrap_const_vec_duplicate (x);
+	if (!CONST_INT_P (x))
	  {
	    output_operand_lossage ("invalid operand for '%%%c'", code);
	    return;
	  }

-	switch (n)
+	HOST_WIDE_INT val = INTVAL (x);
+	if ((val & ~7) == 8 || val == 0xff)
+	  fputc ('b', f);
+	else if ((val & ~7) == 16 || val == 0xffff)
+	  fputc ('h', f);
+	else if ((val & ~7) == 32 || val == 0xffffffff)
+	  fputc ('w', f);
+	else
	  {
-	  case 3:
-	    fputc ('b', f);
-	    break;
-	  case 4:
-	    fputc ('h', f);
-	    break;
-	  case 5:
-	    fputc ('w', f);
-	    break;
-	  default:
	    output_operand_lossage ("invalid operand for '%%%c'", code);
	    return;
	  }
Index: gcc/config/aarch64/predicates.md
===================================================================
--- gcc/config/aarch64/predicates.md	2019-08-14 10:18:10.642319210 +0100
+++ gcc/config/aarch64/predicates.md	2019-08-14 12:00:03.209840337 +0100
@@ -606,11 +606,26 @@ (define_predicate "aarch64_sve_inc_dec_i
   (and (match_code "const,const_vector")
        (match_test "aarch64_sve_inc_dec_immediate_p (op)")))

+(define_predicate "aarch64_sve_uxtb_immediate"
+  (and (match_code "const_vector")
+       (match_test "GET_MODE_UNIT_BITSIZE (GET_MODE (op)) > 8")
+       (match_test "aarch64_const_vec_all_same_int_p (op, 0xff)")))
+
+(define_predicate "aarch64_sve_uxth_immediate"
+  (and (match_code "const_vector")
+       (match_test "GET_MODE_UNIT_BITSIZE (GET_MODE (op)) > 16")
+       (match_test "aarch64_const_vec_all_same_int_p (op, 0xffff)")))
+
 (define_predicate "aarch64_sve_uxtw_immediate"
   (and (match_code "const_vector")
        (match_test "GET_MODE_UNIT_BITSIZE (GET_MODE (op)) > 32")
        (match_test "aarch64_const_vec_all_same_int_p (op, 0xffffffff)")))

+(define_predicate "aarch64_sve_uxt_immediate"
+  (ior (match_operand 0 "aarch64_sve_uxtb_immediate")
+       (match_operand 0 "aarch64_sve_uxth_immediate")
+       (match_operand 0 "aarch64_sve_uxtw_immediate")))
+
 (define_predicate "aarch64_sve_logical_immediate"
   (and (match_code "const,const_vector")
        (match_test "aarch64_sve_bitmask_immediate_p (op)")))
@@ -670,6 +685,10 @@ (define_predicate "aarch64_sve_add_opera
   (match_operand 0 "aarch64_sve_sub_arith_immediate")
   (match_operand 0 "aarch64_sve_inc_dec_immediate")))

+(define_predicate "aarch64_sve_pred_and_operand"
+  (ior (match_operand 0 "register_operand")
+       (match_operand 0 "aarch64_sve_uxt_immediate")))
+
 (define_predicate "aarch64_sve_logical_operand"
   (ior (match_operand 0 "register_operand")
        (match_operand 0 "aarch64_sve_logical_immediate")))
Index: gcc/config/aarch64/iterators.md
===================================================================
--- gcc/config/aarch64/iterators.md	2019-08-14 10:28:46.145666799 +0100
+++ gcc/config/aarch64/iterators.md	2019-08-14 12:00:03.209840337 +0100
@@ -1525,6 +1525,20 @@ (define_code_attr sve_imm_prefix [(mult
				    (umax "D")
				    (umin "D")])

+;; The predicate to use for the second input operand in a cond_<optab><mode>
+;; pattern.
+(define_code_attr sve_pred_int_rhs2_operand
+  [(plus "register_operand")
+   (minus "register_operand")
+   (mult "register_operand")
+   (smax "register_operand")
+   (umax "register_operand")
+   (smin "register_operand")
+   (umin "register_operand")
+   (and "aarch64_sve_pred_and_operand")
+   (ior "register_operand")
+   (xor "register_operand")])
+
 ;; -------------------------------------------------------------------
 ;; Int Iterators.
 ;; -------------------------------------------------------------------
Index: gcc/config/aarch64/aarch64-sve.md
===================================================================
--- gcc/config/aarch64/aarch64-sve.md	2019-08-14 11:56:54.223221654 +0100
+++ gcc/config/aarch64/aarch64-sve.md	2019-08-14 12:00:03.205840369 +0100
@@ -54,6 +54,7 @@
 ;;
 ;; == Unary arithmetic
 ;; ---- [INT] General unary arithmetic corresponding to rtx codes
+;; ---- [INT] Zero extension
 ;; ---- [INT] Logical inverse
 ;; ---- [FP] General unary arithmetic corresponding to unspecs
 ;; ---- [PRED] Inverse
@@ -1494,6 +1495,58 @@ (define_insn "*cond_<optab><mode>_any"
 )

 ;; -------------------------------------------------------------------------
+;; ---- [INT] Zero extension
+;; -------------------------------------------------------------------------
+;; Includes:
+;; - UXTB
+;; - UXTH
+;; - UXTW
+;; -------------------------------------------------------------------------
+
+;; Match UXT[BHW] as a conditional AND of a constant, merging with the
+;; first input.
+(define_insn "*cond_uxt<mode>_2"
+  [(set (match_operand:SVE_I 0 "register_operand" "=w, ?&w")
+	(unspec:SVE_I
+	  [(match_operand:<VPRED> 1 "register_operand" "Upl, Upl")
+	   (and:SVE_I
+	     (match_operand:SVE_I 2 "register_operand" "0, w")
+	     (match_operand:SVE_I 3 "aarch64_sve_uxt_immediate"))
+	   (match_dup 2)]
+	  UNSPEC_SEL))]
+  "TARGET_SVE"
+  "@
+   uxt%e3\t%0.<Vetype>, %1/m, %0.<Vetype>
+   movprfx\t%0, %2\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>"
+  [(set_attr "movprfx" "*,yes")]
+)
+
+;; Match UXT[BHW] as a conditional AND of a constant, merging with an
+;; independent value.
+;;
+;; The earlyclobber isn't needed for the first alternative, but omitting
+;; it would only help the case in which operands 2 and 4 are the same,
+;; which is handled above rather than here.  Marking all the alternatives
+;; as early-clobber helps to make the instruction more regular to the
+;; register allocator.
+(define_insn "*cond_uxt<mode>_any"
+  [(set (match_operand:SVE_I 0 "register_operand" "=&w, ?&w, ?&w")
+	(unspec:SVE_I
+	  [(match_operand:<VPRED> 1 "register_operand" "Upl, Upl, Upl")
+	   (and:SVE_I
+	     (match_operand:SVE_I 2 "register_operand" "w, w, w")
+	     (match_operand:SVE_I 3 "aarch64_sve_uxt_immediate"))
+	   (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero" "0, Dz, w")]
+	  UNSPEC_SEL))]
+  "TARGET_SVE && !rtx_equal_p (operands[2], operands[4])"
+  "@
+   uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>
+   movprfx\t%0.<Vetype>, %1/z, %2.<Vetype>\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>
+   movprfx\t%0, %4\;uxt%e3\t%0.<Vetype>, %1/m, %2.<Vetype>"
+  [(set_attr "movprfx" "*,yes,yes")]
+)
+
+;; -------------------------------------------------------------------------
 ;; ---- [INT] Logical inverse
 ;; -------------------------------------------------------------------------

@@ -1794,7 +1847,7 @@ (define_expand "cond_<optab><mode>"
	  [(match_operand:<VPRED> 1 "register_operand")
	   (SVE_INT_BINARY:SVE_I
	     (match_operand:SVE_I 2 "register_operand")
-	     (match_operand:SVE_I 3 "register_operand"))
+	     (match_operand:SVE_I 3 "<sve_pred_int_rhs2_operand>"))
	   (match_operand:SVE_I 4 "aarch64_simd_reg_or_zero")]
	  UNSPEC_SEL))]
   "TARGET_SVE"
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_1.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_1.c	2019-08-14 12:00:03.209840337 +0100
@@ -0,0 +1,40 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define NUM_ELEMS(TYPE) (320 / sizeof (TYPE))
+
+#define DEF_LOOP(TYPE, CONST)					\
+  void __attribute__ ((noipa))					\
+  test_##CONST##_##TYPE (TYPE *restrict r, TYPE *restrict a,	\
+			 TYPE *restrict b)			\
+  {								\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)			\
+      r[i] = a[i] > 20 ? b[i] & CONST : b[i];			\
+  }
+
+#define TEST_ALL(T)			\
+  T (uint16_t, 0xff)			\
+					\
+  T (uint32_t, 0xff)			\
+  T (uint32_t, 0xffff)			\
+					\
+  T (uint64_t, 0xff)			\
+  T (uint64_t, 0xffff)			\
+  T (uint64_t, 0xffffffff)
+
+TEST_ALL (DEF_LOOP)
+
+/* { dg-final { scan-assembler {\tld1h\t(z[0-9]+\.h), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \1\n} } } */
+
+/* { dg-final { scan-assembler {\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \1\n} } } */
+/* { dg-final { scan-assembler {\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x2,[^L]*\tuxth\t\1, p[0-7]/m, \1\n} } } */
+
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \1\n} } } */
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxth\t\1, p[0-7]/m, \1\n} } } */
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxtw\t\1, p[0-7]/m, \1\n} } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_1_run.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_1_run.c	2019-08-14 12:00:03.209840337 +0100
@@ -0,0 +1,27 @@
+/* { dg-do run { target { aarch64_sve_hw } } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_uxt_1.c"
+
+#define TEST_LOOP(TYPE, CONST)				\
+  {							\
+    TYPE r[NUM_ELEMS (TYPE)];				\
+    TYPE a[NUM_ELEMS (TYPE)];				\
+    TYPE b[NUM_ELEMS (TYPE)];				\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      {							\
+	a[i] = (i & 1 ? i : 3 * i);			\
+	b[i] = (i >> 4) << (i & 15);			\
+	asm volatile ("" ::: "memory");			\
+      }							\
+    test_##CONST##_##TYPE (r, a, b);			\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      if (r[i] != (a[i] > 20 ? b[i] & CONST : b[i]))	\
+	__builtin_abort ();				\
+  }
+
+int main ()
+{
+  TEST_ALL (TEST_LOOP)
+  return 0;
+}
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_2.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_2.c	2019-08-14 12:00:03.209840337 +0100
@@ -0,0 +1,40 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define NUM_ELEMS(TYPE) (320 / sizeof (TYPE))
+
+#define DEF_LOOP(TYPE, CONST)					\
+  void __attribute__ ((noipa))					\
+  test_##CONST##_##TYPE (TYPE *restrict r, TYPE *restrict a,	\
+			 TYPE *restrict b)			\
+  {								\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)			\
+      r[i] = a[i] > 20 ? b[i] & CONST : a[i];			\
+  }
+
+#define TEST_ALL(T)			\
+  T (uint16_t, 0xff)			\
+					\
+  T (uint32_t, 0xff)			\
+  T (uint32_t, 0xffff)			\
+					\
+  T (uint64_t, 0xff)			\
+  T (uint64_t, 0xffff)			\
+  T (uint64_t, 0xffffffff)
+
+TEST_ALL (DEF_LOOP)
+
+/* { dg-final { scan-assembler {\tld1h\t(z[0-9]+\.h), p[0-7]/z, \[x1,[^L]*\tld1h\t(z[0-9]+\.h), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \2\n} } } */
+
+/* { dg-final { scan-assembler {\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x1,[^L]*\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \2\n} } } */
+/* { dg-final { scan-assembler {\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x1,[^L]*\tld1w\t(z[0-9]+\.s), p[0-7]/z, \[x2,[^L]*\tuxth\t\1, p[0-7]/m, \2\n} } } */
+
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x1,[^L]*\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxtb\t\1, p[0-7]/m, \2\n} } } */
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x1,[^L]*\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxth\t\1, p[0-7]/m, \2\n} } } */
+/* { dg-final { scan-assembler {\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x1,[^L]*\tld1d\t(z[0-9]+\.d), p[0-7]/z, \[x2,[^L]*\tuxtw\t\1, p[0-7]/m, \2\n} } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_2_run.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_2_run.c	2019-08-14 12:00:03.213840310 +0100
@@ -0,0 +1,27 @@
+/* { dg-do run { target { aarch64_sve_hw } } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_uxt_2.c"
+
+#define TEST_LOOP(TYPE, CONST)				\
+  {							\
+    TYPE r[NUM_ELEMS (TYPE)];				\
+    TYPE a[NUM_ELEMS (TYPE)];				\
+    TYPE b[NUM_ELEMS (TYPE)];				\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      {							\
+	a[i] = (i & 1 ? i : 3 * i);			\
+	b[i] = (i >> 4) << (i & 15);			\
+	asm volatile ("" ::: "memory");			\
+      }							\
+    test_##CONST##_##TYPE (r, a, b);			\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      if (r[i] != (a[i] > 20 ? b[i] & CONST : a[i]))	\
+	__builtin_abort ();				\
+  }
+
+int main ()
+{
+  TEST_ALL (TEST_LOOP)
+  return 0;
+}
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_3.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_3.c	2019-08-14 12:00:03.213840310 +0100
@@ -0,0 +1,39 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define NUM_ELEMS(TYPE) (320 / sizeof (TYPE))
+
+#define DEF_LOOP(TYPE, CONST)					\
+  void __attribute__ ((noipa))					\
+  test_##CONST##_##TYPE (TYPE *restrict r, TYPE *restrict a,	\
+			 TYPE *restrict b)			\
+  {								\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)			\
+      r[i] = a[i] > 20 ? b[i] & CONST : 127;			\
+  }
+
+#define TEST_ALL(T)			\
+  T (uint16_t, 0xff)			\
+					\
+  T (uint32_t, 0xff)			\
+  T (uint32_t, 0xffff)			\
+					\
+  T (uint64_t, 0xff)			\
+  T (uint64_t, 0xffff)			\
+  T (uint64_t, 0xffffffff)
+
+TEST_ALL (DEF_LOOP)
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxtb\t\1\.h, p[0-7]/m, z[0-9]+\.h\n} } } */
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxtb\t\1\.s, p[0-7]/m, z[0-9]+\.s\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxth\t\1\.s, p[0-7]/m, z[0-9]+\.s\n} } } */
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxtb\t\1\.d, p[0-7]/m, z[0-9]+\.d\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxth\t\1\.d, p[0-7]/m, z[0-9]+\.d\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+), z[0-9]+\n\tuxtw\t\1\.d, p[0-7]/m, z[0-9]+\.d\n} } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz[^\n]*z} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_3_run.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_3_run.c	2019-08-14 12:00:03.213840310 +0100
@@ -0,0 +1,27 @@
+/* { dg-do run { target { aarch64_sve_hw } } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_uxt_3.c"
+
+#define TEST_LOOP(TYPE, CONST)				\
+  {							\
+    TYPE r[NUM_ELEMS (TYPE)];				\
+    TYPE a[NUM_ELEMS (TYPE)];				\
+    TYPE b[NUM_ELEMS (TYPE)];				\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      {							\
+	a[i] = (i & 1 ? i : 3 * i);			\
+	b[i] = (i >> 4) << (i & 15);			\
+	asm volatile ("" ::: "memory");			\
+      }							\
+    test_##CONST##_##TYPE (r, a, b);			\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      if (r[i] != (a[i] > 20 ? b[i] & CONST : 127))	\
+	__builtin_abort ();				\
+  }
+
+int main ()
+{
+  TEST_ALL (TEST_LOOP)
+  return 0;
+}
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_4.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_4.c	2019-08-14 12:00:03.213840310 +0100
@@ -0,0 +1,36 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define NUM_ELEMS(TYPE) (320 / sizeof (TYPE))
+
+#define DEF_LOOP(TYPE, CONST)					\
+  void __attribute__ ((noipa))					\
+  test_##CONST##_##TYPE (TYPE *restrict r, TYPE *restrict a,	\
+			 TYPE *restrict b)			\
+  {								\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)			\
+      r[i] = a[i] > 20 ? b[i] & CONST : 0;			\
+  }
+
+#define TEST_ALL(T)			\
+  T (uint16_t, 0xff)			\
+					\
+  T (uint32_t, 0xff)			\
+  T (uint32_t, 0xffff)			\
+					\
+  T (uint64_t, 0xff)			\
+  T (uint64_t, 0xffff)			\
+  T (uint64_t, 0xffffffff)
+
+TEST_ALL (DEF_LOOP)
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.h), (p[0-7])/z, z[0-9]+\.h\n\tuxtb\t\1, \2/m, z[0-9]+\.h\n} } } */
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.s), (p[0-7])/z, z[0-9]+\.s\n\tuxtb\t\1, \2/m, z[0-9]+\.s\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.s), (p[0-7])/z, z[0-9]+\.s\n\tuxth\t\1, \2/m, z[0-9]+\.s\n} } } */
+
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.d), (p[0-7])/z, z[0-9]+\.d\n\tuxtb\t\1, \2/m, z[0-9]+\.d\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.d), (p[0-7])/z, z[0-9]+\.d\n\tuxth\t\1, \2/m, z[0-9]+\.d\n} } } */
+/* { dg-final { scan-assembler {\tmovprfx\t(z[0-9]+\.d), (p[0-7])/z, z[0-9]+\.d\n\tuxtw\t\1, \2/m, z[0-9]+\.d\n} } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_4_run.c
===================================================================
--- /dev/null	2019-07-30 08:53:31.317691683 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/cond_uxt_4_run.c	2019-08-14 12:00:03.213840310 +0100
@@ -0,0 +1,27 @@
+/* { dg-do run { target { aarch64_sve_hw } } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_uxt_4.c"
+
+#define TEST_LOOP(TYPE, CONST)				\
+  {							\
+    TYPE r[NUM_ELEMS (TYPE)];				\
+    TYPE a[NUM_ELEMS (TYPE)];				\
+    TYPE b[NUM_ELEMS (TYPE)];				\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      {							\
+	a[i] = (i & 1 ? i : 3 * i);			\
+	b[i] = (i >> 4) << (i & 15);			\
+	asm volatile ("" ::: "memory");			\
+      }							\
+    test_##CONST##_##TYPE (r, a, b);			\
+    for (int i = 0; i < NUM_ELEMS (TYPE); ++i)		\
+      if (r[i] != (a[i] > 20 ? b[i] & CONST : 0))	\
+	__builtin_abort ();				\
+  }
+
+int main ()
+{
+  TEST_ALL (TEST_LOOP)
+  return 0;
+}