From patchwork Sat Nov 16 11:08:38 2019
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 1196086
From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [committed][AArch64] Add sign and zero extension for partial SVE modes
Date: Sat, 16 Nov 2019 11:08:38 +0000

This patch adds support for extending from partial SVE modes to both
full vector modes and wider partial modes.  Some tests now need
--param aarch64-sve-compare-costs=0 to force the original full-vector
code.

Tested on aarch64-linux-gnu and applied as r278342.

Richard


2019-11-16  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* config/aarch64/iterators.md (SVE_HSDI): New mode iterator.
	(narrower_mask): Handle VNx4HI, VNx2HI and VNx2SI.
	* config/aarch64/aarch64-sve.md
	(<optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): New pattern.
	(*<optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2): Likewise.
	(@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Update
	comment.  Avoid new narrower_mask ambiguity.
	(@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>): Likewise.
	(*cond_uxt<mode>_2): Update comment.
	(*cond_uxt<mode>_any): Likewise.

gcc/testsuite/
	* gcc.target/aarch64/sve/cost_model_1.c: Expect the loop to be
	vectorized with bytes stored in 32-bit containers.
	* gcc.target/aarch64/sve/extend_1.c: New test.
	* gcc.target/aarch64/sve/extend_2.c: New test.
	* gcc.target/aarch64/sve/extend_3.c: New test.
	* gcc.target/aarch64/sve/extend_4.c: New test.
	* gcc.target/aarch64/sve/load_const_offset_3.c: Add
	--param aarch64-sve-compare-costs=0.
	* gcc.target/aarch64/sve/mask_struct_store_1.c: Likewise.
	* gcc.target/aarch64/sve/mask_struct_store_1_run.c: Likewise.
	* gcc.target/aarch64/sve/mask_struct_store_2.c: Likewise.
	* gcc.target/aarch64/sve/mask_struct_store_2_run.c: Likewise.
	* gcc.target/aarch64/sve/unpack_unsigned_1.c: Likewise.
	* gcc.target/aarch64/sve/unpack_unsigned_1_run.c: Likewise.

Index: gcc/config/aarch64/iterators.md
===================================================================
--- gcc/config/aarch64/iterators.md	2019-11-16 11:02:04.689360127 +0000
+++ gcc/config/aarch64/iterators.md	2019-11-16 11:04:07.924489895 +0000
@@ -359,6 +359,11 @@ (define_mode_iterator SVE_I [VNx16QI VNx
 			     VNx4SI VNx2SI
 			     VNx2DI])
 
+;; SVE integer vector modes whose elements are 16 bits or wider.
+(define_mode_iterator SVE_HSDI [VNx8HI VNx4HI VNx2HI
+				VNx4SI VNx2SI
+				VNx2DI])
+
 ;; Modes involved in extending or truncating SVE data, for 8 elements per
 ;; 128-bit block.
 (define_mode_iterator VNx8_NARROW [VNx8QI])
@@ -1364,9 +1369,10 @@ (define_mode_attr self_mask [(VNx8QI "0x
 			     (VNx2HI "0x22")
 			     (VNx2SI "0x24")])
 
-;; For full vector modes, the mask of narrower modes, encoded as above.
-(define_mode_attr narrower_mask [(VNx8HI "0x81")
-				 (VNx4SI "0x43")
+;; For SVE_HSDI vector modes, the mask of narrower modes, encoded as above.
+(define_mode_attr narrower_mask [(VNx8HI "0x81") (VNx4HI "0x41")
+				 (VNx2HI "0x21")
+				 (VNx4SI "0x43")
 				 (VNx2SI "0x23")
 				 (VNx2DI "0x27")])
 
 ;; The constraint to use for an SVE [SU]DOT, FMUL, FMLA or FMLS lane index.
Index: gcc/config/aarch64/aarch64-sve.md
===================================================================
--- gcc/config/aarch64/aarch64-sve.md	2019-11-16 11:02:04.685360155 +0000
+++ gcc/config/aarch64/aarch64-sve.md	2019-11-16 11:04:07.924489895 +0000
@@ -71,8 +71,7 @@
 ;; == Unary arithmetic
 ;; ---- [INT] General unary arithmetic corresponding to rtx codes
 ;; ---- [INT] General unary arithmetic corresponding to unspecs
-;; ---- [INT] Sign extension
-;; ---- [INT] Zero extension
+;; ---- [INT] Sign and zero extension
 ;; ---- [INT] Logical inverse
 ;; ---- [FP<-INT] General unary arithmetic that maps to unspecs
 ;; ---- [FP] General unary arithmetic corresponding to unspecs
@@ -2812,15 +2811,44 @@ (define_insn "@cond_<optab><mode>"
 )
 
 ;; -------------------------------------------------------------------------
-;; ---- [INT] Sign extension
+;; ---- [INT] Sign and zero extension
 ;; -------------------------------------------------------------------------
 ;; Includes:
 ;; - SXTB
 ;; - SXTH
 ;; - SXTW
+;; - UXTB
+;; - UXTH
+;; - UXTW
 ;; -------------------------------------------------------------------------
 
-;; Predicated SXT[BHW].
+;; Unpredicated sign and zero extension from a narrower mode.
+(define_expand "<optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2"
+  [(set (match_operand:SVE_HSDI 0 "register_operand")
+	(unspec:SVE_HSDI
+	  [(match_dup 2)
+	   (ANY_EXTEND:SVE_HSDI
+	     (match_operand:SVE_PARTIAL_I 1 "register_operand"))]
+	  UNSPEC_PRED_X))]
+  "TARGET_SVE && (~<SVE_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
+  {
+    operands[2] = aarch64_ptrue_reg (<SVE_HSDI:VPRED>mode);
+  }
+)
+
+;; Predicated sign and zero extension from a narrower mode.
+(define_insn "*<optab><SVE_PARTIAL_I:mode><SVE_HSDI:mode>2"
+  [(set (match_operand:SVE_HSDI 0 "register_operand" "=w")
+	(unspec:SVE_HSDI
+	  [(match_operand:<SVE_HSDI:VPRED> 1 "register_operand" "Upl")
+	   (ANY_EXTEND:SVE_HSDI
+	     (match_operand:SVE_PARTIAL_I 2 "register_operand" "w"))]
+	  UNSPEC_PRED_X))]
+  "TARGET_SVE && (~<SVE_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
+  "<su>xt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_HSDI:Vetype>, %1/m, %2.<SVE_HSDI:Vetype>"
+)
+
+;; Predicated truncate-and-sign-extend operations.
 (define_insn "@aarch64_pred_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>"
   [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w")
	(unspec:SVE_FULL_HSDI
@@ -2829,11 +2857,12 @@ (define_insn "@aarch64_pred_sxt<SVE_FULL
-  "TARGET_SVE && (~<SVE_FULL_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
+  "TARGET_SVE
+   && (~<SVE_FULL_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
   "sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>"
 )
 
-;; Predicated SXT[BHW] with merging.
+;; Predicated truncate-and-sign-extend operations with merging.
 (define_insn "@aarch64_cond_sxt<SVE_FULL_HSDI:mode><SVE_PARTIAL_I:mode>"
   [(set (match_operand:SVE_FULL_HSDI 0 "register_operand" "=w, ?&w, ?&w")
	(unspec:SVE_FULL_HSDI
@@ -2843,7 +2872,8 @@ (define_insn "@aarch64_cond_sxt<SVE_FULL
-  "TARGET_SVE && (~<SVE_FULL_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
+  "TARGET_SVE
+   && (~<SVE_FULL_HSDI:narrower_mask> & <SVE_PARTIAL_I:self_mask>) == 0"
   "@
    sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
   movprfx\t%0.<SVE_FULL_HSDI:Vetype>, %1/z, %2.<SVE_FULL_HSDI:Vetype>\;sxt<SVE_PARTIAL_I:Vesize>\t%0.<SVE_FULL_HSDI:Vetype>, %1/m, %2.<SVE_FULL_HSDI:Vetype>
@@ -2851,17 +2881,11 @@ (define_insn "@aarch64_cond_sxt<SVE_FULL
 (define_insn "*cond_uxt<mode>_2"
   [(set (match_operand:SVE_FULL_I 0 "register_operand" "=w, ?&w")
	(unspec:SVE_FULL_I
@@ -2878,7 +2902,7 @@ (define_insn "*cond_uxt<mode>_2"
   [(set_attr "movprfx" "*,yes")]
 )
 
-;; Match UXT[BHW] as a conditional AND of a constant, merging with an
+;; Predicated truncate-and-zero-extend operations, merging with an
 ;; independent value.
 ;;
 ;; The earlyclobber isn't needed for the first alternative, but omitting
Index: gcc/testsuite/gcc.target/aarch64/sve/cost_model_1.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/cost_model_1.c	2019-03-18 12:25:05.331409940 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/cost_model_1.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,4 +1,4 @@
-/* { dg-options "-O2 -ftree-vectorize -fdump-tree-vect-details" } */
+/* { dg-options "-O2 -ftree-vectorize" } */
 
 void
 f (unsigned int *restrict x, unsigned int *restrict y,
@@ -8,5 +8,4 @@ f (unsigned int *restrict x, unsigned in
     x[i] = x[i] + y[i] + z[i];
 }
 
-/* { dg-final { scan-tree-dump "not vectorized: estimated iteration count too small" vect } } */
-/* { dg-final { scan-tree-dump "vectorized 0 loops" vect } } */
+/* { dg-final { scan-assembler {\tld1b\tz[0-9]+\.s, p[0-7]/z, \[x2\]\n} } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/extend_1.c
===================================================================
--- /dev/null	2019-09-17 11:41:18.176664108 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/extend_1.c	2019-11-16 11:04:07.924489895 +0000
@@ -0,0 +1,40 @@
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define TEST_LOOP(TYPE1, TYPE2)					\
+  void								\
+  f_##TYPE1##_##TYPE2 (TYPE1 *restrict dst, TYPE1 *restrict src1,	\
+		       TYPE2 *restrict src2, int n)		\
+  {								\
+    for (int i = 0; i < n; ++i)					\
+      dst[i] += src1[i] + (TYPE2) (src2[i] + 1);		\
+  }
+
+#define TEST_ALL(T)			\
+  T (uint16_t, uint8_t)			\
+  T (uint32_t, uint8_t)			\
+  T (uint64_t, uint8_t)			\
+  T (uint32_t, uint16_t)		\
+  T (uint64_t, uint16_t)		\
+  T (uint64_t, uint32_t)
+
+TEST_ALL (TEST_LOOP)
+
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1w\tz[0-9]+\.d,} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.b, z[0-9]+\.b, #1\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.h, z[0-9]+\.h, #1\n} 2 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.s, z[0-9]+\.s, #1\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tuxtb\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxtb\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxtb\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxth\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxth\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxtw\tz[0-9]+\.d,} 1 } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/extend_2.c
===================================================================
--- /dev/null	2019-09-17 11:41:18.176664108 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/extend_2.c	2019-11-16 11:04:07.924489895 +0000
@@ -0,0 +1,40 @@
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include <stdint.h>
+
+#define TEST_LOOP(TYPE1, TYPE2)					\
+  void								\
+  f_##TYPE1##_##TYPE2 (TYPE1 *restrict dst, TYPE1 *restrict src1,	\
+		       TYPE2 *restrict src2, int n)		\
+  {								\
+    for (int i = 0; i < n; ++i)					\
+      dst[i] += src1[i] + (TYPE2) (src2[i] + 1);		\
+  }
+
+#define TEST_ALL(T)			\
+  T (int16_t, int8_t)			\
+  T (int32_t, int8_t)			\
+  T (int64_t, int8_t)			\
+  T (int32_t, int16_t)			\
+  T (int64_t, int16_t)			\
+  T (int64_t, int32_t)
+
+TEST_ALL (TEST_LOOP)
+
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1w\tz[0-9]+\.d,} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.b, z[0-9]+\.b, #1\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.h, z[0-9]+\.h, #1\n} 2 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.s, z[0-9]+\.s, #1\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tsxtb\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxtb\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxtb\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxth\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxth\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxtw\tz[0-9]+\.d,} 1 } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/extend_3.c
===================================================================
--- /dev/null	2019-09-17 11:41:18.176664108 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/extend_3.c	2019-11-16 11:04:07.924489895 +0000
@@ -0,0 +1,25 @@
+/* { dg-options "-O2 -ftree-vectorize -msve-vector-bits=512" } */
+
+#include <stdint.h>
+
+void
+f (uint64_t *dst, uint32_t *restrict src1, uint16_t *restrict src2,
+   uint8_t *restrict src3)
+{
+  for (int i = 0; i < 7; ++i)
+    dst[i] += (uint32_t) (src1[i] + (uint16_t) (src2[i]
+						+ (uint8_t) (src3[i] + 1)));
+}
+
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1w\tz[0-9]+\.d,} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.b, z[0-9]+\.b, #1\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.h, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.s, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.d, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tuxtb\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxth\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tuxtw\tz[0-9]+\.d,} 1 } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/extend_4.c
===================================================================
--- /dev/null	2019-09-17 11:41:18.176664108 +0100
+++ gcc/testsuite/gcc.target/aarch64/sve/extend_4.c	2019-11-16 11:04:07.924489895 +0000
@@ -0,0 +1,25 @@
+/* { dg-options "-O2 -ftree-vectorize -msve-vector-bits=512" } */
+
+#include <stdint.h>
+
+void
+f (int64_t *dst, int32_t *restrict src1, int16_t *restrict src2,
+   int8_t *restrict src3)
+{
+  for (int i = 0; i < 7; ++i)
+    dst[i] += (int32_t) (src1[i] + (int16_t) (src2[i]
+					      + (int8_t) (src3[i] + 1)));
+}
+
+/* { dg-final { scan-assembler-times {\tld1b\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1h\tz[0-9]+\.d,} 1 } } */
+/* { dg-final { scan-assembler-times {\tld1w\tz[0-9]+\.d,} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.b, z[0-9]+\.b, #1\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.h, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.s, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tadd\tz[0-9]+\.d, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tsxtb\tz[0-9]+\.h,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxth\tz[0-9]+\.s,} 1 } } */
+/* { dg-final { scan-assembler-times {\tsxtw\tz[0-9]+\.d,} 1 } } */
Index: gcc/testsuite/gcc.target/aarch64/sve/load_const_offset_3.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/load_const_offset_3.c	2019-03-08 18:14:29.784994721 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/load_const_offset_3.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do assemble { target aarch64_asm_sve_ok } } */
-/* { dg-options "-O2 -ftree-vectorize -save-temps -msve-vector-bits=256" } */
+/* { dg-options "-O2 -ftree-vectorize -save-temps -msve-vector-bits=256 --param aarch64-sve-compare-costs=0" } */
 
 #include "load_const_offset_2.c"
Index: gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1.c	2019-03-08 18:14:29.792994691 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -ftree-vectorize -ffast-math" } */
+/* { dg-options "-O2 -ftree-vectorize -ffast-math --param aarch64-sve-compare-costs=0" } */
 
 #include <stdint.h>
Index: gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1_run.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1_run.c	2019-03-08 18:14:29.780994734 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_1_run.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do run { target aarch64_sve_hw } } */
-/* { dg-options "-O2 -ftree-vectorize -ffast-math" } */
+/* { dg-options "-O2 -ftree-vectorize -ffast-math --param aarch64-sve-compare-costs=0" } */
 
 #include "mask_struct_store_1.c"
Index: gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2.c	2019-03-08 18:14:29.764994797 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -ftree-vectorize -ffast-math" } */
+/* { dg-options "-O2 -ftree-vectorize -ffast-math --param aarch64-sve-compare-costs=0" } */
 
 #include <stdint.h>
Index: gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2_run.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2_run.c	2019-03-08 18:14:29.764994797 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/mask_struct_store_2_run.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do run { target aarch64_sve_hw } } */
-/* { dg-options "-O2 -ftree-vectorize -ffast-math" } */
+/* { dg-options "-O2 -ftree-vectorize -ffast-math --param aarch64-sve-compare-costs=0" } */
 
 #include "mask_struct_store_2.c"
Index: gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1.c	2019-03-08 18:14:29.792994691 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do compile } */
-/* { dg-options "-O2 -ftree-vectorize -fno-inline" } */
+/* { dg-options "-O2 -ftree-vectorize -fno-inline --param aarch64-sve-compare-costs=0" } */
 
 #include <stdint.h>
Index: gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1_run.c
===================================================================
--- gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1_run.c	2019-03-08 18:14:29.768994780 +0000
+++ gcc/testsuite/gcc.target/aarch64/sve/unpack_unsigned_1_run.c	2019-11-16 11:04:07.924489895 +0000
@@ -1,5 +1,5 @@
 /* { dg-do run { target aarch64_sve_hw } } */
-/* { dg-options "-O2 -ftree-vectorize -fno-inline" } */
+/* { dg-options "-O2 -ftree-vectorize -fno-inline --param aarch64-sve-compare-costs=0" } */
 
 #include "unpack_unsigned_1.c"