From patchwork Tue Sep 20 15:49:22 2016
X-Patchwork-Submitter: Bernd Edlinger
X-Patchwork-Id: 672336
From: Bernd Edlinger
To: Richard Biener
CC: "gcc-patches@gcc.gnu.org"
Subject: Re: [PATCH] Fix PR
 tree-optimization/77550
Date: Tue, 20 Sep 2016 15:49:22 +0000

On 09/20/16 09:41, Richard Biener wrote:
> On Mon, 19 Sep 2016, Bernd Edlinger wrote:
>
>> On 09/19/16 11:25, Richard Biener wrote:
>>> On Sun, 18 Sep 2016, Bernd Edlinger wrote:
>>>
>>>> Hi,
>>>>
>>>> this PR shows that in vectorizable_store and vectorizable_load
>>>> as well, the vector access always uses the first dr as the alias
>>>> type for the whole access.  But that is not right, if they are
>>>> different types, like in this example.
>>>>
>>>> So I tried to replace all reference_alias_ptr_type (DR_REF (first_dr))
>>>> by an alias type that is correct for all references in the whole
>>>> access group.  With this patch we fall back to ptr_type_node, which
>>>> can alias anything, if the group consists of different alias sets.
>>>>
>>>>
>>>> Bootstrapped and reg-tested on x86_64-pc-linux-gnu.
>>>> Is it OK for trunk and gcc-6-branch?
>>>
>>> +/* Function get_group_alias_ptr_type.
>>> +
>>> +   Return the alias type for the group starting at FIRST_STMT
>>> +   containing GROUP_SIZE elements.  */
>>> +
>>> +static tree
>>> +get_group_alias_ptr_type (gimple *first_stmt, int group_size)
>>> +{
>>> +  struct data_reference *first_dr, *next_dr;
>>> +  gimple *next_stmt;
>>> +  int i;
>>> +
>>> +  first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
>>> +  next_stmt = GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
>>> +  for (i = 1; i < group_size && next_stmt; i++)
>>> +    {
>>>
>>>
>>> there is no need to pass in group_size, it's enough to walk
>>> GROUP_NEXT_ELEMENT until it becomes NULL.
>>>
>>> Ok with removing the redundant arg.
>>>
>>> Thanks,
>>> Richard.
>>>
>>
>> Hmmm, I'm afraid this needs one more iteration.
>>
>>
>> I tried first to check if there are no stmts after the group_size
>> and noticed there are cases when group_size is 0,
>> for instance in gcc.dg/torture/pr36244.c.
>>
>> I think there is a bug in vectorizable_load, here:
>>
>>   if (grouped_load)
>>     {
>>       first_stmt = GROUP_FIRST_ELEMENT (stmt_info);
>>       /* For SLP vectorization we directly vectorize a subchain
>>          without permutation.  */
>>       if (slp && ! SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
>>         first_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[0];
>>
>>       group_size = GROUP_SIZE (vinfo_for_stmt (first_stmt));
>>
>> = 0, and even worse:
>>
>>       group_gap_adj = vf * group_size - nunits * vec_num;
>>
>> = -4 !
>>
>> apparently GROUP_SIZE is only valid on the GROUP_FIRST_ELEMENT,
>
> Yes.  I'm not sure group_size or group_gap_adj are used in the
> slp && ! SLP_TREE_LOAD_PERMUTATION (slp_node).exists () case but moving
> the computation up before we re-set first_stmt is probably a good idea.
>
>> while it may be 0 on SLP_TREE_SCALAR_STMTS (slp_node)[0]
>>
>> moving the GROUP_SIZE up before first_stmt is overwritten
>> results in no different code....
>
> See above - it's eventually unused.  The load/store vectorization code
> is quite twisted ;)
>

Agreed.  Here is the new version of the patch:

Moved the group_size computation up, and everything works fine.
Removed the now-redundant group_size parameter from
get_group_alias_ptr_type.

In the case where first_stmt is not set to GROUP_FIRST_ELEMENT (stmt_info)
but directly to stmt, the statement is likely somewhere in the middle of a
group list, so it is not necessary to walk GROUP_NEXT_ELEMENT; I call
reference_alias_ptr_type directly in that case.

Bootstrapped and reg-tested on x86_64-pc-linux-gnu.
Is it OK for trunk and the gcc-6 branch?


Thanks
Bernd.


gcc:
2016-09-18  Bernd Edlinger

	PR tree-optimization/77550
	* tree-vect-stmts.c (create_array_ref): Change parameters.
	(get_group_alias_ptr_type): New function.
	(vectorizable_store, vectorizable_load): Use get_group_alias_ptr_type.

testsuite:
2016-09-18  Bernd Edlinger

	PR tree-optimization/77550
	* g++.dg/pr77550.C: New test.
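
To illustrate the kind of access group this is about (a rough sketch only,
not the attached pr77550.C testcase; struct P and the fill function are
made-up names), an interleaved store group whose members have different
alias sets can come from code along these lines:

/* Sketch: the two member stores form one interleaved store group, but
   p[k].i has the alias set of int and p[k].f has the alias set of float,
   so the alias type of the first dr alone does not describe the whole
   vectorized access.  */
struct P { int i; float f; };

void
fill (P *p, int n)
{
  for (int k = 0; k < n; ++k)
    {
      p[k].i = k;
      p[k].f = 1.0f;
    }
}
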
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	(revision 240251)
+++ gcc/tree-vect-stmts.c	(working copy)
@@ -170,11 +170,10 @@ write_vector_array (gimple *stmt, gimple_stmt_iter
    (and its group).  */
 
 static tree
-create_array_ref (tree type, tree ptr, struct data_reference *first_dr)
+create_array_ref (tree type, tree ptr, tree alias_ptr_type)
 {
-  tree mem_ref, alias_ptr_type;
+  tree mem_ref;
 
-  alias_ptr_type = reference_alias_ptr_type (DR_REF (first_dr));
   mem_ref = build2 (MEM_REF, type, ptr, build_int_cst (alias_ptr_type, 0));
   /* Arrays have the same alignment as their type.  */
   set_ptr_info_alignment (get_ptr_info (ptr), TYPE_ALIGN_UNIT (type), 0);
@@ -5432,6 +5431,35 @@ ensure_base_align (stmt_vec_info stmt_info, struct
 }
 
 
+/* Function get_group_alias_ptr_type.
+
+   Return the alias type for the group starting at FIRST_STMT.  */
+
+static tree
+get_group_alias_ptr_type (gimple *first_stmt)
+{
+  struct data_reference *first_dr, *next_dr;
+  gimple *next_stmt;
+
+  first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+  next_stmt = GROUP_NEXT_ELEMENT (vinfo_for_stmt (first_stmt));
+  while (next_stmt)
+    {
+      next_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (next_stmt));
+      if (get_alias_set (DR_REF (first_dr))
+          != get_alias_set (DR_REF (next_dr)))
+        {
+          if (dump_enabled_p ())
+            dump_printf_loc (MSG_NOTE, vect_location,
+                             "conflicting alias set types.\n");
+          return ptr_type_node;
+        }
+      next_stmt = GROUP_NEXT_ELEMENT (vinfo_for_stmt (next_stmt));
+    }
+  return reference_alias_ptr_type (DR_REF (first_dr));
+}
+
+
 /* Function vectorizable_store.
 
    Check if STMT defines a non scalar data-ref (array/pointer/structure) that
@@ -5482,6 +5510,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
   gimple *new_stmt;
   int vf;
   vec_load_store_type vls_type;
+  tree ref_type;
 
   if (!STMT_VINFO_RELEVANT_P (stmt_info) && !bb_vinfo)
     return false;
@@ -5771,6 +5800,8 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
       /* VEC_NUM is the number of vect stmts to be created for this
          group.  */
       vec_num = group_size;
+
+      ref_type = get_group_alias_ptr_type (first_stmt);
     }
   else
     {
@@ -5777,6 +5808,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
       first_stmt = stmt;
       first_dr = dr;
       group_size = vec_num = 1;
+      ref_type = reference_alias_ptr_type (DR_REF (first_dr));
     }
 
   if (dump_enabled_p ())
@@ -5804,7 +5836,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
           (unshare_expr (DR_BASE_ADDRESS (first_dr)),
            size_binop (PLUS_EXPR,
                        convert_to_ptrofftype (unshare_expr (DR_OFFSET (first_dr))),
-                       convert_to_ptrofftype (DR_INIT(first_dr))));
+                       convert_to_ptrofftype (DR_INIT (first_dr))));
       stride_step = fold_convert (sizetype, unshare_expr (DR_STEP (first_dr)));
 
       /* For a store with loop-invariant (but other than power-of-2)
@@ -5865,7 +5897,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
         gsi_insert_seq_on_edge_immediate (loop_preheader_edge (loop), stmts);
 
       prev_stmt_info = NULL;
-      alias_off = build_int_cst (reference_alias_ptr_type (DR_REF (first_dr)), 0);
+      alias_off = build_int_cst (ref_type, 0);
       next_stmt = first_stmt;
       for (g = 0; g < group_size; g++)
         {
@@ -6081,11 +6113,10 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
               && integer_zerop (DR_OFFSET (first_dr))
               && integer_zerop (DR_INIT (first_dr))
               && alias_sets_conflict_p (get_alias_set (aggr_type),
-                                        get_alias_set (DR_REF (first_dr))))
+                                        get_alias_set (TREE_TYPE (ref_type))))
             {
               dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
-              dataref_offset = build_int_cst (reference_alias_ptr_type
-                                              (DR_REF (first_dr)), 0);
+              dataref_offset = build_int_cst (ref_type, 0);
               inv_p = false;
             }
           else
@@ -6136,7 +6167,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
           /* Emit:
                MEM_REF[...all elements...] = STORE_LANES (VEC_ARRAY).  */
-          data_ref = create_array_ref (aggr_type, dataref_ptr, first_dr);
+          data_ref = create_array_ref (aggr_type, dataref_ptr, ref_type);
           new_stmt = gimple_build_call_internal (IFN_STORE_LANES, 1,
                                                  vec_array);
           gimple_call_set_lhs (new_stmt, data_ref);
           vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -6174,8 +6205,7 @@ vectorizable_store (gimple *stmt, gimple_stmt_iter
                                       dataref_ptr,
                                       dataref_offset
                                       ? dataref_offset
-                                      : build_int_cst (reference_alias_ptr_type
-                                                       (DR_REF (first_dr)), 0));
+                                      : build_int_cst (ref_type, 0));
               align = TYPE_ALIGN_UNIT (vectype);
               if (aligned_access_p (first_dr))
                 misalign = 0;
@@ -6395,7 +6425,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
   tree dataref_offset = NULL_TREE;
   gimple *ptr_incr = NULL;
   int ncopies;
-  int i, j, group_size = -1, group_gap_adj;
+  int i, j, group_size, group_gap_adj;
   tree msq = NULL_TREE, lsq;
   tree offset = NULL_TREE;
   tree byte_offset = NULL_TREE;
@@ -6417,6 +6447,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
   tree aggr_type;
   gather_scatter_info gs_info;
   vec_info *vinfo = stmt_info->vinfo;
+  tree ref_type;
 
   if (!STMT_VINFO_RELEVANT_P (stmt_info) && !bb_vinfo)
     return false;
@@ -6773,10 +6804,19 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
       gcc_assert (!nested_in_vect_loop);
 
       if (slp && grouped_load)
-        first_dr = STMT_VINFO_DATA_REF
-            (vinfo_for_stmt (GROUP_FIRST_ELEMENT (stmt_info)));
+        {
+          first_stmt = GROUP_FIRST_ELEMENT (stmt_info);
+          first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+          group_size = GROUP_SIZE (vinfo_for_stmt (first_stmt));
+          ref_type = get_group_alias_ptr_type (first_stmt);
+        }
       else
-        first_dr = dr;
+        {
+          first_stmt = stmt;
+          first_dr = dr;
+          group_size = 1;
+          ref_type = reference_alias_ptr_type (DR_REF (first_dr));
+        }
 
       stride_base
         = fold_build_pointer_plus
@@ -6820,7 +6860,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
       prev_stmt_info = NULL;
       running_off = offvar;
-      alias_off = build_int_cst (reference_alias_ptr_type (DR_REF (first_dr)), 0);
+      alias_off = build_int_cst (ref_type, 0);
       int nloads = nunits;
       int lnel = 1;
       tree ltype = TREE_TYPE (vectype);
 
@@ -6917,6 +6957,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
   if (grouped_load)
     {
      first_stmt = GROUP_FIRST_ELEMENT (stmt_info);
+      group_size = GROUP_SIZE (vinfo_for_stmt (first_stmt));
       /* For SLP vectorization we directly vectorize a subchain
          without permutation.  */
       if (slp && ! SLP_TREE_LOAD_PERMUTATION (slp_node).exists ())
@@ -6942,7 +6983,6 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
           return true;
         }
       first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
-      group_size = GROUP_SIZE (vinfo_for_stmt (first_stmt));
       group_gap_adj = 0;
 
       /* VEC_NUM is the number of vect stmts to be created for this group.  */
@@ -6960,6 +7000,8 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
         }
       else
         vec_num = group_size;
+
+      ref_type = get_group_alias_ptr_type (first_stmt);
     }
   else
     {
@@ -6967,6 +7009,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
       first_dr = dr;
       group_size = vec_num = 1;
       group_gap_adj = 0;
+      ref_type = reference_alias_ptr_type (DR_REF (first_dr));
     }
 
   alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
@@ -7127,13 +7170,12 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
               && integer_zerop (DR_OFFSET (first_dr))
               && integer_zerop (DR_INIT (first_dr))
               && alias_sets_conflict_p (get_alias_set (aggr_type),
-                                        get_alias_set (DR_REF (first_dr)))
+                                        get_alias_set (TREE_TYPE (ref_type)))
               && (alignment_support_scheme == dr_aligned
                   || alignment_support_scheme == dr_unaligned_supported))
             {
              dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
-              dataref_offset = build_int_cst (reference_alias_ptr_type
-                                              (DR_REF (first_dr)), 0);
+              dataref_offset = build_int_cst (ref_type, 0);
               inv_p = false;
             }
           else if (first_stmt_for_drptr
@@ -7179,7 +7221,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
           /* Emit:
                VEC_ARRAY = LOAD_LANES (MEM_REF[...all elements...]).  */
-          data_ref = create_array_ref (aggr_type, dataref_ptr, first_dr);
+          data_ref = create_array_ref (aggr_type, dataref_ptr, ref_type);
           new_stmt = gimple_build_call_internal (IFN_LOAD_LANES, 1, data_ref);
           gimple_call_set_lhs (new_stmt, vec_array);
           vect_finish_stmt_generation (stmt, new_stmt, gsi);
@@ -7215,8 +7257,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
                       = fold_build2 (MEM_REF, vectype, dataref_ptr,
                                      dataref_offset
                                      ? dataref_offset
-                                     : build_int_cst (reference_alias_ptr_type
-                                                      (DR_REF (first_dr)), 0));
+                                     : build_int_cst (ref_type, 0));
                     align = TYPE_ALIGN_UNIT (vectype);
                     if (alignment_support_scheme == dr_aligned)
                       {
@@ -7272,8 +7313,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
                       vect_finish_stmt_generation (stmt, new_stmt, gsi);
                       data_ref
                         = build2 (MEM_REF, vectype, ptr,
-                                  build_int_cst (reference_alias_ptr_type
-                                                 (DR_REF (first_dr)), 0));
+                                  build_int_cst (ref_type, 0));
                       vec_dest = vect_create_destination_var (scalar_dest,
                                                               vectype);
                       new_stmt = gimple_build_assign (vec_dest, data_ref);
@@ -7298,8 +7338,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
                       vect_finish_stmt_generation (stmt, new_stmt, gsi);
                       data_ref
                         = build2 (MEM_REF, vectype, ptr,
-                                  build_int_cst (reference_alias_ptr_type
-                                                 (DR_REF (first_dr)), 0));
+                                  build_int_cst (ref_type, 0));
                       break;
                     }
                   case dr_explicit_realign_optimized:
@@ -7315,8 +7354,7 @@ vectorizable_load (gimple *stmt, gimple_stmt_itera
                     vect_finish_stmt_generation (stmt, new_stmt, gsi);
                     data_ref
                       = build2 (MEM_REF, vectype, new_temp,
-                                build_int_cst (reference_alias_ptr_type
-                                               (DR_REF (first_dr)), 0));
+                                build_int_cst (ref_type, 0));
                     break;
                   default:
                     gcc_unreachable ();
Index: gcc/testsuite/g++.dg/pr77550.C
===================================================================
--- gcc/testsuite/g++.dg/pr77550.C	(revision 0)
+++ gcc/testsuite/g++.dg/pr77550.C	(working copy)
@@ -0,0 +1,295 @@
+// { dg-do run }
+// { dg-options "-std=c++14 -O3" }
+
+namespace std {
+typedef int size_t;
+inline namespace __cxx11 {}
+template using _Require = void;
+template using __void_t = void;
+template class, typename...>
+struct A {
+  using type = int;
+};
+template class _Op,
+          typename... _Args>
+struct A<_Default, __void_t<_Op<_Args...>>, _Op, _Args...> {
+  using type = _Op<_Args...>;
+};
+template class _Op,
+          typename... _Args>
+using __detected_or = A<_Default, void, _Op, _Args...>;
+template class _Op,
+          typename... _Args>
+using __detected_or_t = typename __detected_or<_Default, _Op, _Args...>::type;
+template