From patchwork Thu Jul 29 13:31:06 2021
From: Bill Schmidt <wschmidt@linux.ibm.com>
To: gcc-patches@gcc.gnu.org
Cc: dje.gcc@gmail.com, segher@kernel.crashing.org, willschm@linux.ibm.com
Subject: [PATCH 19/34] rs6000: Handle overloads during program parsing
Date: Thu, 29 Jul 2021 08:31:06 -0500
Message-Id: <53892da7f943e7a7a849d2156fae117a31fc90c0.1627562851.git.wschmidt@linux.ibm.com>

Although this patch looks quite large, the changes are fairly minimal.
Most of it is duplicating the large function that does the overload
resolution, using the automatically generated data structures instead
of the old hand-generated ones.  This doesn't make the patch terribly
easy to review, unfortunately.  Just be aware that generally we aren't
changing the logic and functionality of overload handling.

2021-06-07  Bill Schmidt  <wschmidt@linux.ibm.com>

gcc/
	* config/rs6000/rs6000-c.c (rs6000-builtins.h): New include.
	(altivec_resolve_new_overloaded_builtin): New forward decl.
	(rs6000_new_builtin_type_compatible): New function.
	(altivec_resolve_overloaded_builtin): Call
	altivec_resolve_new_overloaded_builtin.
	(altivec_build_new_resolved_builtin): New function.
	(altivec_resolve_new_overloaded_builtin): Likewise.
	* config/rs6000/rs6000-call.c (rs6000_new_builtin_is_supported_p):
	Likewise.
---
 gcc/config/rs6000/rs6000-c.c    | 1083 +++++++++++++++++++++++++++++++
 gcc/config/rs6000/rs6000-call.c |   91 +++
 2 files changed, 1174 insertions(+)

diff --git a/gcc/config/rs6000/rs6000-c.c b/gcc/config/rs6000/rs6000-c.c
index afcb5bb6e39..a986e57fe7d 100644
--- a/gcc/config/rs6000/rs6000-c.c
+++ b/gcc/config/rs6000/rs6000-c.c
@@ -35,6 +35,10 @@
 #include "langhooks.h"
 #include "c/c-tree.h"
 
+#include "rs6000-builtins.h"
+
+static tree
+altivec_resolve_new_overloaded_builtin (location_t, tree, void *);
 
 /* Handle the machine specific pragma longcall.
   Its syntax is
@@ -811,6 +815,30 @@ is_float128_p (tree t)
	      && t == long_double_type_node));
 }
 
+static bool
+rs6000_new_builtin_type_compatible (tree t, tree u)
+{
+  if (t == error_mark_node)
+    return false;
+
+  if (INTEGRAL_TYPE_P (t) && INTEGRAL_TYPE_P (u))
+    return true;
+
+  if (TARGET_IEEEQUAD && TARGET_LONG_DOUBLE_128
+      && is_float128_p (t) && is_float128_p (u))
+    return true;
+
+  if (POINTER_TYPE_P (t) && POINTER_TYPE_P (u))
+    {
+      t = TREE_TYPE (t);
+      u = TREE_TYPE (u);
+      if (TYPE_READONLY (u))
+	t = build_qualified_type (t, TYPE_QUAL_CONST);
+    }
+
+  return lang_hooks.types_compatible_p (t, u);
+}
+
 static inline bool
 rs6000_builtin_type_compatible (tree t, int id)
 {
@@ -927,6 +955,10 @@ tree
 altivec_resolve_overloaded_builtin (location_t loc, tree fndecl,
				    void *passed_arglist)
 {
+  if (new_builtins_are_live)
+    return altivec_resolve_new_overloaded_builtin (loc, fndecl,
+						   passed_arglist);
+
   vec<tree, va_gc> *arglist = static_cast<vec<tree, va_gc> *> (passed_arglist);
   unsigned int nargs = vec_safe_length (arglist);
   enum rs6000_builtins fcode
@@ -1930,3 +1962,1054 @@ altivec_resolve_overloaded_builtin (location_t loc, tree fndecl,
       return error_mark_node;
     }
 }
+
+/* Build a tree for a function call to an Altivec non-overloaded builtin.
+   The overloaded builtin that matched the types and args is described
+   by DESC.  The N arguments are given in ARGS, respectively.
+
+   Actually the only thing it does is calling fold_convert on ARGS, with
+   a small exception for vec_{all,any}_{ge,le} predicates. */
+
+static tree
+altivec_build_new_resolved_builtin (tree *args, int n, tree fntype,
+				    tree ret_type,
+				    rs6000_gen_builtins bif_id,
+				    rs6000_gen_builtins ovld_id)
+{
+  tree argtypes = TYPE_ARG_TYPES (fntype);
+  tree arg_type[MAX_OVLD_ARGS];
+  tree fndecl = rs6000_builtin_decls_x[bif_id];
+  tree call;
+
+  for (int i = 0; i < n; i++)
+    arg_type[i] = TREE_VALUE (argtypes), argtypes = TREE_CHAIN (argtypes);
+
+  /* The AltiVec overloading implementation is overall gross, but this
+     is particularly disgusting.
The vec_{all,any}_{ge,le} builtins + are completely different for floating-point vs. integer vector + types, because the former has vcmpgefp, but the latter should use + vcmpgtXX. + + In practice, the second and third arguments are swapped, and the + condition (LT vs. EQ, which is recognizable by bit 1 of the first + argument) is reversed. Patch the arguments here before building + the resolved CALL_EXPR. */ + if (n == 3 + && ovld_id == RS6000_OVLD_VEC_CMPGE_P + && bif_id != RS6000_BIF_VCMPGEFP_P + && bif_id != RS6000_BIF_XVCMPGEDP_P) + { + std::swap (args[1], args[2]); + std::swap (arg_type[1], arg_type[2]); + + args[0] = fold_build2 (BIT_XOR_EXPR, TREE_TYPE (args[0]), args[0], + build_int_cst (NULL_TREE, 2)); + } + + /* If the number of arguments to an overloaded function increases, + we must expand this switch. */ + gcc_assert (MAX_OVLD_ARGS <= 4); + + switch (n) + { + case 0: + call = build_call_expr (fndecl, 0); + break; + case 1: + call = build_call_expr (fndecl, 1, + fully_fold_convert (arg_type[0], args[0])); + break; + case 2: + call = build_call_expr (fndecl, 2, + fully_fold_convert (arg_type[0], args[0]), + fully_fold_convert (arg_type[1], args[1])); + break; + case 3: + call = build_call_expr (fndecl, 3, + fully_fold_convert (arg_type[0], args[0]), + fully_fold_convert (arg_type[1], args[1]), + fully_fold_convert (arg_type[2], args[2])); + break; + case 4: + call = build_call_expr (fndecl, 4, + fully_fold_convert (arg_type[0], args[0]), + fully_fold_convert (arg_type[1], args[1]), + fully_fold_convert (arg_type[2], args[2]), + fully_fold_convert (arg_type[3], args[3])); + break; + default: + gcc_unreachable (); + } + return fold_convert (ret_type, call); +} + +/* Implementation of the resolve_overloaded_builtin target hook, to + support Altivec's overloaded builtins. 
*/ + +static tree +altivec_resolve_new_overloaded_builtin (location_t loc, tree fndecl, + void *passed_arglist) +{ + vec *arglist = static_cast *> (passed_arglist); + unsigned int nargs = vec_safe_length (arglist); + enum rs6000_gen_builtins fcode + = (enum rs6000_gen_builtins) DECL_MD_FUNCTION_CODE (fndecl); + tree fnargs = TYPE_ARG_TYPES (TREE_TYPE (fndecl)); + tree types[MAX_OVLD_ARGS], args[MAX_OVLD_ARGS]; + unsigned int n; + + /* Return immediately if this isn't an overload. */ + if (fcode <= RS6000_OVLD_NONE) + return NULL_TREE; + + unsigned int adj_fcode = fcode - RS6000_OVLD_NONE; + + if (TARGET_DEBUG_BUILTIN) + fprintf (stderr, "altivec_resolve_overloaded_builtin, code = %4d, %s\n", + (int)fcode, IDENTIFIER_POINTER (DECL_NAME (fndecl))); + + /* vec_lvsl and vec_lvsr are deprecated for use with LE element order. */ + if (fcode == RS6000_OVLD_VEC_LVSL && !BYTES_BIG_ENDIAN) + warning (OPT_Wdeprecated, + "% is deprecated for little endian; use " + "assignment for unaligned loads and stores"); + else if (fcode == RS6000_OVLD_VEC_LVSR && !BYTES_BIG_ENDIAN) + warning (OPT_Wdeprecated, + "% is deprecated for little endian; use " + "assignment for unaligned loads and stores"); + + if (fcode == RS6000_OVLD_VEC_MUL) + { + /* vec_mul needs to be special cased because there are no instructions + for it for the {un}signed char, {un}signed short, and {un}signed int + types. */ + if (nargs != 2) + { + error ("builtin %qs only accepts 2 arguments", "vec_mul"); + return error_mark_node; + } + + tree arg0 = (*arglist)[0]; + tree arg0_type = TREE_TYPE (arg0); + tree arg1 = (*arglist)[1]; + tree arg1_type = TREE_TYPE (arg1); + + /* Both arguments must be vectors and the types must be compatible. 
*/ + if (TREE_CODE (arg0_type) != VECTOR_TYPE) + goto bad; + if (!lang_hooks.types_compatible_p (arg0_type, arg1_type)) + goto bad; + + switch (TYPE_MODE (TREE_TYPE (arg0_type))) + { + case E_QImode: + case E_HImode: + case E_SImode: + case E_DImode: + case E_TImode: + { + /* For scalar types just use a multiply expression. */ + return fold_build2_loc (loc, MULT_EXPR, TREE_TYPE (arg0), arg0, + fold_convert (TREE_TYPE (arg0), arg1)); + } + case E_SFmode: + { + /* For floats use the xvmulsp instruction directly. */ + tree call = rs6000_builtin_decls_x[RS6000_BIF_XVMULSP]; + return build_call_expr (call, 2, arg0, arg1); + } + case E_DFmode: + { + /* For doubles use the xvmuldp instruction directly. */ + tree call = rs6000_builtin_decls_x[RS6000_BIF_XVMULDP]; + return build_call_expr (call, 2, arg0, arg1); + } + /* Other types are errors. */ + default: + goto bad; + } + } + + if (fcode == RS6000_OVLD_VEC_CMPNE) + { + /* vec_cmpne needs to be special cased because there are no instructions + for it (prior to power 9). */ + if (nargs != 2) + { + error ("builtin %qs only accepts 2 arguments", "vec_cmpne"); + return error_mark_node; + } + + tree arg0 = (*arglist)[0]; + tree arg0_type = TREE_TYPE (arg0); + tree arg1 = (*arglist)[1]; + tree arg1_type = TREE_TYPE (arg1); + + /* Both arguments must be vectors and the types must be compatible. */ + if (TREE_CODE (arg0_type) != VECTOR_TYPE) + goto bad; + if (!lang_hooks.types_compatible_p (arg0_type, arg1_type)) + goto bad; + + /* Power9 instructions provide the most efficient implementation of + ALTIVEC_BUILTIN_VEC_CMPNE if the mode is not DImode or TImode + or SFmode or DFmode. 
*/ + if (!TARGET_P9_VECTOR + || (TYPE_MODE (TREE_TYPE (arg0_type)) == DImode) + || (TYPE_MODE (TREE_TYPE (arg0_type)) == TImode) + || (TYPE_MODE (TREE_TYPE (arg0_type)) == SFmode) + || (TYPE_MODE (TREE_TYPE (arg0_type)) == DFmode)) + { + switch (TYPE_MODE (TREE_TYPE (arg0_type))) + { + /* vec_cmpneq (va, vb) == vec_nor (vec_cmpeq (va, vb), + vec_cmpeq (va, vb)). */ + /* Note: vec_nand also works but opt changes vec_nand's + to vec_nor's anyway. */ + case E_QImode: + case E_HImode: + case E_SImode: + case E_DImode: + case E_TImode: + case E_SFmode: + case E_DFmode: + { + /* call = vec_cmpeq (va, vb) + result = vec_nor (call, call). */ + vec *params = make_tree_vector (); + vec_safe_push (params, arg0); + vec_safe_push (params, arg1); + tree call = altivec_resolve_new_overloaded_builtin + (loc, rs6000_builtin_decls_x[RS6000_OVLD_VEC_CMPEQ], + params); + /* Use save_expr to ensure that operands used more than once + that may have side effects (like calls) are only evaluated + once. */ + call = save_expr (call); + params = make_tree_vector (); + vec_safe_push (params, call); + vec_safe_push (params, call); + return altivec_resolve_new_overloaded_builtin + (loc, rs6000_builtin_decls_x[RS6000_OVLD_VEC_NOR], params); + } + /* Other types are errors. */ + default: + goto bad; + } + } + /* else, fall through and process the Power9 alternative below */ + } + + if (fcode == RS6000_OVLD_VEC_ADDE || fcode == RS6000_OVLD_VEC_SUBE) + { + /* vec_adde needs to be special cased because there is no instruction + for the {un}signed int version. */ + if (nargs != 3) + { + const char *name + = fcode == RS6000_OVLD_VEC_ADDE ? 
"vec_adde": "vec_sube"; + error ("builtin %qs only accepts 3 arguments", name); + return error_mark_node; + } + + tree arg0 = (*arglist)[0]; + tree arg0_type = TREE_TYPE (arg0); + tree arg1 = (*arglist)[1]; + tree arg1_type = TREE_TYPE (arg1); + tree arg2 = (*arglist)[2]; + tree arg2_type = TREE_TYPE (arg2); + + /* All 3 arguments must be vectors of (signed or unsigned) (int or + __int128) and the types must be compatible. */ + if (TREE_CODE (arg0_type) != VECTOR_TYPE) + goto bad; + if (!lang_hooks.types_compatible_p (arg0_type, arg1_type) + || !lang_hooks.types_compatible_p (arg1_type, arg2_type)) + goto bad; + + switch (TYPE_MODE (TREE_TYPE (arg0_type))) + { + /* For {un}signed ints, + vec_adde (va, vb, carryv) == vec_add (vec_add (va, vb), + vec_and (carryv, 1)). + vec_sube (va, vb, carryv) == vec_sub (vec_sub (va, vb), + vec_and (carryv, 1)). */ + case E_SImode: + { + tree add_sub_builtin; + + vec *params = make_tree_vector (); + vec_safe_push (params, arg0); + vec_safe_push (params, arg1); + + if (fcode == RS6000_OVLD_VEC_ADDE) + add_sub_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_ADD]; + else + add_sub_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_SUB]; + + tree call + = altivec_resolve_new_overloaded_builtin (loc, + add_sub_builtin, + params); + tree const1 = build_int_cstu (TREE_TYPE (arg0_type), 1); + tree ones_vector = build_vector_from_val (arg0_type, const1); + tree and_expr = fold_build2_loc (loc, BIT_AND_EXPR, arg0_type, + arg2, ones_vector); + params = make_tree_vector (); + vec_safe_push (params, call); + vec_safe_push (params, and_expr); + return altivec_resolve_new_overloaded_builtin (loc, + add_sub_builtin, + params); + } + /* For {un}signed __int128s use the vaddeuqm/vsubeuqm instruction + directly. */ + case E_TImode: + break; + + /* Types other than {un}signed int and {un}signed __int128 + are errors. 
 */
+      default:
+	goto bad;
+      }
+    }
+
+  if (fcode == RS6000_OVLD_VEC_ADDEC || fcode == RS6000_OVLD_VEC_SUBEC)
+    {
+      /* vec_addec and vec_subec need to be special cased because there is
+	 no instruction for the {un}signed int version.  */
+      if (nargs != 3)
+	{
+	  const char *name = fcode == RS6000_OVLD_VEC_ADDEC ?
+	    "vec_addec": "vec_subec";
+	  error ("builtin %qs only accepts 3 arguments", name);
+	  return error_mark_node;
+	}
+
+      tree arg0 = (*arglist)[0];
+      tree arg0_type = TREE_TYPE (arg0);
+      tree arg1 = (*arglist)[1];
+      tree arg1_type = TREE_TYPE (arg1);
+      tree arg2 = (*arglist)[2];
+      tree arg2_type = TREE_TYPE (arg2);
+
+      /* All 3 arguments must be vectors of (signed or unsigned) (int or
+	 __int128) and the types must be compatible.  */
+      if (TREE_CODE (arg0_type) != VECTOR_TYPE)
+	goto bad;
+      if (!lang_hooks.types_compatible_p (arg0_type, arg1_type)
+	  || !lang_hooks.types_compatible_p (arg1_type, arg2_type))
+	goto bad;
+
+      switch (TYPE_MODE (TREE_TYPE (arg0_type)))
+	{
+	  /* For {un}signed ints,
+	     vec_addec (va, vb, carryv) ==
+				vec_or (vec_addc (va, vb),
+					vec_addc (vec_add (va, vb),
+						  vec_and (carryv, 0x1))).  */
+	case E_SImode:
+	  {
+	    /* Use save_expr to ensure that operands used more than once
+	       that may have side effects (like calls) are only evaluated
+	       once.  */
+	    tree as_builtin;
+	    tree as_c_builtin;
+
+	    arg0 = save_expr (arg0);
+	    arg1 = save_expr (arg1);
+	    vec<tree, va_gc> *params = make_tree_vector ();
+	    vec_safe_push (params, arg0);
+	    vec_safe_push (params, arg1);
+
+	    if (fcode == RS6000_OVLD_VEC_ADDEC)
+	      as_c_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_ADDC];
+	    else
+	      as_c_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_SUBC];
+
+	    tree call1 = altivec_resolve_new_overloaded_builtin (loc,
+								 as_c_builtin,
+								 params);
+	    params = make_tree_vector ();
+	    vec_safe_push (params, arg0);
+	    vec_safe_push (params, arg1);
+
+	    if (fcode == RS6000_OVLD_VEC_ADDEC)
+	      as_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_ADD];
+	    else
+	      as_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_SUB];
+
+	    tree call2 = altivec_resolve_new_overloaded_builtin (loc,
+								 as_builtin,
+								 params);
+	    tree const1 = build_int_cstu (TREE_TYPE (arg0_type), 1);
+	    tree ones_vector = build_vector_from_val (arg0_type, const1);
+	    tree and_expr = fold_build2_loc (loc, BIT_AND_EXPR, arg0_type,
+					     arg2, ones_vector);
+	    params = make_tree_vector ();
+	    vec_safe_push (params, call2);
+	    vec_safe_push (params, and_expr);
+	    call2 = altivec_resolve_new_overloaded_builtin (loc, as_c_builtin,
+							    params);
+	    params = make_tree_vector ();
+	    vec_safe_push (params, call1);
+	    vec_safe_push (params, call2);
+	    tree or_builtin = rs6000_builtin_decls_x[RS6000_OVLD_VEC_OR];
+	    return altivec_resolve_new_overloaded_builtin (loc, or_builtin,
+							   params);
+	  }
+	  /* For {un}signed __int128s use the vaddecuq/vsubecuq
+	     instructions.  This occurs through normal processing.  */
+	case E_TImode:
+	  break;
+
+	  /* Types other than {un}signed int and {un}signed __int128
+	     are errors.  */
+	default:
+	  goto bad;
+	}
+    }
+
+  /* For now treat vec_splats and vec_promote as the same.  */
+  if (fcode == RS6000_OVLD_VEC_SPLATS || fcode == RS6000_OVLD_VEC_PROMOTE)
+    {
+      tree type, arg;
+      int size;
+      int i;
+      bool unsigned_p;
+      vec<constructor_elt, va_gc> *vec;
+      const char *name
+	= fcode == RS6000_OVLD_VEC_SPLATS ?
"vec_splats": "vec_promote"; + + if (fcode == RS6000_OVLD_VEC_SPLATS && nargs != 1) + { + error ("builtin %qs only accepts 1 argument", name); + return error_mark_node; + } + if (fcode == RS6000_OVLD_VEC_PROMOTE && nargs != 2) + { + error ("builtin %qs only accepts 2 arguments", name); + return error_mark_node; + } + /* Ignore promote's element argument. */ + if (fcode == RS6000_OVLD_VEC_PROMOTE + && !INTEGRAL_TYPE_P (TREE_TYPE ((*arglist)[1]))) + goto bad; + + arg = (*arglist)[0]; + type = TREE_TYPE (arg); + if (!SCALAR_FLOAT_TYPE_P (type) + && !INTEGRAL_TYPE_P (type)) + goto bad; + unsigned_p = TYPE_UNSIGNED (type); + switch (TYPE_MODE (type)) + { + case E_TImode: + type = (unsigned_p ? unsigned_V1TI_type_node : V1TI_type_node); + size = 1; + break; + case E_DImode: + type = (unsigned_p ? unsigned_V2DI_type_node : V2DI_type_node); + size = 2; + break; + case E_SImode: + type = (unsigned_p ? unsigned_V4SI_type_node : V4SI_type_node); + size = 4; + break; + case E_HImode: + type = (unsigned_p ? unsigned_V8HI_type_node : V8HI_type_node); + size = 8; + break; + case E_QImode: + type = (unsigned_p ? unsigned_V16QI_type_node : V16QI_type_node); + size = 16; + break; + case E_SFmode: type = V4SF_type_node; size = 4; break; + case E_DFmode: type = V2DF_type_node; size = 2; break; + default: + goto bad; + } + arg = save_expr (fold_convert (TREE_TYPE (type), arg)); + vec_alloc (vec, size); + for(i = 0; i < size; i++) + { + constructor_elt elt = {NULL_TREE, arg}; + vec->quick_push (elt); + } + return build_constructor (type, vec); + } + + /* For now use pointer tricks to do the extraction, unless we are on VSX + extracting a double from a constant offset. */ + if (fcode == RS6000_OVLD_VEC_EXTRACT) + { + tree arg1; + tree arg1_type; + tree arg2; + tree arg1_inner_type; + tree decl, stmt; + tree innerptrtype; + machine_mode mode; + + /* No second argument. 
*/ + if (nargs != 2) + { + error ("builtin %qs only accepts 2 arguments", "vec_extract"); + return error_mark_node; + } + + arg2 = (*arglist)[1]; + arg1 = (*arglist)[0]; + arg1_type = TREE_TYPE (arg1); + + if (TREE_CODE (arg1_type) != VECTOR_TYPE) + goto bad; + if (!INTEGRAL_TYPE_P (TREE_TYPE (arg2))) + goto bad; + + /* See if we can optimize vec_extracts with the current VSX instruction + set. */ + mode = TYPE_MODE (arg1_type); + if (VECTOR_MEM_VSX_P (mode)) + + { + tree call = NULL_TREE; + int nunits = GET_MODE_NUNITS (mode); + + arg2 = fold_for_warn (arg2); + + /* If the second argument is an integer constant, generate + the built-in code if we can. We need 64-bit and direct + move to extract the small integer vectors. */ + if (TREE_CODE (arg2) == INTEGER_CST) + { + wide_int selector = wi::to_wide (arg2); + selector = wi::umod_trunc (selector, nunits); + arg2 = wide_int_to_tree (TREE_TYPE (arg2), selector); + switch (mode) + { + default: + break; + + case E_V1TImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V1TI]; + break; + + case E_V2DFmode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V2DF]; + break; + + case E_V2DImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V2DI]; + break; + + case E_V4SFmode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V4SF]; + break; + + case E_V4SImode: + if (TARGET_DIRECT_MOVE_64BIT) + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V4SI]; + break; + + case E_V8HImode: + if (TARGET_DIRECT_MOVE_64BIT) + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V8HI]; + break; + + case E_V16QImode: + if (TARGET_DIRECT_MOVE_64BIT) + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V16QI]; + break; + } + } + + /* If the second argument is variable, we can optimize it if we are + generating 64-bit code on a machine with direct move. 
*/ + else if (TREE_CODE (arg2) != INTEGER_CST && TARGET_DIRECT_MOVE_64BIT) + { + switch (mode) + { + default: + break; + + case E_V2DFmode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V2DF]; + break; + + case E_V2DImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V2DI]; + break; + + case E_V4SFmode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V4SF]; + break; + + case E_V4SImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V4SI]; + break; + + case E_V8HImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V8HI]; + break; + + case E_V16QImode: + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_EXT_V16QI]; + break; + } + } + + if (call) + { + tree result = build_call_expr (call, 2, arg1, arg2); + /* Coerce the result to vector element type. May be no-op. */ + arg1_inner_type = TREE_TYPE (arg1_type); + result = fold_convert (arg1_inner_type, result); + return result; + } + } + + /* Build *(((arg1_inner_type*)&(vector type){arg1})+arg2). */ + arg1_inner_type = TREE_TYPE (arg1_type); + arg2 = build_binary_op (loc, BIT_AND_EXPR, arg2, + build_int_cst (TREE_TYPE (arg2), + TYPE_VECTOR_SUBPARTS (arg1_type) + - 1), 0); + decl = build_decl (loc, VAR_DECL, NULL_TREE, arg1_type); + DECL_EXTERNAL (decl) = 0; + TREE_PUBLIC (decl) = 0; + DECL_CONTEXT (decl) = current_function_decl; + TREE_USED (decl) = 1; + TREE_TYPE (decl) = arg1_type; + TREE_READONLY (decl) = TYPE_READONLY (arg1_type); + if (c_dialect_cxx ()) + { + stmt = build4 (TARGET_EXPR, arg1_type, decl, arg1, + NULL_TREE, NULL_TREE); + SET_EXPR_LOCATION (stmt, loc); + } + else + { + DECL_INITIAL (decl) = arg1; + stmt = build1 (DECL_EXPR, arg1_type, decl); + TREE_ADDRESSABLE (decl) = 1; + SET_EXPR_LOCATION (stmt, loc); + stmt = build1 (COMPOUND_LITERAL_EXPR, arg1_type, stmt); + } + + innerptrtype = build_pointer_type (arg1_inner_type); + + stmt = build_unary_op (loc, ADDR_EXPR, stmt, 0); + stmt = convert (innerptrtype, stmt); + stmt = build_binary_op (loc, PLUS_EXPR, stmt, arg2, 1); + stmt = 
build_indirect_ref (loc, stmt, RO_NULL); + + /* PR83660: We mark this as having side effects so that + downstream in fold_build_cleanup_point_expr () it will get a + CLEANUP_POINT_EXPR. If it does not we can run into an ICE + later in gimplify_cleanup_point_expr (). Potentially this + causes missed optimization because there actually is no side + effect. */ + if (c_dialect_cxx ()) + TREE_SIDE_EFFECTS (stmt) = 1; + + return stmt; + } + + /* For now use pointer tricks to do the insertion, unless we are on VSX + inserting a double to a constant offset.. */ + if (fcode == RS6000_OVLD_VEC_INSERT) + { + tree arg0; + tree arg1; + tree arg2; + tree arg1_type; + tree decl, stmt; + machine_mode mode; + + /* No second or third arguments. */ + if (nargs != 3) + { + error ("builtin %qs only accepts 3 arguments", "vec_insert"); + return error_mark_node; + } + + arg0 = (*arglist)[0]; + arg1 = (*arglist)[1]; + arg1_type = TREE_TYPE (arg1); + arg2 = fold_for_warn ((*arglist)[2]); + + if (TREE_CODE (arg1_type) != VECTOR_TYPE) + goto bad; + if (!INTEGRAL_TYPE_P (TREE_TYPE (arg2))) + goto bad; + + /* If we can use the VSX xxpermdi instruction, use that for insert. */ + mode = TYPE_MODE (arg1_type); + if ((mode == V2DFmode || mode == V2DImode) && VECTOR_UNIT_VSX_P (mode) + && TREE_CODE (arg2) == INTEGER_CST) + { + wide_int selector = wi::to_wide (arg2); + selector = wi::umod_trunc (selector, 2); + tree call = NULL_TREE; + + arg2 = wide_int_to_tree (TREE_TYPE (arg2), selector); + if (mode == V2DFmode) + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_SET_V2DF]; + else if (mode == V2DImode) + call = rs6000_builtin_decls_x[RS6000_BIF_VEC_SET_V2DI]; + + /* Note, __builtin_vec_insert_ has vector and scalar types + reversed. 
*/
+	  if (call)
+	    return build_call_expr (call, 3, arg1, arg0, arg2);
+	}
+      else if (mode == V1TImode && VECTOR_UNIT_VSX_P (mode)
+	       && TREE_CODE (arg2) == INTEGER_CST)
+	{
+	  tree call = rs6000_builtin_decls_x[RS6000_BIF_VEC_SET_V1TI];
+	  wide_int selector = wi::zero(32);
+
+	  arg2 = wide_int_to_tree (TREE_TYPE (arg2), selector);
+	  /* Note, __builtin_vec_insert_ has vector and scalar types
+	     reversed.  */
+	  return build_call_expr (call, 3, arg1, arg0, arg2);
+	}
+
+      /* Build *(((arg1_inner_type*)&(vector type){arg1})+arg2) = arg0 with
+	 VIEW_CONVERT_EXPR.  i.e.:
+	   D.3192 = v1;
+	   _1 = n & 3;
+	   VIEW_CONVERT_EXPR(D.3192)[_1] = i;
+	   v1 = D.3192;
+	   D.3194 = v1;  */
+      if (TYPE_VECTOR_SUBPARTS (arg1_type) == 1)
+	arg2 = build_int_cst (TREE_TYPE (arg2), 0);
+      else
+	arg2 = build_binary_op (loc, BIT_AND_EXPR, arg2,
+				build_int_cst (TREE_TYPE (arg2),
+					       TYPE_VECTOR_SUBPARTS (arg1_type)
+					       - 1), 0);
+      decl = build_decl (loc, VAR_DECL, NULL_TREE, arg1_type);
+      DECL_EXTERNAL (decl) = 0;
+      TREE_PUBLIC (decl) = 0;
+      DECL_CONTEXT (decl) = current_function_decl;
+      TREE_USED (decl) = 1;
+      TREE_TYPE (decl) = arg1_type;
+      TREE_READONLY (decl) = TYPE_READONLY (arg1_type);
+      TREE_ADDRESSABLE (decl) = 1;
+      if (c_dialect_cxx ())
+	{
+	  stmt = build4 (TARGET_EXPR, arg1_type, decl, arg1,
+			 NULL_TREE, NULL_TREE);
+	  SET_EXPR_LOCATION (stmt, loc);
+	}
+      else
+	{
+	  DECL_INITIAL (decl) = arg1;
+	  stmt = build1 (DECL_EXPR, arg1_type, decl);
+	  SET_EXPR_LOCATION (stmt, loc);
+	  stmt = build1 (COMPOUND_LITERAL_EXPR, arg1_type, stmt);
+	}
+
+      if (TARGET_VSX)
+	{
+	  stmt = build_array_ref (loc, stmt, arg2);
+	  stmt = fold_build2 (MODIFY_EXPR, TREE_TYPE (arg0), stmt,
+			      convert (TREE_TYPE (stmt), arg0));
+	  stmt = build2 (COMPOUND_EXPR, arg1_type, stmt, decl);
+	}
+      else
+	{
+	  tree arg1_inner_type;
+	  tree innerptrtype;
+	  arg1_inner_type = TREE_TYPE (arg1_type);
+	  innerptrtype = build_pointer_type (arg1_inner_type);
+
+	  stmt = build_unary_op (loc, ADDR_EXPR, stmt, 0);
+	  stmt = convert (innerptrtype, stmt);
+	  stmt = build_binary_op (loc, PLUS_EXPR, stmt, arg2, 1);
+	  stmt = build_indirect_ref (loc, stmt, RO_NULL);
+	  stmt = build2 (MODIFY_EXPR, TREE_TYPE (stmt), stmt,
+			 convert (TREE_TYPE (stmt), arg0));
+	  stmt = build2 (COMPOUND_EXPR, arg1_type, stmt, decl);
+	}
+      return stmt;
+    }
+
+  for (n = 0;
+       !VOID_TYPE_P (TREE_VALUE (fnargs)) && n < nargs;
+       fnargs = TREE_CHAIN (fnargs), n++)
+    {
+      tree decl_type = TREE_VALUE (fnargs);
+      tree arg = (*arglist)[n];
+      tree type;
+
+      if (arg == error_mark_node)
+	return error_mark_node;
+
+      if (n >= MAX_OVLD_ARGS)
+	abort ();
+
+      arg = default_conversion (arg);
+
+      /* The C++ front-end converts float * to const void * using
+	 NOP_EXPR (NOP_EXPR (x)).  */
+      type = TREE_TYPE (arg);
+      if (POINTER_TYPE_P (type)
+	  && TREE_CODE (arg) == NOP_EXPR
+	  && lang_hooks.types_compatible_p (TREE_TYPE (arg),
+					    const_ptr_type_node)
+	  && lang_hooks.types_compatible_p (TREE_TYPE (TREE_OPERAND (arg, 0)),
+					    ptr_type_node))
+	{
+	  arg = TREE_OPERAND (arg, 0);
+	  type = TREE_TYPE (arg);
+	}
+
+      /* Remove the const from the pointers to simplify the overload
+	 matching further down.  */
+      if (POINTER_TYPE_P (decl_type)
+	  && POINTER_TYPE_P (type)
+	  && TYPE_QUALS (TREE_TYPE (type)) != 0)
+	{
+	  if (TYPE_READONLY (TREE_TYPE (type))
+	      && !TYPE_READONLY (TREE_TYPE (decl_type)))
+	    warning (0, "passing argument %d of %qE discards qualifiers from "
+		     "pointer target type", n + 1, fndecl);
+	  type = build_pointer_type (build_qualified_type (TREE_TYPE (type),
+							   0));
+	  arg = fold_convert (type, arg);
+	}
+
+      /* For RS6000_OVLD_VEC_LXVL, convert any const * to its non constant
+	 equivalent to simplify the overload matching below.  */
+      if (fcode == RS6000_OVLD_VEC_LXVL)
+	{
+	  if (POINTER_TYPE_P (type)
+	      && TYPE_READONLY (TREE_TYPE (type)))
+	    {
+	      type = build_pointer_type (build_qualified_type (
+		TREE_TYPE (type),0));
+	      arg = fold_convert (type, arg);
+	    }
+	}
+
+      args[n] = arg;
+      types[n] = type;
+    }
+
+  /* If the number of arguments did not match the prototype, return NULL
+     and the generic code will issue the appropriate error message.  */
+  if (!VOID_TYPE_P (TREE_VALUE (fnargs)) || n < nargs)
+    return NULL;
+
+  if (fcode == RS6000_OVLD_VEC_STEP)
+    {
+      if (TREE_CODE (types[0]) != VECTOR_TYPE)
+	goto bad;
+
+      return build_int_cst (NULL_TREE, TYPE_VECTOR_SUBPARTS (types[0]));
+    }
+
+  {
+    bool unsupported_builtin = false;
+    enum rs6000_gen_builtins overloaded_code;
+    bool supported = false;
+    ovlddata *instance = rs6000_overload_info[adj_fcode].first_instance;
+    gcc_assert (instance != NULL);
+
+    /* Need to special case __builtin_cmpb because the overloaded forms
+       of this function take (unsigned int, unsigned int) or (unsigned
+       long long int, unsigned long long int).  Since C conventions
+       allow the respective argument types to be implicitly coerced into
+       each other, the default handling does not provide adequate
+       discrimination between the desired forms of the function.  */
+    if (fcode == RS6000_OVLD_SCAL_CMPB)
+      {
+	machine_mode arg1_mode = TYPE_MODE (types[0]);
+	machine_mode arg2_mode = TYPE_MODE (types[1]);
+
+	if (nargs != 2)
+	  {
+	    error ("builtin %qs only accepts 2 arguments", "__builtin_cmpb");
+	    return error_mark_node;
+	  }
+
+	/* If any supplied arguments are wider than 32 bits, resolve to
+	   64-bit variant of built-in function.  */
+	if ((GET_MODE_PRECISION (arg1_mode) > 32)
+	    || (GET_MODE_PRECISION (arg2_mode) > 32))
+	  {
+	    /* Assure all argument and result types are compatible with
+	       the built-in function represented by RS6000_BIF_CMPB.  */
+	    overloaded_code = RS6000_BIF_CMPB;
+	  }
+	else
+	  {
+	    /* Assure all argument and result types are compatible with
+	       the built-in function represented by RS6000_BIF_CMPB_32.  */
+	    overloaded_code = RS6000_BIF_CMPB_32;
+	  }
+
+	while (instance && instance->bifid != overloaded_code)
+	  instance = instance->next;
+
+	gcc_assert (instance != NULL);
+	tree fntype = rs6000_builtin_info_x[instance->bifid].fntype;
+	tree parmtype0 = TREE_VALUE (TYPE_ARG_TYPES (fntype));
+	tree parmtype1 = TREE_VALUE (TREE_CHAIN (TYPE_ARG_TYPES (fntype)));
+
+	if (rs6000_new_builtin_type_compatible (types[0], parmtype0)
+	    && rs6000_new_builtin_type_compatible (types[1], parmtype1))
+	  {
+	    if (rs6000_builtin_decl (instance->bifid, false) != error_mark_node
+		&& rs6000_new_builtin_is_supported_p (instance->bifid))
+	      {
+		tree ret_type = TREE_TYPE (instance->fntype);
+		return altivec_build_new_resolved_builtin (args, n, fntype,
+							   ret_type,
+							   instance->bifid,
+							   fcode);
+	      }
+	    else
+	      unsupported_builtin = true;
+	  }
+      }
+    else if (fcode == RS6000_OVLD_VEC_VSIE)
+      {
+	machine_mode arg1_mode = TYPE_MODE (types[0]);
+
+	if (nargs != 2)
+	  {
+	    error ("builtin %qs only accepts 2 arguments",
+		   "scalar_insert_exp");
+	    return error_mark_node;
+	  }
+
+	/* If supplied first argument is wider than 64 bits, resolve to
+	   128-bit variant of built-in function.  */
+	if (GET_MODE_PRECISION (arg1_mode) > 64)
+	  {
+	    /* If first argument is of float variety, choose variant
+	       that expects __ieee128 argument.  Otherwise, expect
+	       __int128 argument.  */
+	    if (GET_MODE_CLASS (arg1_mode) == MODE_FLOAT)
+	      overloaded_code = RS6000_BIF_VSIEQPF;
+	    else
+	      overloaded_code = RS6000_BIF_VSIEQP;
+	  }
+	else
+	  {
+	    /* If first argument is of float variety, choose variant
+	       that expects double argument.  Otherwise, expect
+	       long long int argument.  */
+	    if (GET_MODE_CLASS (arg1_mode) == MODE_FLOAT)
+	      overloaded_code = RS6000_BIF_VSIEDPF;
+	    else
+	      overloaded_code = RS6000_BIF_VSIEDP;
+	  }
+
+	while (instance && instance->bifid != overloaded_code)
+	  instance = instance->next;
+
+	gcc_assert (instance != NULL);
+	tree fntype = rs6000_builtin_info_x[instance->bifid].fntype;
+	tree parmtype0 = TREE_VALUE (TYPE_ARG_TYPES (fntype));
+	tree parmtype1 = TREE_VALUE (TREE_CHAIN (TYPE_ARG_TYPES (fntype)));
+
+	if (rs6000_new_builtin_type_compatible (types[0], parmtype0)
+	    && rs6000_new_builtin_type_compatible (types[1], parmtype1))
+	  {
+	    if (rs6000_builtin_decl (instance->bifid, false) != error_mark_node
+		&& rs6000_new_builtin_is_supported_p (instance->bifid))
+	      {
+		tree ret_type = TREE_TYPE (instance->fntype);
+		return altivec_build_new_resolved_builtin (args, n, fntype,
+							   ret_type,
+							   instance->bifid,
+							   fcode);
+	      }
+	    else
+	      unsupported_builtin = true;
+	  }
+      }
+    else
+      {
+	/* Functions with no arguments can have only one overloaded
+	   instance.  */
+	gcc_assert (n > 0 || !instance->next);
+
+	for (; instance != NULL; instance = instance->next)
+	  {
+	    bool mismatch = false;
+	    tree nextparm = TYPE_ARG_TYPES (instance->fntype);
+
+	    for (unsigned int arg_i = 0;
+		 arg_i < nargs && nextparm != NULL;
+		 arg_i++)
+	      {
+		tree parmtype = TREE_VALUE (nextparm);
+		if (!rs6000_new_builtin_type_compatible (types[arg_i],
+							 parmtype))
+		  {
+		    mismatch = true;
+		    break;
+		  }
+		nextparm = TREE_CHAIN (nextparm);
+	      }
+
+	    if (mismatch)
+	      continue;
+
+	    supported = rs6000_new_builtin_is_supported_p (instance->bifid);
+	    if (rs6000_builtin_decl (instance->bifid, false) != error_mark_node
+		&& supported)
+	      {
+		tree fntype = rs6000_builtin_info_x[instance->bifid].fntype;
+		tree ret_type = TREE_TYPE (instance->fntype);
+		return altivec_build_new_resolved_builtin (args, n, fntype,
+							   ret_type,
+							   instance->bifid,
+							   fcode);
+	      }
+	    else
+	      {
+		unsupported_builtin = true;
+		break;
+	      }
+	  }
+      }
+
+    if (unsupported_builtin)
+      {
+	const char *name = rs6000_overload_info[adj_fcode].ovld_name;
+	if (!supported)
+	  {
+	    const char *internal_name
+	      = rs6000_builtin_info_x[instance->bifid].bifname;
+	    /* An error message making reference to the name of the
+	       non-overloaded function has already been issued.  Add
+	       clarification of the previous message.  */
+	    rich_location richloc (line_table, input_location);
+	    inform (&richloc, "builtin %qs requires builtin %qs",
+		    name, internal_name);
+	  }
+	else
+	  error ("%qs is not supported in this compiler configuration", name);
+	/* If an error-representing result tree was returned from
+	   altivec_build_resolved_builtin above, use it.  */
+	/*
+	return (result != NULL) ? result : error_mark_node;
+	*/
+	return error_mark_node;
+      }
+  }
+ bad:
+  {
+    const char *name = rs6000_overload_info[adj_fcode].ovld_name;
+    error ("invalid parameter combination for AltiVec intrinsic %qs", name);
+    return error_mark_node;
+  }
+}
diff --git a/gcc/config/rs6000/rs6000-call.c b/gcc/config/rs6000/rs6000-call.c
index 0c555f29f7d..b08440fd074 100644
--- a/gcc/config/rs6000/rs6000-call.c
+++ b/gcc/config/rs6000/rs6000-call.c
@@ -12965,6 +12965,97 @@ rs6000_gimple_fold_builtin (gimple_stmt_iterator *gsi)
   return false;
 }
 
+/* Check whether a builtin function is supported in this target
+   configuration.  */
+bool
+rs6000_new_builtin_is_supported_p (enum rs6000_gen_builtins fncode)
+{
+  switch (rs6000_builtin_info_x[(size_t) fncode].enable)
+    {
+    default:
+      gcc_unreachable ();
+    case ENB_ALWAYS:
+      return true;
+    case ENB_P5:
+      if (!TARGET_POPCNTB)
+	return false;
+      break;
+    case ENB_P6:
+      if (!TARGET_CMPB)
+	return false;
+      break;
+    case ENB_ALTIVEC:
+      if (!TARGET_ALTIVEC)
+	return false;
+      break;
+    case ENB_CELL:
+      if (!TARGET_ALTIVEC || rs6000_cpu != PROCESSOR_CELL)
+	return false;
+      break;
+    case ENB_VSX:
+      if (!TARGET_VSX)
+	return false;
+      break;
+    case ENB_P7:
+      if (!TARGET_POPCNTD)
+	return false;
+      break;
+    case ENB_P7_64:
+      if (!TARGET_POPCNTD || !TARGET_POWERPC64)
+	return false;
+      break;
+    case ENB_P8:
+      if (!TARGET_DIRECT_MOVE)
+	return false;
+      break;
+    case ENB_P8V:
+      if (!TARGET_P8_VECTOR)
+	return false;
+      break;
+    case ENB_P9:
+      if (!TARGET_MODULO)
+	return false;
+      break;
+    case ENB_P9_64:
+      if (!TARGET_MODULO || !TARGET_POWERPC64)
+	return false;
+      break;
+    case ENB_P9V:
+      if (!TARGET_P9_VECTOR)
+	return false;
+      break;
+    case ENB_IEEE128_HW:
+      if (!TARGET_FLOAT128_HW)
+	return false;
+      break;
+    case ENB_DFP:
+      if (!TARGET_DFP)
+	return false;
+      break;
+    case ENB_CRYPTO:
+      if (!TARGET_CRYPTO)
+	return false;
+      break;
+    case ENB_HTM:
+      if (!TARGET_HTM)
+	return false;
+      break;
+    case ENB_P10:
+      if (!TARGET_POWER10)
+	return false;
+      break;
+    case ENB_P10_64:
+      if (!TARGET_POWER10 || !TARGET_POWERPC64)
+	return false;
+      break;
+    case ENB_MMA:
+      if (!TARGET_MMA)
+	return false;
+      break;
+    };
+  return true;
+}
+
 /* Expand an expression EXP that calls a built-in function,
    with result going to TARGET if that's convenient
    (and in mode MODE if that's convenient).
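For readers following the series: the new `rs6000_new_builtin_is_supported_p` above is a purely table-driven gate. Each builtin record in `rs6000_builtin_info_x[]` carries a single `ENB_*` enablement code, and the switch maps that code to the target flags that must be enabled; the default is "supported" unless a required flag is off. A minimal stand-alone sketch of the same pattern (all names here are hypothetical stand-ins, not the GCC code) looks like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Enablement codes, one per builtin record (mirrors the ENB_* idea).  */
enum enb_enable { ENB_ALWAYS, ENB_ALTIVEC, ENB_VSX, ENB_MMA };

/* Hypothetical stand-ins for TARGET_ALTIVEC / TARGET_VSX / TARGET_MMA.  */
struct target_flags { bool altivec, vsx, mma; };

/* Analogous to rs6000_builtin_info_x[]: each builtin names its gate.  */
struct builtin_info { const char *name; enum enb_enable enable; };

static const struct builtin_info builtin_info[] = {
  { "vec_step",   ENB_ALWAYS },
  { "vec_add",    ENB_ALTIVEC },
  { "vec_insert", ENB_VSX },
  { "xvf32ger",   ENB_MMA },
};

/* Same control flow as the patch's function: default to supported,
   and fail only when the gate's required flag is off.  */
static bool
builtin_is_supported (size_t fncode, const struct target_flags *t)
{
  switch (builtin_info[fncode].enable)
    {
    case ENB_ALWAYS:
      return true;
    case ENB_ALTIVEC:
      if (!t->altivec)
	return false;
      break;
    case ENB_VSX:
      if (!t->vsx)
	return false;
      break;
    case ENB_MMA:
      if (!t->mma)
	return false;
      break;
    default:
      return false;
    }
  return true;
}
```

One nice property of this shape, which the patch exploits during overload resolution, is that the resolver can reject an unsupported instance with a precise diagnostic (naming the builtin that gates it) instead of falling through to a generic "invalid parameter combination" error.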