From patchwork Tue Jul 10 08:31:10 2018
X-Patchwork-Submitter: Aldy Hernandez
X-Patchwork-Id: 941858
To: gcc-patches, Richard Biener
From: Aldy Hernandez
Subject: abstract wide int binop code from VRP
Message-ID: <803fa05d-48e3-b9e8-b65c-abfae88e0d3a@redhat.com>
Date: Tue, 10 Jul 2018 04:31:10 -0400

Howdy!

Attached are more cleanups to VRP, getting rid of some repetitive code
as well as abstracting the wide int handling code into functions of its
own.
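The main new entry point is wide_int_binop in fold-const.c.  Roughly,
callers are expected to use it along these lines (just a sketch of the
call shape, not part of the patch; fold_wide_example and the T1/T2
arguments are made up for illustration):

    /* Fold T1 CODE T2, where T1 and T2 are INTEGER_CSTs of the same
       type.  Mirrors what the new int_const_binop_2 does, minus the
       force_fit_type step.  */
    static tree
    fold_wide_example (enum tree_code code, tree t1, tree t2)
    {
      tree type = TREE_TYPE (t1);
      signop sign = TYPE_SIGN (type);
      wi::overflow_type overflow;
      wide_int res;

      wide_int w1 = wi::to_wide (t1);
      wide_int w2 = wi::to_wide (t2, TYPE_PRECISION (type));
      if (!wide_int_binop (code, res, w1, w2, sign, overflow))
        return NULL_TREE;   /* Division by zero or unhandled code.  */
      /* OVERFLOW now says whether the operation overflowed.  */
      return wide_int_to_tree (type, res);
    }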
There should be no change to existing functionality.

You may notice that I have removed the PLUS/MINUS_EXPR handling in
vrp_int_const_binop, even from the new abstracted code:

-          /* For addition, the operands must be of the same sign
-             to yield an overflow.  Its sign is therefore that
-             of one of the operands, for example the first.  */
-          || (code == PLUS_EXPR && sgn1 >= 0)
-          /* For subtraction, operands must be of
-             different signs to yield an overflow.  Its sign is
-             therefore that of the first operand or the opposite of
-             that of the second operand.  A first operand of 0 counts
-             as positive here, for the corner case 0 - (-INF), which
-             overflows, but must yield +INF.  */
-          || (code == MINUS_EXPR && sgn1 >= 0)

This code is actually unreachable: the switch above this snippet already
calls gcc_unreachable () if CODE is not one of the shift or mult/div
operators.

Oh yeah, don't blame me for the cryptic comment to
range_easy_mask_min_max().  That machine language comment was already
there ;-).  (A plain-integer restatement of what it checks follows the
patch, for the curious.)

OK pending one more round of tests?
Aldy

gcc/

	* fold-const.c (int_const_binop_2): Abstract wide int code to...
	(wide_int_binop): ...here.
	* fold-const.h (wide_int_binop): New.
	* tree-vrp.c (vrp_int_const_binop): Call wide_int_binop.
	Remove useless PLUS/MINUS_EXPR case.
	(zero_nonzero_bits_from_vr): Move wide int code...
	(zero_nonzero_bits_from_bounds): ...here.
	(extract_range_from_binary_expr_1): Move mask optimization code...
	(range_easy_mask_min_max): ...here.
	* tree-vrp.h (zero_nonzero_bits_from_bounds): New.
	(range_easy_mask_min_max): New.

diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 5b94c700c81..35171c5de08 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -966,21 +966,18 @@ int_binop_types_match_p (enum tree_code code, const_tree type1, const_tree type2
 	 && TYPE_MODE (type1) == TYPE_MODE (type2);
 }
 
-/* Subroutine of int_const_binop_1 that handles two INTEGER_CSTs.  */
-
-static tree
-int_const_binop_2 (enum tree_code code, const_tree parg1, const_tree parg2,
-		   int overflowable)
-{
-  wide_int res;
-  tree t;
-  tree type = TREE_TYPE (parg1);
-  signop sign = TYPE_SIGN (type);
-  wi::overflow_type overflow = wi::OVF_NONE;
+/* Perform binary tree operation CODE on ARG1 and ARG2 and return the
+   result in RES.  If an overflow occurs, it is stored in OVERFLOW.
 
-  wi::tree_to_wide_ref arg1 = wi::to_wide (parg1);
-  wide_int arg2 = wi::to_wide (parg2, TYPE_PRECISION (type));
+   Return TRUE if the operation is handled and was successful.  */
+bool
+wide_int_binop (enum tree_code code,
+		wide_int &res, const wide_int &arg1, const wide_int &arg2,
+		signop sign, wi::overflow_type &overflow)
+{
+  wide_int tmp;
+  overflow = wi::OVF_NONE;
 
   switch (code)
     {
     case BIT_IOR_EXPR:
@@ -999,37 +996,41 @@ int_const_binop_2 (enum tree_code code, const_tree parg1, const_tree parg2,
     case LSHIFT_EXPR:
       if (wi::neg_p (arg2))
 	{
-	  arg2 = -arg2;
+	  tmp = -arg2;
 	  if (code == RSHIFT_EXPR)
 	    code = LSHIFT_EXPR;
 	  else
 	    code = RSHIFT_EXPR;
 	}
+      else
+	tmp = arg2;
 
       if (code == RSHIFT_EXPR)
 	/* It's unclear from the C standard whether shifts can overflow.
 	   The following code ignores overflow; perhaps a C standard
 	   interpretation ruling is needed.  */
-	res = wi::rshift (arg1, arg2, sign);
+	res = wi::rshift (arg1, tmp, sign);
       else
-	res = wi::lshift (arg1, arg2);
+	res = wi::lshift (arg1, tmp);
       break;
 
     case RROTATE_EXPR:
     case LROTATE_EXPR:
       if (wi::neg_p (arg2))
 	{
-	  arg2 = -arg2;
+	  tmp = -arg2;
 	  if (code == RROTATE_EXPR)
 	    code = LROTATE_EXPR;
 	  else
 	    code = RROTATE_EXPR;
 	}
+      else
+	tmp = arg2;
 
       if (code == RROTATE_EXPR)
-	res = wi::rrotate (arg1, arg2);
+	res = wi::rrotate (arg1, tmp);
       else
-	res = wi::lrotate (arg1, arg2);
+	res = wi::lrotate (arg1, tmp);
       break;
 
     case PLUS_EXPR:
@@ -1051,49 +1052,49 @@ int_const_binop_2 (enum tree_code code, const_tree parg1, const_tree parg2,
     case TRUNC_DIV_EXPR:
     case EXACT_DIV_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::div_trunc (arg1, arg2, sign, &overflow);
       break;
 
     case FLOOR_DIV_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::div_floor (arg1, arg2, sign, &overflow);
       break;
 
     case CEIL_DIV_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::div_ceil (arg1, arg2, sign, &overflow);
       break;
 
     case ROUND_DIV_EXPR:
      if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::div_round (arg1, arg2, sign, &overflow);
       break;
 
     case TRUNC_MOD_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::mod_trunc (arg1, arg2, sign, &overflow);
       break;
 
     case FLOOR_MOD_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::mod_floor (arg1, arg2, sign, &overflow);
       break;
 
     case CEIL_MOD_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::mod_ceil (arg1, arg2, sign, &overflow);
       break;
 
     case ROUND_MOD_EXPR:
       if (arg2 == 0)
-	return NULL_TREE;
+	return false;
       res = wi::mod_round (arg1, arg2, sign, &overflow);
       break;
 
@@ -1106,8 +1107,29 @@ int_const_binop_2 (enum tree_code code, const_tree parg1, const_tree parg2,
       break;
 
     default:
-      return NULL_TREE;
+      return false;
     }
+  return true;
+}
+
+/* Subroutine of int_const_binop_1 that handles two INTEGER_CSTs.  */
+
+
+static tree
+int_const_binop_2 (enum tree_code code, const_tree parg1, const_tree parg2,
+		   int overflowable)
+{
+  wide_int res;
+  tree t;
+  tree type = TREE_TYPE (parg1);
+  signop sign = TYPE_SIGN (type);
+  wi::overflow_type overflow = wi::OVF_NONE;
+
+  wide_int arg1 = wi::to_wide (parg1);
+  wide_int arg2 = wi::to_wide (parg2, TYPE_PRECISION (type));
+
+  if (!wide_int_binop (code, res, arg1, arg2, sign, overflow))
+    return NULL_TREE;
 
   t = force_fit_type (type, res, overflowable,
 		      (((sign == SIGNED || overflowable == -1)
diff --git a/gcc/fold-const.h b/gcc/fold-const.h
index c64b8d0ecf7..354e84a8e5b 100644
--- a/gcc/fold-const.h
+++ b/gcc/fold-const.h
@@ -100,6 +100,9 @@ extern tree fold_bit_and_mask (tree, tree, enum tree_code,
 			       tree, enum tree_code, tree, tree,
 			       tree, enum tree_code, tree, tree, tree *);
 extern tree fold_read_from_constant_string (tree);
+extern bool wide_int_binop (enum tree_code, wide_int &res,
+			    const wide_int &arg1, const wide_int &arg2,
+			    signop, wi::overflow_type &);
 extern tree int_const_binop (enum tree_code, const_tree, const_tree);
 #define build_fold_addr_expr(T)\
 	build_fold_addr_expr_loc (UNKNOWN_LOCATION, (T))
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index a7453abba7f..7764f7b30b6 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -956,11 +956,13 @@ value_range_constant_singleton (value_range *vr)
   return NULL_TREE;
 }
 
-/* Wrapper around int_const_binop.  Return true if we can compute the
-   result; i.e. if the operation doesn't overflow or if the overflow is
-   undefined.  In the latter case (if the operation overflows and
-   overflow is undefined), then adjust the result to be -INF or +INF
-   depending on CODE, VAL1 and VAL2.  Return the value in *RES.
+/* Wrapper around wide_int_binop that adjusts for overflow.
+
+   Return true if we can compute the result; i.e. if the operation
+   doesn't overflow or if the overflow is undefined.  In the latter
+   case (if the operation overflows and overflow is undefined), then
+   adjust the result to be -INF or +INF depending on CODE, VAL1 and
+   VAL2.  Return the value in *RES.
 
    Return false for division by zero, for which the result is
    indeterminate.  */
@@ -970,78 +972,36 @@ vrp_int_const_binop (enum tree_code code, tree val1, tree val2, wide_int *res)
 {
   wi::overflow_type overflow = wi::OVF_NONE;
   signop sign = TYPE_SIGN (TREE_TYPE (val1));
+  wide_int w1 = wi::to_wide (val1);
+  wide_int w2 = wi::to_wide (val2);
 
   switch (code)
     {
     case RSHIFT_EXPR:
     case LSHIFT_EXPR:
-      {
-	wide_int wval2 = wi::to_wide (val2, TYPE_PRECISION (TREE_TYPE (val1)));
-	if (wi::neg_p (wval2))
-	  {
-	    wval2 = -wval2;
-	    if (code == RSHIFT_EXPR)
-	      code = LSHIFT_EXPR;
-	    else
-	      code = RSHIFT_EXPR;
-	  }
-
-	if (code == RSHIFT_EXPR)
-	  /* It's unclear from the C standard whether shifts can overflow.
-	     The following code ignores overflow; perhaps a C standard
-	     interpretation ruling is needed.  */
-	  *res = wi::rshift (wi::to_wide (val1), wval2, sign);
-	else
-	  *res = wi::lshift (wi::to_wide (val1), wval2);
-	break;
-      }
-
+      w2 = wi::to_wide (val2, TYPE_PRECISION (TREE_TYPE (val1)));
+      /* FALLTHRU */
     case MULT_EXPR:
-      *res = wi::mul (wi::to_wide (val1),
-		      wi::to_wide (val2), sign, &overflow);
-      break;
-
     case TRUNC_DIV_EXPR:
     case EXACT_DIV_EXPR:
-      if (val2 == 0)
-	return false;
-      else
-	*res = wi::div_trunc (wi::to_wide (val1),
-			      wi::to_wide (val2), sign, &overflow);
-      break;
-
     case FLOOR_DIV_EXPR:
-      if (val2 == 0)
-	return false;
-      *res = wi::div_floor (wi::to_wide (val1),
-			    wi::to_wide (val2), sign, &overflow);
-      break;
-
     case CEIL_DIV_EXPR:
-      if (val2 == 0)
-	return false;
-      *res = wi::div_ceil (wi::to_wide (val1),
-			   wi::to_wide (val2), sign, &overflow);
-      break;
-
     case ROUND_DIV_EXPR:
-      if (val2 == 0)
+      if (!wide_int_binop (code, *res, w1, w2, sign, overflow))
 	return false;
-      *res = wi::div_round (wi::to_wide (val1),
-			    wi::to_wide (val2), sign, &overflow);
       break;
 
     default:
       gcc_unreachable ();
     }
 
+  /* If the operation overflowed return -INF or +INF depending on the
+     operation and the combination of signs of the operands.  */
   if (overflow
       && TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (val1)))
     {
-      /* If the operation overflowed return -INF or +INF depending
-	 on the operation and the combination of signs of the operands.  */
-      int sgn1 = tree_int_cst_sgn (val1);
-      int sgn2 = tree_int_cst_sgn (val2);
+      int sign1 = tree_int_cst_sgn (val1);
+      int sign2 = tree_int_cst_sgn (val2);
 
       /* Notice that we only need to handle the restricted set of
	 operations handled by extract_range_from_binary_expr.
@@ -1053,64 +1013,47 @@ vrp_int_const_binop (enum tree_code code, tree val1, tree val2, wide_int *res)
 
       /* For multiplication, the sign of the overflow is given
	 by the comparison of the signs of the operands.  */
-      if ((code == MULT_EXPR && sgn1 == sgn2)
-	  /* For addition, the operands must be of the same sign
-	     to yield an overflow.  Its sign is therefore that
-	     of one of the operands, for example the first.  */
-	  || (code == PLUS_EXPR && sgn1 >= 0)
-	  /* For subtraction, operands must be of
-	     different signs to yield an overflow.  Its sign is
-	     therefore that of the first operand or the opposite of
-	     that of the second operand.  A first operand of 0 counts
-	     as positive here, for the corner case 0 - (-INF), which
-	     overflows, but must yield +INF.  */
-	  || (code == MINUS_EXPR && sgn1 >= 0)
+      if ((code == MULT_EXPR && sign1 == sign2)
	  /* For division, the only case is -INF / -1 = +INF.  */
	  || code == TRUNC_DIV_EXPR
	  || code == FLOOR_DIV_EXPR
	  || code == CEIL_DIV_EXPR
	  || code == EXACT_DIV_EXPR
	  || code == ROUND_DIV_EXPR)
-	*res = wi::max_value (TYPE_PRECISION (TREE_TYPE (val1)),
-			      TYPE_SIGN (TREE_TYPE (val1)));
+	*res = wi::max_value (TYPE_PRECISION (TREE_TYPE (val1)), sign);
       else
-	*res = wi::min_value (TYPE_PRECISION (TREE_TYPE (val1)),
-			      TYPE_SIGN (TREE_TYPE (val1)));
+	*res = wi::min_value (TYPE_PRECISION (TREE_TYPE (val1)), sign);
       return true;
     }
 
   return !overflow;
 }
 
-
-/* For range VR compute two wide_int bitmasks.  In *MAY_BE_NONZERO
-   bitmask if some bit is unset, it means for all numbers in the range
+/* For range [LB, UB] compute two wide_int bitmasks.  In *MAY_BE_NONZERO
+   bitmask, if some bit is unset, it means for all numbers in the range
    the bit is 0, otherwise it might be 0 or 1.  In *MUST_BE_NONZERO
-   bitmask if some bit is set, it means for all numbers in the range
+   bitmask, if some bit is set, it means for all numbers in the range
    the bit is 1, otherwise it might be 0 or 1.  */
-bool
-zero_nonzero_bits_from_vr (const tree expr_type,
-			   value_range *vr,
-			   wide_int *may_be_nonzero,
-			   wide_int *must_be_nonzero)
+void
+zero_nonzero_bits_from_bounds (signop sign,
+			       const wide_int &lb, const wide_int &ub,
+			       wide_int *may_be_nonzero,
+			       wide_int *must_be_nonzero)
 {
-  *may_be_nonzero = wi::minus_one (TYPE_PRECISION (expr_type));
-  *must_be_nonzero = wi::zero (TYPE_PRECISION (expr_type));
-  if (!range_int_cst_p (vr))
-    return false;
+  *may_be_nonzero = wi::minus_one (lb.get_precision ());
+  *must_be_nonzero = wi::zero (lb.get_precision ());
 
-  if (range_int_cst_singleton_p (vr))
+  if (wi::eq_p (lb, ub))
     {
-      *may_be_nonzero = wi::to_wide (vr->min);
+      *may_be_nonzero = lb;
       *must_be_nonzero = *may_be_nonzero;
     }
-  else if (tree_int_cst_sgn (vr->min) >= 0
-	   || tree_int_cst_sgn (vr->max) < 0)
+  else if (wi::ge_p (lb, 0, sign) || wi::lt_p (ub, 0, sign))
     {
-      wide_int xor_mask = wi::to_wide (vr->min) ^ wi::to_wide (vr->max);
-      *may_be_nonzero = wi::to_wide (vr->min) | wi::to_wide (vr->max);
-      *must_be_nonzero = wi::to_wide (vr->min) & wi::to_wide (vr->max);
+      wide_int xor_mask = lb ^ ub;
+      *may_be_nonzero = lb | ub;
+      *must_be_nonzero = lb & ub;
       if (xor_mask != 0)
	{
	  wide_int mask = wi::mask (wi::floor_log2 (xor_mask), false,
@@ -1119,7 +1062,26 @@ zero_nonzero_bits_from_vr (const tree expr_type,
	  *must_be_nonzero = wi::bit_and_not (*must_be_nonzero, mask);
	}
     }
+}
 
+/* Like zero_nonzero_bits_from_bounds, but use the range in value_range VR.  */
+
+bool
+zero_nonzero_bits_from_vr (const tree expr_type,
+			   value_range *vr,
+			   wide_int *may_be_nonzero,
+			   wide_int *must_be_nonzero)
+{
+  if (!range_int_cst_p (vr))
+    {
+      *may_be_nonzero = wi::minus_one (TYPE_PRECISION (expr_type));
+      *must_be_nonzero = wi::zero (TYPE_PRECISION (expr_type));
+      return false;
+    }
+
+  zero_nonzero_bits_from_bounds (TYPE_SIGN (expr_type),
+				 wi::to_wide (vr->min), wi::to_wide (vr->max),
+				 may_be_nonzero, must_be_nonzero);
   return true;
 }
 
@@ -1275,6 +1237,52 @@ extract_range_from_multiplicative_op_1 (value_range *vr,
		   wide_int_to_tree (type, max), NULL);
 }
 
+/* For op & or | attempt to optimize:
+
+   [LB, UB] op Z
+   into:
+   [LB op Z, UB op Z]
+
+   if Z is a constant which (for op | its bitwise not) has n
+   consecutive least significant bits cleared followed by m 1
+   consecutive bits set immediately above it and either
+   m + n == precision, or (x >> (m + n)) == (y >> (m + n)).
+
+   The least significant n bits of all the values in the range are
+   cleared or set, the m bits above it are preserved and any bits
+   above these are required to be the same for all values in the
+   range.
+
+   Return TRUE if the min and max can simply be folded.  */
+
+bool
+range_easy_mask_min_max (tree_code code,
+			 const wide_int &lb, const wide_int &ub,
+			 const wide_int &mask)
+
+{
+  wide_int w = mask;
+  int m = 0, n = 0;
+  if (code == BIT_IOR_EXPR)
+    w = ~w;
+  if (wi::eq_p (w, 0))
+    n = w.get_precision ();
+  else
+    {
+      n = wi::ctz (w);
+      w = ~(w | wi::mask (n, false, w.get_precision ()));
+      if (wi::eq_p (w, 0))
+	m = w.get_precision () - n;
+      else
+	m = wi::ctz (w) - n;
+    }
+  wide_int new_mask = wi::mask (m + n, true, w.get_precision ());
+  if ((new_mask & lb) == (new_mask & ub))
+    return true;
+
+  return false;
+}
+
 /* If BOUND will include a symbolic bound, adjust it accordingly,
    otherwise leave it as is.
 
@@ -2175,39 +2183,14 @@ extract_range_from_binary_expr_1 (value_range *vr,
	      vr1p = &vr0;
	    }
	  /* For op & or | attempt to optimize:
-	     [x, y] op z into [x op z, y op z]
-	     if z is a constant which (for op | its bitwise not) has n
-	     consecutive least significant bits cleared followed by m 1
-	     consecutive bits set immediately above it and either
-	     m + n == precision, or (x >> (m + n)) == (y >> (m + n)).
-	     The least significant n bits of all the values in the range are
-	     cleared or set, the m bits above it are preserved and any bits
-	     above these are required to be the same for all values in the
-	     range.  */
-	  if (vr0p && range_int_cst_p (vr0p))
+	     [x, y] op z into [x op z, y op z].  */
+	  if (vr0p && range_int_cst_p (vr0p)
+	      && range_easy_mask_min_max (code, wi::to_wide (vr0p->min),
+					  wi::to_wide (vr0p->max),
+					  wi::to_wide (vr1p->min)))
	    {
-	      wide_int w = wi::to_wide (vr1p->min);
-	      int m = 0, n = 0;
-	      if (code == BIT_IOR_EXPR)
-		w = ~w;
-	      if (wi::eq_p (w, 0))
-		n = TYPE_PRECISION (expr_type);
-	      else
-		{
-		  n = wi::ctz (w);
-		  w = ~(w | wi::mask (n, false, w.get_precision ()));
-		  if (wi::eq_p (w, 0))
-		    m = TYPE_PRECISION (expr_type) - n;
-		  else
-		    m = wi::ctz (w) - n;
-		}
-	      wide_int mask = wi::mask (m + n, true, w.get_precision ());
-	      if ((mask & wi::to_wide (vr0p->min))
-		  == (mask & wi::to_wide (vr0p->max)))
-		{
-		  min = int_const_binop (code, vr0p->min, vr1p->min);
-		  max = int_const_binop (code, vr0p->max, vr1p->min);
-		}
+	      min = int_const_binop (code, vr0p->min, vr1p->min);
+	      max = int_const_binop (code, vr0p->max, vr1p->min);
	    }
	}
 
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index 608ca5658b5..946e26e29b4 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -112,8 +112,14 @@ extern bool range_int_cst_p (value_range *);
 extern int operand_less_p (tree, tree);
 extern bool find_case_label_range (gswitch *, tree, tree, size_t *, size_t *);
 extern bool find_case_label_index (gswitch *, size_t, tree, size_t *);
+extern void zero_nonzero_bits_from_bounds (signop, const wide_int&,
+					   const wide_int&, wide_int *,
+					   wide_int *);
 extern bool zero_nonzero_bits_from_vr (const tree, value_range *,
				       wide_int *, wide_int *);
+extern bool range_easy_mask_min_max (tree_code,
+				     const wide_int &lb, const wide_int &ub,
+				     const wide_int &mask);
 extern bool overflow_comparison_p (tree_code, tree, tree, bool, tree *);
 extern bool range_int_cst_singleton_p (value_range *);
 extern int value_inside_range (tree, tree, tree);
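P.S.  For the record, here is the condition range_easy_mask_min_max()
checks, restated with plain 64-bit integers.  This is a standalone
illustration only; the function itself works on wide_ints, and nothing
below (easy_mask_p, the constants) is part of the patch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* For [LB, UB] op Z (op is & or |; Z is complemented for |), require
       Z to look like <anything><m one bits><n zero bits>, and require LB
       and UB to agree on every bit above the low m + n bits.  When that
       holds, [LB, UB] op Z folds to [LB op Z, UB op Z].  */
    static bool
    easy_mask_p (bool is_ior, uint64_t lb, uint64_t ub, uint64_t z)
    {
      uint64_t w = is_ior ? ~z : z;
      int m, n;
      if (w == 0)
        n = 64, m = 0;
      else
        {
          n = __builtin_ctzll (w);              /* Trailing zero bits of Z.  */
          uint64_t rest = ~(w | ((1ULL << n) - 1));
          m = rest ? __builtin_ctzll (rest) - n : 64 - n;
        }
      /* All bits above the low m + n ones must match between LB and UB.  */
      uint64_t high = (m + n >= 64) ? 0 : ~((1ULL << (m + n)) - 1);
      return (high & lb) == (high & ub);
    }

    int
    main (void)
    {
      /* Z = 0xff00 has n = 8 zeros then m = 8 ones; both bounds agree
         above bit 15, so [0x1234, 0x12ff] & 0xff00 = [0x1200, 0x1200].  */
      printf ("%d\n", easy_mask_p (false, 0x1234, 0x12ff, 0xff00));   /* 1 */
      /* Here the bounds differ above bit 15, so no easy fold.  */
      printf ("%d\n", easy_mask_p (false, 0x1234, 0x12234, 0xff00));  /* 0 */
      return 0;
    }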