Patchwork Finish double_int conversion.

login
register
mail settings
Submitter Lawrence Crowl
Date Sept. 12, 2012, 1 a.m.
Message ID <CAGqM8fb27dr3z6hnZYAFeWWCfitbvhHUmor6R3XSPNoBgtVsBQ@mail.gmail.com>
Download mbox | patch
Permalink /patch/183224/
State New
Headers show

Comments

Lawrence Crowl - Sept. 12, 2012, 1 a.m.
Finish conversion of uses of double_int to the new API.

Some old functionality required new interfaces, and these have been
added to double-int.[hc]:

  double_int::from_pair - static constructor function
  wide_mul_with_sign - double-wide multiply instruction
  neg_with_overflow - negation with overlow testing
  divmod_with_overflow - div and mod with overlow testing

The prior two generations of the interface have been removed.

Some of these old interfaces are still used as static implementation
in double-int.c.

The changed compiler appears 0.321% faster with 80% confidence of
being faster.

Tested on x86_64.  However, there are changes to the avr and sparc
config files, and I have not tested those.  Could the maintainers
please apply the patch and see if it works?

After testing is complete, okay for trunk?
Richard Guenther - Sept. 12, 2012, 8:42 a.m.
On Tue, 11 Sep 2012, Lawrence Crowl wrote:

> Finish conversion of uses of double_int to the new API.
> 
> Some old functionality required new interfaces, and these have been
> added to double-int.[hc]:
> 
>   double_int::from_pair - static constructor function
>   wide_mul_with_sign - double-wide multiply instruction
>   neg_with_overflow - negation with overlow testing
>   divmod_with_overflow - div and mod with overlow testing
> 
> The prior two generations of the interface have been removed.
> 
> Some of these old interfaces are still used as static implementation
> in double-int.c.
> 
> The changed compiler appears 0.321% faster with 80% confidence of
> being faster.
> 
> Tested on x86_64.  However, there are changes to the avr and sparc
> config files, and I have not tested those.  Could the maintainers
> please apply the patch and see if it works?
> 
> After testing is complete, okay for trunk?

Ok.  For avr and sparc you can build cross cc1 (configure for the
cross and do make all-gcc), if that works it should be fine.

Thanks,
Richard.

> ========================
> 
> Index: gcc/java/ChangeLog
> 
> 2012-09-10  Lawrence Crowl  <crowl@google.com>
> 
> 	* decl.c (java_init_decl_processing): Change to new double_int API.
> 	* jcf-parse.c (get_constant): Likewise.
> 	* boehm.c (mark_reference_fields): Likewise.
> 	(get_boehm_type_descriptor): Likewise.
> 
> Index: gcc/ChangeLog
> 
> 2012-09-10  Lawrence Crowl  <crowl@google.com>
> 
> 	* double-int.h (double_int::from_pair): New.
> 	(double_int::wide_mul_with_sign): New.
> 	(double_int::neg_with_overflow): New.
> 	(double_int::divmod_with_overflow): New.
> 	(shwi_to_double_int): Remove.
> 	(uhwi_to_double_int): Remove.
> 	(double_int_to_shwi): Remove.
> 	(double_int_to_uhwi): Remove.
> 	(double_int_fits_in_uhwi_p): Remove.
> 	(double_int_fits_in_shwi_p): Remove.
> 	(double_int_fits_in_hwi_p): Remove.
> 	(double_int_mul): Remove.
> 	(double_int_mul_with_sign): Remove.
> 	(double_int_add): Remove.
> 	(double_int_sub): Remove.
> 	(double_int_neg): Remove.
> 	(double_int_div): Remove.
> 	(double_int_sdiv): Remove.
> 	(double_int_udiv): Remove.
> 	(double_int_mod): Remove.
> 	(double_int_smod): Remove.
> 	(double_int_umod): Remove.
> 	(double_int_divmod): Remove.
> 	(double_int_sdivmod): Remove.
> 	(double_int_udivmod): Remove.
> 	(double_int_multiple_of): Remove.
> 	(double_int_setbit): Remove.
> 	(double_int_ctz): Remove.
> 	(double_int_not): Remove.
> 	(double_int_ior): Remove.
> 	(double_int_and): Remove.
> 	(double_int_and_not): Remove.
> 	(double_int_xor): Remove.
> 	(double_int_lshift): Remove.
> 	(double_int_rshift): Remove.
> 	(double_int_lrotate): Remove.
> 	(double_int_rrotate): Remove.
> 	(double_int_negative_p): Remove.
> 	(double_int_cmp): Remove.
> 	(double_int_scmp): Remove.
> 	(double_int_ucmp): Remove.
> 	(double_int_max): Remove.
> 	(double_int_smax): Remove.
> 	(double_int_umax): Remove.
> 	(double_int_min): Remove.
> 	(double_int_smin): Remove.
> 	(double_int_umin): Remove.
> 	(double_int_ext): Remove.
> 	(double_int_sext): Remove.
> 	(double_int_zext): Remove.
> 	(double_int_mask): Remove.
> 	(double_int_max_value): Remove.
> 	(double_int_min_value): Remove.
> 	(double_int_zero_p): Remove.
> 	(double_int_one_p): Remove.
> 	(double_int_minus_one_p): Remove.
> 	(double_int_equal_p): Remove.
> 	(double_int_popcount): Remove.
> 	(extern add_double_with_sign): Remove.
> 	(#define add_double): Remove.
> 	(extern neg_double): Remove.
> 	(extern mul_double_with_sign): Remove.
> 	(extern mul_double_wide_with_sign): Remove.
> 	(#define mul_double): Remove.
> 	(extern lshift_double): Remove.
> 	(extern div_and_round_double): Remove.
> 	* double-int.c (add_double_with_sign): Make static.
> 	(#defined add_double): Localized from header.
> 	(neg_double): Make static.
> 	(mul_double_with_sign): Make static.
> 	(mul_double_wide_with_sign): Make static.
> 	(#defined mul_double): Localized from header.
> 	(lshift_double): Make static.
> 	(div_and_round_double): Make static.
> 	(double_int::wide_mul_with_sign): New.
> 	(double_int::neg_with_overflow): New.
> 	(double_int::divmod_with_overflow): New.
> 	* emit-rtl.c (init_emit_once): Change to new double_int API.
> 	* explow.c (plus_constant): Likewise.
> 	* expmed.c (choose_multiplier): Likewise.
> 	* fold-const.c (int_const_binop_1): Likewise.
> 	(fold_div_compare): Likewise.
> 	(maybe_canonicalize_comparison): Likewise.
> 	(pointer_may_wrap_p): Likewise.
> 	(fold_negate_const): Likewise.
> 	(fold_abs_const): Likewise.
> 	* simplify-rtx.c (simplify_const_unary_operation): Likewise.
> 	(simplify_const_binary_operation): Likewise.
> 	* tree-chrec.c (tree_fold_binomial): Likewise.
> 	* tree-vrp.c (extract_range_from_binary_expr_1): Likewise.
> 	* config/sparc/sparc.c (sparc_fold_builtin): Likewise.
> 	* config/avr/avr.c (avr_double_int_push_digit): Likewise.
> 	(avr_map): Likewise.
> 	(avr_map_decompose): Likewise.
> 	(avr_out_insert_bits): Likewise.
> 
> Index: gcc/cp/ChangeLog
> 
> 2012-09-10  Lawrence Crowl  <crowl@google.com>
> 
> 	* init.c (build_new_1): Change to new double_int API.
> 	* decl.c (build_enumerator): Likewise.
> 	* typeck2.c (process_init_constructor_array): Likewise.
> 	* mangle.c (write_array_type): Likewise.
> 
> Index: gcc/fortran/ChangeLog
> 
> 2012-09-10  Lawrence Crowl  <crowl@google.com>
> 
> 	* trans-expr.c (gfc_conv_cst_int_power): Change to new double_int API.
> 	* target-memory.c (gfc_interpret_logical): Likewise.
> 
> ========================
> 
> Index: gcc/tree-vrp.c
> ===================================================================
> --- gcc/tree-vrp.c	(revision 191083)
> +++ gcc/tree-vrp.c	(working copy)
> @@ -2478,7 +2478,7 @@ extract_range_from_binary_expr_1 (value_
>  		  if (tmin.cmp (tmax, uns) < 0)
>  		    covers = true;
>  		  tmax = tem + double_int_minus_one;
> -		  if (double_int_cmp (tmax, tem, uns) > 0)
> +		  if (tmax.cmp (tem, uns) > 0)
>  		    covers = true;
>  		  /* If the anti-range would cover nothing, drop to varying.
>  		     Likewise if the anti-range bounds are outside of the
> @@ -2632,37 +2632,26 @@ extract_range_from_binary_expr_1 (value_
>  	    }
>  	  uns = uns0 & uns1;
> 
> -	  mul_double_wide_with_sign (min0.low, min0.high,
> -				     min1.low, min1.high,
> -				     &prod0l.low, &prod0l.high,
> -				     &prod0h.low, &prod0h.high, true);
> +	  bool overflow;
> +	  prod0l = min0.wide_mul_with_sign (min1, true, &prod0h, &overflow);
>  	  if (!uns0 && min0.is_negative ())
>  	    prod0h -= min1;
>  	  if (!uns1 && min1.is_negative ())
>  	    prod0h -= min0;
> 
> -	  mul_double_wide_with_sign (min0.low, min0.high,
> -				     max1.low, max1.high,
> -				     &prod1l.low, &prod1l.high,
> -				     &prod1h.low, &prod1h.high, true);
> +	  prod1l = min0.wide_mul_with_sign (max1, true, &prod1h, &overflow);
>  	  if (!uns0 && min0.is_negative ())
>  	    prod1h -= max1;
>  	  if (!uns1 && max1.is_negative ())
>  	    prod1h -= min0;
> 
> -	  mul_double_wide_with_sign (max0.low, max0.high,
> -				     min1.low, min1.high,
> -				     &prod2l.low, &prod2l.high,
> -				     &prod2h.low, &prod2h.high, true);
> +	  prod2l = max0.wide_mul_with_sign (min1, true, &prod2h, &overflow);
>  	  if (!uns0 && max0.is_negative ())
>  	    prod2h -= min1;
>  	  if (!uns1 && min1.is_negative ())
>  	    prod2h -= max0;
> 
> -	  mul_double_wide_with_sign (max0.low, max0.high,
> -				     max1.low, max1.high,
> -				     &prod3l.low, &prod3l.high,
> -				     &prod3h.low, &prod3h.high, true);
> +	  prod3l = max0.wide_mul_with_sign (max1, true, &prod3h, &overflow);
>  	  if (!uns0 && max0.is_negative ())
>  	    prod3h -= max1;
>  	  if (!uns1 && max1.is_negative ())
> Index: gcc/java/decl.c
> ===================================================================
> --- gcc/java/decl.c	(revision 191083)
> +++ gcc/java/decl.c	(working copy)
> @@ -617,7 +617,7 @@ java_init_decl_processing (void)
>    decimal_int_max = build_int_cstu (unsigned_int_type_node, 0x80000000);
>    decimal_long_max
>      = double_int_to_tree (unsigned_long_type_node,
> -			  double_int_setbit (double_int_zero, 64));
> +			  double_int_zero.set_bit (64));
> 
>    long_zero_node = build_int_cst (long_type_node, 0);
> 
> Index: gcc/java/jcf-parse.c
> ===================================================================
> --- gcc/java/jcf-parse.c	(revision 191083)
> +++ gcc/java/jcf-parse.c	(working copy)
> @@ -1043,9 +1043,9 @@ get_constant (JCF *jcf, int index)
>  	double_int val;
> 
>  	num = JPOOL_UINT (jcf, index);
> -	val = double_int_lshift (uhwi_to_double_int (num), 32, 64, false);
> +	val = double_int::from_uhwi (num).llshift (32, 64);
>  	num = JPOOL_UINT (jcf, index + 1);
> -	val = double_int_ior (val, uhwi_to_double_int (num));
> +	val |= double_int::from_uhwi (num);
> 
>  	value = double_int_to_tree (long_type_node, val);
>  	break;
> Index: gcc/java/boehm.c
> ===================================================================
> --- gcc/java/boehm.c	(revision 191083)
> +++ gcc/java/boehm.c	(working copy)
> @@ -108,7 +108,7 @@ mark_reference_fields (tree field,
>  	     bits for all words in the record. This is conservative, but the
>  	     size_words != 1 case is impossible in regular java code. */
>  	  for (i = 0; i < size_words; ++i)
> -	    *mask = double_int_setbit (*mask, ubit - count - i - 1);
> +	    *mask = (*mask).set_bit (ubit - count - i - 1);
> 
>  	  if (count >= ubit - 2)
>  	    *pointer_after_end = 1;
> @@ -200,7 +200,7 @@ get_boehm_type_descriptor (tree type)
>        while (last_set_index)
>  	{
>  	  if ((last_set_index & 1))
> -	    mask = double_int_setbit (mask, log2_size + count);
> +	    mask = mask.set_bit (log2_size + count);
>  	  last_set_index >>= 1;
>  	  ++count;
>  	}
> @@ -209,7 +209,7 @@ get_boehm_type_descriptor (tree type)
>    else if (! pointer_after_end)
>      {
>        /* Bottom two bits for bitmap mark type are 01.  */
> -      mask = double_int_setbit (mask, 0);
> +      mask = mask.set_bit (0);
>        value = double_int_to_tree (value_type, mask);
>      }
>    else
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c	(revision 191083)
> +++ gcc/fold-const.c	(working copy)
> @@ -982,12 +982,6 @@ int_const_binop_1 (enum tree_code code,
>        break;
> 
>      case MINUS_EXPR:
> -/* FIXME(crowl) Remove this code if the replacment works.
> -      neg_double (op2.low, op2.high, &res.low, &res.high);
> -      add_double (op1.low, op1.high, res.low, res.high,
> -		  &res.low, &res.high);
> -      overflow = OVERFLOW_SUM_SIGN (res.high, op2.high, op1.high);
> -*/
>        res = op1.add_with_sign (-op2, false, &overflow);
>        break;
> 
> @@ -1035,10 +1029,7 @@ int_const_binop_1 (enum tree_code code,
>  	  res = double_int_one;
>  	  break;
>  	}
> -      overflow = div_and_round_double (code, uns,
> -				       op1.low, op1.high, op2.low, op2.high,
> -				       &res.low, &res.high,
> -				       &tmp.low, &tmp.high);
> +      res = op1.divmod_with_overflow (op2, uns, code, &tmp, &overflow);
>        break;
> 
>      case TRUNC_MOD_EXPR:
> @@ -1060,10 +1051,7 @@ int_const_binop_1 (enum tree_code code,
>      case ROUND_MOD_EXPR:
>        if (op2.is_zero ())
>  	return NULL_TREE;
> -      overflow = div_and_round_double (code, uns,
> -				       op1.low, op1.high, op2.low, op2.high,
> -				       &tmp.low, &tmp.high,
> -				       &res.low, &res.high);
> +      tmp = op1.divmod_with_overflow (op2, uns, code, &res, &overflow);
>        break;
> 
>      case MIN_EXPR:
> @@ -6290,15 +6278,12 @@ fold_div_compare (location_t loc,
>    double_int val;
>    bool unsigned_p = TYPE_UNSIGNED (TREE_TYPE (arg0));
>    bool neg_overflow;
> -  int overflow;
> +  bool overflow;
> 
>    /* We have to do this the hard way to detect unsigned overflow.
>       prod = int_const_binop (MULT_EXPR, arg01, arg1);  */
> -  overflow = mul_double_with_sign (TREE_INT_CST_LOW (arg01),
> -				   TREE_INT_CST_HIGH (arg01),
> -				   TREE_INT_CST_LOW (arg1),
> -				   TREE_INT_CST_HIGH (arg1),
> -				   &val.low, &val.high, unsigned_p);
> +  val = TREE_INT_CST (arg01)
> +	.mul_with_sign (TREE_INT_CST (arg1), unsigned_p, &overflow);
>    prod = force_fit_type_double (TREE_TYPE (arg00), val, -1, overflow);
>    neg_overflow = false;
> 
> @@ -6309,11 +6294,8 @@ fold_div_compare (location_t loc,
>        lo = prod;
> 
>        /* Likewise hi = int_const_binop (PLUS_EXPR, prod, tmp).  */
> -      overflow = add_double_with_sign (TREE_INT_CST_LOW (prod),
> -				       TREE_INT_CST_HIGH (prod),
> -				       TREE_INT_CST_LOW (tmp),
> -				       TREE_INT_CST_HIGH (tmp),
> -				       &val.low, &val.high, unsigned_p);
> +      val = TREE_INT_CST (prod)
> +	    .add_with_sign (TREE_INT_CST (tmp), unsigned_p, &overflow);
>        hi = force_fit_type_double (TREE_TYPE (arg00), val,
>  				  -1, overflow | TREE_OVERFLOW (prod));
>      }
> @@ -8693,8 +8675,7 @@ maybe_canonicalize_comparison (location_
>  static bool
>  pointer_may_wrap_p (tree base, tree offset, HOST_WIDE_INT bitpos)
>  {
> -  unsigned HOST_WIDE_INT offset_low, total_low;
> -  HOST_WIDE_INT size, offset_high, total_high;
> +  double_int di_offset, total;
> 
>    if (!POINTER_TYPE_P (TREE_TYPE (base)))
>      return true;
> @@ -8703,28 +8684,22 @@ pointer_may_wrap_p (tree base, tree offs
>      return true;
> 
>    if (offset == NULL_TREE)
> -    {
> -      offset_low = 0;
> -      offset_high = 0;
> -    }
> +    di_offset = double_int_zero;
>    else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
>      return true;
>    else
> -    {
> -      offset_low = TREE_INT_CST_LOW (offset);
> -      offset_high = TREE_INT_CST_HIGH (offset);
> -    }
> +    di_offset = TREE_INT_CST (offset);
> 
> -  if (add_double_with_sign (offset_low, offset_high,
> -			    bitpos / BITS_PER_UNIT, 0,
> -			    &total_low, &total_high,
> -			    true))
> +  bool overflow;
> +  double_int units = double_int::from_uhwi (bitpos / BITS_PER_UNIT);
> +  total = di_offset.add_with_sign (units, true, &overflow);
> +  if (overflow)
>      return true;
> 
> -  if (total_high != 0)
> +  if (total.high != 0)
>      return true;
> 
> -  size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
> +  HOST_WIDE_INT size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
>    if (size <= 0)
>      return true;
> 
> @@ -8739,7 +8714,7 @@ pointer_may_wrap_p (tree base, tree offs
>  	size = base_size;
>      }
> 
> -  return total_low > (unsigned HOST_WIDE_INT) size;
> +  return total.low > (unsigned HOST_WIDE_INT) size;
>  }
> 
>  /* Subroutine of fold_binary.  This routine performs all of the
> @@ -15939,8 +15914,8 @@ fold_negate_const (tree arg0, tree type)
>      case INTEGER_CST:
>        {
>  	double_int val = tree_to_double_int (arg0);
> -	int overflow = neg_double (val.low, val.high, &val.low, &val.high);
> -
> +	bool overflow;
> +	val = val.neg_with_overflow (&overflow);
>  	t = force_fit_type_double (type, val, 1,
>  				   (overflow | TREE_OVERFLOW (arg0))
>  				   && !TYPE_UNSIGNED (type));
> @@ -15997,9 +15972,8 @@ fold_abs_const (tree arg0, tree type)
>  	   its negation.  */
>  	else
>  	  {
> -	    int overflow;
> -
> -	    overflow = neg_double (val.low, val.high, &val.low, &val.high);
> +	    bool overflow;
> +	    val = val.neg_with_overflow (&overflow);
>  	    t = force_fit_type_double (type, val, -1,
>  				       overflow | TREE_OVERFLOW (arg0));
>  	  }
> Index: gcc/tree-chrec.c
> ===================================================================
> --- gcc/tree-chrec.c	(revision 191083)
> +++ gcc/tree-chrec.c	(working copy)
> @@ -461,8 +461,8 @@ chrec_fold_multiply (tree type,
>  static tree
>  tree_fold_binomial (tree type, tree n, unsigned int k)
>  {
> -  unsigned HOST_WIDE_INT lidx, lnum, ldenom, lres, ldum;
> -  HOST_WIDE_INT hidx, hnum, hdenom, hres, hdum;
> +  double_int num, denom, idx, di_res;
> +  bool overflow;
>    unsigned int i;
>    tree res;
> 
> @@ -472,59 +472,41 @@ tree_fold_binomial (tree type, tree n, u
>    if (k == 1)
>      return fold_convert (type, n);
> 
> +  /* Numerator = n.  */
> +  num = TREE_INT_CST (n);
> +
>    /* Check that k <= n.  */
> -  if (TREE_INT_CST_HIGH (n) == 0
> -      && TREE_INT_CST_LOW (n) < k)
> +  if (num.ult (double_int::from_uhwi (k)))
>      return NULL_TREE;
> 
> -  /* Numerator = n.  */
> -  lnum = TREE_INT_CST_LOW (n);
> -  hnum = TREE_INT_CST_HIGH (n);
> -
>    /* Denominator = 2.  */
> -  ldenom = 2;
> -  hdenom = 0;
> +  denom = double_int::from_uhwi (2);
> 
>    /* Index = Numerator-1.  */
> -  if (lnum == 0)
> -    {
> -      hidx = hnum - 1;
> -      lidx = ~ (unsigned HOST_WIDE_INT) 0;
> -    }
> -  else
> -    {
> -      hidx = hnum;
> -      lidx = lnum - 1;
> -    }
> +  idx = num - double_int_one;
> 
>    /* Numerator = Numerator*Index = n*(n-1).  */
> -  if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
> +  num = num.mul_with_sign (idx, false, &overflow);
> +  if (overflow)
>      return NULL_TREE;
> 
>    for (i = 3; i <= k; i++)
>      {
>        /* Index--.  */
> -      if (lidx == 0)
> -	{
> -	  hidx--;
> -	  lidx = ~ (unsigned HOST_WIDE_INT) 0;
> -	}
> -      else
> -        lidx--;
> +      --idx;
> 
>        /* Numerator *= Index.  */
> -      if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
> +      num = num.mul_with_sign (idx, false, &overflow);
> +      if (overflow)
>  	return NULL_TREE;
> 
>        /* Denominator *= i.  */
> -      mul_double (ldenom, hdenom, i, 0, &ldenom, &hdenom);
> +      denom *= double_int::from_uhwi (i);
>      }
> 
>    /* Result = Numerator / Denominator.  */
> -  div_and_round_double (EXACT_DIV_EXPR, 1, lnum, hnum, ldenom, hdenom,
> -			&lres, &hres, &ldum, &hdum);
> -
> -  res = build_int_cst_wide (type, lres, hres);
> +  di_res = num.div (denom, true, EXACT_DIV_EXPR);
> +  res = build_int_cst_wide (type, di_res.low, di_res.high);
>    return int_fits_type_p (res, type) ? res : NULL_TREE;
>  }
> 
> Index: gcc/cp/init.c
> ===================================================================
> --- gcc/cp/init.c	(revision 191083)
> +++ gcc/cp/init.c	(working copy)
> @@ -2239,11 +2239,11 @@ build_new_1 (VEC(tree,gc) **placement, t
>        if (TREE_CONSTANT (inner_nelts_cst)
>  	  && TREE_CODE (inner_nelts_cst) == INTEGER_CST)
>  	{
> -	  double_int result;
> -	  if (mul_double (TREE_INT_CST_LOW (inner_nelts_cst),
> -			  TREE_INT_CST_HIGH (inner_nelts_cst),
> -			  inner_nelts_count.low, inner_nelts_count.high,
> -			  &result.low, &result.high))
> +	  bool overflow;
> +	  double_int result = TREE_INT_CST (inner_nelts_cst)
> +			      .mul_with_sign (inner_nelts_count,
> +					      false, &overflow);
> +	  if (overflow)
>  	    {
>  	      if (complain & tf_error)
>  		error ("integer overflow in array size");
> @@ -2345,8 +2345,8 @@ build_new_1 (VEC(tree,gc) **placement, t
>        /* Maximum available size in bytes.  Half of the address space
>  	 minus the cookie size.  */
>        double_int max_size
> -	= double_int_lshift (double_int_one, TYPE_PRECISION (sizetype) - 1,
> -			     HOST_BITS_PER_DOUBLE_INT, false);
> +	= double_int_one.llshift (TYPE_PRECISION (sizetype) - 1,
> +				  HOST_BITS_PER_DOUBLE_INT);
>        /* Size of the inner array elements. */
>        double_int inner_size;
>        /* Maximum number of outer elements which can be allocated. */
> @@ -2356,22 +2356,21 @@ build_new_1 (VEC(tree,gc) **placement, t
>        gcc_assert (TREE_CODE (size) == INTEGER_CST);
>        cookie_size = targetm.cxx.get_cookie_size (elt_type);
>        gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST);
> -      gcc_checking_assert (double_int_ucmp
> -			   (TREE_INT_CST (cookie_size), max_size) < 0);
> +      gcc_checking_assert (TREE_INT_CST (cookie_size).ult (max_size));
>        /* Unconditionally substract the cookie size.  This decreases the
>  	 maximum object size and is safe even if we choose not to use
>  	 a cookie after all.  */
> -      max_size = double_int_sub (max_size, TREE_INT_CST (cookie_size));
> -      if (mul_double (TREE_INT_CST_LOW (size), TREE_INT_CST_HIGH (size),
> -		      inner_nelts_count.low, inner_nelts_count.high,
> -		      &inner_size.low, &inner_size.high)
> -	  || double_int_ucmp (inner_size, max_size) > 0)
> +      max_size -= TREE_INT_CST (cookie_size);
> +      bool overflow;
> +      inner_size = TREE_INT_CST (size)
> +		   .mul_with_sign (inner_nelts_count, false, &overflow);
> +      if (overflow || inner_size.ugt (max_size))
>  	{
>  	  if (complain & tf_error)
>  	    error ("size of array is too large");
>  	  return error_mark_node;
>  	}
> -      max_outer_nelts = double_int_udiv (max_size, inner_size, TRUNC_DIV_EXPR);
> +      max_outer_nelts = max_size.udiv (inner_size, TRUNC_DIV_EXPR);
>        /* Only keep the top-most seven bits, to simplify encoding the
>  	 constant in the instruction stream.  */
>        {
> @@ -2379,10 +2378,8 @@ build_new_1 (VEC(tree,gc) **placement, t
>  	  - (max_outer_nelts.high ? clz_hwi (max_outer_nelts.high)
>  	     : (HOST_BITS_PER_WIDE_INT + clz_hwi (max_outer_nelts.low)));
>  	max_outer_nelts
> -	  = double_int_lshift (double_int_rshift
> -			       (max_outer_nelts, shift,
> -				HOST_BITS_PER_DOUBLE_INT, false),
> -			       shift, HOST_BITS_PER_DOUBLE_INT, false);
> +	  = max_outer_nelts.lrshift (shift, HOST_BITS_PER_DOUBLE_INT)
> +	    .llshift (shift, HOST_BITS_PER_DOUBLE_INT);
>        }
>        max_outer_nelts_tree = double_int_to_tree (sizetype, max_outer_nelts);
> 
> Index: gcc/cp/decl.c
> ===================================================================
> --- gcc/cp/decl.c	(revision 191083)
> +++ gcc/cp/decl.c	(working copy)
> @@ -12448,8 +12448,6 @@ build_enumerator (tree name, tree value,
>  	{
>  	  if (TYPE_VALUES (enumtype))
>  	    {
> -	      HOST_WIDE_INT hi;
> -	      unsigned HOST_WIDE_INT lo;
>  	      tree prev_value;
>  	      bool overflowed;
> 
> @@ -12465,15 +12463,13 @@ build_enumerator (tree name, tree value,
>  		value = error_mark_node;
>  	      else
>  		{
> -		  overflowed = add_double (TREE_INT_CST_LOW (prev_value),
> -					   TREE_INT_CST_HIGH (prev_value),
> -					   1, 0, &lo, &hi);
> +		  double_int di = TREE_INT_CST (prev_value)
> +				  .add_with_sign (double_int_one,
> +						  false, &overflowed);
>  		  if (!overflowed)
>  		    {
> -		      double_int di;
>  		      tree type = TREE_TYPE (prev_value);
> -		      bool pos = (TYPE_UNSIGNED (type) || hi >= 0);
> -		      di.low = lo; di.high = hi;
> +		      bool pos = TYPE_UNSIGNED (type) || !di.is_negative ();
>  		      if (!double_int_fits_to_tree_p (type, di))
>  			{
>  			  unsigned int itk;
> Index: gcc/cp/typeck2.c
> ===================================================================
> --- gcc/cp/typeck2.c	(revision 191083)
> +++ gcc/cp/typeck2.c	(working copy)
> @@ -1055,14 +1055,12 @@ process_init_constructor_array (tree typ
>      {
>        tree domain = TYPE_DOMAIN (type);
>        if (domain)
> -	len = double_int_ext
> -	        (double_int_add
> -		  (double_int_sub
> -		    (tree_to_double_int (TYPE_MAX_VALUE (domain)),
> -		     tree_to_double_int (TYPE_MIN_VALUE (domain))),
> -		    double_int_one),
> -		  TYPE_PRECISION (TREE_TYPE (domain)),
> -		  TYPE_UNSIGNED (TREE_TYPE (domain))).low;
> +	len = (tree_to_double_int (TYPE_MAX_VALUE (domain))
> +	       - tree_to_double_int (TYPE_MIN_VALUE (domain))
> +	       + double_int_one)
> +	      .ext (TYPE_PRECISION (TREE_TYPE (domain)),
> +		    TYPE_UNSIGNED (TREE_TYPE (domain)))
> +	      .low;
>        else
>  	unbounded = true;  /* Take as many as there are.  */
>      }
> Index: gcc/cp/mangle.c
> ===================================================================
> --- gcc/cp/mangle.c	(revision 191083)
> +++ gcc/cp/mangle.c	(working copy)
> @@ -3119,12 +3119,11 @@ write_array_type (const tree type)
>  	{
>  	  /* The ABI specifies that we should mangle the number of
>  	     elements in the array, not the largest allowed index.  */
> -	  double_int dmax
> -	    = double_int_add (tree_to_double_int (max), double_int_one);
> +	  double_int dmax = tree_to_double_int (max) + double_int_one;
>  	  /* Truncate the result - this will mangle [0, SIZE_INT_MAX]
>  	     number of elements as zero.  */
> -	  dmax = double_int_zext (dmax, TYPE_PRECISION (TREE_TYPE (max)));
> -	  gcc_assert (double_int_fits_in_uhwi_p (dmax));
> +	  dmax = dmax.zext (TYPE_PRECISION (TREE_TYPE (max)));
> +	  gcc_assert (dmax.fits_uhwi ());
>  	  write_unsigned_number (dmax.low);
>  	}
>        else
> Index: gcc/double-int.c
> ===================================================================
> --- gcc/double-int.c	(revision 191083)
> +++ gcc/double-int.c	(working copy)
> @@ -23,6 +23,41 @@ along with GCC; see the file COPYING3.
>  #include "tm.h"			/* For SHIFT_COUNT_TRUNCATED.  */
>  #include "tree.h"
> 
> +static int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> +				 bool);
> +
> +#define add_double(l1,h1,l2,h2,lv,hv) \
> +  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
> +
> +static int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
> +
> +static int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> +				 bool);
> +
> +static int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> +				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> +				      bool);
> +
> +#define mul_double(l1,h1,l2,h2,lv,hv) \
> +  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
> +
> +static void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> +			   HOST_WIDE_INT, unsigned int,
> +			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
> +
> +static int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
> +				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
> +				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
> +				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
> +				 HOST_WIDE_INT *);
> +
>  /* We know that A1 + B1 = SUM1, using 2's complement arithmetic and ignoring
>     overflow.  Suppose A, B and SUM have the same respective signs as A1, B1,
>     and SUM1.  Then this yields nonzero if overflow occurred during the
> @@ -75,7 +110,7 @@ decode (HOST_WIDE_INT *words, unsigned H
>     One argument is L1 and H1; the other, L2 and H2.
>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
> 
> -int
> +static int
>  add_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>  		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>  		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
> @@ -105,7 +140,7 @@ add_double_with_sign (unsigned HOST_WIDE
>     The argument is given as two `HOST_WIDE_INT' pieces in L1 and H1.
>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
> 
> -int
> +static int
>  neg_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>  	    unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv)
>  {
> @@ -129,7 +164,7 @@ neg_double (unsigned HOST_WIDE_INT l1, H
>     One argument is L1 and H1; the other, L2 and H2.
>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
> 
> -int
> +static int
>  mul_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>  		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>  		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
> @@ -143,7 +178,7 @@ mul_double_with_sign (unsigned HOST_WIDE
>  				    unsigned_p);
>  }
> 
> -int
> +static int
>  mul_double_wide_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>  			   unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>  			   unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
> @@ -269,7 +304,7 @@ rshift_double (unsigned HOST_WIDE_INT l1
>     ARITH nonzero specifies arithmetic shifting; otherwise use logical shift.
>     Store the value as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
> 
> -void
> +static void
>  lshift_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>  	       HOST_WIDE_INT count, unsigned int prec,
>  	       unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv, bool arith)
> @@ -335,7 +370,7 @@ lshift_double (unsigned HOST_WIDE_INT l1
>     Return nonzero if the operation overflows.
>     UNS nonzero says do unsigned division.  */
> 
> -int
> +static int
>  div_and_round_double (unsigned code, int uns,
>  		      /* num == numerator == dividend */
>  		      unsigned HOST_WIDE_INT lnum_orig,
> @@ -762,6 +797,19 @@ double_int::mul_with_sign (double_int b,
>    return ret;
>  }
> 
> +double_int
> +double_int::wide_mul_with_sign (double_int b, bool unsigned_p,
> +				double_int *higher, bool *overflow) const
> +
> +{
> +  double_int lower;
> +  *overflow = mul_double_wide_with_sign (low, high, b.low, b.high,
> +					 &lower.low, &lower.high,
> +					 &higher->low, &higher->high,
> +					 unsigned_p);
> +  return lower;
> +}
> +
>  /* Returns A + B.  */
> 
>  double_int
> @@ -809,12 +857,33 @@ double_int::operator - () const
>    return ret;
>  }
> 
> +double_int
> +double_int::neg_with_overflow (bool *overflow) const
> +{
> +  double_int ret;
> +  *overflow = neg_double (low, high, &ret.low, &ret.high);
> +  return ret;
> +}
> +
>  /* Returns A / B (computed as unsigned depending on UNS, and rounded as
>     specified by CODE).  CODE is enum tree_code in fact, but double_int.h
>     must be included before tree.h.  The remainder after the division is
>     stored to MOD.  */
> 
>  double_int
> +double_int::divmod_with_overflow (double_int b, bool uns, unsigned code,
> +				  double_int *mod, bool *overflow) const
> +{
> +  const double_int &a = *this;
> +  double_int ret;
> +
> +  *overflow = div_and_round_double (code, uns, a.low, a.high,
> +				    b.low, b.high, &ret.low, &ret.high,
> +				    &mod->low, &mod->high);
> +  return ret;
> +}
> +
> +double_int
>  double_int::divmod (double_int b, bool uns, unsigned code,
>  		    double_int *mod) const
>  {
> Index: gcc/double-int.h
> ===================================================================
> --- gcc/double-int.h	(revision 191083)
> +++ gcc/double-int.h	(working copy)
> @@ -61,6 +61,7 @@ struct double_int
> 
>    static double_int from_uhwi (unsigned HOST_WIDE_INT cst);
>    static double_int from_shwi (HOST_WIDE_INT cst);
> +  static double_int from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low);
> 
>    /* No copy assignment operator or destructor to keep the type a POD.  */
> 
> @@ -105,9 +106,16 @@ struct double_int
> 
>    /* Arithmetic operation functions.  */
> 
> +  /* The following operations perform arithmetics modulo 2^precision, so you
> +     do not need to call .ext between them, even if you are representing
> +     numbers with precision less than HOST_BITS_PER_DOUBLE_INT bits.  */
> +
>    double_int set_bit (unsigned) const;
>    double_int mul_with_sign (double_int, bool unsigned_p, bool *overflow) const;
> +  double_int wide_mul_with_sign (double_int, bool unsigned_p,
> +				 double_int *higher, bool *overflow) const;
>    double_int add_with_sign (double_int, bool unsigned_p, bool *overflow) const;
> +  double_int neg_with_overflow (bool *overflow) const;
> 
>    double_int operator * (double_int) const;
>    double_int operator + (double_int) const;
> @@ -131,12 +139,15 @@ struct double_int
>    /* You must ensure that double_int::ext is called on the operands
>       of the following operations, if the precision of the numbers
>       is less than HOST_BITS_PER_DOUBLE_INT bits.  */
> +
>    double_int div (double_int, bool, unsigned) const;
>    double_int sdiv (double_int, unsigned) const;
>    double_int udiv (double_int, unsigned) const;
>    double_int mod (double_int, bool, unsigned) const;
>    double_int smod (double_int, unsigned) const;
>    double_int umod (double_int, unsigned) const;
> +  double_int divmod_with_overflow (double_int, bool, unsigned,
> +				   double_int *, bool *) const;
>    double_int divmod (double_int, bool, unsigned, double_int *) const;
>    double_int sdivmod (double_int, unsigned, double_int *) const;
>    double_int udivmod (double_int, unsigned, double_int *) const;
> @@ -199,13 +210,6 @@ double_int::from_shwi (HOST_WIDE_INT cst
>    return r;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -shwi_to_double_int (HOST_WIDE_INT cst)
> -{
> -  return double_int::from_shwi (cst);
> -}
> -
>  /* Some useful constants.  */
>  /* FIXME(crowl): Maybe remove after converting callers?
>     The problem is that a named constant would not be as optimizable,
> @@ -229,11 +233,13 @@ double_int::from_uhwi (unsigned HOST_WID
>    return r;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -uhwi_to_double_int (unsigned HOST_WIDE_INT cst)
> +inline double_int
> +double_int::from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low)
>  {
> -  return double_int::from_uhwi (cst);
> +  double_int r;
> +  r.low = low;
> +  r.high = high;
> +  return r;
>  }
> 
>  inline double_int &
> @@ -301,13 +307,6 @@ double_int::to_shwi () const
>    return (HOST_WIDE_INT) low;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline HOST_WIDE_INT
> -double_int_to_shwi (double_int cst)
> -{
> -  return cst.to_shwi ();
> -}
> -
>  /* Returns value of CST as an unsigned number.  CST must satisfy
>     double_int::fits_unsigned.  */
> 
> @@ -317,13 +316,6 @@ double_int::to_uhwi () const
>    return low;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline unsigned HOST_WIDE_INT
> -double_int_to_uhwi (double_int cst)
> -{
> -  return cst.to_uhwi ();
> -}
> -
>  /* Returns true if CST fits in unsigned HOST_WIDE_INT.  */
> 
>  inline bool
> @@ -332,164 +324,6 @@ double_int::fits_uhwi () const
>    return high == 0;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline bool
> -double_int_fits_in_uhwi_p (double_int cst)
> -{
> -  return cst.fits_uhwi ();
> -}
> -
> -/* Returns true if CST fits in signed HOST_WIDE_INT.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline bool
> -double_int_fits_in_shwi_p (double_int cst)
> -{
> -  return cst.fits_shwi ();
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline bool
> -double_int_fits_in_hwi_p (double_int cst, bool uns)
> -{
> -  return cst.fits_hwi (uns);
> -}
> -
> -/* The following operations perform arithmetics modulo 2^precision,
> -   so you do not need to call double_int_ext between them, even if
> -   you are representing numbers with precision less than
> -   HOST_BITS_PER_DOUBLE_INT bits.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_mul (double_int a, double_int b)
> -{
> -  return a * b;
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_mul_with_sign (double_int a, double_int b,
> -			  bool unsigned_p, int *overflow)
> -{
> -  bool ovf;
> -  return a.mul_with_sign (b, unsigned_p, &ovf);
> -  *overflow = ovf;
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_add (double_int a, double_int b)
> -{
> -  return a + b;
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_sub (double_int a, double_int b)
> -{
> -  return a - b;
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_neg (double_int a)
> -{
> -  return -a;
> -}
> -
> -/* You must ensure that double_int_ext is called on the operands
> -   of the following operations, if the precision of the numbers
> -   is less than HOST_BITS_PER_DOUBLE_INT bits.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_div (double_int a, double_int b, bool uns, unsigned code)
> -{
> -  return a.div (b, uns, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_sdiv (double_int a, double_int b, unsigned code)
> -{
> -  return a.sdiv (b, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_udiv (double_int a, double_int b, unsigned code)
> -{
> -  return a.udiv (b, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_mod (double_int a, double_int b, bool uns, unsigned code)
> -{
> -  return a.mod (b, uns, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_smod (double_int a, double_int b, unsigned code)
> -{
> -  return a.smod (b, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_umod (double_int a, double_int b, unsigned code)
> -{
> -  return a.umod (b, code);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_divmod (double_int a, double_int b, bool uns,
> -		   unsigned code, double_int *mod)
> -{
> -  return a.divmod (b, uns, code, mod);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_sdivmod (double_int a, double_int b, unsigned code, double_int *mod)
> -{
> -  return a.sdivmod (b, code, mod);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_udivmod (double_int a, double_int b, unsigned code, double_int *mod)
> -{
> -  return a.udivmod (b, code, mod);
> -}
> -
> -/***/
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline bool
> -double_int_multiple_of (double_int product, double_int factor,
> -                        bool unsigned_p, double_int *multiple)
> -{
> -  return product.multiple_of (factor, unsigned_p, multiple);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_setbit (double_int a, unsigned bitpos)
> -{
> -  return a.set_bit (bitpos);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline int
> -double_int_ctz (double_int a)
> -{
> -  return a.trailing_zeros ();
> -}
> -
>  /* Logical operations.  */
> 
>  /* Returns ~A.  */
> @@ -503,13 +337,6 @@ double_int::operator ~ () const
>    return result;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -double_int_not (double_int a)
> -{
> -  return ~a;
> -}
> -
>  /* Returns A | B.  */
> 
>  inline double_int
> @@ -521,13 +348,6 @@ double_int::operator | (double_int b) co
>    return result;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -double_int_ior (double_int a, double_int b)
> -{
> -  return a | b;
> -}
> -
>  /* Returns A & B.  */
> 
>  inline double_int
> @@ -539,13 +359,6 @@ double_int::operator & (double_int b) co
>    return result;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -double_int_and (double_int a, double_int b)
> -{
> -  return a & b;
> -}
> -
>  /* Returns A & ~B.  */
> 
>  inline double_int
> @@ -557,13 +370,6 @@ double_int::and_not (double_int b) const
>    return result;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -double_int_and_not (double_int a, double_int b)
> -{
> -  return a.and_not (b);
> -}
> -
>  /* Returns A ^ B.  */
> 
>  inline double_int
> @@ -575,165 +381,8 @@ double_int::operator ^ (double_int b) co
>    return result;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline double_int
> -double_int_xor (double_int a, double_int b)
> -{
> -  return a ^ b;
> -}
> -
> -
> -/* Shift operations.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_lshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
> -		   bool arith)
> -{
> -  return a.lshift (count, prec, arith);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_rshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
> -		   bool arith)
> -{
> -  return a.rshift (count, prec, arith);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_lrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
> -{
> -  return a.lrotate (count, prec);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_rrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
> -{
> -  return a.rrotate (count, prec);
> -}
> -
> -/* Returns true if CST is negative.  Of course, CST is considered to
> -   be signed.  */
> -
> -static inline bool
> -double_int_negative_p (double_int cst)
> -{
> -  return cst.high < 0;
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline int
> -double_int_cmp (double_int a, double_int b, bool uns)
> -{
> -  return a.cmp (b, uns);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline int
> -double_int_scmp (double_int a, double_int b)
> -{
> -  return a.scmp (b);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline int
> -double_int_ucmp (double_int a, double_int b)
> -{
> -  return a.ucmp (b);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_max (double_int a, double_int b, bool uns)
> -{
> -  return a.max (b, uns);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_smax (double_int a, double_int b)
> -{
> -  return a.smax (b);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_umax (double_int a, double_int b)
> -{
> -  return a.umax (b);
> -}
> -
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_min (double_int a, double_int b, bool uns)
> -{
> -  return a.min (b, uns);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_smin (double_int a, double_int b)
> -{
> -  return a.smin (b);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_umin (double_int a, double_int b)
> -{
> -  return a.umin (b);
> -}
> -
>  void dump_double_int (FILE *, double_int, bool);
> 
> -/* Zero and sign extension of numbers in smaller precisions.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_ext (double_int a, unsigned prec, bool uns)
> -{
> -  return a.ext (prec, uns);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_sext (double_int a, unsigned prec)
> -{
> -  return a.sext (prec);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_zext (double_int a, unsigned prec)
> -{
> -  return a.zext (prec);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_mask (unsigned prec)
> -{
> -  return double_int::mask (prec);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_max_value (unsigned int prec, bool uns)
> -{
> -  return double_int::max_value (prec, uns);
> -}
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -inline double_int
> -double_int_min_value (unsigned int prec, bool uns)
> -{
> -  return double_int::min_value (prec, uns);
> -}
> -
>  #define ALL_ONES (~((unsigned HOST_WIDE_INT) 0))
> 
>  /* The operands of the following comparison functions must be processed
> @@ -748,13 +397,6 @@ double_int::is_zero () const
>    return low == 0 && high == 0;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline bool
> -double_int_zero_p (double_int cst)
> -{
> -  return cst.is_zero ();
> -}
> -
>  /* Returns true if CST is one.  */
> 
>  inline bool
> @@ -763,13 +405,6 @@ double_int::is_one () const
>    return low == 1 && high == 0;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline bool
> -double_int_one_p (double_int cst)
> -{
> -  return cst.is_one ();
> -}
> -
>  /* Returns true if CST is minus one.  */
> 
>  inline bool
> @@ -778,13 +413,6 @@ double_int::is_minus_one () const
>    return low == ALL_ONES && high == -1;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline bool
> -double_int_minus_one_p (double_int cst)
> -{
> -  return cst.is_minus_one ();
> -}
> -
>  /* Returns true if CST is negative.  */
> 
>  inline bool
> @@ -801,13 +429,6 @@ double_int::operator == (double_int cst2
>    return low == cst2.low && high == cst2.high;
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline bool
> -double_int_equal_p (double_int cst1, double_int cst2)
> -{
> -  return cst1 == cst2;
> -}
> -
>  /* Returns true if CST1 != CST2.  */
> 
>  inline bool
> @@ -824,52 +445,6 @@ double_int::popcount () const
>    return popcount_hwi (high) + popcount_hwi (low);
>  }
> 
> -/* FIXME(crowl): Remove after converting callers.  */
> -static inline int
> -double_int_popcount (double_int cst)
> -{
> -  return cst.popcount ();
> -}
> -
> -
> -/* Legacy interface with decomposed high/low parts.  */
> -
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> -				 bool);
> -/* FIXME(crowl): Remove after converting callers.  */
> -#define add_double(l1,h1,l2,h2,lv,hv) \
> -  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> -				 bool);
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> -				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
> -				      bool);
> -/* FIXME(crowl): Remove after converting callers.  */
> -#define mul_double(l1,h1,l2,h2,lv,hv) \
> -  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
> -			   HOST_WIDE_INT, unsigned int,
> -			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
> -/* FIXME(crowl): Remove after converting callers.  */
> -extern int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
> -				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
> -				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
> -				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
> -				 HOST_WIDE_INT *);
> -
> 
>  #ifndef GENERATOR_FILE
>  /* Conversion to and from GMP integer representations.  */
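For reference, from_pair simply pastes the two halves together; in a standalone
sketch (hypothetical pair128 type, 64-bit halves assumed), the value a pair denotes
is high * 2^64 + low, interpreted signed:

```cpp
#include <cstdint>

// Hypothetical stand-in for double_int with 64-bit halves.
struct pair128 { uint64_t low; int64_t high; };

// Mirror of double_int::from_pair: note the (high, low) argument order.
inline pair128 from_pair (int64_t high, uint64_t low)
{
  pair128 r;
  r.low = low;
  r.high = high;
  return r;
}

// The signed value a pair denotes: high * 2^64 + low.
inline __int128 pair_value (pair128 a)
{
  return (__int128) a.high * ((__int128) 1 << 64) + (__int128) a.low;
}
```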
> Index: gcc/fortran/trans-expr.c
> ===================================================================
> --- gcc/fortran/trans-expr.c	(revision 191083)
> +++ gcc/fortran/trans-expr.c	(working copy)
> @@ -1657,10 +1657,10 @@ gfc_conv_cst_int_power (gfc_se * se, tre
> 
>    /* If exponent is too large, we won't expand it anyway, so don't bother
>       with large integer values.  */
> -  if (!double_int_fits_in_shwi_p (TREE_INT_CST (rhs)))
> +  if (!TREE_INT_CST (rhs).fits_shwi ())
>      return 0;
> 
> -  m = double_int_to_shwi (TREE_INT_CST (rhs));
> +  m = TREE_INT_CST (rhs).to_shwi ();
>    /* There's no ABS for HOST_WIDE_INT, so here we go. It also takes care
>       of the asymmetric range of the integer type.  */
>    n = (unsigned HOST_WIDE_INT) (m < 0 ? -m : m);
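The fits_shwi test used above has a simple characterization: a (high, low) pair
fits in a signed HOST_WIDE_INT iff the high half is exactly the sign extension of
the low half. A standalone sketch (hypothetical names, 64-bit halves assumed):

```cpp
#include <cstdint>

// Hypothetical stand-in for double_int with 64-bit halves.
struct pair128 { uint64_t low; int64_t high; };

// A 128-bit pair fits in a signed 64-bit integer exactly when the high
// half carries no information beyond the low half's sign bit.
inline bool fits_shwi (pair128 a)
{
  return a.high == ((int64_t) a.low < 0 ? -1 : 0);
}

// When it fits, to_shwi is just a reinterpretation of the low half.
inline int64_t to_shwi (pair128 a)
{
  return (int64_t) a.low;
}
```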
> Index: gcc/fortran/target-memory.c
> ===================================================================
> --- gcc/fortran/target-memory.c	(revision 191083)
> +++ gcc/fortran/target-memory.c	(working copy)
> @@ -395,8 +395,7 @@ gfc_interpret_logical (int kind, unsigne
>  {
>    tree t = native_interpret_expr (gfc_get_logical_type (kind), buffer,
>  				  buffer_size);
> -  *logical = double_int_zero_p (tree_to_double_int (t))
> -	     ? 0 : 1;
> +  *logical = tree_to_double_int (t).is_zero () ? 0 : 1;
>    return size_logical (kind);
>  }
> 
> Index: gcc/expmed.c
> ===================================================================
> --- gcc/expmed.c	(revision 191083)
> +++ gcc/expmed.c	(working copy)
> @@ -3404,12 +3404,9 @@ choose_multiplier (unsigned HOST_WIDE_IN
>  		   unsigned HOST_WIDE_INT *multiplier_ptr,
>  		   int *post_shift_ptr, int *lgup_ptr)
>  {
> -  HOST_WIDE_INT mhigh_hi, mlow_hi;
> -  unsigned HOST_WIDE_INT mhigh_lo, mlow_lo;
> +  double_int mhigh, mlow;
>    int lgup, post_shift;
>    int pow, pow2;
> -  unsigned HOST_WIDE_INT nl, dummy1;
> -  HOST_WIDE_INT nh, dummy2;
> 
>    /* lgup = ceil(log2(divisor)); */
>    lgup = ceil_log2 (d);
> @@ -3425,32 +3422,17 @@ choose_multiplier (unsigned HOST_WIDE_IN
>    gcc_assert (pow != HOST_BITS_PER_DOUBLE_INT);
> 
>    /* mlow = 2^(N + lgup)/d */
> -  if (pow >= HOST_BITS_PER_WIDE_INT)
> -    {
> -      nh = (HOST_WIDE_INT) 1 << (pow - HOST_BITS_PER_WIDE_INT);
> -      nl = 0;
> -    }
> -  else
> -    {
> -      nh = 0;
> -      nl = (unsigned HOST_WIDE_INT) 1 << pow;
> -    }
> -  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
> -			&mlow_lo, &mlow_hi, &dummy1, &dummy2);
> +  double_int val = double_int_zero.set_bit (pow);
> +  mlow = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);
> 
> -  /* mhigh = (2^(N + lgup) + 2^N + lgup - precision)/d */
> -  if (pow2 >= HOST_BITS_PER_WIDE_INT)
> -    nh |= (HOST_WIDE_INT) 1 << (pow2 - HOST_BITS_PER_WIDE_INT);
> -  else
> -    nl |= (unsigned HOST_WIDE_INT) 1 << pow2;
> -  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
> -			&mhigh_lo, &mhigh_hi, &dummy1, &dummy2);
> +  /* mhigh = (2^(N + lgup) + 2^(N + lgup - precision))/d */
> +  val |= double_int_zero.set_bit (pow2);
> +  mhigh = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);
> 
> -  gcc_assert (!mhigh_hi || nh - d < d);
> -  gcc_assert (mhigh_hi <= 1 && mlow_hi <= 1);
> +  gcc_assert (!mhigh.high || val.high - d < d);
> +  gcc_assert (mhigh.high <= 1 && mlow.high <= 1);
>    /* Assert that mlow < mhigh.  */
> -  gcc_assert (mlow_hi < mhigh_hi
> -	      || (mlow_hi == mhigh_hi && mlow_lo < mhigh_lo));
> +  gcc_assert (mlow.ult (mhigh));
> 
>    /* If precision == N, then mlow, mhigh exceed 2^N
>       (but they do not exceed 2^(N+1)).  */
> @@ -3458,15 +3440,14 @@ choose_multiplier (unsigned HOST_WIDE_IN
>    /* Reduce to lowest terms.  */
>    for (post_shift = lgup; post_shift > 0; post_shift--)
>      {
> -      unsigned HOST_WIDE_INT ml_lo = (mlow_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mlow_lo >> 1);
> -      unsigned HOST_WIDE_INT mh_lo = (mhigh_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mhigh_lo >> 1);
> +      int shft = HOST_BITS_PER_WIDE_INT - 1;
> +      unsigned HOST_WIDE_INT ml_lo = (mlow.high << shft) | (mlow.low >> 1);
> +      unsigned HOST_WIDE_INT mh_lo = (mhigh.high << shft) | (mhigh.low >> 1);
>        if (ml_lo >= mh_lo)
>  	break;
> 
> -      mlow_hi = 0;
> -      mlow_lo = ml_lo;
> -      mhigh_hi = 0;
> -      mhigh_lo = mh_lo;
> +      mlow = double_int::from_uhwi (ml_lo);
> +      mhigh = double_int::from_uhwi (mh_lo);
>      }
> 
>    *post_shift_ptr = post_shift;
> @@ -3474,13 +3455,13 @@ choose_multiplier (unsigned HOST_WIDE_IN
>    if (n < HOST_BITS_PER_WIDE_INT)
>      {
>        unsigned HOST_WIDE_INT mask = ((unsigned HOST_WIDE_INT) 1 << n) - 1;
> -      *multiplier_ptr = mhigh_lo & mask;
> -      return mhigh_lo >= mask;
> +      *multiplier_ptr = mhigh.low & mask;
> +      return mhigh.low >= mask;
>      }
>    else
>      {
> -      *multiplier_ptr = mhigh_lo;
> -      return mhigh_hi;
> +      *multiplier_ptr = mhigh.low;
> +      return mhigh.high;
>      }
>  }
> 
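The cleaned-up choose_multiplier still implements the classic Granlund-Montgomery
construction: mlow = 2^(N+lgup)/d and mhigh = (2^(N+lgup) + 2^(N+lgup-precision))/d.
A standalone sketch for a 32-bit word with N == precision == 32 (hypothetical helper
names, not the GCC code itself):

```cpp
#include <cstdint>

// ceil(log2(d)) for a nonzero 32-bit divisor.
static int ceil_log2_u32 (uint32_t d)
{
  int l = 0;
  while (((uint64_t) 1 << l) < d)
    l++;
  return l;
}

// Sketch of choose_multiplier for N = precision = 32: returns mhigh,
// a multiplier such that (x * m) >> (32 + lgup) == x / d for all
// 32-bit unsigned x (before the "reduce to lowest terms" loop).
static uint64_t magic_multiplier (uint32_t d, int *lgup)
{
  *lgup = ceil_log2_u32 (d);
  int pow = 32 + *lgup;                   /* N + lgup */
  int pow2 = *lgup;                       /* N + lgup - precision */
  unsigned __int128 val = (unsigned __int128) 1 << pow;
  val |= (unsigned __int128) 1 << pow2;   /* 2^pow + 2^pow2 */
  return (uint64_t) (val / d);            /* mhigh */
}
```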
> Index: gcc/emit-rtl.c
> ===================================================================
> --- gcc/emit-rtl.c	(revision 191083)
> +++ gcc/emit-rtl.c	(working copy)
> @@ -5736,11 +5736,10 @@ init_emit_once (void)
>        FCONST1(mode).data.high = 0;
>        FCONST1(mode).data.low = 0;
>        FCONST1(mode).mode = mode;
> -      lshift_double (1, 0, GET_MODE_FBIT (mode),
> -                     HOST_BITS_PER_DOUBLE_INT,
> -                     &FCONST1(mode).data.low,
> -		     &FCONST1(mode).data.high,
> -                     SIGNED_FIXED_POINT_MODE_P (mode));
> +      FCONST1(mode).data
> +	= double_int_one.lshift (GET_MODE_FBIT (mode),
> +				 HOST_BITS_PER_DOUBLE_INT,
> +				 SIGNED_FIXED_POINT_MODE_P (mode));
>        const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
>  				      FCONST1 (mode), mode);
>      }
> @@ -5759,11 +5758,10 @@ init_emit_once (void)
>        FCONST1(mode).data.high = 0;
>        FCONST1(mode).data.low = 0;
>        FCONST1(mode).mode = mode;
> -      lshift_double (1, 0, GET_MODE_FBIT (mode),
> -                     HOST_BITS_PER_DOUBLE_INT,
> -                     &FCONST1(mode).data.low,
> -		     &FCONST1(mode).data.high,
> -                     SIGNED_FIXED_POINT_MODE_P (mode));
> +      FCONST1(mode).data
> +	= double_int_one.lshift (GET_MODE_FBIT (mode),
> +				 HOST_BITS_PER_DOUBLE_INT,
> +				 SIGNED_FIXED_POINT_MODE_P (mode));
>        const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
>  				      FCONST1 (mode), mode);
>      }
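FCONST1 is the fixed-point representation of 1.0, i.e. the integer 1 shifted left
by the mode's number of fractional bits; the lshift call above can be sketched
standalone (hypothetical names, 64-bit halves assumed, 0 <= fbit < 127):

```cpp
#include <cstdint>

// Hypothetical stand-in for double_int with 64-bit halves.
struct pair128 { uint64_t low; int64_t high; };

// 1 << fbit within a 128-bit pair: the bit lands in the low half or
// the high half depending on the shift count.
inline pair128 one_lshift (int fbit)
{
  pair128 r = {0, 0};
  if (fbit < 64)
    r.low = (uint64_t) 1 << fbit;
  else
    r.high = (int64_t) ((uint64_t) 1 << (fbit - 64));
  return r;
}
```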
> Index: gcc/simplify-rtx.c
> ===================================================================
> --- gcc/simplify-rtx.c	(revision 191083)
> +++ gcc/simplify-rtx.c	(working copy)
> @@ -1525,109 +1525,117 @@ simplify_const_unary_operation (enum rtx
>    else if (width <= HOST_BITS_PER_DOUBLE_INT
>  	   && (CONST_DOUBLE_AS_INT_P (op) || CONST_INT_P (op)))
>      {
> -      unsigned HOST_WIDE_INT l1, lv;
> -      HOST_WIDE_INT h1, hv;
> +      double_int first, value;
> 
>        if (CONST_DOUBLE_AS_INT_P (op))
> -	l1 = CONST_DOUBLE_LOW (op), h1 = CONST_DOUBLE_HIGH (op);
> +	first = double_int::from_pair (CONST_DOUBLE_HIGH (op),
> +				       CONST_DOUBLE_LOW (op));
>        else
> -	l1 = INTVAL (op), h1 = HWI_SIGN_EXTEND (l1);
> +	first = double_int::from_shwi (INTVAL (op));
> 
>        switch (code)
>  	{
>  	case NOT:
> -	  lv = ~ l1;
> -	  hv = ~ h1;
> +	  value = ~first;
>  	  break;
> 
>  	case NEG:
> -	  neg_double (l1, h1, &lv, &hv);
> +	  value = -first;
>  	  break;
> 
>  	case ABS:
> -	  if (h1 < 0)
> -	    neg_double (l1, h1, &lv, &hv);
> +	  if (first.is_negative ())
> +	    value = -first;
>  	  else
> -	    lv = l1, hv = h1;
> +	    value = first;
>  	  break;
> 
>  	case FFS:
> -	  hv = 0;
> -	  if (l1 != 0)
> -	    lv = ffs_hwi (l1);
> -	  else if (h1 != 0)
> -	    lv = HOST_BITS_PER_WIDE_INT + ffs_hwi (h1);
> +	  value.high = 0;
> +	  if (first.low != 0)
> +	    value.low = ffs_hwi (first.low);
> +	  else if (first.high != 0)
> +	    value.low = HOST_BITS_PER_WIDE_INT + ffs_hwi (first.high);
>  	  else
> -	    lv = 0;
> +	    value.low = 0;
>  	  break;
> 
>  	case CLZ:
> -	  hv = 0;
> -	  if (h1 != 0)
> -	    lv = GET_MODE_PRECISION (mode) - floor_log2 (h1) - 1
> -	      - HOST_BITS_PER_WIDE_INT;
> -	  else if (l1 != 0)
> -	    lv = GET_MODE_PRECISION (mode) - floor_log2 (l1) - 1;
> -	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, lv))
> -	    lv = GET_MODE_PRECISION (mode);
> +	  value.high = 0;
> +	  if (first.high != 0)
> +	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.high) - 1
> +	              - HOST_BITS_PER_WIDE_INT;
> +	  else if (first.low != 0)
> +	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.low) - 1;
> +	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
> +	    value.low = GET_MODE_PRECISION (mode);
>  	  break;
> 
>  	case CTZ:
> -	  hv = 0;
> -	  if (l1 != 0)
> -	    lv = ctz_hwi (l1);
> -	  else if (h1 != 0)
> -	    lv = HOST_BITS_PER_WIDE_INT + ctz_hwi (h1);
> -	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, lv))
> -	    lv = GET_MODE_PRECISION (mode);
> +	  value.high = 0;
> +	  if (first.low != 0)
> +	    value.low = ctz_hwi (first.low);
> +	  else if (first.high != 0)
> +	    value.low = HOST_BITS_PER_WIDE_INT + ctz_hwi (first.high);
> +	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
> +	    value.low = GET_MODE_PRECISION (mode);
>  	  break;
> 
>  	case POPCOUNT:
> -	  hv = 0;
> -	  lv = 0;
> -	  while (l1)
> -	    lv++, l1 &= l1 - 1;
> -	  while (h1)
> -	    lv++, h1 &= h1 - 1;
> +	  value = double_int_zero;
> +	  while (first.low)
> +	    {
> +	      value.low++;
> +	      first.low &= first.low - 1;
> +	    }
> +	  while (first.high)
> +	    {
> +	      value.low++;
> +	      first.high &= first.high - 1;
> +	    }
>  	  break;
> 
>  	case PARITY:
> -	  hv = 0;
> -	  lv = 0;
> -	  while (l1)
> -	    lv++, l1 &= l1 - 1;
> -	  while (h1)
> -	    lv++, h1 &= h1 - 1;
> -	  lv &= 1;
> +	  value = double_int_zero;
> +	  while (first.low)
> +	    {
> +	      value.low++;
> +	      first.low &= first.low - 1;
> +	    }
> +	  while (first.high)
> +	    {
> +	      value.low++;
> +	      first.high &= first.high - 1;
> +	    }
> +	  value.low &= 1;
>  	  break;
> 
>  	case BSWAP:
>  	  {
>  	    unsigned int s;
> 
> -	    hv = 0;
> -	    lv = 0;
> +	    value = double_int_zero;
>  	    for (s = 0; s < width; s += 8)
>  	      {
>  		unsigned int d = width - s - 8;
>  		unsigned HOST_WIDE_INT byte;
> 
>  		if (s < HOST_BITS_PER_WIDE_INT)
> -		  byte = (l1 >> s) & 0xff;
> +		  byte = (first.low >> s) & 0xff;
>  		else
> -		  byte = (h1 >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
> +		  byte = (first.high >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
> 
>  		if (d < HOST_BITS_PER_WIDE_INT)
> -		  lv |= byte << d;
> +		  value.low |= byte << d;
>  		else
> -		  hv |= byte << (d - HOST_BITS_PER_WIDE_INT);
> +		  value.high |= byte << (d - HOST_BITS_PER_WIDE_INT);
>  	      }
>  	  }
>  	  break;
> 
>  	case TRUNCATE:
>  	  /* This is just a change-of-mode, so do nothing.  */
> -	  lv = l1, hv = h1;
> +	  value = first;
>  	  break;
> 
>  	case ZERO_EXTEND:
> @@ -1636,8 +1644,7 @@ simplify_const_unary_operation (enum rtx
>  	  if (op_width > HOST_BITS_PER_WIDE_INT)
>  	    return 0;
> 
> -	  hv = 0;
> -	  lv = l1 & GET_MODE_MASK (op_mode);
> +	  value = double_int::from_uhwi (first.low & GET_MODE_MASK (op_mode));
>  	  break;
> 
>  	case SIGN_EXTEND:
> @@ -1646,11 +1653,11 @@ simplify_const_unary_operation (enum rtx
>  	    return 0;
>  	  else
>  	    {
> -	      lv = l1 & GET_MODE_MASK (op_mode);
> -	      if (val_signbit_known_set_p (op_mode, lv))
> -		lv |= ~GET_MODE_MASK (op_mode);
> +	      value.low = first.low & GET_MODE_MASK (op_mode);
> +	      if (val_signbit_known_set_p (op_mode, value.low))
> +		value.low |= ~GET_MODE_MASK (op_mode);
> 
> -	      hv = HWI_SIGN_EXTEND (lv);
> +	      value.high = HWI_SIGN_EXTEND (value.low);
>  	    }
>  	  break;
> 
> @@ -1661,7 +1668,7 @@ simplify_const_unary_operation (enum rtx
>  	  return 0;
>  	}
> 
> -      return immed_double_const (lv, hv, mode);
> +      return immed_double_int_const (value, mode);
>      }
> 
>    else if (CONST_DOUBLE_AS_FLOAT_P (op)
> @@ -3578,6 +3585,7 @@ simplify_const_binary_operation (enum rt
>        && (CONST_DOUBLE_AS_INT_P (op1) || CONST_INT_P (op1)))
>      {
>        double_int o0, o1, res, tmp;
> +      bool overflow;
> 
>        o0 = rtx_to_double_int (op0);
>        o1 = rtx_to_double_int (op1);
> @@ -3599,34 +3607,30 @@ simplify_const_binary_operation (enum rt
>  	  break;
> 
>  	case DIV:
> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
> -				    o0.low, o0.high, o1.low, o1.high,
> -				    &res.low, &res.high,
> -				    &tmp.low, &tmp.high))
> +          res = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
> +					 &tmp, &overflow);
> +	  if (overflow)
>  	    return 0;
>  	  break;
> 
>  	case MOD:
> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
> -				    o0.low, o0.high, o1.low, o1.high,
> -				    &tmp.low, &tmp.high,
> -				    &res.low, &res.high))
> +          tmp = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
> +					 &res, &overflow);
> +	  if (overflow)
>  	    return 0;
>  	  break;
> 
>  	case UDIV:
> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
> -				    o0.low, o0.high, o1.low, o1.high,
> -				    &res.low, &res.high,
> -				    &tmp.low, &tmp.high))
> +          res = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
> +					 &tmp, &overflow);
> +	  if (overflow)
>  	    return 0;
>  	  break;
> 
>  	case UMOD:
> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
> -				    o0.low, o0.high, o1.low, o1.high,
> -				    &tmp.low, &tmp.high,
> -				    &res.low, &res.high))
> +          tmp = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
> +					 &res, &overflow);
> +	  if (overflow)
>  	    return 0;
>  	  break;
> 
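The DIV/MOD cases now bail out when divmod_with_overflow reports overflow. For
truncating signed division the condition is narrow: besides a zero divisor, the
only overflowing case is "most negative value / -1". A standalone 64-bit sketch
(hypothetical names, not the GCC routine itself):

```cpp
#include <cstdint>

// Sketch of the overflow condition reported for TRUNC_DIV_EXPR on
// signed 64-bit operands.
inline bool sdiv_overflows (int64_t a, int64_t b)
{
  return b == 0 || (a == INT64_MIN && b == -1);
}

// Truncating division and remainder; C++11 '/' and '%' already truncate
// toward zero, so a == q * b + r holds with |r| < |b|.
inline void sdivmod (int64_t a, int64_t b, int64_t *q, int64_t *r)
{
  *q = a / b;
  *r = a % b;
}
```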
> Index: gcc/explow.c
> ===================================================================
> --- gcc/explow.c	(revision 191083)
> +++ gcc/explow.c	(working copy)
> @@ -100,36 +100,33 @@ plus_constant (enum machine_mode mode, r
>      case CONST_INT:
>        if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
>  	{
> -	  unsigned HOST_WIDE_INT l1 = INTVAL (x);
> -	  HOST_WIDE_INT h1 = (l1 >> (HOST_BITS_PER_WIDE_INT - 1)) ? -1 : 0;
> -	  unsigned HOST_WIDE_INT l2 = c;
> -	  HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
> -	  unsigned HOST_WIDE_INT lv;
> -	  HOST_WIDE_INT hv;
> +	  double_int di_x = double_int::from_shwi (INTVAL (x));
> +	  double_int di_c = double_int::from_shwi (c);
> 
> -	  if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
> +	  bool overflow;
> +	  double_int v = di_x.add_with_sign (di_c, false, &overflow);
> +	  if (overflow)
>  	    gcc_unreachable ();
> 
> -	  return immed_double_const (lv, hv, VOIDmode);
> +	  return immed_double_int_const (v, VOIDmode);
>  	}
> 
>        return GEN_INT (INTVAL (x) + c);
> 
>      case CONST_DOUBLE:
>        {
> -	unsigned HOST_WIDE_INT l1 = CONST_DOUBLE_LOW (x);
> -	HOST_WIDE_INT h1 = CONST_DOUBLE_HIGH (x);
> -	unsigned HOST_WIDE_INT l2 = c;
> -	HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
> -	unsigned HOST_WIDE_INT lv;
> -	HOST_WIDE_INT hv;
> +	double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
> +						 CONST_DOUBLE_LOW (x));
> +	double_int di_c = double_int::from_shwi (c);
> 
> -	if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
> +	bool overflow;
> +	double_int v = di_x.add_with_sign (di_c, false, &overflow);
> +	if (overflow)
>  	  /* Sorry, we have no way to represent overflows this wide.
>  	     To fix, add constant support wider than CONST_DOUBLE.  */
>  	  gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
> 
> -	return immed_double_const (lv, hv, VOIDmode);
> +	return immed_double_int_const (v, VOIDmode);
>        }
> 
>      case MEM:
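[Editorial note] The plus_constant changes above build a double_int from the two CONST_DOUBLE halves with the new double_int::from_pair (high, low) constructor and add with overflow checking. A standalone model of that pair representation and the signed-overflow test, using a 128-bit integer in place of double_int (names are illustrative, not GCC's):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative 128-bit value built from a signed high word and an
// unsigned low word, mirroring double_int::from_pair (high, low).
static unsigned __int128 from_pair (int64_t high, uint64_t low)
{
  return ((unsigned __int128) (uint64_t) high << 64) | low;
}

// Signed add with overflow detection: overflow occurs exactly when both
// operands have the same sign and the result's sign differs.
static unsigned __int128 add_with_overflow (unsigned __int128 a,
                                            unsigned __int128 b,
                                            bool *overflow)
{
  unsigned __int128 r = a + b;
  bool sa = (__int128) a < 0, sb = (__int128) b < 0, sr = (__int128) r < 0;
  *overflow = (sa == sb) && (sr != sa);
  return r;
}
```

The CONST_INT case in the hunk sign-extends a single word first (from_shwi), then performs the same checked add.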
> Index: gcc/config/sparc/sparc.c
> ===================================================================
> --- gcc/config/sparc/sparc.c	(revision 191083)
> +++ gcc/config/sparc/sparc.c	(working copy)
> @@ -10113,33 +10113,27 @@ sparc_fold_builtin (tree fndecl, int n_a
>  	  && TREE_CODE (arg1) == VECTOR_CST
>  	  && TREE_CODE (arg2) == INTEGER_CST)
>  	{
> -	  int overflow = 0;
> -	  unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (arg2);
> -	  HOST_WIDE_INT high = TREE_INT_CST_HIGH (arg2);
> +	  bool overflow = false;
> +	  double_int di_arg2 = TREE_INT_CST (arg2);
>  	  unsigned i;
> 
>  	  for (i = 0; i < VECTOR_CST_NELTS (arg0); ++i)
>  	    {
> -	      unsigned HOST_WIDE_INT
> -		low0 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg0, i)),
> -		low1 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg1, i));
> -	      HOST_WIDE_INT
> -		high0 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg0, i));
> -	      HOST_WIDE_INT
> -		high1 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg1, i));
> +	      double_int e0 = TREE_INT_CST (VECTOR_CST_ELT (arg0, i));
> +	      double_int e1 = TREE_INT_CST (VECTOR_CST_ELT (arg1, i));
> 
> -	      unsigned HOST_WIDE_INT l;
> -	      HOST_WIDE_INT h;
> +	      bool neg1_ovf, neg2_ovf, add1_ovf, add2_ovf;
> 
> -	      overflow |= neg_double (low1, high1, &l, &h);
> -	      overflow |= add_double (low0, high0, l, h, &l, &h);
> -	      if (h < 0)
> -		overflow |= neg_double (l, h, &l, &h);
> +	      double_int tmp = e1.neg_with_overflow (&neg1_ovf);
> +	      tmp = e0.add_with_sign (tmp, false, &add1_ovf);
> +	      if (tmp.is_negative ())
> +		tmp = tmp.neg_with_overflow (&neg2_ovf);
> 
> -	      overflow |= add_double (low, high, l, h, &low, &high);
> +	      tmp = di_arg2.add_with_sign (tmp, false, &add2_ovf);
> +	      overflow |= neg1_ovf | neg2_ovf | add1_ovf | add2_ovf;
>  	    }
> 
> -	  gcc_assert (overflow == 0);
> +	  gcc_assert (!overflow);
> 
>  	  return build_int_cst_wide (rtype, low, high);
>  	}
> Index: gcc/config/avr/avr.c
> ===================================================================
> --- gcc/config/avr/avr.c	(revision 191083)
> +++ gcc/config/avr/avr.c	(working copy)
> @@ -10518,10 +10518,10 @@ avr_double_int_push_digit (double_int va
>                             unsigned HOST_WIDE_INT digit)
>  {
>    val = 0 == base
> -    ? double_int_lshift (val, 32, 64, false)
> -    : double_int_mul (val, uhwi_to_double_int (base));
> +    ? val.llshift (32, 64)
> +    : val * double_int::from_uhwi (base);
> 
> -  return double_int_add (val, uhwi_to_double_int (digit));
> +  return val + double_int::from_uhwi (digit);
>  }
> 
> 
> @@ -10530,7 +10530,7 @@ avr_double_int_push_digit (double_int va
>  static int
>  avr_map (double_int f, int x)
>  {
> -  return 0xf & double_int_to_uhwi (double_int_rshift (f, 4*x, 64, false));
> +  return 0xf & f.lrshift (4*x, 64).to_uhwi ();
>  }
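[Editorial note] The avr_map conversion above is a straight nibble extraction: the 64-bit map F packs sixteen 4-bit entries, and entry X is the nibble at bit position 4*X. A standalone model of what 0xf & f.lrshift (4*x, 64).to_uhwi () computes, with plain shifts:

```cpp
#include <cassert>
#include <cstdint>

// Standalone model of avr_map: extract the 4-bit entry X from the packed
// 64-bit map F (entry 0 occupies the low nibble).
static unsigned avr_map_model (uint64_t f, int x)
{
  return 0xf & (unsigned) (f >> (4 * x));
}
```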
> 
> 
> @@ -10703,7 +10703,7 @@ avr_map_decompose (double_int f, const a
>           are mapped to 0 and used operands are reloaded to xop[0].  */
> 
>        xop[0] = all_regs_rtx[24];
> -      xop[1] = gen_int_mode (double_int_to_uhwi (f_ginv.map), SImode);
> +      xop[1] = gen_int_mode (f_ginv.map.to_uhwi (), SImode);
>        xop[2] = all_regs_rtx[25];
>        xop[3] = val_used_p ? xop[0] : const0_rtx;
> 
> @@ -10799,7 +10799,7 @@ avr_out_insert_bits (rtx *op, int *plen)
>    else if (flag_print_asm_name)
>      fprintf (asm_out_file,
>               ASM_COMMENT_START "map = 0x%08" HOST_LONG_FORMAT "x\n",
> -             double_int_to_uhwi (map) & GET_MODE_MASK (SImode));
> +             map.to_uhwi () & GET_MODE_MASK (SImode));
> 
>    /* If MAP has fixed points it might be better to initialize the result
>       with the bits to be inserted instead of moving all bits by hand.  */
> 
> 
>
Eric Botcazou - Sept. 12, 2012, 10:12 p.m.
> Index: gcc/config/sparc/sparc.c
> ===================================================================
> --- gcc/config/sparc/sparc.c	(revision 191083)
> +++ gcc/config/sparc/sparc.c	(working copy)
> @@ -10113,33 +10113,27 @@ sparc_fold_builtin (tree fndecl, int n_a
>  	  && TREE_CODE (arg1) == VECTOR_CST
>  	  && TREE_CODE (arg2) == INTEGER_CST)
>  	{
> -	  int overflow = 0;
> -	  unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (arg2);
> -	  HOST_WIDE_INT high = TREE_INT_CST_HIGH (arg2);
> +	  bool overflow = false;
> +	  double_int di_arg2 = TREE_INT_CST (arg2);
>  	  unsigned i;
> 
>  	  for (i = 0; i < VECTOR_CST_NELTS (arg0); ++i)
>  	    {
> -	      unsigned HOST_WIDE_INT
> -		low0 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg0, i)),
> -		low1 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg1, i));
> -	      HOST_WIDE_INT
> -		high0 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg0, i));
> -	      HOST_WIDE_INT
> -		high1 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg1, i));
> +	      double_int e0 = TREE_INT_CST (VECTOR_CST_ELT (arg0, i));
> +	      double_int e1 = TREE_INT_CST (VECTOR_CST_ELT (arg1, i));
> 
> -	      unsigned HOST_WIDE_INT l;
> -	      HOST_WIDE_INT h;
> +	      bool neg1_ovf, neg2_ovf, add1_ovf, add2_ovf;
> 
> -	      overflow |= neg_double (low1, high1, &l, &h);
> -	      overflow |= add_double (low0, high0, l, h, &l, &h);
> -	      if (h < 0)
> -		overflow |= neg_double (l, h, &l, &h);
> +	      double_int tmp = e1.neg_with_overflow (&neg1_ovf);
> +	      tmp = e0.add_with_sign (tmp, false, &add1_ovf);
> +	      if (tmp.is_negative ())
> +		tmp = tmp.neg_with_overflow (&neg2_ovf);
> 
> -	      overflow |= add_double (low, high, l, h, &low, &high);
> +	      tmp = di_arg2.add_with_sign (tmp, false, &add2_ovf);
> +	      overflow |= neg1_ovf | neg2_ovf | add1_ovf | add2_ovf;
>  	    }
> 
> -	  gcc_assert (overflow == 0);
> +	  gcc_assert (!overflow);
> 
>  	  return build_int_cst_wide (rtype, low, high);
>  	}

This cannot build: the last line still references low and high, which the patch removes.

As Richard said, building a cross cc1 is very easy.
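[Editorial note] Beyond the stale low/high references, the converted loop never accumulates: each iteration computes tmp = di_arg2.add_with_sign (tmp, ...) from the original arg2 and then discards tmp. The old code folded each per-element absolute difference into low/high with the final add_double. A standalone analogue of the intended accumulation, with plain int64_t standing in for double_int and GCC's overflow builtins standing in for the overflow flags (illustrative, not the actual fix committed):

```cpp
#include <cassert>
#include <cstdint>

// Sum of per-element absolute differences |e0[i] - e1[i]| folded into a
// running accumulator seeded from arg2, tracking overflow at each step —
// the accumulator must be updated in the loop and used afterwards.
static int64_t fold_pdist (const int64_t *e0, const int64_t *e1,
                           unsigned n, int64_t acc, bool *overflow)
{
  *overflow = false;
  for (unsigned i = 0; i < n; ++i)
    {
      int64_t diff;
      *overflow |= __builtin_sub_overflow (e0[i], e1[i], &diff);
      if (diff < 0)
        *overflow |= __builtin_sub_overflow ((int64_t) 0, diff, &diff);
      *overflow |= __builtin_add_overflow (acc, diff, &acc);
    }
  return acc;
}
```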
Georg-Johann Lay - Sept. 25, 2012, 10:45 a.m.
Richard Guenther wrote:
> On Tue, 11 Sep 2012, Lawrence Crowl wrote:
> 
>> Finish conversion of uses of double_int to the new API.
>>
>> Some old functionality required new interfaces, and these have been
>> added to double-int.[hc]:
>>
>>   double_int::from_pair - static constructor function
>>   wide_mul_with_sign - double-wide multiply instruction
>>   neg_with_overflow - negation with overflow testing
>>   divmod_with_overflow - div and mod with overflow testing
>>
>> The prior two generations of the interface have been removed.
>>
>> Some of these old interfaces are still used as static implementation
>> in double-int.c.
>>
>> The changed compiler appears 0.321% faster with 80% confidence of
>> being faster.
>>
>> Tested on x86_64.  However, there are changes to the avr and sparc
>> config files, and I have not tested those.  Could the maintainers
>> please apply the patch and see if it works?
>>
>> After testing is complete, okay for trunk?
> 
> Ok.  For avr and sparc you can build cross cc1 (configure for the
> cross and do make all-gcc), if that works it should be fine.

It does not work, at least for avr:

http://gcc.gnu.org/PR54701

g++ -c   -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE  -fno-exceptions -fno-rtti
-W -Wall -Wwrite-strings -Wcast-qual -Wmissing-format-attribute -pedantic
-Wno-long-long -Wno-variadic-macros -Wno-overlength-strings -fno-common
-DHAVE_CONFIG_H -I. -I. -I../../../gcc.gnu.org/trunk/gcc
-I../../../gcc.gnu.org/trunk/gcc/. -I../../../gcc.gnu.org/trunk/gcc/../include
-I../../../gcc.gnu.org/trunk/gcc/../libcpp/include
-I/home/georg/gnu/build/gcc-trunk-avr/./gmp
-I/home/georg/gnu/gcc.gnu.org/trunk/gmp
-I/home/georg/gnu/build/gcc-trunk-avr/./mpfr
-I/home/georg/gnu/gcc.gnu.org/trunk/mpfr
-I/home/georg/gnu/gcc.gnu.org/trunk/mpc/src
-I../../../gcc.gnu.org/trunk/gcc/../libdecnumber
-I../../../gcc.gnu.org/trunk/gcc/../libdecnumber/dpd -I../libdecnumber    \
                ../../../gcc.gnu.org/trunk/gcc/config/avr/avr.c -o avr.o
../../../gcc.gnu.org/trunk/gcc/config/avr/avr.c: In function 'avr_map_op_t
avr_map_decompose(double_int, const avr_map_op_t*, bool)':
../../../gcc.gnu.org/trunk/gcc/config/avr/avr.c:10955: error:
'uhwi_to_double_int' was not declared in this scope
make[2]: *** [avr.o] Error 1
make[2]: Leaving directory `/local/gnu/build/gcc-trunk-avr/gcc'
make[1]: *** [all-gcc] Error 2
make[1]: Leaving directory `/local/gnu/build/gcc-trunk-avr'
make: *** [all] Error 2


> 
> Thanks,
> Richard.
> 
>> ========================
>>
>> Index: gcc/java/ChangeLog
>>
>> 2012-09-10  Lawrence Crowl  <crowl@google.com>
>>
>> 	* decl.c (java_init_decl_processing): Change to new double_int API.
>> 	* jcf-parse.c (get_constant): Likewise.
>> 	* boehm.c (mark_reference_fields): Likewise.
>> 	(get_boehm_type_descriptor): Likewise.
>>
>> Index: gcc/ChangeLog
>>
>> 2012-09-10  Lawrence Crowl  <crowl@google.com>
>>
>> 	* double-int.h (double_int::from_pair): New.
>> 	(double_int::wide_mul_with_sign): New.
>> 	(double_int::neg_with_overflow): New.
>> 	(double_int::divmod_with_overflow): New.
>> 	(shwi_to_double_int): Remove.
>> 	(uhwi_to_double_int): Remove.
>> 	(double_int_to_shwi): Remove.
>> 	(double_int_to_uhwi): Remove.
>> 	(double_int_fits_in_uhwi_p): Remove.
>> 	(double_int_fits_in_shwi_p): Remove.
>> 	(double_int_fits_in_hwi_p): Remove.
>> 	(double_int_mul): Remove.
>> 	(double_int_mul_with_sign): Remove.
>> 	(double_int_add): Remove.
>> 	(double_int_sub): Remove.
>> 	(double_int_neg): Remove.
>> 	(double_int_div): Remove.
>> 	(double_int_sdiv): Remove.
>> 	(double_int_udiv): Remove.
>> 	(double_int_mod): Remove.
>> 	(double_int_smod): Remove.
>> 	(double_int_umod): Remove.
>> 	(double_int_divmod): Remove.
>> 	(double_int_sdivmod): Remove.
>> 	(double_int_udivmod): Remove.
>> 	(double_int_multiple_of): Remove.
>> 	(double_int_setbit): Remove.
>> 	(double_int_ctz): Remove.
>> 	(double_int_not): Remove.
>> 	(double_int_ior): Remove.
>> 	(double_int_and): Remove.
>> 	(double_int_and_not): Remove.
>> 	(double_int_xor): Remove.
>> 	(double_int_lshift): Remove.
>> 	(double_int_rshift): Remove.
>> 	(double_int_lrotate): Remove.
>> 	(double_int_rrotate): Remove.
>> 	(double_int_negative_p): Remove.
>> 	(double_int_cmp): Remove.
>> 	(double_int_scmp): Remove.
>> 	(double_int_ucmp): Remove.
>> 	(double_int_max): Remove.
>> 	(double_int_smax): Remove.
>> 	(double_int_umax): Remove.
>> 	(double_int_min): Remove.
>> 	(double_int_smin): Remove.
>> 	(double_int_umin): Remove.
>> 	(double_int_ext): Remove.
>> 	(double_int_sext): Remove.
>> 	(double_int_zext): Remove.
>> 	(double_int_mask): Remove.
>> 	(double_int_max_value): Remove.
>> 	(double_int_min_value): Remove.
>> 	(double_int_zero_p): Remove.
>> 	(double_int_one_p): Remove.
>> 	(double_int_minus_one_p): Remove.
>> 	(double_int_equal_p): Remove.
>> 	(double_int_popcount): Remove.
>> 	(extern add_double_with_sign): Remove.
>> 	(#define add_double): Remove.
>> 	(extern neg_double): Remove.
>> 	(extern mul_double_with_sign): Remove.
>> 	(extern mul_double_wide_with_sign): Remove.
>> 	(#define mul_double): Remove.
>> 	(extern lshift_double): Remove.
>> 	(extern div_and_round_double): Remove.
>> 	* double-int.c (add_double_with_sign): Make static.
>> 	(#defined add_double): Localized from header.
>> 	(neg_double): Make static.
>> 	(mul_double_with_sign): Make static.
>> 	(mul_double_wide_with_sign): Make static.
>> 	(#defined mul_double): Localized from header.
>> 	(lshift_double): Make static.
>> 	(div_and_round_double): Make static.
>> 	(double_int::wide_mul_with_sign): New.
>> 	(double_int::neg_with_overflow): New.
>> 	(double_int::divmod_with_overflow): New.
>> 	* emit-rtl.c (init_emit_once): Change to new double_int API.
>> 	* explow.c (plus_constant): Likewise.
>> 	* expmed.c (choose_multiplier): Likewise.
>> 	* fold-const.c (int_const_binop_1): Likewise.
>> 	(fold_div_compare): Likewise.
>> 	(maybe_canonicalize_comparison): Likewise.
>> 	(pointer_may_wrap_p): Likewise.
>> 	(fold_negate_const): Likewise.
>> 	(fold_abs_const): Likewise.
>> 	* simplify-rtx.c (simplify_const_unary_operation): Likewise.
>> 	(simplify_const_binary_operation): Likewise.
>> 	* tree-chrec.c (tree_fold_binomial): Likewise.
>> 	* tree-vrp.c (extract_range_from_binary_expr_1): Likewise.
>> 	* config/sparc/sparc.c (sparc_fold_builtin): Likewise.
>> 	* config/avr/avr.c (avr_double_int_push_digit): Likewise.
>> 	(avr_map): Likewise.
>> 	(avr_map_decompose): Likewise.
>> 	(avr_out_insert_bits): Likewise.
>>
>> Index: gcc/cp/ChangeLog
>>
>> 2012-09-10  Lawrence Crowl  <crowl@google.com>
>>
>> 	* init.c (build_new_1): Change to new double_int API.
>> 	* decl.c (build_enumerator): Likewise.
>> 	* typeck2.c (process_init_constructor_array): Likewise.
>> 	* mangle.c (write_array_type): Likewise.
>>
>> Index: gcc/fortran/ChangeLog
>>
>> 2012-09-10  Lawrence Crowl  <crowl@google.com>
>>
>> 	* trans-expr.c (gfc_conv_cst_int_power): Change to new double_int API.
>> 	* target-memory.c (gfc_interpret_logical): Likewise.
>>
>> ========================
>>
>> Index: gcc/tree-vrp.c
>> ===================================================================
>> --- gcc/tree-vrp.c	(revision 191083)
>> +++ gcc/tree-vrp.c	(working copy)
>> @@ -2478,7 +2478,7 @@ extract_range_from_binary_expr_1 (value_
>>  		  if (tmin.cmp (tmax, uns) < 0)
>>  		    covers = true;
>>  		  tmax = tem + double_int_minus_one;
>> -		  if (double_int_cmp (tmax, tem, uns) > 0)
>> +		  if (tmax.cmp (tem, uns) > 0)
>>  		    covers = true;
>>  		  /* If the anti-range would cover nothing, drop to varying.
>>  		     Likewise if the anti-range bounds are outside of the
>> @@ -2632,37 +2632,26 @@ extract_range_from_binary_expr_1 (value_
>>  	    }
>>  	  uns = uns0 & uns1;
>>
>> -	  mul_double_wide_with_sign (min0.low, min0.high,
>> -				     min1.low, min1.high,
>> -				     &prod0l.low, &prod0l.high,
>> -				     &prod0h.low, &prod0h.high, true);
>> +	  bool overflow;
>> +	  prod0l = min0.wide_mul_with_sign (min1, true, &prod0h, &overflow);
>>  	  if (!uns0 && min0.is_negative ())
>>  	    prod0h -= min1;
>>  	  if (!uns1 && min1.is_negative ())
>>  	    prod0h -= min0;
>>
>> -	  mul_double_wide_with_sign (min0.low, min0.high,
>> -				     max1.low, max1.high,
>> -				     &prod1l.low, &prod1l.high,
>> -				     &prod1h.low, &prod1h.high, true);
>> +	  prod1l = min0.wide_mul_with_sign (max1, true, &prod1h, &overflow);
>>  	  if (!uns0 && min0.is_negative ())
>>  	    prod1h -= max1;
>>  	  if (!uns1 && max1.is_negative ())
>>  	    prod1h -= min0;
>>
>> -	  mul_double_wide_with_sign (max0.low, max0.high,
>> -				     min1.low, min1.high,
>> -				     &prod2l.low, &prod2l.high,
>> -				     &prod2h.low, &prod2h.high, true);
>> +	  prod2l = max0.wide_mul_with_sign (min1, true, &prod2h, &overflow);
>>  	  if (!uns0 && max0.is_negative ())
>>  	    prod2h -= min1;
>>  	  if (!uns1 && min1.is_negative ())
>>  	    prod2h -= max0;
>>
>> -	  mul_double_wide_with_sign (max0.low, max0.high,
>> -				     max1.low, max1.high,
>> -				     &prod3l.low, &prod3l.high,
>> -				     &prod3h.low, &prod3h.high, true);
>> +	  prod3l = max0.wide_mul_with_sign (max1, true, &prod3h, &overflow);
>>  	  if (!uns0 && max0.is_negative ())
>>  	    prod3h -= max1;
>>  	  if (!uns1 && max1.is_negative ())
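[Editorial note] The tree-vrp hunk above replaces mul_double_wide_with_sign with the new wide_mul_with_sign, which produces a double-width product split into a low half (the return value) and a high half (through a pointer), as received in prod0l/prod0h and friends. A single-word model of that split using a 128-bit intermediate (illustrative names, unsigned case only):

```cpp
#include <cassert>
#include <cstdint>

// Model of a widening multiply: the full 128-bit product of two 64-bit
// words, returned as a low half with the high half stored through *high.
static uint64_t wide_umul (uint64_t a, uint64_t b, uint64_t *high)
{
  unsigned __int128 p = (unsigned __int128) a * b;
  *high = (uint64_t) (p >> 64);
  return (uint64_t) p;
}
```

The signed correction terms in the hunk (prod0h -= min1, etc.) adjust the unsigned wide product back to a signed one when an operand is negative.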
>> Index: gcc/java/decl.c
>> ===================================================================
>> --- gcc/java/decl.c	(revision 191083)
>> +++ gcc/java/decl.c	(working copy)
>> @@ -617,7 +617,7 @@ java_init_decl_processing (void)
>>    decimal_int_max = build_int_cstu (unsigned_int_type_node, 0x80000000);
>>    decimal_long_max
>>      = double_int_to_tree (unsigned_long_type_node,
>> -			  double_int_setbit (double_int_zero, 64));
>> +			  double_int_zero.set_bit (64));
>>
>>    long_zero_node = build_int_cst (long_type_node, 0);
>>
>> Index: gcc/java/jcf-parse.c
>> ===================================================================
>> --- gcc/java/jcf-parse.c	(revision 191083)
>> +++ gcc/java/jcf-parse.c	(working copy)
>> @@ -1043,9 +1043,9 @@ get_constant (JCF *jcf, int index)
>>  	double_int val;
>>
>>  	num = JPOOL_UINT (jcf, index);
>> -	val = double_int_lshift (uhwi_to_double_int (num), 32, 64, false);
>> +	val = double_int::from_uhwi (num).llshift (32, 64);
>>  	num = JPOOL_UINT (jcf, index + 1);
>> -	val = double_int_ior (val, uhwi_to_double_int (num));
>> +	val |= double_int::from_uhwi (num);
>>
>>  	value = double_int_to_tree (long_type_node, val);
>>  	break;
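[Editorial note] The get_constant change above rebuilds a 64-bit Java long from two 32-bit constant-pool words, high word first: double_int::from_uhwi (num).llshift (32, 64) followed by an OR of the second word. The equivalent computation on plain integers:

```cpp
#include <cassert>
#include <cstdint>

// Combine two 32-bit constant-pool words into one 64-bit value,
// high word first, mirroring the llshift-then-OR in get_constant.
static uint64_t combine_pool_words (uint32_t high_word, uint32_t low_word)
{
  return ((uint64_t) high_word << 32) | low_word;
}
```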
>> Index: gcc/java/boehm.c
>> ===================================================================
>> --- gcc/java/boehm.c	(revision 191083)
>> +++ gcc/java/boehm.c	(working copy)
>> @@ -108,7 +108,7 @@ mark_reference_fields (tree field,
>>  	     bits for all words in the record. This is conservative, but the
>>  	     size_words != 1 case is impossible in regular java code. */
>>  	  for (i = 0; i < size_words; ++i)
>> -	    *mask = double_int_setbit (*mask, ubit - count - i - 1);
>> +	    *mask = (*mask).set_bit (ubit - count - i - 1);
>>
>>  	  if (count >= ubit - 2)
>>  	    *pointer_after_end = 1;
>> @@ -200,7 +200,7 @@ get_boehm_type_descriptor (tree type)
>>        while (last_set_index)
>>  	{
>>  	  if ((last_set_index & 1))
>> -	    mask = double_int_setbit (mask, log2_size + count);
>> +	    mask = mask.set_bit (log2_size + count);
>>  	  last_set_index >>= 1;
>>  	  ++count;
>>  	}
>> @@ -209,7 +209,7 @@ get_boehm_type_descriptor (tree type)
>>    else if (! pointer_after_end)
>>      {
>>        /* Bottom two bits for bitmap mark type are 01.  */
>> -      mask = double_int_setbit (mask, 0);
>> +      mask = mask.set_bit (0);
>>        value = double_int_to_tree (value_type, mask);
>>      }
>>    else
>> Index: gcc/fold-const.c
>> ===================================================================
>> --- gcc/fold-const.c	(revision 191083)
>> +++ gcc/fold-const.c	(working copy)
>> @@ -982,12 +982,6 @@ int_const_binop_1 (enum tree_code code,
>>        break;
>>
>>      case MINUS_EXPR:
>> -/* FIXME(crowl) Remove this code if the replacment works.
>> -      neg_double (op2.low, op2.high, &res.low, &res.high);
>> -      add_double (op1.low, op1.high, res.low, res.high,
>> -		  &res.low, &res.high);
>> -      overflow = OVERFLOW_SUM_SIGN (res.high, op2.high, op1.high);
>> -*/
>>        res = op1.add_with_sign (-op2, false, &overflow);
>>        break;
>>
>> @@ -1035,10 +1029,7 @@ int_const_binop_1 (enum tree_code code,
>>  	  res = double_int_one;
>>  	  break;
>>  	}
>> -      overflow = div_and_round_double (code, uns,
>> -				       op1.low, op1.high, op2.low, op2.high,
>> -				       &res.low, &res.high,
>> -				       &tmp.low, &tmp.high);
>> +      res = op1.divmod_with_overflow (op2, uns, code, &tmp, &overflow);
>>        break;
>>
>>      case TRUNC_MOD_EXPR:
>> @@ -1060,10 +1051,7 @@ int_const_binop_1 (enum tree_code code,
>>      case ROUND_MOD_EXPR:
>>        if (op2.is_zero ())
>>  	return NULL_TREE;
>> -      overflow = div_and_round_double (code, uns,
>> -				       op1.low, op1.high, op2.low, op2.high,
>> -				       &tmp.low, &tmp.high,
>> -				       &res.low, &res.high);
>> +      tmp = op1.divmod_with_overflow (op2, uns, code, &res, &overflow);
>>        break;
>>
>>      case MIN_EXPR:
>> @@ -6290,15 +6278,12 @@ fold_div_compare (location_t loc,
>>    double_int val;
>>    bool unsigned_p = TYPE_UNSIGNED (TREE_TYPE (arg0));
>>    bool neg_overflow;
>> -  int overflow;
>> +  bool overflow;
>>
>>    /* We have to do this the hard way to detect unsigned overflow.
>>       prod = int_const_binop (MULT_EXPR, arg01, arg1);  */
>> -  overflow = mul_double_with_sign (TREE_INT_CST_LOW (arg01),
>> -				   TREE_INT_CST_HIGH (arg01),
>> -				   TREE_INT_CST_LOW (arg1),
>> -				   TREE_INT_CST_HIGH (arg1),
>> -				   &val.low, &val.high, unsigned_p);
>> +  val = TREE_INT_CST (arg01)
>> +	.mul_with_sign (TREE_INT_CST (arg1), unsigned_p, &overflow);
>>    prod = force_fit_type_double (TREE_TYPE (arg00), val, -1, overflow);
>>    neg_overflow = false;
>>
>> @@ -6309,11 +6294,8 @@ fold_div_compare (location_t loc,
>>        lo = prod;
>>
>>        /* Likewise hi = int_const_binop (PLUS_EXPR, prod, tmp).  */
>> -      overflow = add_double_with_sign (TREE_INT_CST_LOW (prod),
>> -				       TREE_INT_CST_HIGH (prod),
>> -				       TREE_INT_CST_LOW (tmp),
>> -				       TREE_INT_CST_HIGH (tmp),
>> -				       &val.low, &val.high, unsigned_p);
>> +      val = TREE_INT_CST (prod)
>> +	    .add_with_sign (TREE_INT_CST (tmp), unsigned_p, &overflow);
>>        hi = force_fit_type_double (TREE_TYPE (arg00), val,
>>  				  -1, overflow | TREE_OVERFLOW (prod));
>>      }
>> @@ -8693,8 +8675,7 @@ maybe_canonicalize_comparison (location_
>>  static bool
>>  pointer_may_wrap_p (tree base, tree offset, HOST_WIDE_INT bitpos)
>>  {
>> -  unsigned HOST_WIDE_INT offset_low, total_low;
>> -  HOST_WIDE_INT size, offset_high, total_high;
>> +  double_int di_offset, total;
>>
>>    if (!POINTER_TYPE_P (TREE_TYPE (base)))
>>      return true;
>> @@ -8703,28 +8684,22 @@ pointer_may_wrap_p (tree base, tree offs
>>      return true;
>>
>>    if (offset == NULL_TREE)
>> -    {
>> -      offset_low = 0;
>> -      offset_high = 0;
>> -    }
>> +    di_offset = double_int_zero;
>>    else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
>>      return true;
>>    else
>> -    {
>> -      offset_low = TREE_INT_CST_LOW (offset);
>> -      offset_high = TREE_INT_CST_HIGH (offset);
>> -    }
>> +    di_offset = TREE_INT_CST (offset);
>>
>> -  if (add_double_with_sign (offset_low, offset_high,
>> -			    bitpos / BITS_PER_UNIT, 0,
>> -			    &total_low, &total_high,
>> -			    true))
>> +  bool overflow;
>> +  double_int units = double_int::from_uhwi (bitpos / BITS_PER_UNIT);
>> +  total = di_offset.add_with_sign (units, true, &overflow);
>> +  if (overflow)
>>      return true;
>>
>> -  if (total_high != 0)
>> +  if (total.high != 0)
>>      return true;
>>
>> -  size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
>> +  HOST_WIDE_INT size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
>>    if (size <= 0)
>>      return true;
>>
>> @@ -8739,7 +8714,7 @@ pointer_may_wrap_p (tree base, tree offs
>>  	size = base_size;
>>      }
>>
>> -  return total_low > (unsigned HOST_WIDE_INT) size;
>> +  return total.low > (unsigned HOST_WIDE_INT) size;
>>  }
>>
>>  /* Subroutine of fold_binary.  This routine performs all of the
>> @@ -15939,8 +15914,8 @@ fold_negate_const (tree arg0, tree type)
>>      case INTEGER_CST:
>>        {
>>  	double_int val = tree_to_double_int (arg0);
>> -	int overflow = neg_double (val.low, val.high, &val.low, &val.high);
>> -
>> +	bool overflow;
>> +	val = val.neg_with_overflow (&overflow);
>>  	t = force_fit_type_double (type, val, 1,
>>  				   (overflow | TREE_OVERFLOW (arg0))
>>  				   && !TYPE_UNSIGNED (type));
>> @@ -15997,9 +15972,8 @@ fold_abs_const (tree arg0, tree type)
>>  	   its negation.  */
>>  	else
>>  	  {
>> -	    int overflow;
>> -
>> -	    overflow = neg_double (val.low, val.high, &val.low, &val.high);
>> +	    bool overflow;
>> +	    val = val.neg_with_overflow (&overflow);
>>  	    t = force_fit_type_double (type, val, -1,
>>  				       overflow | TREE_OVERFLOW (arg0));
>>  	  }
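[Editorial note] The fold_negate_const and fold_abs_const hunks above use the new neg_with_overflow. In two's complement, negation overflows for exactly one input: the most negative value, whose negation is not representable. A single-word sketch:

```cpp
#include <cassert>
#include <cstdint>

// Single-word model of the new neg_with_overflow: flag the one input
// whose two's-complement negation does not fit, INT64_MIN.
static int64_t neg_with_overflow (int64_t v, bool *overflow)
{
  *overflow = (v == INT64_MIN);
  return *overflow ? v : -v;
}
```

This is why fold_abs_const needs the overflow flag at all: |INT_MIN| of the type is itself unrepresentable.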
>> Index: gcc/tree-chrec.c
>> ===================================================================
>> --- gcc/tree-chrec.c	(revision 191083)
>> +++ gcc/tree-chrec.c	(working copy)
>> @@ -461,8 +461,8 @@ chrec_fold_multiply (tree type,
>>  static tree
>>  tree_fold_binomial (tree type, tree n, unsigned int k)
>>  {
>> -  unsigned HOST_WIDE_INT lidx, lnum, ldenom, lres, ldum;
>> -  HOST_WIDE_INT hidx, hnum, hdenom, hres, hdum;
>> +  double_int num, denom, idx, di_res;
>> +  bool overflow;
>>    unsigned int i;
>>    tree res;
>>
>> @@ -472,59 +472,41 @@ tree_fold_binomial (tree type, tree n, u
>>    if (k == 1)
>>      return fold_convert (type, n);
>>
>> +  /* Numerator = n.  */
>> +  num = TREE_INT_CST (n);
>> +
>>    /* Check that k <= n.  */
>> -  if (TREE_INT_CST_HIGH (n) == 0
>> -      && TREE_INT_CST_LOW (n) < k)
>> +  if (num.ult (double_int::from_uhwi (k)))
>>      return NULL_TREE;
>>
>> -  /* Numerator = n.  */
>> -  lnum = TREE_INT_CST_LOW (n);
>> -  hnum = TREE_INT_CST_HIGH (n);
>> -
>>    /* Denominator = 2.  */
>> -  ldenom = 2;
>> -  hdenom = 0;
>> +  denom = double_int::from_uhwi (2);
>>
>>    /* Index = Numerator-1.  */
>> -  if (lnum == 0)
>> -    {
>> -      hidx = hnum - 1;
>> -      lidx = ~ (unsigned HOST_WIDE_INT) 0;
>> -    }
>> -  else
>> -    {
>> -      hidx = hnum;
>> -      lidx = lnum - 1;
>> -    }
>> +  idx = num - double_int_one;
>>
>>    /* Numerator = Numerator*Index = n*(n-1).  */
>> -  if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
>> +  num = num.mul_with_sign (idx, false, &overflow);
>> +  if (overflow)
>>      return NULL_TREE;
>>
>>    for (i = 3; i <= k; i++)
>>      {
>>        /* Index--.  */
>> -      if (lidx == 0)
>> -	{
>> -	  hidx--;
>> -	  lidx = ~ (unsigned HOST_WIDE_INT) 0;
>> -	}
>> -      else
>> -        lidx--;
>> +      --idx;
>>
>>        /* Numerator *= Index.  */
>> -      if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
>> +      num = num.mul_with_sign (idx, false, &overflow);
>> +      if (overflow)
>>  	return NULL_TREE;
>>
>>        /* Denominator *= i.  */
>> -      mul_double (ldenom, hdenom, i, 0, &ldenom, &hdenom);
>> +      denom *= double_int::from_uhwi (i);
>>      }
>>
>>    /* Result = Numerator / Denominator.  */
>> -  div_and_round_double (EXACT_DIV_EXPR, 1, lnum, hnum, ldenom, hdenom,
>> -			&lres, &hres, &ldum, &hdum);
>> -
>> -  res = build_int_cst_wide (type, lres, hres);
>> +  di_res = num.div (denom, true, EXACT_DIV_EXPR);
>> +  res = build_int_cst_wide (type, di_res.low, di_res.high);
>>    return int_fits_type_p (res, type) ? res : NULL_TREE;
>>  }
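[Editorial note] The tree_fold_binomial conversion above keeps the same algorithm: numerator n*(n-1)*...*(n-k+1) built up with overflow-checked multiplies, denominator k! built alongside, then an exact division. A standalone version for a single 64-bit word, with GCC's overflow builtins standing in for mul_with_sign's flag (illustrative, not GCC's tree-level code):

```cpp
#include <cassert>
#include <cstdint>

// Compute C(n, k) by the same loop as tree_fold_binomial: multiply the
// falling factorial into num with per-step overflow checks, accumulate
// k! into denom, and divide exactly at the end.  Returns false on
// overflow (the tree version returns NULL_TREE).
static bool fold_binomial (uint64_t n, unsigned k, uint64_t *result)
{
  if (k == 0) { *result = 1; return true; }
  if (k == 1) { *result = n; return true; }
  if (n < k)  { *result = 0; return true; }   // the k <= n check

  uint64_t num = n, denom = 2, idx = n - 1;
  if (__builtin_mul_overflow (num, idx, &num))  // num = n*(n-1)
    return false;
  for (unsigned i = 3; i <= k; i++)
    {
      --idx;                                    // Index--
      if (__builtin_mul_overflow (num, idx, &num))
        return false;
      denom *= i;                               // Denominator *= i
    }
  *result = num / denom;   // exact, as EXACT_DIV_EXPR asserts
  return true;
}
```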
>>
>> Index: gcc/cp/init.c
>> ===================================================================
>> --- gcc/cp/init.c	(revision 191083)
>> +++ gcc/cp/init.c	(working copy)
>> @@ -2239,11 +2239,11 @@ build_new_1 (VEC(tree,gc) **placement, t
>>        if (TREE_CONSTANT (inner_nelts_cst)
>>  	  && TREE_CODE (inner_nelts_cst) == INTEGER_CST)
>>  	{
>> -	  double_int result;
>> -	  if (mul_double (TREE_INT_CST_LOW (inner_nelts_cst),
>> -			  TREE_INT_CST_HIGH (inner_nelts_cst),
>> -			  inner_nelts_count.low, inner_nelts_count.high,
>> -			  &result.low, &result.high))
>> +	  bool overflow;
>> +	  double_int result = TREE_INT_CST (inner_nelts_cst)
>> +			      .mul_with_sign (inner_nelts_count,
>> +					      false, &overflow);
>> +	  if (overflow)
>>  	    {
>>  	      if (complain & tf_error)
>>  		error ("integer overflow in array size");
>> @@ -2345,8 +2345,8 @@ build_new_1 (VEC(tree,gc) **placement, t
>>        /* Maximum available size in bytes.  Half of the address space
>>  	 minus the cookie size.  */
>>        double_int max_size
>> -	= double_int_lshift (double_int_one, TYPE_PRECISION (sizetype) - 1,
>> -			     HOST_BITS_PER_DOUBLE_INT, false);
>> +	= double_int_one.llshift (TYPE_PRECISION (sizetype) - 1,
>> +				  HOST_BITS_PER_DOUBLE_INT);
>>        /* Size of the inner array elements. */
>>        double_int inner_size;
>>        /* Maximum number of outer elements which can be allocated. */
>> @@ -2356,22 +2356,21 @@ build_new_1 (VEC(tree,gc) **placement, t
>>        gcc_assert (TREE_CODE (size) == INTEGER_CST);
>>        cookie_size = targetm.cxx.get_cookie_size (elt_type);
>>        gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST);
>> -      gcc_checking_assert (double_int_ucmp
>> -			   (TREE_INT_CST (cookie_size), max_size) < 0);
>> +      gcc_checking_assert (TREE_INT_CST (cookie_size).ult (max_size));
>>        /* Unconditionally substract the cookie size.  This decreases the
>>  	 maximum object size and is safe even if we choose not to use
>>  	 a cookie after all.  */
>> -      max_size = double_int_sub (max_size, TREE_INT_CST (cookie_size));
>> -      if (mul_double (TREE_INT_CST_LOW (size), TREE_INT_CST_HIGH (size),
>> -		      inner_nelts_count.low, inner_nelts_count.high,
>> -		      &inner_size.low, &inner_size.high)
>> -	  || double_int_ucmp (inner_size, max_size) > 0)
>> +      max_size -= TREE_INT_CST (cookie_size);
>> +      bool overflow;
>> +      inner_size = TREE_INT_CST (size)
>> +		   .mul_with_sign (inner_nelts_count, false, &overflow);
>> +      if (overflow || inner_size.ugt (max_size))
>>  	{
>>  	  if (complain & tf_error)
>>  	    error ("size of array is too large");
>>  	  return error_mark_node;
>>  	}
>> -      max_outer_nelts = double_int_udiv (max_size, inner_size, TRUNC_DIV_EXPR);
>> +      max_outer_nelts = max_size.udiv (inner_size, TRUNC_DIV_EXPR);
>>        /* Only keep the top-most seven bits, to simplify encoding the
>>  	 constant in the instruction stream.  */
>>        {
>> @@ -2379,10 +2378,8 @@ build_new_1 (VEC(tree,gc) **placement, t
>>  	  - (max_outer_nelts.high ? clz_hwi (max_outer_nelts.high)
>>  	     : (HOST_BITS_PER_WIDE_INT + clz_hwi (max_outer_nelts.low)));
>>  	max_outer_nelts
>> -	  = double_int_lshift (double_int_rshift
>> -			       (max_outer_nelts, shift,
>> -				HOST_BITS_PER_DOUBLE_INT, false),
>> -			       shift, HOST_BITS_PER_DOUBLE_INT, false);
>> +	  = max_outer_nelts.lrshift (shift, HOST_BITS_PER_DOUBLE_INT)
>> +	    .llshift (shift, HOST_BITS_PER_DOUBLE_INT);
>>        }
>>        max_outer_nelts_tree = double_int_to_tree (sizetype, max_outer_nelts);
>>
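[Editorial note] The build_new_1 hunk above computes a cap on the outer element count: half the address space minus the cookie, divided by the inner element size, then rounded down to its top seven bits so the constant encodes compactly. A model of that computation for a 64-bit size_t (the real code works in HOST_BITS_PER_DOUBLE_INT, i.e. twice as wide; names are illustrative):

```cpp
#include <cassert>
#include <cstdint>

// Cap on outer array elements: max_size = 2^63 - cookie_size, the count
// is max_size / inner_size, and only the top seven bits of the count are
// kept (shift right then left by the same amount).
static uint64_t max_outer_nelts (uint64_t inner_size, uint64_t cookie_size)
{
  uint64_t max_size = (UINT64_MAX >> 1) + 1 - cookie_size;  // 2^63 - cookie
  uint64_t nelts = max_size / inner_size;
  if (nelts == 0)
    return 0;
  int shift = 64 - 7 - __builtin_clzll (nelts);   // bits below the top seven
  if (shift > 0)
    nelts = (nelts >> shift) << shift;
  return nelts;
}
```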
>> Index: gcc/cp/decl.c
>> ===================================================================
>> --- gcc/cp/decl.c	(revision 191083)
>> +++ gcc/cp/decl.c	(working copy)
>> @@ -12448,8 +12448,6 @@ build_enumerator (tree name, tree value,
>>  	{
>>  	  if (TYPE_VALUES (enumtype))
>>  	    {
>> -	      HOST_WIDE_INT hi;
>> -	      unsigned HOST_WIDE_INT lo;
>>  	      tree prev_value;
>>  	      bool overflowed;
>>
>> @@ -12465,15 +12463,13 @@ build_enumerator (tree name, tree value,
>>  		value = error_mark_node;
>>  	      else
>>  		{
>> -		  overflowed = add_double (TREE_INT_CST_LOW (prev_value),
>> -					   TREE_INT_CST_HIGH (prev_value),
>> -					   1, 0, &lo, &hi);
>> +		  double_int di = TREE_INT_CST (prev_value)
>> +				  .add_with_sign (double_int_one,
>> +						  false, &overflowed);
>>  		  if (!overflowed)
>>  		    {
>> -		      double_int di;
>>  		      tree type = TREE_TYPE (prev_value);
>> -		      bool pos = (TYPE_UNSIGNED (type) || hi >= 0);
>> -		      di.low = lo; di.high = hi;
>> +		      bool pos = TYPE_UNSIGNED (type) || !di.is_negative ();
>>  		      if (!double_int_fits_to_tree_p (type, di))
>>  			{
>>  			  unsigned int itk;
>> Index: gcc/cp/typeck2.c
>> ===================================================================
>> --- gcc/cp/typeck2.c	(revision 191083)
>> +++ gcc/cp/typeck2.c	(working copy)
>> @@ -1055,14 +1055,12 @@ process_init_constructor_array (tree typ
>>      {
>>        tree domain = TYPE_DOMAIN (type);
>>        if (domain)
>> -	len = double_int_ext
>> -	        (double_int_add
>> -		  (double_int_sub
>> -		    (tree_to_double_int (TYPE_MAX_VALUE (domain)),
>> -		     tree_to_double_int (TYPE_MIN_VALUE (domain))),
>> -		    double_int_one),
>> -		  TYPE_PRECISION (TREE_TYPE (domain)),
>> -		  TYPE_UNSIGNED (TREE_TYPE (domain))).low;
>> +	len = (tree_to_double_int (TYPE_MAX_VALUE (domain))
>> +	       - tree_to_double_int (TYPE_MIN_VALUE (domain))
>> +	       + double_int_one)
>> +	      .ext (TYPE_PRECISION (TREE_TYPE (domain)),
>> +		    TYPE_UNSIGNED (TREE_TYPE (domain)))
>> +	      .low;
>>        else
>>  	unbounded = true;  /* Take as many as there are.  */
>>      }
>> Index: gcc/cp/mangle.c
>> ===================================================================
>> --- gcc/cp/mangle.c	(revision 191083)
>> +++ gcc/cp/mangle.c	(working copy)
>> @@ -3119,12 +3119,11 @@ write_array_type (const tree type)
>>  	{
>>  	  /* The ABI specifies that we should mangle the number of
>>  	     elements in the array, not the largest allowed index.  */
>> -	  double_int dmax
>> -	    = double_int_add (tree_to_double_int (max), double_int_one);
>> +	  double_int dmax = tree_to_double_int (max) + double_int_one;
>>  	  /* Truncate the result - this will mangle [0, SIZE_INT_MAX]
>>  	     number of elements as zero.  */
>> -	  dmax = double_int_zext (dmax, TYPE_PRECISION (TREE_TYPE (max)));
>> -	  gcc_assert (double_int_fits_in_uhwi_p (dmax));
>> +	  dmax = dmax.zext (TYPE_PRECISION (TREE_TYPE (max)));
>> +	  gcc_assert (dmax.fits_uhwi ());
>>  	  write_unsigned_number (dmax.low);
>>  	}
>>        else
>> Index: gcc/double-int.c
>> ===================================================================
>> --- gcc/double-int.c	(revision 191083)
>> +++ gcc/double-int.c	(working copy)
>> @@ -23,6 +23,41 @@ along with GCC; see the file COPYING3.
>>  #include "tm.h"			/* For SHIFT_COUNT_TRUNCATED.  */
>>  #include "tree.h"
>>
>> +static int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> +				 bool);
>> +
>> +#define add_double(l1,h1,l2,h2,lv,hv) \
>> +  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
>> +
>> +static int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
>> +
>> +static int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> +				 bool);
>> +
>> +static int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> +				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> +				      bool);
>> +
>> +#define mul_double(l1,h1,l2,h2,lv,hv) \
>> +  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
>> +
>> +static void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> +			   HOST_WIDE_INT, unsigned int,
>> +			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
>> +
>> +static int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
>> +				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
>> +				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
>> +				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
>> +				 HOST_WIDE_INT *);
>> +
>>  /* We know that A1 + B1 = SUM1, using 2's complement arithmetic and ignoring
>>     overflow.  Suppose A, B and SUM have the same respective signs as A1, B1,
>>     and SUM1.  Then this yields nonzero if overflow occurred during the
>> @@ -75,7 +110,7 @@ decode (HOST_WIDE_INT *words, unsigned H
>>     One argument is L1 and H1; the other, L2 and H2.
>>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
>>
>> -int
>> +static int
>>  add_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>>  		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>>  		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
>> @@ -105,7 +140,7 @@ add_double_with_sign (unsigned HOST_WIDE
>>     The argument is given as two `HOST_WIDE_INT' pieces in L1 and H1.
>>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
>>
>> -int
>> +static int
>>  neg_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>>  	    unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv)
>>  {
>> @@ -129,7 +164,7 @@ neg_double (unsigned HOST_WIDE_INT l1, H
>>     One argument is L1 and H1; the other, L2 and H2.
>>     The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
>>
>> -int
>> +static int
>>  mul_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>>  		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>>  		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
>> @@ -143,7 +178,7 @@ mul_double_with_sign (unsigned HOST_WIDE
>>  				    unsigned_p);
>>  }
>>
>> -int
>> +static int
>>  mul_double_wide_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>>  			   unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
>>  			   unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
>> @@ -269,7 +304,7 @@ rshift_double (unsigned HOST_WIDE_INT l1
>>     ARITH nonzero specifies arithmetic shifting; otherwise use logical shift.
>>     Store the value as two `HOST_WIDE_INT' pieces in *LV and *HV.  */
>>
>> -void
>> +static void
>>  lshift_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
>>  	       HOST_WIDE_INT count, unsigned int prec,
>>  	       unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv, bool arith)
>> @@ -335,7 +370,7 @@ lshift_double (unsigned HOST_WIDE_INT l1
>>     Return nonzero if the operation overflows.
>>     UNS nonzero says do unsigned division.  */
>>
>> -int
>> +static int
>>  div_and_round_double (unsigned code, int uns,
>>  		      /* num == numerator == dividend */
>>  		      unsigned HOST_WIDE_INT lnum_orig,
>> @@ -762,6 +797,19 @@ double_int::mul_with_sign (double_int b,
>>    return ret;
>>  }
>>
>> +double_int
>> +double_int::wide_mul_with_sign (double_int b, bool unsigned_p,
>> +				double_int *higher, bool *overflow) const
>> +
>> +{
>> +  double_int lower;
>> +  *overflow = mul_double_wide_with_sign (low, high, b.low, b.high,
>> +					 &lower.low, &lower.high,
>> +					 &higher->low, &higher->high,
>> +					 unsigned_p);
>> +  return lower;
>> +}
>> +
>>  /* Returns A + B.  */
>>
>>  double_int
>> @@ -809,12 +857,33 @@ double_int::operator - () const
>>    return ret;
>>  }
>>
>> +double_int
>> +double_int::neg_with_overflow (bool *overflow) const
>> +{
>> +  double_int ret;
>> +  *overflow = neg_double (low, high, &ret.low, &ret.high);
>> +  return ret;
>> +}
>> +
>>  /* Returns A / B (computed as unsigned depending on UNS, and rounded as
>>     specified by CODE).  CODE is enum tree_code in fact, but double_int.h
>>     must be included before tree.h.  The remainder after the division is
>>     stored to MOD.  */
>>
>>  double_int
>> +double_int::divmod_with_overflow (double_int b, bool uns, unsigned code,
>> +				  double_int *mod, bool *overflow) const
>> +{
>> +  const double_int &a = *this;
>> +  double_int ret;
>> +
>> +  *overflow = div_and_round_double (code, uns, a.low, a.high,
>> +				    b.low, b.high, &ret.low, &ret.high,
>> +				    &mod->low, &mod->high);
>> +  return ret;
>> +}
>> +
>> +double_int
>>  double_int::divmod (double_int b, bool uns, unsigned code,
>>  		    double_int *mod) const
>>  {
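The new `wide_mul_with_sign` member above returns the low half of the double-wide product and stores the high half through `higher`. A sketch of that contract for the unsigned case, written with the GCC/Clang `__int128` extension rather than the word-pair arithmetic in double-int.c (illustration only):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the double-wide multiply contract: low half returned,
// high half stored through HIGHER.  Uses __int128 as a shortcut for
// the 64x64 -> 128 bit product; not the GCC code path.
static uint64_t wide_umul (uint64_t a, uint64_t b, uint64_t *higher)
{
  unsigned __int128 p = (unsigned __int128) a * b;
  *higher = (uint64_t) (p >> 64);   // high half of the product
  return (uint64_t) p;              // low half, as the result
}
```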
>> Index: gcc/double-int.h
>> ===================================================================
>> --- gcc/double-int.h	(revision 191083)
>> +++ gcc/double-int.h	(working copy)
>> @@ -61,6 +61,7 @@ struct double_int
>>
>>    static double_int from_uhwi (unsigned HOST_WIDE_INT cst);
>>    static double_int from_shwi (HOST_WIDE_INT cst);
>> +  static double_int from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low);
>>
>>    /* No copy assignment operator or destructor to keep the type a POD.  */
>>
>> @@ -105,9 +106,16 @@ struct double_int
>>
>>    /* Arithmetic operation functions.  */
>>
>> +  /* The following operations perform arithmetics modulo 2^precision, so you
>> +     do not need to call .ext between them, even if you are representing
>> +     numbers with precision less than HOST_BITS_PER_DOUBLE_INT bits.  */
>> +
>>    double_int set_bit (unsigned) const;
>>    double_int mul_with_sign (double_int, bool unsigned_p, bool *overflow) const;
>> +  double_int wide_mul_with_sign (double_int, bool unsigned_p,
>> +				 double_int *higher, bool *overflow) const;
>>    double_int add_with_sign (double_int, bool unsigned_p, bool *overflow) const;
>> +  double_int neg_with_overflow (bool *overflow) const;
>>
>>    double_int operator * (double_int) const;
>>    double_int operator + (double_int) const;
>> @@ -131,12 +139,15 @@ struct double_int
>>    /* You must ensure that double_int::ext is called on the operands
>>       of the following operations, if the precision of the numbers
>>       is less than HOST_BITS_PER_DOUBLE_INT bits.  */
>> +
>>    double_int div (double_int, bool, unsigned) const;
>>    double_int sdiv (double_int, unsigned) const;
>>    double_int udiv (double_int, unsigned) const;
>>    double_int mod (double_int, bool, unsigned) const;
>>    double_int smod (double_int, unsigned) const;
>>    double_int umod (double_int, unsigned) const;
>> +  double_int divmod_with_overflow (double_int, bool, unsigned,
>> +				   double_int *, bool *) const;
>>    double_int divmod (double_int, bool, unsigned, double_int *) const;
>>    double_int sdivmod (double_int, unsigned, double_int *) const;
>>    double_int udivmod (double_int, unsigned, double_int *) const;
>> @@ -199,13 +210,6 @@ double_int::from_shwi (HOST_WIDE_INT cst
>>    return r;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -shwi_to_double_int (HOST_WIDE_INT cst)
>> -{
>> -  return double_int::from_shwi (cst);
>> -}
>> -
>>  /* Some useful constants.  */
>>  /* FIXME(crowl): Maybe remove after converting callers?
>>     The problem is that a named constant would not be as optimizable,
>> @@ -229,11 +233,13 @@ double_int::from_uhwi (unsigned HOST_WID
>>    return r;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -uhwi_to_double_int (unsigned HOST_WIDE_INT cst)
>> +inline double_int
>> +double_int::from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low)
>>  {
>> -  return double_int::from_uhwi (cst);
>> +  double_int r;
>> +  r.low = low;
>> +  r.high = high;
>> +  return r;
>>  }
>>
>>  inline double_int &
>> @@ -301,13 +307,6 @@ double_int::to_shwi () const
>>    return (HOST_WIDE_INT) low;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline HOST_WIDE_INT
>> -double_int_to_shwi (double_int cst)
>> -{
>> -  return cst.to_shwi ();
>> -}
>> -
>>  /* Returns value of CST as an unsigned number.  CST must satisfy
>>     double_int::fits_unsigned.  */
>>
>> @@ -317,13 +316,6 @@ double_int::to_uhwi () const
>>    return low;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline unsigned HOST_WIDE_INT
>> -double_int_to_uhwi (double_int cst)
>> -{
>> -  return cst.to_uhwi ();
>> -}
>> -
>>  /* Returns true if CST fits in unsigned HOST_WIDE_INT.  */
>>
>>  inline bool
>> @@ -332,164 +324,6 @@ double_int::fits_uhwi () const
>>    return high == 0;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline bool
>> -double_int_fits_in_uhwi_p (double_int cst)
>> -{
>> -  return cst.fits_uhwi ();
>> -}
>> -
>> -/* Returns true if CST fits in signed HOST_WIDE_INT.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline bool
>> -double_int_fits_in_shwi_p (double_int cst)
>> -{
>> -  return cst.fits_shwi ();
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline bool
>> -double_int_fits_in_hwi_p (double_int cst, bool uns)
>> -{
>> -  return cst.fits_hwi (uns);
>> -}
>> -
>> -/* The following operations perform arithmetics modulo 2^precision,
>> -   so you do not need to call double_int_ext between them, even if
>> -   you are representing numbers with precision less than
>> -   HOST_BITS_PER_DOUBLE_INT bits.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_mul (double_int a, double_int b)
>> -{
>> -  return a * b;
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_mul_with_sign (double_int a, double_int b,
>> -			  bool unsigned_p, int *overflow)
>> -{
>> -  bool ovf;
>> -  return a.mul_with_sign (b, unsigned_p, &ovf);
>> -  *overflow = ovf;
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_add (double_int a, double_int b)
>> -{
>> -  return a + b;
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_sub (double_int a, double_int b)
>> -{
>> -  return a - b;
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_neg (double_int a)
>> -{
>> -  return -a;
>> -}
>> -
>> -/* You must ensure that double_int_ext is called on the operands
>> -   of the following operations, if the precision of the numbers
>> -   is less than HOST_BITS_PER_DOUBLE_INT bits.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_div (double_int a, double_int b, bool uns, unsigned code)
>> -{
>> -  return a.div (b, uns, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_sdiv (double_int a, double_int b, unsigned code)
>> -{
>> -  return a.sdiv (b, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_udiv (double_int a, double_int b, unsigned code)
>> -{
>> -  return a.udiv (b, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_mod (double_int a, double_int b, bool uns, unsigned code)
>> -{
>> -  return a.mod (b, uns, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_smod (double_int a, double_int b, unsigned code)
>> -{
>> -  return a.smod (b, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_umod (double_int a, double_int b, unsigned code)
>> -{
>> -  return a.umod (b, code);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_divmod (double_int a, double_int b, bool uns,
>> -		   unsigned code, double_int *mod)
>> -{
>> -  return a.divmod (b, uns, code, mod);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_sdivmod (double_int a, double_int b, unsigned code, double_int *mod)
>> -{
>> -  return a.sdivmod (b, code, mod);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_udivmod (double_int a, double_int b, unsigned code, double_int *mod)
>> -{
>> -  return a.udivmod (b, code, mod);
>> -}
>> -
>> -/***/
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline bool
>> -double_int_multiple_of (double_int product, double_int factor,
>> -                        bool unsigned_p, double_int *multiple)
>> -{
>> -  return product.multiple_of (factor, unsigned_p, multiple);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_setbit (double_int a, unsigned bitpos)
>> -{
>> -  return a.set_bit (bitpos);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline int
>> -double_int_ctz (double_int a)
>> -{
>> -  return a.trailing_zeros ();
>> -}
>> -
>>  /* Logical operations.  */
>>
>>  /* Returns ~A.  */
>> @@ -503,13 +337,6 @@ double_int::operator ~ () const
>>    return result;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -double_int_not (double_int a)
>> -{
>> -  return ~a;
>> -}
>> -
>>  /* Returns A | B.  */
>>
>>  inline double_int
>> @@ -521,13 +348,6 @@ double_int::operator | (double_int b) co
>>    return result;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -double_int_ior (double_int a, double_int b)
>> -{
>> -  return a | b;
>> -}
>> -
>>  /* Returns A & B.  */
>>
>>  inline double_int
>> @@ -539,13 +359,6 @@ double_int::operator & (double_int b) co
>>    return result;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -double_int_and (double_int a, double_int b)
>> -{
>> -  return a & b;
>> -}
>> -
>>  /* Returns A & ~B.  */
>>
>>  inline double_int
>> @@ -557,13 +370,6 @@ double_int::and_not (double_int b) const
>>    return result;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -double_int_and_not (double_int a, double_int b)
>> -{
>> -  return a.and_not (b);
>> -}
>> -
>>  /* Returns A ^ B.  */
>>
>>  inline double_int
>> @@ -575,165 +381,8 @@ double_int::operator ^ (double_int b) co
>>    return result;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline double_int
>> -double_int_xor (double_int a, double_int b)
>> -{
>> -  return a ^ b;
>> -}
>> -
>> -
>> -/* Shift operations.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_lshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
>> -		   bool arith)
>> -{
>> -  return a.lshift (count, prec, arith);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_rshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
>> -		   bool arith)
>> -{
>> -  return a.rshift (count, prec, arith);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_lrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
>> -{
>> -  return a.lrotate (count, prec);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_rrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
>> -{
>> -  return a.rrotate (count, prec);
>> -}
>> -
>> -/* Returns true if CST is negative.  Of course, CST is considered to
>> -   be signed.  */
>> -
>> -static inline bool
>> -double_int_negative_p (double_int cst)
>> -{
>> -  return cst.high < 0;
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline int
>> -double_int_cmp (double_int a, double_int b, bool uns)
>> -{
>> -  return a.cmp (b, uns);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline int
>> -double_int_scmp (double_int a, double_int b)
>> -{
>> -  return a.scmp (b);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline int
>> -double_int_ucmp (double_int a, double_int b)
>> -{
>> -  return a.ucmp (b);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_max (double_int a, double_int b, bool uns)
>> -{
>> -  return a.max (b, uns);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_smax (double_int a, double_int b)
>> -{
>> -  return a.smax (b);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_umax (double_int a, double_int b)
>> -{
>> -  return a.umax (b);
>> -}
>> -
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_min (double_int a, double_int b, bool uns)
>> -{
>> -  return a.min (b, uns);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_smin (double_int a, double_int b)
>> -{
>> -  return a.smin (b);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_umin (double_int a, double_int b)
>> -{
>> -  return a.umin (b);
>> -}
>> -
>>  void dump_double_int (FILE *, double_int, bool);
>>
>> -/* Zero and sign extension of numbers in smaller precisions.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_ext (double_int a, unsigned prec, bool uns)
>> -{
>> -  return a.ext (prec, uns);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_sext (double_int a, unsigned prec)
>> -{
>> -  return a.sext (prec);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_zext (double_int a, unsigned prec)
>> -{
>> -  return a.zext (prec);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_mask (unsigned prec)
>> -{
>> -  return double_int::mask (prec);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_max_value (unsigned int prec, bool uns)
>> -{
>> -  return double_int::max_value (prec, uns);
>> -}
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -inline double_int
>> -double_int_min_value (unsigned int prec, bool uns)
>> -{
>> -  return double_int::min_value (prec, uns);
>> -}
>> -
>>  #define ALL_ONES (~((unsigned HOST_WIDE_INT) 0))
>>
>>  /* The operands of the following comparison functions must be processed
>> @@ -748,13 +397,6 @@ double_int::is_zero () const
>>    return low == 0 && high == 0;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline bool
>> -double_int_zero_p (double_int cst)
>> -{
>> -  return cst.is_zero ();
>> -}
>> -
>>  /* Returns true if CST is one.  */
>>
>>  inline bool
>> @@ -763,13 +405,6 @@ double_int::is_one () const
>>    return low == 1 && high == 0;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline bool
>> -double_int_one_p (double_int cst)
>> -{
>> -  return cst.is_one ();
>> -}
>> -
>>  /* Returns true if CST is minus one.  */
>>
>>  inline bool
>> @@ -778,13 +413,6 @@ double_int::is_minus_one () const
>>    return low == ALL_ONES && high == -1;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline bool
>> -double_int_minus_one_p (double_int cst)
>> -{
>> -  return cst.is_minus_one ();
>> -}
>> -
>>  /* Returns true if CST is negative.  */
>>
>>  inline bool
>> @@ -801,13 +429,6 @@ double_int::operator == (double_int cst2
>>    return low == cst2.low && high == cst2.high;
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline bool
>> -double_int_equal_p (double_int cst1, double_int cst2)
>> -{
>> -  return cst1 == cst2;
>> -}
>> -
>>  /* Returns true if CST1 != CST2.  */
>>
>>  inline bool
>> @@ -824,52 +445,6 @@ double_int::popcount () const
>>    return popcount_hwi (high) + popcount_hwi (low);
>>  }
>>
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -static inline int
>> -double_int_popcount (double_int cst)
>> -{
>> -  return cst.popcount ();
>> -}
>> -
>> -
>> -/* Legacy interface with decomposed high/low parts.  */
>> -
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> -				 bool);
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -#define add_double(l1,h1,l2,h2,lv,hv) \
>> -  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> -				 bool);
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> -				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
>> -				      bool);
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -#define mul_double(l1,h1,l2,h2,lv,hv) \
>> -  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
>> -			   HOST_WIDE_INT, unsigned int,
>> -			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
>> -/* FIXME(crowl): Remove after converting callers.  */
>> -extern int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
>> -				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
>> -				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
>> -				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
>> -				 HOST_WIDE_INT *);
>> -
>>
>>  #ifndef GENERATOR_FILE
>>  /* Conversion to and from GMP integer representations.  */
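Among the interfaces declared above, `neg_with_overflow` has a particularly compact invariant worth spelling out: in two's complement, negation modulo 2^N overflows exactly when the operand is the most negative value, which is the only nonzero value equal to its own negation. A one-word sketch of that rule (hypothetical helper, not the GCC implementation):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the overflow rule behind neg_with_overflow on one 64-bit word:
// 0 - x wraps modulo 2^64, and only x == 2^63 satisfies x == -x with x != 0.
static uint64_t neg_with_overflow (uint64_t x, bool *overflow)
{
  uint64_t r = (uint64_t) 0 - x;    // two's-complement negation
  *overflow = (x != 0 && x == r);   // only the minimum value overflows
  return r;
}
```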
>> Index: gcc/fortran/trans-expr.c
>> ===================================================================
>> --- gcc/fortran/trans-expr.c	(revision 191083)
>> +++ gcc/fortran/trans-expr.c	(working copy)
>> @@ -1657,10 +1657,10 @@ gfc_conv_cst_int_power (gfc_se * se, tre
>>
>>    /* If exponent is too large, we won't expand it anyway, so don't bother
>>       with large integer values.  */
>> -  if (!double_int_fits_in_shwi_p (TREE_INT_CST (rhs)))
>> +  if (!TREE_INT_CST (rhs).fits_shwi ())
>>      return 0;
>>
>> -  m = double_int_to_shwi (TREE_INT_CST (rhs));
>> +  m = TREE_INT_CST (rhs).to_shwi ();
>>    /* There's no ABS for HOST_WIDE_INT, so here we go. It also takes care
>>       of the asymmetric range of the integer type.  */
>>    n = (unsigned HOST_WIDE_INT) (m < 0 ? -m : m);
>> Index: gcc/fortran/target-memory.c
>> ===================================================================
>> --- gcc/fortran/target-memory.c	(revision 191083)
>> +++ gcc/fortran/target-memory.c	(working copy)
>> @@ -395,8 +395,7 @@ gfc_interpret_logical (int kind, unsigne
>>  {
>>    tree t = native_interpret_expr (gfc_get_logical_type (kind), buffer,
>>  				  buffer_size);
>> -  *logical = double_int_zero_p (tree_to_double_int (t))
>> -	     ? 0 : 1;
>> +  *logical = tree_to_double_int (t).is_zero () ? 0 : 1;
>>    return size_logical (kind);
>>  }
>>
>> Index: gcc/expmed.c
>> ===================================================================
>> --- gcc/expmed.c	(revision 191083)
>> +++ gcc/expmed.c	(working copy)
>> @@ -3404,12 +3404,9 @@ choose_multiplier (unsigned HOST_WIDE_IN
>>  		   unsigned HOST_WIDE_INT *multiplier_ptr,
>>  		   int *post_shift_ptr, int *lgup_ptr)
>>  {
>> -  HOST_WIDE_INT mhigh_hi, mlow_hi;
>> -  unsigned HOST_WIDE_INT mhigh_lo, mlow_lo;
>> +  double_int mhigh, mlow;
>>    int lgup, post_shift;
>>    int pow, pow2;
>> -  unsigned HOST_WIDE_INT nl, dummy1;
>> -  HOST_WIDE_INT nh, dummy2;
>>
>>    /* lgup = ceil(log2(divisor)); */
>>    lgup = ceil_log2 (d);
>> @@ -3425,32 +3422,17 @@ choose_multiplier (unsigned HOST_WIDE_IN
>>    gcc_assert (pow != HOST_BITS_PER_DOUBLE_INT);
>>
>>    /* mlow = 2^(N + lgup)/d */
>> -  if (pow >= HOST_BITS_PER_WIDE_INT)
>> -    {
>> -      nh = (HOST_WIDE_INT) 1 << (pow - HOST_BITS_PER_WIDE_INT);
>> -      nl = 0;
>> -    }
>> -  else
>> -    {
>> -      nh = 0;
>> -      nl = (unsigned HOST_WIDE_INT) 1 << pow;
>> -    }
>> -  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
>> -			&mlow_lo, &mlow_hi, &dummy1, &dummy2);
>> +  double_int val = double_int_zero.set_bit (pow);
>> +  mlow = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);
>>
>> -  /* mhigh = (2^(N + lgup) + 2^N + lgup - precision)/d */
>> -  if (pow2 >= HOST_BITS_PER_WIDE_INT)
>> -    nh |= (HOST_WIDE_INT) 1 << (pow2 - HOST_BITS_PER_WIDE_INT);
>> -  else
>> -    nl |= (unsigned HOST_WIDE_INT) 1 << pow2;
>> -  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
>> -			&mhigh_lo, &mhigh_hi, &dummy1, &dummy2);
>> +  /* mhigh = (2^(N + lgup) + 2^(N + lgup - precision))/d */
>> +  val |= double_int_zero.set_bit (pow2);
>> +  mhigh = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);
>>
>> -  gcc_assert (!mhigh_hi || nh - d < d);
>> -  gcc_assert (mhigh_hi <= 1 && mlow_hi <= 1);
>> +  gcc_assert (!mhigh.high || val.high - d < d);
>> +  gcc_assert (mhigh.high <= 1 && mlow.high <= 1);
>>    /* Assert that mlow < mhigh.  */
>> -  gcc_assert (mlow_hi < mhigh_hi
>> -	      || (mlow_hi == mhigh_hi && mlow_lo < mhigh_lo));
>> +  gcc_assert (mlow.ult (mhigh));
>>
>>    /* If precision == N, then mlow, mhigh exceed 2^N
>>       (but they do not exceed 2^(N+1)).  */
>> @@ -3458,15 +3440,14 @@ choose_multiplier (unsigned HOST_WIDE_IN
>>    /* Reduce to lowest terms.  */
>>    for (post_shift = lgup; post_shift > 0; post_shift--)
>>      {
>> -      unsigned HOST_WIDE_INT ml_lo = (mlow_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mlow_lo >> 1);
>> -      unsigned HOST_WIDE_INT mh_lo = (mhigh_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mhigh_lo >> 1);
>> +      int shft = HOST_BITS_PER_WIDE_INT - 1;
>> +      unsigned HOST_WIDE_INT ml_lo = (mlow.high << shft) | (mlow.low >> 1);
>> +      unsigned HOST_WIDE_INT mh_lo = (mhigh.high << shft) | (mhigh.low >> 1);
>>        if (ml_lo >= mh_lo)
>>  	break;
>>
>> -      mlow_hi = 0;
>> -      mlow_lo = ml_lo;
>> -      mhigh_hi = 0;
>> -      mhigh_lo = mh_lo;
>> +      mlow = double_int::from_uhwi (ml_lo);
>> +      mhigh = double_int::from_uhwi (mh_lo);
>>      }
>>
>>    *post_shift_ptr = post_shift;
>> @@ -3474,13 +3455,13 @@ choose_multiplier (unsigned HOST_WIDE_IN
>>    if (n < HOST_BITS_PER_WIDE_INT)
>>      {
>>        unsigned HOST_WIDE_INT mask = ((unsigned HOST_WIDE_INT) 1 << n) - 1;
>> -      *multiplier_ptr = mhigh_lo & mask;
>> -      return mhigh_lo >= mask;
>> +      *multiplier_ptr = mhigh.low & mask;
>> +      return mhigh.low >= mask;
>>      }
>>    else
>>      {
>> -      *multiplier_ptr = mhigh_lo;
>> -      return mhigh_hi;
>> +      *multiplier_ptr = mhigh.low;
>> +      return mhigh.high;
>>      }
>>  }
>>
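For context on the choose_multiplier hunk above: the mlow/mhigh pair it computes serves the classic strength reduction of unsigned division by a constant into a multiply and shift. A self-contained sketch of the end result for 32-bit operands, using the `__int128` extension and the multiplier m = floor(2^(32+l)/d) + 1 (which falls in the [mlow, mhigh] range the function derives); an illustration of the technique, not the GCC code path:

```cpp
#include <cassert>
#include <cstdint>

// Division by a constant D via multiply-and-shift: with l = ceil(log2(d))
// and m = floor(2^(32+l)/d) + 1, we have x/d == (x*m) >> (32+l) for all
// 32-bit x, since 2^(32+l) < m*d <= 2^(32+l) + 2^l.
static uint32_t div_by_const (uint32_t x, uint32_t d)
{
  unsigned l = 0;                          // l = ceil (log2 (d))
  while (((uint64_t) 1 << l) < d)
    l++;
  uint64_t m = (uint64_t) (((unsigned __int128) 1 << (32 + l)) / d) + 1;
  return (uint32_t) (((unsigned __int128) x * m) >> (32 + l));
}
```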
>> Index: gcc/emit-rtl.c
>> ===================================================================
>> --- gcc/emit-rtl.c	(revision 191083)
>> +++ gcc/emit-rtl.c	(working copy)
>> @@ -5736,11 +5736,10 @@ init_emit_once (void)
>>        FCONST1(mode).data.high = 0;
>>        FCONST1(mode).data.low = 0;
>>        FCONST1(mode).mode = mode;
>> -      lshift_double (1, 0, GET_MODE_FBIT (mode),
>> -                     HOST_BITS_PER_DOUBLE_INT,
>> -                     &FCONST1(mode).data.low,
>> -		     &FCONST1(mode).data.high,
>> -                     SIGNED_FIXED_POINT_MODE_P (mode));
>> +      FCONST1(mode).data
>> +	= double_int_one.lshift (GET_MODE_FBIT (mode),
>> +				 HOST_BITS_PER_DOUBLE_INT,
>> +				 SIGNED_FIXED_POINT_MODE_P (mode));
>>        const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
>>  				      FCONST1 (mode), mode);
>>      }
>> @@ -5759,11 +5758,10 @@ init_emit_once (void)
>>        FCONST1(mode).data.high = 0;
>>        FCONST1(mode).data.low = 0;
>>        FCONST1(mode).mode = mode;
>> -      lshift_double (1, 0, GET_MODE_FBIT (mode),
>> -                     HOST_BITS_PER_DOUBLE_INT,
>> -                     &FCONST1(mode).data.low,
>> -		     &FCONST1(mode).data.high,
>> -                     SIGNED_FIXED_POINT_MODE_P (mode));
>> +      FCONST1(mode).data
>> +	= double_int_one.lshift (GET_MODE_FBIT (mode),
>> +				 HOST_BITS_PER_DOUBLE_INT,
>> +				 SIGNED_FIXED_POINT_MODE_P (mode));
>>        const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
>>  				      FCONST1 (mode), mode);
>>      }
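The two emit-rtl.c hunks above both build the fixed-point constant 1.0, which in a format with FBIT fractional bits is simply the integer 1 shifted left by FBIT — exactly what `double_int_one.lshift (GET_MODE_FBIT (mode), ...)` now expresses. A trivial sketch (hypothetical helper, not a GCC interface):

```cpp
#include <cassert>
#include <cstdint>

// 1.0 in a fixed-point format with FBIT fractional bits is 1 << FBIT,
// i.e. the integer 1 scaled by 2^FBIT.
static uint64_t fixed_point_one (unsigned fbit)
{
  return (uint64_t) 1 << fbit;
}
```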
>> Index: gcc/simplify-rtx.c
>> ===================================================================
>> --- gcc/simplify-rtx.c	(revision 191083)
>> +++ gcc/simplify-rtx.c	(working copy)
>> @@ -1525,109 +1525,117 @@ simplify_const_unary_operation (enum rtx
>>    else if (width <= HOST_BITS_PER_DOUBLE_INT
>>  	   && (CONST_DOUBLE_AS_INT_P (op) || CONST_INT_P (op)))
>>      {
>> -      unsigned HOST_WIDE_INT l1, lv;
>> -      HOST_WIDE_INT h1, hv;
>> +      double_int first, value;
>>
>>        if (CONST_DOUBLE_AS_INT_P (op))
>> -	l1 = CONST_DOUBLE_LOW (op), h1 = CONST_DOUBLE_HIGH (op);
>> +	first = double_int::from_pair (CONST_DOUBLE_HIGH (op),
>> +				       CONST_DOUBLE_LOW (op));
>>        else
>> -	l1 = INTVAL (op), h1 = HWI_SIGN_EXTEND (l1);
>> +	first = double_int::from_shwi (INTVAL (op));
>>
>>        switch (code)
>>  	{
>>  	case NOT:
>> -	  lv = ~ l1;
>> -	  hv = ~ h1;
>> +	  value = ~first;
>>  	  break;
>>
>>  	case NEG:
>> -	  neg_double (l1, h1, &lv, &hv);
>> +	  value = -first;
>>  	  break;
>>
>>  	case ABS:
>> -	  if (h1 < 0)
>> -	    neg_double (l1, h1, &lv, &hv);
>> +	  if (first.is_negative ())
>> +	    value = -first;
>>  	  else
>> -	    lv = l1, hv = h1;
>> +	    value = first;
>>  	  break;
>>
>>  	case FFS:
>> -	  hv = 0;
>> -	  if (l1 != 0)
>> -	    lv = ffs_hwi (l1);
>> -	  else if (h1 != 0)
>> -	    lv = HOST_BITS_PER_WIDE_INT + ffs_hwi (h1);
>> +	  value.high = 0;
>> +	  if (first.low != 0)
>> +	    value.low = ffs_hwi (first.low);
>> +	  else if (first.high != 0)
>> +	    value.low = HOST_BITS_PER_WIDE_INT + ffs_hwi (first.high);
>>  	  else
>> -	    lv = 0;
>> +	    value.low = 0;
>>  	  break;
>>
>>  	case CLZ:
>> -	  hv = 0;
>> -	  if (h1 != 0)
>> -	    lv = GET_MODE_PRECISION (mode) - floor_log2 (h1) - 1
>> -	      - HOST_BITS_PER_WIDE_INT;
>> -	  else if (l1 != 0)
>> -	    lv = GET_MODE_PRECISION (mode) - floor_log2 (l1) - 1;
>> -	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, lv))
>> -	    lv = GET_MODE_PRECISION (mode);
>> +	  value.high = 0;
>> +	  if (first.high != 0)
>> +	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.high) - 1
>> +	              - HOST_BITS_PER_WIDE_INT;
>> +	  else if (first.low != 0)
>> +	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.low) - 1;
>> +	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
>> +	    value.low = GET_MODE_PRECISION (mode);
>>  	  break;
>>
>>  	case CTZ:
>> -	  hv = 0;
>> -	  if (l1 != 0)
>> -	    lv = ctz_hwi (l1);
>> -	  else if (h1 != 0)
>> -	    lv = HOST_BITS_PER_WIDE_INT + ctz_hwi (h1);
>> -	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, lv))
>> -	    lv = GET_MODE_PRECISION (mode);
>> +	  value.high = 0;
>> +	  if (first.low != 0)
>> +	    value.low = ctz_hwi (first.low);
>> +	  else if (first.high != 0)
>> +	    value.low = HOST_BITS_PER_WIDE_INT + ctz_hwi (first.high);
>> +	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
>> +	    value.low = GET_MODE_PRECISION (mode);
>>  	  break;
>>
>>  	case POPCOUNT:
>> -	  hv = 0;
>> -	  lv = 0;
>> -	  while (l1)
>> -	    lv++, l1 &= l1 - 1;
>> -	  while (h1)
>> -	    lv++, h1 &= h1 - 1;
>> +	  value = double_int_zero;
>> +	  while (first.low)
>> +	    {
>> +	      value.low++;
>> +	      first.low &= first.low - 1;
>> +	    }
>> +	  while (first.high)
>> +	    {
>> +	      value.low++;
>> +	      first.high &= first.high - 1;
>> +	    }
>>  	  break;
>>
>>  	case PARITY:
>> -	  hv = 0;
>> -	  lv = 0;
>> -	  while (l1)
>> -	    lv++, l1 &= l1 - 1;
>> -	  while (h1)
>> -	    lv++, h1 &= h1 - 1;
>> -	  lv &= 1;
>> +	  value = double_int_zero;
>> +	  while (first.low)
>> +	    {
>> +	      value.low++;
>> +	      first.low &= first.low - 1;
>> +	    }
>> +	  while (first.high)
>> +	    {
>> +	      value.low++;
>> +	      first.high &= first.high - 1;
>> +	    }
>> +	  value.low &= 1;
>>  	  break;
>>
>>  	case BSWAP:
>>  	  {
>>  	    unsigned int s;
>>
>> -	    hv = 0;
>> -	    lv = 0;
>> +	    value = double_int_zero;
>>  	    for (s = 0; s < width; s += 8)
>>  	      {
>>  		unsigned int d = width - s - 8;
>>  		unsigned HOST_WIDE_INT byte;
>>
>>  		if (s < HOST_BITS_PER_WIDE_INT)
>> -		  byte = (l1 >> s) & 0xff;
>> +		  byte = (first.low >> s) & 0xff;
>>  		else
>> -		  byte = (h1 >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
>> +		  byte = (first.high >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
>>
>>  		if (d < HOST_BITS_PER_WIDE_INT)
>> -		  lv |= byte << d;
>> +		  value.low |= byte << d;
>>  		else
>> -		  hv |= byte << (d - HOST_BITS_PER_WIDE_INT);
>> +		  value.high |= byte << (d - HOST_BITS_PER_WIDE_INT);
>>  	      }
>>  	  }
>>  	  break;
>>
>>  	case TRUNCATE:
>>  	  /* This is just a change-of-mode, so do nothing.  */
>> -	  lv = l1, hv = h1;
>> +	  value = first;
>>  	  break;
>>
>>  	case ZERO_EXTEND:
>> @@ -1636,8 +1644,7 @@ simplify_const_unary_operation (enum rtx
>>  	  if (op_width > HOST_BITS_PER_WIDE_INT)
>>  	    return 0;
>>
>> -	  hv = 0;
>> -	  lv = l1 & GET_MODE_MASK (op_mode);
>> +	  value = double_int::from_uhwi (first.low & GET_MODE_MASK (op_mode));
>>  	  break;
>>
>>  	case SIGN_EXTEND:
>> @@ -1646,11 +1653,11 @@ simplify_const_unary_operation (enum rtx
>>  	    return 0;
>>  	  else
>>  	    {
>> -	      lv = l1 & GET_MODE_MASK (op_mode);
>> -	      if (val_signbit_known_set_p (op_mode, lv))
>> -		lv |= ~GET_MODE_MASK (op_mode);
>> +	      value.low = first.low & GET_MODE_MASK (op_mode);
>> +	      if (val_signbit_known_set_p (op_mode, value.low))
>> +		value.low |= ~GET_MODE_MASK (op_mode);
>>
>> -	      hv = HWI_SIGN_EXTEND (lv);
>> +	      value.high = HWI_SIGN_EXTEND (value.low);
>>  	    }
>>  	  break;
>>
>> @@ -1661,7 +1668,7 @@ simplify_const_unary_operation (enum rtx
>>  	  return 0;
>>  	}
>>
>> -      return immed_double_const (lv, hv, mode);
>> +      return immed_double_int_const (value, mode);
>>      }
>>
>>    else if (CONST_DOUBLE_AS_FLOAT_P (op)
>> @@ -3578,6 +3585,7 @@ simplify_const_binary_operation (enum rt
>>        && (CONST_DOUBLE_AS_INT_P (op1) || CONST_INT_P (op1)))
>>      {
>>        double_int o0, o1, res, tmp;
>> +      bool overflow;
>>
>>        o0 = rtx_to_double_int (op0);
>>        o1 = rtx_to_double_int (op1);
>> @@ -3599,34 +3607,30 @@ simplify_const_binary_operation (enum rt
>>  	  break;
>>
>>  	case DIV:
>> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
>> -				    o0.low, o0.high, o1.low, o1.high,
>> -				    &res.low, &res.high,
>> -				    &tmp.low, &tmp.high))
>> +          res = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
>> +					 &tmp, &overflow);
>> +	  if (overflow)
>>  	    return 0;
>>  	  break;
>>
>>  	case MOD:
>> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
>> -				    o0.low, o0.high, o1.low, o1.high,
>> -				    &tmp.low, &tmp.high,
>> -				    &res.low, &res.high))
>> +          tmp = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
>> +					 &res, &overflow);
>> +	  if (overflow)
>>  	    return 0;
>>  	  break;
>>
>>  	case UDIV:
>> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
>> -				    o0.low, o0.high, o1.low, o1.high,
>> -				    &res.low, &res.high,
>> -				    &tmp.low, &tmp.high))
>> +          res = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
>> +					 &tmp, &overflow);
>> +	  if (overflow)
>>  	    return 0;
>>  	  break;
>>
>>  	case UMOD:
>> -	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
>> -				    o0.low, o0.high, o1.low, o1.high,
>> -				    &tmp.low, &tmp.high,
>> -				    &res.low, &res.high))
>> +          tmp = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
>> +					 &res, &overflow);
>> +	  if (overflow)
>>  	    return 0;
>>  	  break;
>>
>> Index: gcc/explow.c
>> ===================================================================
>> --- gcc/explow.c	(revision 191083)
>> +++ gcc/explow.c	(working copy)
>> @@ -100,36 +100,33 @@ plus_constant (enum machine_mode mode, r
>>      case CONST_INT:
>>        if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
>>  	{
>> -	  unsigned HOST_WIDE_INT l1 = INTVAL (x);
>> -	  HOST_WIDE_INT h1 = (l1 >> (HOST_BITS_PER_WIDE_INT - 1)) ? -1 : 0;
>> -	  unsigned HOST_WIDE_INT l2 = c;
>> -	  HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
>> -	  unsigned HOST_WIDE_INT lv;
>> -	  HOST_WIDE_INT hv;
>> +	  double_int di_x = double_int::from_shwi (INTVAL (x));
>> +	  double_int di_c = double_int::from_shwi (c);
>>
>> -	  if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
>> +	  bool overflow;
>> +	  double_int v = di_x.add_with_sign (di_c, false, &overflow);
>> +	  if (overflow)
>>  	    gcc_unreachable ();
>>
>> -	  return immed_double_const (lv, hv, VOIDmode);
>> +	  return immed_double_int_const (v, VOIDmode);
>>  	}
>>
>>        return GEN_INT (INTVAL (x) + c);
>>
>>      case CONST_DOUBLE:
>>        {
>> -	unsigned HOST_WIDE_INT l1 = CONST_DOUBLE_LOW (x);
>> -	HOST_WIDE_INT h1 = CONST_DOUBLE_HIGH (x);
>> -	unsigned HOST_WIDE_INT l2 = c;
>> -	HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
>> -	unsigned HOST_WIDE_INT lv;
>> -	HOST_WIDE_INT hv;
>> +	double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
>> +						 CONST_DOUBLE_LOW (x));
>> +	double_int di_c = double_int::from_shwi (c);
>>
>> -	if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
>> +	bool overflow;
>> +	double_int v = di_x.add_with_sign (di_c, false, &overflow);
>> +	if (overflow)
>>  	  /* Sorry, we have no way to represent overflows this wide.
>>  	     To fix, add constant support wider than CONST_DOUBLE.  */
>>  	  gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
>>
>> -	return immed_double_const (lv, hv, VOIDmode);
>> +	return immed_double_int_const (v, VOIDmode);
>>        }
>>
>>      case MEM:
>> Index: gcc/config/sparc/sparc.c
>> ===================================================================
>> --- gcc/config/sparc/sparc.c	(revision 191083)
>> +++ gcc/config/sparc/sparc.c	(working copy)
>> @@ -10113,33 +10113,27 @@ sparc_fold_builtin (tree fndecl, int n_a
>>  	  && TREE_CODE (arg1) == VECTOR_CST
>>  	  && TREE_CODE (arg2) == INTEGER_CST)
>>  	{
>> -	  int overflow = 0;
>> -	  unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (arg2);
>> -	  HOST_WIDE_INT high = TREE_INT_CST_HIGH (arg2);
>> +	  bool overflow = false;
>> +	  double_int di_arg2 = TREE_INT_CST (arg2);
>>  	  unsigned i;
>>
>>  	  for (i = 0; i < VECTOR_CST_NELTS (arg0); ++i)
>>  	    {
>> -	      unsigned HOST_WIDE_INT
>> -		low0 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg0, i)),
>> -		low1 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg1, i));
>> -	      HOST_WIDE_INT
>> -		high0 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg0, i));
>> -	      HOST_WIDE_INT
>> -		high1 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg1, i));
>> +	      double_int e0 = TREE_INT_CST (VECTOR_CST_ELT (arg0, i));
>> +	      double_int e1 = TREE_INT_CST (VECTOR_CST_ELT (arg1, i));
>>
>> -	      unsigned HOST_WIDE_INT l;
>> -	      HOST_WIDE_INT h;
>> +	      bool neg1_ovf, neg2_ovf, add1_ovf, add2_ovf;
>>
>> -	      overflow |= neg_double (low1, high1, &l, &h);
>> -	      overflow |= add_double (low0, high0, l, h, &l, &h);
>> -	      if (h < 0)
>> -		overflow |= neg_double (l, h, &l, &h);
>> +	      double_int tmp = e1.neg_with_overflow (&neg1_ovf);
>> +	      tmp = e0.add_with_sign (tmp, false, &add1_ovf);
>> +	      if (tmp.is_negative ())
>> +		tmp = tmp.neg_with_overflow (&neg2_ovf);
>>
>> -	      overflow |= add_double (low, high, l, h, &low, &high);
>> +	      tmp = di_arg2.add_with_sign (tmp, false, &add2_ovf);
>> +	      overflow |= neg1_ovf | neg2_ovf | add1_ovf | add2_ovf;
>>  	    }
>>
>> -	  gcc_assert (overflow == 0);
>> +	  gcc_assert (!overflow);
>>
>>  	  return build_int_cst_wide (rtype, low, high);
>>  	}
>> Index: gcc/config/avr/avr.c
>> ===================================================================
>> --- gcc/config/avr/avr.c	(revision 191083)
>> +++ gcc/config/avr/avr.c	(working copy)
>> @@ -10518,10 +10518,10 @@ avr_double_int_push_digit (double_int va
>>                             unsigned HOST_WIDE_INT digit)
>>  {
>>    val = 0 == base
>> -    ? double_int_lshift (val, 32, 64, false)
>> -    : double_int_mul (val, uhwi_to_double_int (base));
>> +    ? val.llshift (32, 64)
>> +    : val * double_int::from_uhwi (base);
>>
>> -  return double_int_add (val, uhwi_to_double_int (digit));
>> +  return val + double_int::from_uhwi (digit);
>>  }
>>
>>
>> @@ -10530,7 +10530,7 @@ avr_double_int_push_digit (double_int va
>>  static int
>>  avr_map (double_int f, int x)
>>  {
>> -  return 0xf & double_int_to_uhwi (double_int_rshift (f, 4*x, 64, false));
>> +  return 0xf & f.lrshift (4*x, 64).to_uhwi ();
>>  }
>>
>>
>> @@ -10703,7 +10703,7 @@ avr_map_decompose (double_int f, const a
>>           are mapped to 0 and used operands are reloaded to xop[0].  */
>>
>>        xop[0] = all_regs_rtx[24];
>> -      xop[1] = gen_int_mode (double_int_to_uhwi (f_ginv.map), SImode);
>> +      xop[1] = gen_int_mode (f_ginv.map.to_uhwi (), SImode);
>>        xop[2] = all_regs_rtx[25];
>>        xop[3] = val_used_p ? xop[0] : const0_rtx;
>>
>> @@ -10799,7 +10799,7 @@ avr_out_insert_bits (rtx *op, int *plen)
>>    else if (flag_print_asm_name)
>>      fprintf (asm_out_file,
>>               ASM_COMMENT_START "map = 0x%08" HOST_LONG_FORMAT "x\n",
>> -             double_int_to_uhwi (map) & GET_MODE_MASK (SImode));
>> +             map.to_uhwi () & GET_MODE_MASK (SImode));
>>
>>    /* If MAP has fixed points it might be better to initialize the result
>>       with the bits to be inserted instead of moving all bits by hand.  */

Patch

========================

Index: gcc/java/ChangeLog

2012-09-10  Lawrence Crowl  <crowl@google.com>

	* decl.c (java_init_decl_processing): Change to new double_int API.
	* jcf-parse.c (get_constant): Likewise.
	* boehm.c (mark_reference_fields): Likewise.
	(get_boehm_type_descriptor): Likewise.

Index: gcc/ChangeLog

2012-09-10  Lawrence Crowl  <crowl@google.com>

	* double-int.h (double_int::from_pair): New.
	(double_int::wide_mul_with_sign): New.
	(double_int::neg_with_overflow): New.
	(double_int::divmod_with_overflow): New.
	(shwi_to_double_int): Remove.
	(uhwi_to_double_int): Remove.
	(double_int_to_shwi): Remove.
	(double_int_to_uhwi): Remove.
	(double_int_fits_in_uhwi_p): Remove.
	(double_int_fits_in_shwi_p): Remove.
	(double_int_fits_in_hwi_p): Remove.
	(double_int_mul): Remove.
	(double_int_mul_with_sign): Remove.
	(double_int_add): Remove.
	(double_int_sub): Remove.
	(double_int_neg): Remove.
	(double_int_div): Remove.
	(double_int_sdiv): Remove.
	(double_int_udiv): Remove.
	(double_int_mod): Remove.
	(double_int_smod): Remove.
	(double_int_umod): Remove.
	(double_int_divmod): Remove.
	(double_int_sdivmod): Remove.
	(double_int_udivmod): Remove.
	(double_int_multiple_of): Remove.
	(double_int_setbit): Remove.
	(double_int_ctz): Remove.
	(double_int_not): Remove.
	(double_int_ior): Remove.
	(double_int_and): Remove.
	(double_int_and_not): Remove.
	(double_int_xor): Remove.
	(double_int_lshift): Remove.
	(double_int_rshift): Remove.
	(double_int_lrotate): Remove.
	(double_int_rrotate): Remove.
	(double_int_negative_p): Remove.
	(double_int_cmp): Remove.
	(double_int_scmp): Remove.
	(double_int_ucmp): Remove.
	(double_int_max): Remove.
	(double_int_smax): Remove.
	(double_int_umax): Remove.
	(double_int_min): Remove.
	(double_int_smin): Remove.
	(double_int_umin): Remove.
	(double_int_ext): Remove.
	(double_int_sext): Remove.
	(double_int_zext): Remove.
	(double_int_mask): Remove.
	(double_int_max_value): Remove.
	(double_int_min_value): Remove.
	(double_int_zero_p): Remove.
	(double_int_one_p): Remove.
	(double_int_minus_one_p): Remove.
	(double_int_equal_p): Remove.
	(double_int_popcount): Remove.
	(extern add_double_with_sign): Remove.
	(#define add_double): Remove.
	(extern neg_double): Remove.
	(extern mul_double_with_sign): Remove.
	(extern mul_double_wide_with_sign): Remove.
	(#define mul_double): Remove.
	(extern lshift_double): Remove.
	(extern div_and_round_double): Remove.
	* double-int.c (add_double_with_sign): Make static.
	(#defined add_double): Localized from header.
	(neg_double): Make static.
	(mul_double_with_sign): Make static.
	(mul_double_wide_with_sign): Make static.
	(#defined mul_double): Localized from header.
	(lshift_double): Make static.
	(div_and_round_double): Make static.
	(double_int::wide_mul_with_sign): New.
	(double_int::neg_with_overflow): New.
	(double_int::divmod_with_overflow): New.
	* emit-rtl.c (init_emit_once): Change to new double_int API.
	* explow.c (plus_constant): Likewise.
	* expmed.c (choose_multiplier): Likewise.
	* fold-const.c (int_const_binop_1): Likewise.
	(fold_div_compare): Likewise.
	(maybe_canonicalize_comparison): Likewise.
	(pointer_may_wrap_p): Likewise.
	(fold_negate_const): Likewise.
	(fold_abs_const): Likewise.
	* simplify-rtx.c (simplify_const_unary_operation): Likewise.
	(simplify_const_binary_operation): Likewise.
	* tree-chrec.c (tree_fold_binomial): Likewise.
	* tree-vrp.c (extract_range_from_binary_expr_1): Likewise.
	* config/sparc/sparc.c (sparc_fold_builtin): Likewise.
	* config/avr/avr.c (avr_double_int_push_digit): Likewise.
	(avr_map): Likewise.
	(avr_map_decompose): Likewise.
	(avr_out_insert_bits): Likewise.

Index: gcc/cp/ChangeLog

2012-09-10  Lawrence Crowl  <crowl@google.com>

	* init.c (build_new_1): Change to new double_int API.
	* decl.c (build_enumerator): Likewise.
	* typeck2.c (process_init_constructor_array): Likewise.
	* mangle.c (write_array_type): Likewise.

Index: gcc/fortran/ChangeLog

2012-09-10  Lawrence Crowl  <crowl@google.com>

	* trans-expr.c (gfc_conv_cst_int_power): Change to new double_int API.
	* target-memory.c (gfc_interpret_logical): Likewise.

========================

Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	(revision 191083)
+++ gcc/tree-vrp.c	(working copy)
@@ -2478,7 +2478,7 @@  extract_range_from_binary_expr_1 (value_
 		  if (tmin.cmp (tmax, uns) < 0)
 		    covers = true;
 		  tmax = tem + double_int_minus_one;
-		  if (double_int_cmp (tmax, tem, uns) > 0)
+		  if (tmax.cmp (tem, uns) > 0)
 		    covers = true;
 		  /* If the anti-range would cover nothing, drop to varying.
 		     Likewise if the anti-range bounds are outside of the
@@ -2632,37 +2632,26 @@  extract_range_from_binary_expr_1 (value_
 	    }
 	  uns = uns0 & uns1;

-	  mul_double_wide_with_sign (min0.low, min0.high,
-				     min1.low, min1.high,
-				     &prod0l.low, &prod0l.high,
-				     &prod0h.low, &prod0h.high, true);
+	  bool overflow;
+	  prod0l = min0.wide_mul_with_sign (min1, true, &prod0h, &overflow);
 	  if (!uns0 && min0.is_negative ())
 	    prod0h -= min1;
 	  if (!uns1 && min1.is_negative ())
 	    prod0h -= min0;

-	  mul_double_wide_with_sign (min0.low, min0.high,
-				     max1.low, max1.high,
-				     &prod1l.low, &prod1l.high,
-				     &prod1h.low, &prod1h.high, true);
+	  prod1l = min0.wide_mul_with_sign (max1, true, &prod1h, &overflow);
 	  if (!uns0 && min0.is_negative ())
 	    prod1h -= max1;
 	  if (!uns1 && max1.is_negative ())
 	    prod1h -= min0;

-	  mul_double_wide_with_sign (max0.low, max0.high,
-				     min1.low, min1.high,
-				     &prod2l.low, &prod2l.high,
-				     &prod2h.low, &prod2h.high, true);
+	  prod2l = max0.wide_mul_with_sign (min1, true, &prod2h, &overflow);
 	  if (!uns0 && max0.is_negative ())
 	    prod2h -= min1;
 	  if (!uns1 && min1.is_negative ())
 	    prod2h -= max0;

-	  mul_double_wide_with_sign (max0.low, max0.high,
-				     max1.low, max1.high,
-				     &prod3l.low, &prod3l.high,
-				     &prod3h.low, &prod3h.high, true);
+	  prod3l = max0.wide_mul_with_sign (max1, true, &prod3h, &overflow);
 	  if (!uns0 && max0.is_negative ())
 	    prod3h -= max1;
 	  if (!uns1 && max1.is_negative ())
Index: gcc/java/decl.c
===================================================================
--- gcc/java/decl.c	(revision 191083)
+++ gcc/java/decl.c	(working copy)
@@ -617,7 +617,7 @@  java_init_decl_processing (void)
   decimal_int_max = build_int_cstu (unsigned_int_type_node, 0x80000000);
   decimal_long_max
     = double_int_to_tree (unsigned_long_type_node,
-			  double_int_setbit (double_int_zero, 64));
+			  double_int_zero.set_bit (64));

   long_zero_node = build_int_cst (long_type_node, 0);

Index: gcc/java/jcf-parse.c
===================================================================
--- gcc/java/jcf-parse.c	(revision 191083)
+++ gcc/java/jcf-parse.c	(working copy)
@@ -1043,9 +1043,9 @@  get_constant (JCF *jcf, int index)
 	double_int val;

 	num = JPOOL_UINT (jcf, index);
-	val = double_int_lshift (uhwi_to_double_int (num), 32, 64, false);
+	val = double_int::from_uhwi (num).llshift (32, 64);
 	num = JPOOL_UINT (jcf, index + 1);
-	val = double_int_ior (val, uhwi_to_double_int (num));
+	val |= double_int::from_uhwi (num);

 	value = double_int_to_tree (long_type_node, val);
 	break;
Index: gcc/java/boehm.c
===================================================================
--- gcc/java/boehm.c	(revision 191083)
+++ gcc/java/boehm.c	(working copy)
@@ -108,7 +108,7 @@  mark_reference_fields (tree field,
 	     bits for all words in the record. This is conservative, but the
 	     size_words != 1 case is impossible in regular java code. */
 	  for (i = 0; i < size_words; ++i)
-	    *mask = double_int_setbit (*mask, ubit - count - i - 1);
+	    *mask = (*mask).set_bit (ubit - count - i - 1);

 	  if (count >= ubit - 2)
 	    *pointer_after_end = 1;
@@ -200,7 +200,7 @@  get_boehm_type_descriptor (tree type)
       while (last_set_index)
 	{
 	  if ((last_set_index & 1))
-	    mask = double_int_setbit (mask, log2_size + count);
+	    mask = mask.set_bit (log2_size + count);
 	  last_set_index >>= 1;
 	  ++count;
 	}
@@ -209,7 +209,7 @@  get_boehm_type_descriptor (tree type)
   else if (! pointer_after_end)
     {
       /* Bottom two bits for bitmap mark type are 01.  */
-      mask = double_int_setbit (mask, 0);
+      mask = mask.set_bit (0);
       value = double_int_to_tree (value_type, mask);
     }
   else
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	(revision 191083)
+++ gcc/fold-const.c	(working copy)
@@ -982,12 +982,6 @@  int_const_binop_1 (enum tree_code code,
       break;

     case MINUS_EXPR:
-/* FIXME(crowl) Remove this code if the replacment works.
-      neg_double (op2.low, op2.high, &res.low, &res.high);
-      add_double (op1.low, op1.high, res.low, res.high,
-		  &res.low, &res.high);
-      overflow = OVERFLOW_SUM_SIGN (res.high, op2.high, op1.high);
-*/
       res = op1.add_with_sign (-op2, false, &overflow);
       break;

@@ -1035,10 +1029,7 @@  int_const_binop_1 (enum tree_code code,
 	  res = double_int_one;
 	  break;
 	}
-      overflow = div_and_round_double (code, uns,
-				       op1.low, op1.high, op2.low, op2.high,
-				       &res.low, &res.high,
-				       &tmp.low, &tmp.high);
+      res = op1.divmod_with_overflow (op2, uns, code, &tmp, &overflow);
       break;

     case TRUNC_MOD_EXPR:
@@ -1060,10 +1051,7 @@  int_const_binop_1 (enum tree_code code,
     case ROUND_MOD_EXPR:
       if (op2.is_zero ())
 	return NULL_TREE;
-      overflow = div_and_round_double (code, uns,
-				       op1.low, op1.high, op2.low, op2.high,
-				       &tmp.low, &tmp.high,
-				       &res.low, &res.high);
+      tmp = op1.divmod_with_overflow (op2, uns, code, &res, &overflow);
       break;

     case MIN_EXPR:
@@ -6290,15 +6278,12 @@  fold_div_compare (location_t loc,
   double_int val;
   bool unsigned_p = TYPE_UNSIGNED (TREE_TYPE (arg0));
   bool neg_overflow;
-  int overflow;
+  bool overflow;

   /* We have to do this the hard way to detect unsigned overflow.
      prod = int_const_binop (MULT_EXPR, arg01, arg1);  */
-  overflow = mul_double_with_sign (TREE_INT_CST_LOW (arg01),
-				   TREE_INT_CST_HIGH (arg01),
-				   TREE_INT_CST_LOW (arg1),
-				   TREE_INT_CST_HIGH (arg1),
-				   &val.low, &val.high, unsigned_p);
+  val = TREE_INT_CST (arg01)
+	.mul_with_sign (TREE_INT_CST (arg1), unsigned_p, &overflow);
   prod = force_fit_type_double (TREE_TYPE (arg00), val, -1, overflow);
   neg_overflow = false;

@@ -6309,11 +6294,8 @@  fold_div_compare (location_t loc,
       lo = prod;

       /* Likewise hi = int_const_binop (PLUS_EXPR, prod, tmp).  */
-      overflow = add_double_with_sign (TREE_INT_CST_LOW (prod),
-				       TREE_INT_CST_HIGH (prod),
-				       TREE_INT_CST_LOW (tmp),
-				       TREE_INT_CST_HIGH (tmp),
-				       &val.low, &val.high, unsigned_p);
+      val = TREE_INT_CST (prod)
+	    .add_with_sign (TREE_INT_CST (tmp), unsigned_p, &overflow);
       hi = force_fit_type_double (TREE_TYPE (arg00), val,
 				  -1, overflow | TREE_OVERFLOW (prod));
     }
@@ -8693,8 +8675,7 @@  maybe_canonicalize_comparison (location_
 static bool
 pointer_may_wrap_p (tree base, tree offset, HOST_WIDE_INT bitpos)
 {
-  unsigned HOST_WIDE_INT offset_low, total_low;
-  HOST_WIDE_INT size, offset_high, total_high;
+  double_int di_offset, total;

   if (!POINTER_TYPE_P (TREE_TYPE (base)))
     return true;
@@ -8703,28 +8684,22 @@  pointer_may_wrap_p (tree base, tree offs
     return true;

   if (offset == NULL_TREE)
-    {
-      offset_low = 0;
-      offset_high = 0;
-    }
+    di_offset = double_int_zero;
   else if (TREE_CODE (offset) != INTEGER_CST || TREE_OVERFLOW (offset))
     return true;
   else
-    {
-      offset_low = TREE_INT_CST_LOW (offset);
-      offset_high = TREE_INT_CST_HIGH (offset);
-    }
+    di_offset = TREE_INT_CST (offset);

-  if (add_double_with_sign (offset_low, offset_high,
-			    bitpos / BITS_PER_UNIT, 0,
-			    &total_low, &total_high,
-			    true))
+  bool overflow;
+  double_int units = double_int::from_uhwi (bitpos / BITS_PER_UNIT);
+  total = di_offset.add_with_sign (units, true, &overflow);
+  if (overflow)
     return true;

-  if (total_high != 0)
+  if (total.high != 0)
     return true;

-  size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
+  HOST_WIDE_INT size = int_size_in_bytes (TREE_TYPE (TREE_TYPE (base)));
   if (size <= 0)
     return true;

@@ -8739,7 +8714,7 @@  pointer_may_wrap_p (tree base, tree offs
 	size = base_size;
     }

-  return total_low > (unsigned HOST_WIDE_INT) size;
+  return total.low > (unsigned HOST_WIDE_INT) size;
 }

 /* Subroutine of fold_binary.  This routine performs all of the
@@ -15939,8 +15914,8 @@  fold_negate_const (tree arg0, tree type)
     case INTEGER_CST:
       {
 	double_int val = tree_to_double_int (arg0);
-	int overflow = neg_double (val.low, val.high, &val.low, &val.high);
-
+	bool overflow;
+	val = val.neg_with_overflow (&overflow);
 	t = force_fit_type_double (type, val, 1,
 				   (overflow | TREE_OVERFLOW (arg0))
 				   && !TYPE_UNSIGNED (type));
@@ -15997,9 +15972,8 @@  fold_abs_const (tree arg0, tree type)
 	   its negation.  */
 	else
 	  {
-	    int overflow;
-
-	    overflow = neg_double (val.low, val.high, &val.low, &val.high);
+	    bool overflow;
+	    val = val.neg_with_overflow (&overflow);
 	    t = force_fit_type_double (type, val, -1,
 				       overflow | TREE_OVERFLOW (arg0));
 	  }
Index: gcc/tree-chrec.c
===================================================================
--- gcc/tree-chrec.c	(revision 191083)
+++ gcc/tree-chrec.c	(working copy)
@@ -461,8 +461,8 @@  chrec_fold_multiply (tree type,
 static tree
 tree_fold_binomial (tree type, tree n, unsigned int k)
 {
-  unsigned HOST_WIDE_INT lidx, lnum, ldenom, lres, ldum;
-  HOST_WIDE_INT hidx, hnum, hdenom, hres, hdum;
+  double_int num, denom, idx, di_res;
+  bool overflow;
   unsigned int i;
   tree res;

@@ -472,59 +472,41 @@  tree_fold_binomial (tree type, tree n, u
   if (k == 1)
     return fold_convert (type, n);

+  /* Numerator = n.  */
+  num = TREE_INT_CST (n);
+
   /* Check that k <= n.  */
-  if (TREE_INT_CST_HIGH (n) == 0
-      && TREE_INT_CST_LOW (n) < k)
+  if (num.ult (double_int::from_uhwi (k)))
     return NULL_TREE;

-  /* Numerator = n.  */
-  lnum = TREE_INT_CST_LOW (n);
-  hnum = TREE_INT_CST_HIGH (n);
-
   /* Denominator = 2.  */
-  ldenom = 2;
-  hdenom = 0;
+  denom = double_int::from_uhwi (2);

   /* Index = Numerator-1.  */
-  if (lnum == 0)
-    {
-      hidx = hnum - 1;
-      lidx = ~ (unsigned HOST_WIDE_INT) 0;
-    }
-  else
-    {
-      hidx = hnum;
-      lidx = lnum - 1;
-    }
+  idx = num - double_int_one;

   /* Numerator = Numerator*Index = n*(n-1).  */
-  if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
+  num = num.mul_with_sign (idx, false, &overflow);
+  if (overflow)
     return NULL_TREE;

   for (i = 3; i <= k; i++)
     {
       /* Index--.  */
-      if (lidx == 0)
-	{
-	  hidx--;
-	  lidx = ~ (unsigned HOST_WIDE_INT) 0;
-	}
-      else
-        lidx--;
+      --idx;

       /* Numerator *= Index.  */
-      if (mul_double (lnum, hnum, lidx, hidx, &lnum, &hnum))
+      num = num.mul_with_sign (idx, false, &overflow);
+      if (overflow)
 	return NULL_TREE;

       /* Denominator *= i.  */
-      mul_double (ldenom, hdenom, i, 0, &ldenom, &hdenom);
+      denom *= double_int::from_uhwi (i);
     }

   /* Result = Numerator / Denominator.  */
-  div_and_round_double (EXACT_DIV_EXPR, 1, lnum, hnum, ldenom, hdenom,
-			&lres, &hres, &ldum, &hdum);
-
-  res = build_int_cst_wide (type, lres, hres);
+  di_res = num.div (denom, true, EXACT_DIV_EXPR);
+  res = build_int_cst_wide (type, di_res.low, di_res.high);
   return int_fits_type_p (res, type) ? res : NULL_TREE;
 }

Index: gcc/cp/init.c
===================================================================
--- gcc/cp/init.c	(revision 191083)
+++ gcc/cp/init.c	(working copy)
@@ -2239,11 +2239,11 @@  build_new_1 (VEC(tree,gc) **placement, t
       if (TREE_CONSTANT (inner_nelts_cst)
 	  && TREE_CODE (inner_nelts_cst) == INTEGER_CST)
 	{
-	  double_int result;
-	  if (mul_double (TREE_INT_CST_LOW (inner_nelts_cst),
-			  TREE_INT_CST_HIGH (inner_nelts_cst),
-			  inner_nelts_count.low, inner_nelts_count.high,
-			  &result.low, &result.high))
+	  bool overflow;
+	  double_int result = TREE_INT_CST (inner_nelts_cst)
+			      .mul_with_sign (inner_nelts_count,
+					      false, &overflow);
+	  if (overflow)
 	    {
 	      if (complain & tf_error)
 		error ("integer overflow in array size");
@@ -2345,8 +2345,8 @@  build_new_1 (VEC(tree,gc) **placement, t
       /* Maximum available size in bytes.  Half of the address space
 	 minus the cookie size.  */
       double_int max_size
-	= double_int_lshift (double_int_one, TYPE_PRECISION (sizetype) - 1,
-			     HOST_BITS_PER_DOUBLE_INT, false);
+	= double_int_one.llshift (TYPE_PRECISION (sizetype) - 1,
+				  HOST_BITS_PER_DOUBLE_INT);
       /* Size of the inner array elements. */
       double_int inner_size;
       /* Maximum number of outer elements which can be allocated. */
@@ -2356,22 +2356,21 @@  build_new_1 (VEC(tree,gc) **placement, t
       gcc_assert (TREE_CODE (size) == INTEGER_CST);
       cookie_size = targetm.cxx.get_cookie_size (elt_type);
       gcc_assert (TREE_CODE (cookie_size) == INTEGER_CST);
-      gcc_checking_assert (double_int_ucmp
-			   (TREE_INT_CST (cookie_size), max_size) < 0);
+      gcc_checking_assert (TREE_INT_CST (cookie_size).ult (max_size));
       /* Unconditionally substract the cookie size.  This decreases the
 	 maximum object size and is safe even if we choose not to use
 	 a cookie after all.  */
-      max_size = double_int_sub (max_size, TREE_INT_CST (cookie_size));
-      if (mul_double (TREE_INT_CST_LOW (size), TREE_INT_CST_HIGH (size),
-		      inner_nelts_count.low, inner_nelts_count.high,
-		      &inner_size.low, &inner_size.high)
-	  || double_int_ucmp (inner_size, max_size) > 0)
+      max_size -= TREE_INT_CST (cookie_size);
+      bool overflow;
+      inner_size = TREE_INT_CST (size)
+		   .mul_with_sign (inner_nelts_count, false, &overflow);
+      if (overflow || inner_size.ugt (max_size))
 	{
 	  if (complain & tf_error)
 	    error ("size of array is too large");
 	  return error_mark_node;
 	}
-      max_outer_nelts = double_int_udiv (max_size, inner_size, TRUNC_DIV_EXPR);
+      max_outer_nelts = max_size.udiv (inner_size, TRUNC_DIV_EXPR);
       /* Only keep the top-most seven bits, to simplify encoding the
 	 constant in the instruction stream.  */
       {
@@ -2379,10 +2378,8 @@  build_new_1 (VEC(tree,gc) **placement, t
 	  - (max_outer_nelts.high ? clz_hwi (max_outer_nelts.high)
 	     : (HOST_BITS_PER_WIDE_INT + clz_hwi (max_outer_nelts.low)));
 	max_outer_nelts
-	  = double_int_lshift (double_int_rshift
-			       (max_outer_nelts, shift,
-				HOST_BITS_PER_DOUBLE_INT, false),
-			       shift, HOST_BITS_PER_DOUBLE_INT, false);
+	  = max_outer_nelts.lrshift (shift, HOST_BITS_PER_DOUBLE_INT)
+	    .llshift (shift, HOST_BITS_PER_DOUBLE_INT);
       }
       max_outer_nelts_tree = double_int_to_tree (sizetype, max_outer_nelts);

Index: gcc/cp/decl.c
===================================================================
--- gcc/cp/decl.c	(revision 191083)
+++ gcc/cp/decl.c	(working copy)
@@ -12448,8 +12448,6 @@  build_enumerator (tree name, tree value,
 	{
 	  if (TYPE_VALUES (enumtype))
 	    {
-	      HOST_WIDE_INT hi;
-	      unsigned HOST_WIDE_INT lo;
 	      tree prev_value;
 	      bool overflowed;

@@ -12465,15 +12463,13 @@  build_enumerator (tree name, tree value,
 		value = error_mark_node;
 	      else
 		{
-		  overflowed = add_double (TREE_INT_CST_LOW (prev_value),
-					   TREE_INT_CST_HIGH (prev_value),
-					   1, 0, &lo, &hi);
+		  double_int di = TREE_INT_CST (prev_value)
+				  .add_with_sign (double_int_one,
+						  false, &overflowed);
 		  if (!overflowed)
 		    {
-		      double_int di;
 		      tree type = TREE_TYPE (prev_value);
-		      bool pos = (TYPE_UNSIGNED (type) || hi >= 0);
-		      di.low = lo; di.high = hi;
+		      bool pos = TYPE_UNSIGNED (type) || !di.is_negative ();
 		      if (!double_int_fits_to_tree_p (type, di))
 			{
 			  unsigned int itk;
Index: gcc/cp/typeck2.c
===================================================================
--- gcc/cp/typeck2.c	(revision 191083)
+++ gcc/cp/typeck2.c	(working copy)
@@ -1055,14 +1055,12 @@  process_init_constructor_array (tree typ
     {
       tree domain = TYPE_DOMAIN (type);
       if (domain)
-	len = double_int_ext
-	        (double_int_add
-		  (double_int_sub
-		    (tree_to_double_int (TYPE_MAX_VALUE (domain)),
-		     tree_to_double_int (TYPE_MIN_VALUE (domain))),
-		    double_int_one),
-		  TYPE_PRECISION (TREE_TYPE (domain)),
-		  TYPE_UNSIGNED (TREE_TYPE (domain))).low;
+	len = (tree_to_double_int (TYPE_MAX_VALUE (domain))
+	       - tree_to_double_int (TYPE_MIN_VALUE (domain))
+	       + double_int_one)
+	      .ext (TYPE_PRECISION (TREE_TYPE (domain)),
+		    TYPE_UNSIGNED (TREE_TYPE (domain)))
+	      .low;
       else
 	unbounded = true;  /* Take as many as there are.  */
     }
Index: gcc/cp/mangle.c
===================================================================
--- gcc/cp/mangle.c	(revision 191083)
+++ gcc/cp/mangle.c	(working copy)
@@ -3119,12 +3119,11 @@  write_array_type (const tree type)
 	{
 	  /* The ABI specifies that we should mangle the number of
 	     elements in the array, not the largest allowed index.  */
-	  double_int dmax
-	    = double_int_add (tree_to_double_int (max), double_int_one);
+	  double_int dmax = tree_to_double_int (max) + double_int_one;
 	  /* Truncate the result - this will mangle [0, SIZE_INT_MAX]
 	     number of elements as zero.  */
-	  dmax = double_int_zext (dmax, TYPE_PRECISION (TREE_TYPE (max)));
-	  gcc_assert (double_int_fits_in_uhwi_p (dmax));
+	  dmax = dmax.zext (TYPE_PRECISION (TREE_TYPE (max)));
+	  gcc_assert (dmax.fits_uhwi ());
 	  write_unsigned_number (dmax.low);
 	}
       else
Index: gcc/double-int.c
===================================================================
--- gcc/double-int.c	(revision 191083)
+++ gcc/double-int.c	(working copy)
@@ -23,6 +23,41 @@  along with GCC; see the file COPYING3.
 #include "tm.h"			/* For SHIFT_COUNT_TRUNCATED.  */
 #include "tree.h"

+static int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
+				 bool);
+
+#define add_double(l1,h1,l2,h2,lv,hv) \
+  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
+
+static int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
+
+static int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
+				 bool);
+
+static int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
+				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
+				      bool);
+
+#define mul_double(l1,h1,l2,h2,lv,hv) \
+  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
+
+static void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
+			   HOST_WIDE_INT, unsigned int,
+			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
+
+static int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
+				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
+				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
+				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
+				 HOST_WIDE_INT *);
+
 /* We know that A1 + B1 = SUM1, using 2's complement arithmetic and ignoring
    overflow.  Suppose A, B and SUM have the same respective signs as A1, B1,
    and SUM1.  Then this yields nonzero if overflow occurred during the
@@ -75,7 +110,7 @@  decode (HOST_WIDE_INT *words, unsigned H
    One argument is L1 and H1; the other, L2 and H2.
    The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */

-int
+static int
 add_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
 		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
 		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
@@ -105,7 +140,7 @@  add_double_with_sign (unsigned HOST_WIDE
    The argument is given as two `HOST_WIDE_INT' pieces in L1 and H1.
    The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */

-int
+static int
 neg_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
 	    unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv)
 {
@@ -129,7 +164,7 @@  neg_double (unsigned HOST_WIDE_INT l1, H
    One argument is L1 and H1; the other, L2 and H2.
    The value is stored as two `HOST_WIDE_INT' pieces in *LV and *HV.  */

-int
+static int
 mul_double_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
 		      unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
 		      unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
@@ -143,7 +178,7 @@  mul_double_with_sign (unsigned HOST_WIDE
 				    unsigned_p);
 }

-int
+static int
 mul_double_wide_with_sign (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
 			   unsigned HOST_WIDE_INT l2, HOST_WIDE_INT h2,
 			   unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv,
@@ -269,7 +304,7 @@  rshift_double (unsigned HOST_WIDE_INT l1
    ARITH nonzero specifies arithmetic shifting; otherwise use logical shift.
    Store the value as two `HOST_WIDE_INT' pieces in *LV and *HV.  */

-void
+static void
 lshift_double (unsigned HOST_WIDE_INT l1, HOST_WIDE_INT h1,
 	       HOST_WIDE_INT count, unsigned int prec,
 	       unsigned HOST_WIDE_INT *lv, HOST_WIDE_INT *hv, bool arith)
@@ -335,7 +370,7 @@  lshift_double (unsigned HOST_WIDE_INT l1
    Return nonzero if the operation overflows.
    UNS nonzero says do unsigned division.  */

-int
+static int
 div_and_round_double (unsigned code, int uns,
 		      /* num == numerator == dividend */
 		      unsigned HOST_WIDE_INT lnum_orig,
@@ -762,6 +797,19 @@  double_int::mul_with_sign (double_int b,
   return ret;
 }

+double_int
+double_int::wide_mul_with_sign (double_int b, bool unsigned_p,
+				double_int *higher, bool *overflow) const
+
+{
+  double_int lower;
+  *overflow = mul_double_wide_with_sign (low, high, b.low, b.high,
+					 &lower.low, &lower.high,
+					 &higher->low, &higher->high,
+					 unsigned_p);
+  return lower;
+}
+
 /* Returns A + B.  */

 double_int
@@ -809,12 +857,33 @@  double_int::operator - () const
   return ret;
 }

+double_int
+double_int::neg_with_overflow (bool *overflow) const
+{
+  double_int ret;
+  *overflow = neg_double (low, high, &ret.low, &ret.high);
+  return ret;
+}
+
 /* Returns A / B (computed as unsigned depending on UNS, and rounded as
    specified by CODE).  CODE is enum tree_code in fact, but double_int.h
    must be included before tree.h.  The remainder after the division is
    stored to MOD.  */

 double_int
+double_int::divmod_with_overflow (double_int b, bool uns, unsigned code,
+				  double_int *mod, bool *overflow) const
+{
+  const double_int &a = *this;
+  double_int ret;
+
+  *overflow = div_and_round_double (code, uns, a.low, a.high,
+				    b.low, b.high, &ret.low, &ret.high,
+				    &mod->low, &mod->high);
+  return ret;
+}
+
+double_int
 double_int::divmod (double_int b, bool uns, unsigned code,
 		    double_int *mod) const
 {
Index: gcc/double-int.h
===================================================================
--- gcc/double-int.h	(revision 191083)
+++ gcc/double-int.h	(working copy)
@@ -61,6 +61,7 @@  struct double_int

   static double_int from_uhwi (unsigned HOST_WIDE_INT cst);
   static double_int from_shwi (HOST_WIDE_INT cst);
+  static double_int from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low);

   /* No copy assignment operator or destructor to keep the type a POD.  */

@@ -105,9 +106,16 @@  struct double_int

   /* Arithmetic operation functions.  */

+  /* The following operations perform arithmetic modulo 2^precision, so you
+     do not need to call .ext between them, even if you are representing
+     numbers with precision less than HOST_BITS_PER_DOUBLE_INT bits.  */
+
   double_int set_bit (unsigned) const;
   double_int mul_with_sign (double_int, bool unsigned_p, bool *overflow) const;
+  double_int wide_mul_with_sign (double_int, bool unsigned_p,
+				 double_int *higher, bool *overflow) const;
   double_int add_with_sign (double_int, bool unsigned_p, bool *overflow) const;
+  double_int neg_with_overflow (bool *overflow) const;

   double_int operator * (double_int) const;
   double_int operator + (double_int) const;
@@ -131,12 +139,15 @@  struct double_int
   /* You must ensure that double_int::ext is called on the operands
      of the following operations, if the precision of the numbers
      is less than HOST_BITS_PER_DOUBLE_INT bits.  */
+
   double_int div (double_int, bool, unsigned) const;
   double_int sdiv (double_int, unsigned) const;
   double_int udiv (double_int, unsigned) const;
   double_int mod (double_int, bool, unsigned) const;
   double_int smod (double_int, unsigned) const;
   double_int umod (double_int, unsigned) const;
+  double_int divmod_with_overflow (double_int, bool, unsigned,
+				   double_int *, bool *) const;
   double_int divmod (double_int, bool, unsigned, double_int *) const;
   double_int sdivmod (double_int, unsigned, double_int *) const;
   double_int udivmod (double_int, unsigned, double_int *) const;
@@ -199,13 +210,6 @@  double_int::from_shwi (HOST_WIDE_INT cst
   return r;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-shwi_to_double_int (HOST_WIDE_INT cst)
-{
-  return double_int::from_shwi (cst);
-}
-
 /* Some useful constants.  */
 /* FIXME(crowl): Maybe remove after converting callers?
    The problem is that a named constant would not be as optimizable,
@@ -229,11 +233,13 @@  double_int::from_uhwi (unsigned HOST_WID
   return r;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-uhwi_to_double_int (unsigned HOST_WIDE_INT cst)
+inline double_int
+double_int::from_pair (HOST_WIDE_INT high, unsigned HOST_WIDE_INT low)
 {
-  return double_int::from_uhwi (cst);
+  double_int r;
+  r.low = low;
+  r.high = high;
+  return r;
 }

 inline double_int &
@@ -301,13 +307,6 @@  double_int::to_shwi () const
   return (HOST_WIDE_INT) low;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline HOST_WIDE_INT
-double_int_to_shwi (double_int cst)
-{
-  return cst.to_shwi ();
-}
-
 /* Returns value of CST as an unsigned number.  CST must satisfy
    double_int::fits_unsigned.  */

@@ -317,13 +316,6 @@  double_int::to_uhwi () const
   return low;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline unsigned HOST_WIDE_INT
-double_int_to_uhwi (double_int cst)
-{
-  return cst.to_uhwi ();
-}
-
 /* Returns true if CST fits in unsigned HOST_WIDE_INT.  */

 inline bool
@@ -332,164 +324,6 @@  double_int::fits_uhwi () const
   return high == 0;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline bool
-double_int_fits_in_uhwi_p (double_int cst)
-{
-  return cst.fits_uhwi ();
-}
-
-/* Returns true if CST fits in signed HOST_WIDE_INT.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline bool
-double_int_fits_in_shwi_p (double_int cst)
-{
-  return cst.fits_shwi ();
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline bool
-double_int_fits_in_hwi_p (double_int cst, bool uns)
-{
-  return cst.fits_hwi (uns);
-}
-
-/* The following operations perform arithmetics modulo 2^precision,
-   so you do not need to call double_int_ext between them, even if
-   you are representing numbers with precision less than
-   HOST_BITS_PER_DOUBLE_INT bits.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_mul (double_int a, double_int b)
-{
-  return a * b;
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_mul_with_sign (double_int a, double_int b,
-			  bool unsigned_p, int *overflow)
-{
-  bool ovf;
-  return a.mul_with_sign (b, unsigned_p, &ovf);
-  *overflow = ovf;
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_add (double_int a, double_int b)
-{
-  return a + b;
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_sub (double_int a, double_int b)
-{
-  return a - b;
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_neg (double_int a)
-{
-  return -a;
-}
-
-/* You must ensure that double_int_ext is called on the operands
-   of the following operations, if the precision of the numbers
-   is less than HOST_BITS_PER_DOUBLE_INT bits.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_div (double_int a, double_int b, bool uns, unsigned code)
-{
-  return a.div (b, uns, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_sdiv (double_int a, double_int b, unsigned code)
-{
-  return a.sdiv (b, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_udiv (double_int a, double_int b, unsigned code)
-{
-  return a.udiv (b, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_mod (double_int a, double_int b, bool uns, unsigned code)
-{
-  return a.mod (b, uns, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_smod (double_int a, double_int b, unsigned code)
-{
-  return a.smod (b, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_umod (double_int a, double_int b, unsigned code)
-{
-  return a.umod (b, code);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_divmod (double_int a, double_int b, bool uns,
-		   unsigned code, double_int *mod)
-{
-  return a.divmod (b, uns, code, mod);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_sdivmod (double_int a, double_int b, unsigned code, double_int *mod)
-{
-  return a.sdivmod (b, code, mod);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_udivmod (double_int a, double_int b, unsigned code, double_int *mod)
-{
-  return a.udivmod (b, code, mod);
-}
-
-/***/
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline bool
-double_int_multiple_of (double_int product, double_int factor,
-                        bool unsigned_p, double_int *multiple)
-{
-  return product.multiple_of (factor, unsigned_p, multiple);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_setbit (double_int a, unsigned bitpos)
-{
-  return a.set_bit (bitpos);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline int
-double_int_ctz (double_int a)
-{
-  return a.trailing_zeros ();
-}
-
 /* Logical operations.  */

 /* Returns ~A.  */
@@ -503,13 +337,6 @@  double_int::operator ~ () const
   return result;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-double_int_not (double_int a)
-{
-  return ~a;
-}
-
 /* Returns A | B.  */

 inline double_int
@@ -521,13 +348,6 @@  double_int::operator | (double_int b) co
   return result;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-double_int_ior (double_int a, double_int b)
-{
-  return a | b;
-}
-
 /* Returns A & B.  */

 inline double_int
@@ -539,13 +359,6 @@  double_int::operator & (double_int b) co
   return result;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-double_int_and (double_int a, double_int b)
-{
-  return a & b;
-}
-
 /* Returns A & ~B.  */

 inline double_int
@@ -557,13 +370,6 @@  double_int::and_not (double_int b) const
   return result;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-double_int_and_not (double_int a, double_int b)
-{
-  return a.and_not (b);
-}
-
 /* Returns A ^ B.  */

 inline double_int
@@ -575,165 +381,8 @@  double_int::operator ^ (double_int b) co
   return result;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline double_int
-double_int_xor (double_int a, double_int b)
-{
-  return a ^ b;
-}
-
-
-/* Shift operations.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_lshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
-		   bool arith)
-{
-  return a.lshift (count, prec, arith);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_rshift (double_int a, HOST_WIDE_INT count, unsigned int prec,
-		   bool arith)
-{
-  return a.rshift (count, prec, arith);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_lrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
-{
-  return a.lrotate (count, prec);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_rrotate (double_int a, HOST_WIDE_INT count, unsigned int prec)
-{
-  return a.rrotate (count, prec);
-}
-
-/* Returns true if CST is negative.  Of course, CST is considered to
-   be signed.  */
-
-static inline bool
-double_int_negative_p (double_int cst)
-{
-  return cst.high < 0;
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline int
-double_int_cmp (double_int a, double_int b, bool uns)
-{
-  return a.cmp (b, uns);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline int
-double_int_scmp (double_int a, double_int b)
-{
-  return a.scmp (b);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline int
-double_int_ucmp (double_int a, double_int b)
-{
-  return a.ucmp (b);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_max (double_int a, double_int b, bool uns)
-{
-  return a.max (b, uns);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_smax (double_int a, double_int b)
-{
-  return a.smax (b);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_umax (double_int a, double_int b)
-{
-  return a.umax (b);
-}
-
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_min (double_int a, double_int b, bool uns)
-{
-  return a.min (b, uns);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_smin (double_int a, double_int b)
-{
-  return a.smin (b);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_umin (double_int a, double_int b)
-{
-  return a.umin (b);
-}
-
 void dump_double_int (FILE *, double_int, bool);

-/* Zero and sign extension of numbers in smaller precisions.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_ext (double_int a, unsigned prec, bool uns)
-{
-  return a.ext (prec, uns);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_sext (double_int a, unsigned prec)
-{
-  return a.sext (prec);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_zext (double_int a, unsigned prec)
-{
-  return a.zext (prec);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_mask (unsigned prec)
-{
-  return double_int::mask (prec);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_max_value (unsigned int prec, bool uns)
-{
-  return double_int::max_value (prec, uns);
-}
-
-/* FIXME(crowl): Remove after converting callers.  */
-inline double_int
-double_int_min_value (unsigned int prec, bool uns)
-{
-  return double_int::min_value (prec, uns);
-}
-
 #define ALL_ONES (~((unsigned HOST_WIDE_INT) 0))

 /* The operands of the following comparison functions must be processed
@@ -748,13 +397,6 @@  double_int::is_zero () const
   return low == 0 && high == 0;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline bool
-double_int_zero_p (double_int cst)
-{
-  return cst.is_zero ();
-}
-
 /* Returns true if CST is one.  */

 inline bool
@@ -763,13 +405,6 @@  double_int::is_one () const
   return low == 1 && high == 0;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline bool
-double_int_one_p (double_int cst)
-{
-  return cst.is_one ();
-}
-
 /* Returns true if CST is minus one.  */

 inline bool
@@ -778,13 +413,6 @@  double_int::is_minus_one () const
   return low == ALL_ONES && high == -1;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline bool
-double_int_minus_one_p (double_int cst)
-{
-  return cst.is_minus_one ();
-}
-
 /* Returns true if CST is negative.  */

 inline bool
@@ -801,13 +429,6 @@  double_int::operator == (double_int cst2
   return low == cst2.low && high == cst2.high;
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline bool
-double_int_equal_p (double_int cst1, double_int cst2)
-{
-  return cst1 == cst2;
-}
-
 /* Returns true if CST1 != CST2.  */

 inline bool
@@ -824,52 +445,6 @@  double_int::popcount () const
   return popcount_hwi (high) + popcount_hwi (low);
 }

-/* FIXME(crowl): Remove after converting callers.  */
-static inline int
-double_int_popcount (double_int cst)
-{
-  return cst.popcount ();
-}
-
-
-/* Legacy interface with decomposed high/low parts.  */
-
-/* FIXME(crowl): Remove after converting callers.  */
-extern int add_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
-				 bool);
-/* FIXME(crowl): Remove after converting callers.  */
-#define add_double(l1,h1,l2,h2,lv,hv) \
-  add_double_with_sign (l1, h1, l2, h2, lv, hv, false)
-/* FIXME(crowl): Remove after converting callers.  */
-extern int neg_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-		       unsigned HOST_WIDE_INT *, HOST_WIDE_INT *);
-/* FIXME(crowl): Remove after converting callers.  */
-extern int mul_double_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				 unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				 unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
-				 bool);
-/* FIXME(crowl): Remove after converting callers.  */
-extern int mul_double_wide_with_sign (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				      unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
-				      unsigned HOST_WIDE_INT *, HOST_WIDE_INT *,
-				      bool);
-/* FIXME(crowl): Remove after converting callers.  */
-#define mul_double(l1,h1,l2,h2,lv,hv) \
-  mul_double_with_sign (l1, h1, l2, h2, lv, hv, false)
-/* FIXME(crowl): Remove after converting callers.  */
-extern void lshift_double (unsigned HOST_WIDE_INT, HOST_WIDE_INT,
-			   HOST_WIDE_INT, unsigned int,
-			   unsigned HOST_WIDE_INT *, HOST_WIDE_INT *, bool);
-/* FIXME(crowl): Remove after converting callers.  */
-extern int div_and_round_double (unsigned, int, unsigned HOST_WIDE_INT,
-				 HOST_WIDE_INT, unsigned HOST_WIDE_INT,
-				 HOST_WIDE_INT, unsigned HOST_WIDE_INT *,
-				 HOST_WIDE_INT *, unsigned HOST_WIDE_INT *,
-				 HOST_WIDE_INT *);
-

 #ifndef GENERATOR_FILE
 /* Conversion to and from GMP integer representations.  */
Index: gcc/fortran/trans-expr.c
===================================================================
--- gcc/fortran/trans-expr.c	(revision 191083)
+++ gcc/fortran/trans-expr.c	(working copy)
@@ -1657,10 +1657,10 @@  gfc_conv_cst_int_power (gfc_se * se, tre

   /* If exponent is too large, we won't expand it anyway, so don't bother
      with large integer values.  */
-  if (!double_int_fits_in_shwi_p (TREE_INT_CST (rhs)))
+  if (!TREE_INT_CST (rhs).fits_shwi ())
     return 0;

-  m = double_int_to_shwi (TREE_INT_CST (rhs));
+  m = TREE_INT_CST (rhs).to_shwi ();
   /* There's no ABS for HOST_WIDE_INT, so here we go. It also takes care
      of the asymmetric range of the integer type.  */
   n = (unsigned HOST_WIDE_INT) (m < 0 ? -m : m);
Index: gcc/fortran/target-memory.c
===================================================================
--- gcc/fortran/target-memory.c	(revision 191083)
+++ gcc/fortran/target-memory.c	(working copy)
@@ -395,8 +395,7 @@  gfc_interpret_logical (int kind, unsigne
 {
   tree t = native_interpret_expr (gfc_get_logical_type (kind), buffer,
 				  buffer_size);
-  *logical = double_int_zero_p (tree_to_double_int (t))
-	     ? 0 : 1;
+  *logical = tree_to_double_int (t).is_zero () ? 0 : 1;
   return size_logical (kind);
 }

Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 191083)
+++ gcc/expmed.c	(working copy)
@@ -3404,12 +3404,9 @@  choose_multiplier (unsigned HOST_WIDE_IN
 		   unsigned HOST_WIDE_INT *multiplier_ptr,
 		   int *post_shift_ptr, int *lgup_ptr)
 {
-  HOST_WIDE_INT mhigh_hi, mlow_hi;
-  unsigned HOST_WIDE_INT mhigh_lo, mlow_lo;
+  double_int mhigh, mlow;
   int lgup, post_shift;
   int pow, pow2;
-  unsigned HOST_WIDE_INT nl, dummy1;
-  HOST_WIDE_INT nh, dummy2;

   /* lgup = ceil(log2(divisor)); */
   lgup = ceil_log2 (d);
@@ -3425,32 +3422,17 @@  choose_multiplier (unsigned HOST_WIDE_IN
   gcc_assert (pow != HOST_BITS_PER_DOUBLE_INT);

   /* mlow = 2^(N + lgup)/d */
-  if (pow >= HOST_BITS_PER_WIDE_INT)
-    {
-      nh = (HOST_WIDE_INT) 1 << (pow - HOST_BITS_PER_WIDE_INT);
-      nl = 0;
-    }
-  else
-    {
-      nh = 0;
-      nl = (unsigned HOST_WIDE_INT) 1 << pow;
-    }
-  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
-			&mlow_lo, &mlow_hi, &dummy1, &dummy2);
+  double_int val = double_int_zero.set_bit (pow);
+  mlow = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);

-  /* mhigh = (2^(N + lgup) + 2^N + lgup - precision)/d */
-  if (pow2 >= HOST_BITS_PER_WIDE_INT)
-    nh |= (HOST_WIDE_INT) 1 << (pow2 - HOST_BITS_PER_WIDE_INT);
-  else
-    nl |= (unsigned HOST_WIDE_INT) 1 << pow2;
-  div_and_round_double (TRUNC_DIV_EXPR, 1, nl, nh, d, (HOST_WIDE_INT) 0,
-			&mhigh_lo, &mhigh_hi, &dummy1, &dummy2);
+  /* mhigh = (2^(N + lgup) + 2^(N + lgup - precision))/d */
+  val |= double_int_zero.set_bit (pow2);
+  mhigh = val.div (double_int::from_uhwi (d), true, TRUNC_DIV_EXPR);

-  gcc_assert (!mhigh_hi || nh - d < d);
-  gcc_assert (mhigh_hi <= 1 && mlow_hi <= 1);
+  gcc_assert (!mhigh.high || val.high - d < d);
+  gcc_assert (mhigh.high <= 1 && mlow.high <= 1);
   /* Assert that mlow < mhigh.  */
-  gcc_assert (mlow_hi < mhigh_hi
-	      || (mlow_hi == mhigh_hi && mlow_lo < mhigh_lo));
+  gcc_assert (mlow.ult (mhigh));

   /* If precision == N, then mlow, mhigh exceed 2^N
      (but they do not exceed 2^(N+1)).  */
@@ -3458,15 +3440,14 @@  choose_multiplier (unsigned HOST_WIDE_IN
   /* Reduce to lowest terms.  */
   for (post_shift = lgup; post_shift > 0; post_shift--)
     {
-      unsigned HOST_WIDE_INT ml_lo = (mlow_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mlow_lo >> 1);
-      unsigned HOST_WIDE_INT mh_lo = (mhigh_hi << (HOST_BITS_PER_WIDE_INT - 1)) | (mhigh_lo >> 1);
+      int shft = HOST_BITS_PER_WIDE_INT - 1;
+      unsigned HOST_WIDE_INT ml_lo = (mlow.high << shft) | (mlow.low >> 1);
+      unsigned HOST_WIDE_INT mh_lo = (mhigh.high << shft) | (mhigh.low >> 1);
       if (ml_lo >= mh_lo)
 	break;

-      mlow_hi = 0;
-      mlow_lo = ml_lo;
-      mhigh_hi = 0;
-      mhigh_lo = mh_lo;
+      mlow = double_int::from_uhwi (ml_lo);
+      mhigh = double_int::from_uhwi (mh_lo);
     }

   *post_shift_ptr = post_shift;
@@ -3474,13 +3455,13 @@  choose_multiplier (unsigned HOST_WIDE_IN
   if (n < HOST_BITS_PER_WIDE_INT)
     {
       unsigned HOST_WIDE_INT mask = ((unsigned HOST_WIDE_INT) 1 << n) - 1;
-      *multiplier_ptr = mhigh_lo & mask;
-      return mhigh_lo >= mask;
+      *multiplier_ptr = mhigh.low & mask;
+      return mhigh.low >= mask;
     }
   else
     {
-      *multiplier_ptr = mhigh_lo;
-      return mhigh_hi;
+      *multiplier_ptr = mhigh.low;
+      return mhigh.high;
     }
 }

Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c	(revision 191083)
+++ gcc/emit-rtl.c	(working copy)
@@ -5736,11 +5736,10 @@  init_emit_once (void)
       FCONST1(mode).data.high = 0;
       FCONST1(mode).data.low = 0;
       FCONST1(mode).mode = mode;
-      lshift_double (1, 0, GET_MODE_FBIT (mode),
-                     HOST_BITS_PER_DOUBLE_INT,
-                     &FCONST1(mode).data.low,
-		     &FCONST1(mode).data.high,
-                     SIGNED_FIXED_POINT_MODE_P (mode));
+      FCONST1(mode).data
+	= double_int_one.lshift (GET_MODE_FBIT (mode),
+				 HOST_BITS_PER_DOUBLE_INT,
+				 SIGNED_FIXED_POINT_MODE_P (mode));
       const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
 				      FCONST1 (mode), mode);
     }
@@ -5759,11 +5758,10 @@  init_emit_once (void)
       FCONST1(mode).data.high = 0;
       FCONST1(mode).data.low = 0;
       FCONST1(mode).mode = mode;
-      lshift_double (1, 0, GET_MODE_FBIT (mode),
-                     HOST_BITS_PER_DOUBLE_INT,
-                     &FCONST1(mode).data.low,
-		     &FCONST1(mode).data.high,
-                     SIGNED_FIXED_POINT_MODE_P (mode));
+      FCONST1(mode).data
+	= double_int_one.lshift (GET_MODE_FBIT (mode),
+				 HOST_BITS_PER_DOUBLE_INT,
+				 SIGNED_FIXED_POINT_MODE_P (mode));
       const_tiny_rtx[1][(int) mode] = CONST_FIXED_FROM_FIXED_VALUE (
 				      FCONST1 (mode), mode);
     }
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	(revision 191083)
+++ gcc/simplify-rtx.c	(working copy)
@@ -1525,109 +1525,117 @@  simplify_const_unary_operation (enum rtx
   else if (width <= HOST_BITS_PER_DOUBLE_INT
 	   && (CONST_DOUBLE_AS_INT_P (op) || CONST_INT_P (op)))
     {
-      unsigned HOST_WIDE_INT l1, lv;
-      HOST_WIDE_INT h1, hv;
+      double_int first, value;

       if (CONST_DOUBLE_AS_INT_P (op))
-	l1 = CONST_DOUBLE_LOW (op), h1 = CONST_DOUBLE_HIGH (op);
+	first = double_int::from_pair (CONST_DOUBLE_HIGH (op),
+				       CONST_DOUBLE_LOW (op));
       else
-	l1 = INTVAL (op), h1 = HWI_SIGN_EXTEND (l1);
+	first = double_int::from_shwi (INTVAL (op));

       switch (code)
 	{
 	case NOT:
-	  lv = ~ l1;
-	  hv = ~ h1;
+	  value = ~first;
 	  break;

 	case NEG:
-	  neg_double (l1, h1, &lv, &hv);
+	  value = -first;
 	  break;

 	case ABS:
-	  if (h1 < 0)
-	    neg_double (l1, h1, &lv, &hv);
+	  if (first.is_negative ())
+	    value = -first;
 	  else
-	    lv = l1, hv = h1;
+	    value = first;
 	  break;

 	case FFS:
-	  hv = 0;
-	  if (l1 != 0)
-	    lv = ffs_hwi (l1);
-	  else if (h1 != 0)
-	    lv = HOST_BITS_PER_WIDE_INT + ffs_hwi (h1);
+	  value.high = 0;
+	  if (first.low != 0)
+	    value.low = ffs_hwi (first.low);
+	  else if (first.high != 0)
+	    value.low = HOST_BITS_PER_WIDE_INT + ffs_hwi (first.high);
 	  else
-	    lv = 0;
+	    value.low = 0;
 	  break;

 	case CLZ:
-	  hv = 0;
-	  if (h1 != 0)
-	    lv = GET_MODE_PRECISION (mode) - floor_log2 (h1) - 1
-	      - HOST_BITS_PER_WIDE_INT;
-	  else if (l1 != 0)
-	    lv = GET_MODE_PRECISION (mode) - floor_log2 (l1) - 1;
-	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, lv))
-	    lv = GET_MODE_PRECISION (mode);
+	  value.high = 0;
+	  if (first.high != 0)
+	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.high) - 1
+	              - HOST_BITS_PER_WIDE_INT;
+	  else if (first.low != 0)
+	    value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.low) - 1;
+	  else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
+	    value.low = GET_MODE_PRECISION (mode);
 	  break;

 	case CTZ:
-	  hv = 0;
-	  if (l1 != 0)
-	    lv = ctz_hwi (l1);
-	  else if (h1 != 0)
-	    lv = HOST_BITS_PER_WIDE_INT + ctz_hwi (h1);
-	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, lv))
-	    lv = GET_MODE_PRECISION (mode);
+	  value.high = 0;
+	  if (first.low != 0)
+	    value.low = ctz_hwi (first.low);
+	  else if (first.high != 0)
+	    value.low = HOST_BITS_PER_WIDE_INT + ctz_hwi (first.high);
+	  else if (! CTZ_DEFINED_VALUE_AT_ZERO (mode, value.low))
+	    value.low = GET_MODE_PRECISION (mode);
 	  break;

 	case POPCOUNT:
-	  hv = 0;
-	  lv = 0;
-	  while (l1)
-	    lv++, l1 &= l1 - 1;
-	  while (h1)
-	    lv++, h1 &= h1 - 1;
+	  value = double_int_zero;
+	  while (first.low)
+	    {
+	      value.low++;
+	      first.low &= first.low - 1;
+	    }
+	  while (first.high)
+	    {
+	      value.low++;
+	      first.high &= first.high - 1;
+	    }
 	  break;

 	case PARITY:
-	  hv = 0;
-	  lv = 0;
-	  while (l1)
-	    lv++, l1 &= l1 - 1;
-	  while (h1)
-	    lv++, h1 &= h1 - 1;
-	  lv &= 1;
+	  value = double_int_zero;
+	  while (first.low)
+	    {
+	      value.low++;
+	      first.low &= first.low - 1;
+	    }
+	  while (first.high)
+	    {
+	      value.low++;
+	      first.high &= first.high - 1;
+	    }
+	  value.low &= 1;
 	  break;

 	case BSWAP:
 	  {
 	    unsigned int s;

-	    hv = 0;
-	    lv = 0;
+	    value = double_int_zero;
 	    for (s = 0; s < width; s += 8)
 	      {
 		unsigned int d = width - s - 8;
 		unsigned HOST_WIDE_INT byte;

 		if (s < HOST_BITS_PER_WIDE_INT)
-		  byte = (l1 >> s) & 0xff;
+		  byte = (first.low >> s) & 0xff;
 		else
-		  byte = (h1 >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;
+		  byte = (first.high >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff;

 		if (d < HOST_BITS_PER_WIDE_INT)
-		  lv |= byte << d;
+		  value.low |= byte << d;
 		else
-		  hv |= byte << (d - HOST_BITS_PER_WIDE_INT);
+		  value.high |= byte << (d - HOST_BITS_PER_WIDE_INT);
 	      }
 	  }
 	  break;

 	case TRUNCATE:
 	  /* This is just a change-of-mode, so do nothing.  */
-	  lv = l1, hv = h1;
+	  value = first;
 	  break;

 	case ZERO_EXTEND:
@@ -1636,8 +1644,7 @@  simplify_const_unary_operation (enum rtx
 	  if (op_width > HOST_BITS_PER_WIDE_INT)
 	    return 0;

-	  hv = 0;
-	  lv = l1 & GET_MODE_MASK (op_mode);
+	  value = double_int::from_uhwi (first.low & GET_MODE_MASK (op_mode));
 	  break;

 	case SIGN_EXTEND:
@@ -1646,11 +1653,11 @@  simplify_const_unary_operation (enum rtx
 	    return 0;
 	  else
 	    {
-	      lv = l1 & GET_MODE_MASK (op_mode);
-	      if (val_signbit_known_set_p (op_mode, lv))
-		lv |= ~GET_MODE_MASK (op_mode);
+	      value.low = first.low & GET_MODE_MASK (op_mode);
+	      if (val_signbit_known_set_p (op_mode, value.low))
+		value.low |= ~GET_MODE_MASK (op_mode);

-	      hv = HWI_SIGN_EXTEND (lv);
+	      value.high = HWI_SIGN_EXTEND (value.low);
 	    }
 	  break;

@@ -1661,7 +1668,7 @@  simplify_const_unary_operation (enum rtx
 	  return 0;
 	}

-      return immed_double_const (lv, hv, mode);
+      return immed_double_int_const (value, mode);
     }

   else if (CONST_DOUBLE_AS_FLOAT_P (op)
@@ -3578,6 +3585,7 @@  simplify_const_binary_operation (enum rt
       && (CONST_DOUBLE_AS_INT_P (op1) || CONST_INT_P (op1)))
     {
       double_int o0, o1, res, tmp;
+      bool overflow;

       o0 = rtx_to_double_int (op0);
       o1 = rtx_to_double_int (op1);
@@ -3599,34 +3607,30 @@  simplify_const_binary_operation (enum rt
 	  break;

 	case DIV:
-	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
-				    o0.low, o0.high, o1.low, o1.high,
-				    &res.low, &res.high,
-				    &tmp.low, &tmp.high))
+	  res = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
+					 &tmp, &overflow);
+	  if (overflow)
 	    return 0;
 	  break;

 	case MOD:
-	  if (div_and_round_double (TRUNC_DIV_EXPR, 0,
-				    o0.low, o0.high, o1.low, o1.high,
-				    &tmp.low, &tmp.high,
-				    &res.low, &res.high))
+	  tmp = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR,
+					 &res, &overflow);
+	  if (overflow)
 	    return 0;
 	  break;

 	case UDIV:
-	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
-				    o0.low, o0.high, o1.low, o1.high,
-				    &res.low, &res.high,
-				    &tmp.low, &tmp.high))
+	  res = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
+					 &tmp, &overflow);
+	  if (overflow)
 	    return 0;
 	  break;

 	case UMOD:
-	  if (div_and_round_double (TRUNC_DIV_EXPR, 1,
-				    o0.low, o0.high, o1.low, o1.high,
-				    &tmp.low, &tmp.high,
-				    &res.low, &res.high))
+	  tmp = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR,
+					 &res, &overflow);
+	  if (overflow)
 	    return 0;
 	  break;

Index: gcc/explow.c
===================================================================
--- gcc/explow.c	(revision 191083)
+++ gcc/explow.c	(working copy)
@@ -100,36 +100,33 @@  plus_constant (enum machine_mode mode, r
     case CONST_INT:
       if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
 	{
-	  unsigned HOST_WIDE_INT l1 = INTVAL (x);
-	  HOST_WIDE_INT h1 = (l1 >> (HOST_BITS_PER_WIDE_INT - 1)) ? -1 : 0;
-	  unsigned HOST_WIDE_INT l2 = c;
-	  HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
-	  unsigned HOST_WIDE_INT lv;
-	  HOST_WIDE_INT hv;
+	  double_int di_x = double_int::from_shwi (INTVAL (x));
+	  double_int di_c = double_int::from_shwi (c);

-	  if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
+	  bool overflow;
+	  double_int v = di_x.add_with_sign (di_c, false, &overflow);
+	  if (overflow)
 	    gcc_unreachable ();

-	  return immed_double_const (lv, hv, VOIDmode);
+	  return immed_double_int_const (v, VOIDmode);
 	}

       return GEN_INT (INTVAL (x) + c);

     case CONST_DOUBLE:
       {
-	unsigned HOST_WIDE_INT l1 = CONST_DOUBLE_LOW (x);
-	HOST_WIDE_INT h1 = CONST_DOUBLE_HIGH (x);
-	unsigned HOST_WIDE_INT l2 = c;
-	HOST_WIDE_INT h2 = c < 0 ? -1 : 0;
-	unsigned HOST_WIDE_INT lv;
-	HOST_WIDE_INT hv;
+	double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
+						 CONST_DOUBLE_LOW (x));
+	double_int di_c = double_int::from_shwi (c);

-	if (add_double_with_sign (l1, h1, l2, h2, &lv, &hv, false))
+	bool overflow;
+	double_int v = di_x.add_with_sign (di_c, false, &overflow);
+	if (overflow)
 	  /* Sorry, we have no way to represent overflows this wide.
 	     To fix, add constant support wider than CONST_DOUBLE.  */
 	  gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);

-	return immed_double_const (lv, hv, VOIDmode);
+	return immed_double_int_const (v, VOIDmode);
       }

     case MEM:
Index: gcc/config/sparc/sparc.c
===================================================================
--- gcc/config/sparc/sparc.c	(revision 191083)
+++ gcc/config/sparc/sparc.c	(working copy)
@@ -10113,33 +10113,27 @@  sparc_fold_builtin (tree fndecl, int n_a
 	  && TREE_CODE (arg1) == VECTOR_CST
 	  && TREE_CODE (arg2) == INTEGER_CST)
 	{
-	  int overflow = 0;
-	  unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (arg2);
-	  HOST_WIDE_INT high = TREE_INT_CST_HIGH (arg2);
+	  bool overflow = false;
+	  double_int di_arg2 = TREE_INT_CST (arg2);
 	  unsigned i;

 	  for (i = 0; i < VECTOR_CST_NELTS (arg0); ++i)
 	    {
-	      unsigned HOST_WIDE_INT
-		low0 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg0, i)),
-		low1 = TREE_INT_CST_LOW (VECTOR_CST_ELT (arg1, i));
-	      HOST_WIDE_INT
-		high0 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg0, i));
-	      HOST_WIDE_INT
-		high1 = TREE_INT_CST_HIGH (VECTOR_CST_ELT (arg1, i));
+	      double_int e0 = TREE_INT_CST (VECTOR_CST_ELT (arg0, i));
+	      double_int e1 = TREE_INT_CST (VECTOR_CST_ELT (arg1, i));

-	      unsigned HOST_WIDE_INT l;
-	      HOST_WIDE_INT h;
+	      bool neg1_ovf, neg2_ovf = false, add1_ovf, add2_ovf;

-	      overflow |= neg_double (low1, high1, &l, &h);
-	      overflow |= add_double (low0, high0, l, h, &l, &h);
-	      if (h < 0)
-		overflow |= neg_double (l, h, &l, &h);
+	      double_int tmp = e1.neg_with_overflow (&neg1_ovf);
+	      tmp = e0.add_with_sign (tmp, false, &add1_ovf);
+	      if (tmp.is_negative ())
+		tmp = tmp.neg_with_overflow (&neg2_ovf);

-	      overflow |= add_double (low, high, l, h, &low, &high);
+	      di_arg2 = di_arg2.add_with_sign (tmp, false, &add2_ovf);
+	      overflow |= neg1_ovf | neg2_ovf | add1_ovf | add2_ovf;
 	    }

-	  gcc_assert (overflow == 0);
+	  gcc_assert (!overflow);

-	  return build_int_cst_wide (rtype, low, high);
+	  return build_int_cst_wide (rtype, di_arg2.low, di_arg2.high);
 	}
Index: gcc/config/avr/avr.c
===================================================================
--- gcc/config/avr/avr.c	(revision 191083)
+++ gcc/config/avr/avr.c	(working copy)
@@ -10518,10 +10518,10 @@  avr_double_int_push_digit (double_int va
                            unsigned HOST_WIDE_INT digit)
 {
   val = 0 == base
-    ? double_int_lshift (val, 32, 64, false)
-    : double_int_mul (val, uhwi_to_double_int (base));
+    ? val.llshift (32, 64)
+    : val * double_int::from_uhwi (base);

-  return double_int_add (val, uhwi_to_double_int (digit));
+  return val + double_int::from_uhwi (digit);
 }


@@ -10530,7 +10530,7 @@  avr_double_int_push_digit (double_int va
 static int
 avr_map (double_int f, int x)
 {
-  return 0xf & double_int_to_uhwi (double_int_rshift (f, 4*x, 64, false));
+  return 0xf & f.lrshift (4*x, 64).to_uhwi ();
 }


@@ -10703,7 +10703,7 @@  avr_map_decompose (double_int f, const a
          are mapped to 0 and used operands are reloaded to xop[0].  */

       xop[0] = all_regs_rtx[24];
-      xop[1] = gen_int_mode (double_int_to_uhwi (f_ginv.map), SImode);
+      xop[1] = gen_int_mode (f_ginv.map.to_uhwi (), SImode);
       xop[2] = all_regs_rtx[25];
       xop[3] = val_used_p ? xop[0] : const0_rtx;

@@ -10799,7 +10799,7 @@  avr_out_insert_bits (rtx *op, int *plen)
   else if (flag_print_asm_name)
     fprintf (asm_out_file,
              ASM_COMMENT_START "map = 0x%08" HOST_LONG_FORMAT "x\n",
-             double_int_to_uhwi (map) & GET_MODE_MASK (SImode));
+             map.to_uhwi () & GET_MODE_MASK (SImode));

   /* If MAP has fixed points it might be better to initialize the result
      with the bits to be inserted instead of moving all bits by hand.  */