Patchwork patch to fix constant math - 4th patch - the wide-int class - patch ping for the next stage 1

Submitter Kenneth Zadeck
Date April 17, 2013, 12:01 p.m.
Message ID <516E8F35.4080308@naturalbridge.com>
Permalink /patch/237209/
State New

Comments

Kenneth Zadeck - April 17, 2013, 12:01 p.m.
Richard,

Here is a copy of the patch without the debugging code that you asked
me to remove; I forgot to take it out before sending the earlier
version of this patch.

kenny

On 04/16/2013 04:07 PM, Kenneth Zadeck wrote:
> Richard,
>
> I made major changes to wide-int along the lines you suggested. Each 
> of the binary operations is now a template.
> There are 5 possible implementations of those operations, one for each 
> of HWI, unsigned HWI, wide-int, rtl, and tree.   Note that this is not 
> exactly as you suggested, but it is along the same lines.
>
> The HWI template sign extends the value to the precision of the first 
> operand, the unsigned HWI is the same except that it is an unsigned 
> extension.   The wide-int version is used as before, but is in truth 
> rarely used.  The rtl and tree "logically" convert the value to a 
> wide-int but in practice do something more efficient than converting 
> to the wide-int.   What they do is look inside the rtl or the tree and 
> pass a pointer to the data and a length to the binary operation.  This 
> is perfectly safe in the position of a second operand to the binary 
> operation because the lifetime is guaranteed to be very short.  The 
> wide-int implementation was also modified to do the same pointer trick 
> allowing all 5 templates to share the same use of the data.
>
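> To sketch the dispatch (the names here are illustrative, not the
> patch's actual interface), each templated binary operation asks a
> per-type helper for a pointer into the operand's existing storage:
>
>   template <typename T>
>   wide_int
>   wide_int::operator & (const T &c) const
>   {
>     HOST_WIDE_INT scratch[4];   /* used only by the HWI overloads */
>     unsigned int cl;
>     /* get_val_ptr is hypothetical; the real helpers read the HWIs
>        out of the rtl or tree in place instead of copying them.  */
>     const HOST_WIDE_INT *s = get_val_ptr (c, scratch, &cl, precision);
>     return and_large (s, cl);   /* and_large is in wide-int.c below */
>   }
>
> The wide-int overload simply hands back its own val array, so the
> second operand is never copied.
>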
> Note that currently the tree code is more crufty than one would 
> like.   This will clean up nicely when the tree-cst is changed to 
> represent the value with an array and a length field.
>
> So now, at least for the second operand of binary operations, the 
> storage is never copied.    I do not believe that there is a good 
> similar trick for the first operand.  I did not consider something 
> like wide_int::add (a, b) to be a viable option; it seems to miss the 
> point of using an object-oriented language.   So I think that you 
> really have to copy the data into an instance of a wide int.
>
> However, while all of this avoids ever having to pass a precision into 
> the second operand, this patch does preserve the finite math 
> implementation of wide-int.    Finite math is really what people 
> expect an optimizer to do, because it seamlessly matches what the 
> machine is going to do.
>
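> For example, at a precision of 8 bits, adding 200 and 100 wraps to
> 44 (300 mod 256), which is exactly what an 8-bit add instruction on
> the target produces.
>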
> I hope at this point I can get a comprehensive review of these 
> patches.   I believe that I have done what is required.
>
> There are two other patches that will be submitted in the next few 
> minutes.   The first one is an updated version of the rtl level 
> patch.   The only changes from what you have seen before are that the 
> binary operations now use the templated binary operations. The second 
> one is the first of the tree level patches.   It converts builtins.c 
> to use wide-int and removes all assumptions that tree-csts 
> are built with two HWIs.
>
> Once builtins.c is accepted, I will convert the rest of the middle end 
> patches.   They will all be converted in a similar way.
>
> Kenny
2013-04-16  Kenneth Zadeck <zadeck@naturalbridge.com>

	* Makefile.in (OBJS): Add wide-int.o.
	(wide-int.o): New target.
	* wide-int.c: New file containing implementation of wide_int class.
	* wide-int.h: New file containing public spec for wide_int class.
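
As a rough usage sketch, based only on the routines visible in
wide-int.c below (the surrounding code is illustrative):

	wide_int a = wide_int::from_tree (t);
	wide_int b = wide_int::from_shwi (1, a.get_precision ());
	bool ovf = false;
	wide_int sum = a.add (b, wide_int::SIGNED, &ovf);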

Patch

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 109f865..514fabc 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -857,7 +857,7 @@  RTL_BASE_H = coretypes.h rtl.h rtl.def $(MACHMODE_H) reg-notes.def \
   insn-notes.def $(INPUT_H) $(REAL_H) statistics.h $(VEC_H) \
   $(FIXED_VALUE_H) alias.h $(HASHTAB_H)
 FIXED_VALUE_H = fixed-value.h $(MACHMODE_H) double-int.h
-RTL_H = $(RTL_BASE_H) $(FLAGS_H) genrtl.h
+RTL_H = $(RTL_BASE_H) $(FLAGS_H) genrtl.h wide-int.h
 RTL_ERROR_H = rtl-error.h $(RTL_H) $(DIAGNOSTIC_CORE_H)
 READ_MD_H = $(OBSTACK_H) $(HASHTAB_H) read-md.h
 PARAMS_H = params.h params.def
@@ -945,7 +945,7 @@  TREE_PRETTY_PRINT_H = tree-pretty-print.h $(PRETTY_PRINT_H)
 GIMPLE_PRETTY_PRINT_H = gimple-pretty-print.h $(TREE_PRETTY_PRINT_H)
 DIAGNOSTIC_CORE_H = diagnostic-core.h $(INPUT_H) bversion.h diagnostic.def
 DIAGNOSTIC_H = diagnostic.h $(DIAGNOSTIC_CORE_H) $(PRETTY_PRINT_H)
-DWARF2OUT_H = dwarf2out.h $(DWARF2_H)
+DWARF2OUT_H = dwarf2out.h $(DWARF2_H) wide-int.h
 C_PRETTY_PRINT_H = c-family/c-pretty-print.h $(PRETTY_PRINT_H) \
 	$(C_COMMON_H) $(TREE_H)
 SCEV_H = tree-scalar-evolution.h $(GGC_H) tree-chrec.h $(PARAMS_H)
@@ -1458,6 +1458,7 @@  OBJS = \
 	varpool.o \
 	vmsdbgout.o \
 	web.o \
+	wide-int.o \
 	xcoffout.o \
 	$(out_object_file) \
 	$(EXTRA_OBJS) \
@@ -2687,6 +2688,7 @@  targhooks.o : targhooks.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TREE_H) \
    tree-ssa-alias.h $(TREE_FLOW_H)
 common/common-targhooks.o : common/common-targhooks.c $(CONFIG_H) $(SYSTEM_H) \
    coretypes.h $(INPUT_H) $(TM_H) $(COMMON_TARGET_H) common/common-targhooks.h
+wide-int.o: wide-int.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H)
 
 bversion.h: s-bversion; @true
 s-bversion: BASE-VER
@@ -2853,10 +2855,10 @@  emit-rtl.o : emit-rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
    $(HASHTAB_H) $(TM_P_H) debug.h langhooks.h $(TREE_PASS_H) gt-emit-rtl.h \
    $(DF_H) $(PARAMS_H) $(TARGET_H)
 real.o : real.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) \
-   $(DIAGNOSTIC_CORE_H) $(TM_P_H) $(REAL_H) dfp.h realmpfr.h
+   $(DIAGNOSTIC_CORE_H) $(TM_P_H) $(REAL_H) dfp.h realmpfr.h wide-int.h
 realmpfr.o : realmpfr.c realmpfr.h $(CONFIG_H) $(SYSTEM_H) coretypes.h $(REAL_H) $(TREE_H)
 dfp.o : dfp.c dfp.h $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H)	$(TREE_H) \
-   $(TM_P_H) $(REAL_H) $(DECNUM_H)
+   $(TM_P_H) $(REAL_H) $(DECNUM_H) wide-int.h
 fixed-value.o: fixed-value.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
    $(TREE_H) $(REAL_H) $(DIAGNOSTIC_CORE_H)
 jump.o : jump.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
@@ -3941,15 +3943,16 @@  CFLAGS-gengtype-parse.o += -DGENERATOR_FILE
 build/gengtype-parse.o: $(BCONFIG_H)
 
 gengtype-state.o build/gengtype-state.o: gengtype-state.c $(SYSTEM_H) \
-  gengtype.h errors.h double-int.h version.h $(HASHTAB_H) $(OBSTACK_H) \
-  $(XREGEX_H)
+  gengtype.h errors.h double-int.h version.h $(HASHTAB_H)    \
+  $(OBSTACK_H) $(XREGEX_H)
 gengtype-state.o: $(CONFIG_H)
 CFLAGS-gengtype-state.o += -DGENERATOR_FILE
 build/gengtype-state.o: $(BCONFIG_H)
-
+wide-int.h: $(GTM_H) $(TREE_H) hwint.h $(OPTIONS_H) $(TM_H)             \
+  $(MACHMODE_H) double-int.h dumpfile.h $(REAL_H)
 gengtype.o build/gengtype.o : gengtype.c $(SYSTEM_H) gengtype.h 	\
-  rtl.def insn-notes.def errors.h double-int.h version.h $(HASHTAB_H) \
-  $(OBSTACK_H) $(XREGEX_H)
+  rtl.def insn-notes.def errors.h double-int.h wide-int.h version.h     \
+  $(HASHTAB_H) $(OBSTACK_H) $(XREGEX_H)
 gengtype.o: $(CONFIG_H)
 CFLAGS-gengtype.o += -DGENERATOR_FILE
 build/gengtype.o: $(BCONFIG_H)
diff --git a/gcc/wide-int.c b/gcc/wide-int.c
new file mode 100644
index 0000000..1ba2ca4
--- /dev/null
+++ b/gcc/wide-int.c
@@ -0,0 +1,2727 @@ 
+/* Operations with very long integers.
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+   Contributed by Kenneth Zadeck <zadeck@naturalbridge.com>
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "hwint.h"
+#include "wide-int.h"
+#include "rtl.h"
+#include "tree.h"
+#include "dumpfile.h"
+
+/* This is the maximal size of the buffer needed for dump.  */
+const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
+		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
+
+/*
+ * Internal utilities.
+ */
+
+/* Quantities to deal with values that hold half of a wide int.  Used
+   in multiply and divide.  */
+#define HALF_INT_MASK (((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
+
+#define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
+#define BLOCKS_NEEDED(PREC) \
+  (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+#define SIGN_MASK(X) (((HOST_WIDE_INT)X) >> (HOST_BITS_PER_WIDE_INT - 1))
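+
+/* For example, with 64-bit HWIs, BLOCKS_NEEDED (128) == 2 and
+   BLOCKS_NEEDED (65) == 2, while SIGN_MASK (X) is 0 or -1 according
+   to the top bit of X.  */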
+/*
+ * Conversion routines in and out of wide-int.
+ */
+
+/* Convert OP0 into a wide int of PRECISION.  If the precision is less
+   than HOST_BITS_PER_WIDE_INT, sign extend the value of the word.
+   If OVERFLOW is nonnull, *OVERFLOW is set when the value does not
+   fit in the precision.  */
+
+wide_int
+wide_int::from_shwi (HOST_WIDE_INT op0,
+		     unsigned int precision, bool *overflow)
+{
+  wide_int result;
+
+  result.precision = precision;
+
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT t = sext_hwi (op0, precision);
+      if (t != op0 && overflow)
+	*overflow = true; 
+      op0 = t;
+    }
+
+  result.val[0] = op0;
+  result.len = 1;
+
+  return result;
+}
+
+/* Convert OP0 into a wide int of PRECISION.  If the precision is less
+   than HOST_BITS_PER_WIDE_INT, zero extend the value of the word.
+   If OVERFLOW is nonnull, *OVERFLOW is set when the value does not
+   fit in the precision.  */
+
+wide_int
+wide_int::from_uhwi (unsigned HOST_WIDE_INT op0, 
+		     unsigned int precision, bool *overflow)
+{
+  wide_int result;
+
+  result.precision = precision;
+
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    {
+      unsigned HOST_WIDE_INT t = zext_hwi (op0, precision);
+      if (t != op0 && overflow)
+	*overflow = true;
+      op0 = t;
+    }
+
+  result.val[0] = op0;
+
+  /* If the top bit is a 1, we need to add another word of 0s: the
+     compressed rep would otherwise be sign extended, but the infinite
+     expansion of any unsigned number must have 0s at the top.  */
+  if ((HOST_WIDE_INT)op0 < 0 && precision > HOST_BITS_PER_WIDE_INT)
+    {
+      result.val[1] = 0;
+      result.len = 2;
+    }
+  else
+    result.len = 1;
+
+  return result;
+}
+
+/* Create a wide_int from an array of host_wide_ints in OP1 of LEN.
+   The result has PRECISION.  */
+
+wide_int
+wide_int::from_array (const HOST_WIDE_INT *op1, unsigned int len, 
+		      unsigned int precision, bool need_canon)
+{
+  unsigned int i;
+  wide_int result;
+  
+  result.len = len;
+  result.precision = precision;
+
+  for (i=0; i < len; i++)
+    result.val[i] = op1[i];
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
+
+/* Convert a double int into a wide int with precision PREC.  */
+
+wide_int
+wide_int::from_double_int (double_int di, unsigned int prec)
+{
+  HOST_WIDE_INT op = di.low;
+  wide_int result;
+
+  result.precision = prec;
+  result.len = (prec <= HOST_BITS_PER_WIDE_INT) ? 1 : 2;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result.val[0] = sext_hwi (op, prec);
+  else
+    {
+      result.val[0] = op;
+      if (prec > HOST_BITS_PER_WIDE_INT)
+	{
+	  if (prec < HOST_BITS_PER_DOUBLE_INT)
+	    result.val[1] = sext_hwi (di.high, prec);
+	  else
+	    result.val[1] = di.high;
+	}
+    }
+
+  if (result.len == 2)
+    result.canonize ();
+
+  return result;
+}
+
+/* Convert an integer cst into a wide int.  */
+
+wide_int
+wide_int::from_tree (const_tree tcst)
+{
+  wide_int result;
+  tree type = TREE_TYPE (tcst);
+  unsigned int prec = TYPE_PRECISION (type);
+
+  result.precision = prec;
+
+  /* This will simplify out to just an array copy and a len copy.  */
+  result.val[0] = TREE_INT_CST_LOW (tcst);
+  result.val[1] = TREE_INT_CST_HIGH (tcst);
+  if (prec == HOST_BITS_PER_DOUBLE_INT 
+      && TYPE_UNSIGNED (type)
+      && TREE_INT_CST_HIGH (tcst) < 0)
+    {
+      result.val[2] = 0;
+      result.len = 3;
+    }
+  else if (prec > HOST_BITS_PER_WIDE_INT)
+    result.len = 2;
+  else if (prec == HOST_BITS_PER_WIDE_INT
+	   && TYPE_UNSIGNED (type)
+	   && (HOST_WIDE_INT)TREE_INT_CST_LOW (tcst) < 0)
+    result.len = 2;
+  else
+    result.len = 1;
+
+  return result;
+}
+
+/* Extract a constant integer from X, an rtx of mode MODE.  The bits
+   of the integer are returned.  */
+
+wide_int
+wide_int::from_rtx (const_rtx x, enum machine_mode mode)
+{
+  wide_int result;
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  gcc_assert (mode != VOIDmode);
+
+  result.precision = prec;
+
+  switch (GET_CODE (x))
+    {
+    case CONST_INT:
+      result.val[0] = INTVAL (x);
+      result.len = 1;
+      break;
+
+#if TARGET_SUPPORTS_WIDE_INT
+    case CONST_WIDE_INT:
+      {
+	int i;
+	result.len = CONST_WIDE_INT_NUNITS (x);
+	
+	for (i = 0; i < result.len; i++)
+	  result.val[i] = CONST_WIDE_INT_ELT (x, i);
+      }
+      break;
+#else
+    case CONST_DOUBLE:
+      result.len = 2;
+      result.val[0] = CONST_DOUBLE_LOW (x);
+      result.val[1] = CONST_DOUBLE_HIGH (x);
+      break;
+#endif
+
+    default:
+      gcc_unreachable ();
+    }
+
+  return result;
+}
+
+/* Construct from a buffer of length LEN.  BUFFER will be read according
+   to byte endianness and word endianness.  Only the lower LEN bytes
+   of the result are set; the remaining high bytes are cleared.  */
+
+wide_int
+wide_int::from_buffer (const unsigned char *buffer, int len)
+{
+  wide_int result = wide_int::zero (len * BITS_PER_UNIT);
+  int words = len / UNITS_PER_WORD;
+
+  for (int byte = 0; byte < len; byte++)
+    {
+      int offset;
+      int index;
+      int bitpos = byte * BITS_PER_UNIT;
+      unsigned HOST_WIDE_INT value;
+
+      if (len > UNITS_PER_WORD)
+	{
+	  int word = byte / UNITS_PER_WORD;
+
+	  if (WORDS_BIG_ENDIAN)
+	    word = (words - 1) - word;
+
+	  offset = word * UNITS_PER_WORD;
+
+	  if (BYTES_BIG_ENDIAN)
+	    offset += (UNITS_PER_WORD - 1) - (byte % UNITS_PER_WORD);
+	  else
+	    offset += byte % UNITS_PER_WORD;
+	}
+      else
+	offset = BYTES_BIG_ENDIAN ? (len - 1) - byte : byte;
+
+      value = (unsigned HOST_WIDE_INT) buffer[offset];
+
+      index = bitpos / HOST_BITS_PER_WIDE_INT;
+      result.val[index] |= value << (bitpos % HOST_BITS_PER_WIDE_INT);
+    }
+
+  result.canonize ();
+  return result;
+}
+
+
+/*
+ * Largest and smallest values in a mode.
+ */
+
+/* Produce the largest SGNed number representable in PREC bits.  The
+   result is placed in a wide int of precision PRECISION.  */
+
+wide_int
+wide_int::max_value (unsigned int prec, unsigned int precision, 
+		     SignOp sgn)
+{
+  wide_int result;
+  
+  result.precision = precision;
+
+  if (sgn == UNSIGNED)
+    {
+      /* The unsigned max is just all ones, for which the compressed
+	 rep is just a single HWI.  */ 
+      result.len = 1;
+      result.val[0] = (HOST_WIDE_INT)-1;
+    }
+  else
+    {
+      /* The signed max is all ones except the top bit.  This must be
+	 explicitly represented.  */
+      int i;
+      int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
+      int shift = (small_prec == 0) 
+	? HOST_BITS_PER_WIDE_INT - 1 : small_prec - 1;
+
+      result.len = BLOCKS_NEEDED (prec);
+      for (i = 0; i < result.len - 1; i++)
+	result.val[i] = (HOST_WIDE_INT)-1;
+
+      result.val[result.len - 1] = ((HOST_WIDE_INT)1 << shift) - 1;
+    }
+  
+  return result;
+}
+
+/* Produce the smallest SGNed number representable in PREC bits.  The
+   result is placed in a wide int of precision PRECISION.  */
+
+wide_int
+wide_int::min_value (unsigned int prec, unsigned int precision, 
+		     SignOp sgn)
+{
+  if (sgn == UNSIGNED)
+    {
+      /* The unsigned min is just all zeros, for which the compressed
+	 rep is just a single HWI.  */ 
+      wide_int result;
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = 0;
+      return result;
+    }
+  else
+    {
+      /* The signed min is all zeros except the top bit.  This must be
+	 explicitly represented.  */
+      return set_bit_in_zero (prec - 1, precision);
+    }
+}
+
+/*
+ * Public utilities.
+ */
+
+/* Check the upper HOST_WIDE_INTs of THIS to see if the length can be
+   shortened.  An upper HOST_WIDE_INT is unnecessary if it is all ones
+   or zeros and the top bit of the next lower word matches.
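+
+   For example, with 64-bit HWIs a 128-bit -1 can be kept compressed
+   as len == 1 with val[0] == -1, since the upper block is just a
+   sign copy of the one below it.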
+
+   This function may change the representation of THIS, but does not
+   change the value that THIS represents.  It does not sign extend in
+   the case that the size of the mode is less than
+   HOST_BITS_PER_WIDE_INT.  */
+
+void
+wide_int::canonize ()
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT top;
+  int i;
+
+  if (len > blocks_needed)
+    len = blocks_needed;
+
+  /* Clean up the top bits for any mode that is not a multiple of a HWI.  */
+  if (len == blocks_needed && small_prec)
+    val[len - 1] = sext_hwi (val[len - 1], small_prec);
+
+  if (len == 1)
+    return;
+
+  top = val[len - 1];
+  if (top != 0 && top != (HOST_WIDE_INT)-1)
+    return;
+
+  /* At this point we know that the top is either 0 or -1.  Find the
+     first block that is not a copy of this.  */
+  for (i = len - 2; i >= 0; i--)
+    {
+      HOST_WIDE_INT x = val[i];
+      if (x != top)
+	{
+	  if (SIGN_MASK (x) == top)
+	    {
+	      len = i + 1;
+	      return;
+	    }
+
+	  /* We need an extra block because the top bit of block I does
+	     not match the extension.  */
+	  len = i + 2;
+	  return;
+	}
+    }
+
+  /* The number is 0 or -1.  */
+  len = 1;
+}
+
+
+/* Make a copy of this.  */
+
+wide_int
+wide_int::copy () const
+{
+  wide_int result;
+  int i;
+
+  result.precision = precision;
+  result.len = len;
+
+  for (i = 0; i < result.len; i++)
+    result.val[i] = val[i];
+  return result;
+}
+
+
+/* Copy THIS replacing the precision with PREC.
+   It can do any of truncation, extension or copying.  */
+
+wide_int
+wide_int::force_to_size (unsigned int prec, SignOp sgn) const
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (prec);
+  int i;
+
+  result.precision = prec;
+  result.len = blocks_needed < len ? blocks_needed : len;
+  for (i = 0; i < result.len; i++)
+    result.val[i] = val[i];
+
+  if (prec >= precision) 
+    {
+      /* Expanding */
+      int small_precision = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+      /* The only case we care about is unsigned because the rep is
+	 inherently signed.  */
+      if (sgn == UNSIGNED)
+	{
+	  /* The top block in the existing rep must be zero extended,
+	     but this is all the work we need to do.  */
+	  if (small_precision 
+	      && (len == BLOCKS_NEEDED (precision)
+		  || len == blocks_needed))
+	    result.val[len-1] = zext_hwi (result.val[len-1], small_precision);
+	  else if (len == BLOCKS_NEEDED (precision) 
+		   && len < blocks_needed
+		   && small_precision == 0
+		   && result.val[result.len - 1] < 0)
+		    /* We need to put the 0 block on top to keep the value
+		       from being sign extended.  */ 
+		    result.val[result.len++] = 0;
+	}
+    }
+  else
+    {
+      /* Truncating.  */
+      int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
+      /* The only weird case we need to look at here is when we are
+         truncating within the top block.  We need to make sure that
+         everything in the block above the new precision is sign
+         extended.  Note that this is independent of the SGN.  This is
+         just to stay canonical.  */
+      if (small_prec && (blocks_needed == len))
+	result.val[blocks_needed-1]
+	  = sext_hwi (result.val[blocks_needed-1], small_prec);
+    }
+
+  return result;
+}
+
+/*
+ * public printing routines.
+ */
+
+/* Try to print the signed self in decimal to BUF if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+wide_int::print_decs (char *buf) const
+{
+  if ((precision <= HOST_BITS_PER_WIDE_INT)
+      || (len == 1 && !neg_p ()))
+      sprintf (buf, HOST_WIDE_INT_PRINT_DEC, val[0]);
+  else
+    print_hex (buf);
+}
+
+/* Try to print the signed self in decimal to FILE if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+wide_int::print_decs (FILE *file) const
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_decs (buf);
+  fputs (buf, file);
+}
+
+/* Try to print the unsigned self in decimal to BUF if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+wide_int::print_decu (char *buf) const
+{
+  if ((precision <= HOST_BITS_PER_WIDE_INT)
+      || (len == 1 && !neg_p ()))
+      sprintf (buf, HOST_WIDE_INT_PRINT_UNSIGNED, val[0]);
+  else
+    print_hex (buf);
+}
+
+/* Try to print the unsigned self in decimal to FILE if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+wide_int::print_decu (FILE *file) const
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_decu (buf);
+  fputs (buf, file);
+}
+
+void 
+wide_int::print_hex (char *buf) const
+{
+  int i = len;
+
+  if (zero_p ())
+    sprintf (buf, "0x");
+  else
+    {
+      if (neg_p ())
+	{
+	  int j;
+	  /* If the number is negative, we may need to pad value with
+	     0xFFF...  because the leading elements may be missing and
+	     we do not print a '-' with hex.  */
+	  for (j = BLOCKS_NEEDED (precision); j > i; j--)
+	    buf += sprintf (buf, HOST_WIDE_INT_PRINT_PADDED_HEX, (HOST_WIDE_INT) -1);
+	    
+	}
+      else
+	buf += sprintf (buf, HOST_WIDE_INT_PRINT_HEX, val [--i]);
+      while (-- i >= 0)
+	buf += sprintf (buf, HOST_WIDE_INT_PRINT_PADDED_HEX, val [i]);
+    }
+}
+
+/* Print one big hex number to FILE.  Note that some assemblers may not
+   accept this for large modes.  */
+void 
+wide_int::print_hex (FILE *file) const
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_hex (buf);
+  fputs (buf, file);
+}
+
+/*
+ * Comparisons, note that only equality is an operator.  The other
+ * comparisons cannot be operators since they are inherently signed or
+ * unsigned and C++ has no such operators.
+ */
+
+/* Return true if THIS == OP1.  */
+
+bool
+wide_int::eq_p_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    if (val[l0--] != op1mask)
+      return false;
+
+  while (l1 > l0)
+    if (op1[l1--] != op0mask)
+      return false;
+
+  while (l0 >= 0)
+    if (val[l0--] != op1[l1--])
+      return false;
+
+  return true;
+}
+
+/* Return true if THIS < OP1 using signed comparisons.  */
+
+bool
+wide_int::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, 
+		       const HOST_WIDE_INT *op1, unsigned int op1len, 
+		       unsigned int precision)
+{
+  int l;
+  HOST_WIDE_INT s0, s1;
+  unsigned HOST_WIDE_INT u0, u1;
+  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT op0mask = SIGN_MASK (op0[op0len - 1]);
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  /* Only the top block is compared as signed.  The rest are unsigned
+     comparisons.  */
+  s0 = blocks_needed == op0len ? op0[blocks_needed - 1] : op0mask;
+  s1 = blocks_needed == op1len ? op1[blocks_needed - 1] : op1mask;
+  if (s0 < s1)
+    return true;
+  if (s0 > s1)
+    return false;
+
+  l = (signed int)MAX (op0len, op1len);
+  /* We have already checked the top block so skip down.  */
+  l = (l == (signed int)blocks_needed) ? l - 2 : l - 1;
+
+  while (l >= 0)
+    {
+      u0 = l < (signed int)op0len ? op0[l] : op0mask;
+      u1 = l < (signed int)op1len ? op1[l] : op1mask;
+
+      if (u0 < u1)
+	return true;
+      if (u0 > u1)
+	return false;
+      l--;
+    }
+
+  return false;
+}
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   signed compares.  */
+
+int
+wide_int::cmps_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  int l;
+  HOST_WIDE_INT s0, s1;
+  unsigned HOST_WIDE_INT u0, u1;
+  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  /* Only the top block is compared as signed.  The rest are unsigned
+     comparisons.  */
+  s0 = blocks_needed == (unsigned int)len ? val[blocks_needed - 1] : op0mask;
+  s1 = blocks_needed == op1len ? op1[blocks_needed - 1] : op1mask;
+  if (s0 < s1)
+    return -1;
+  if (s0 > s1)
+    return 1;
+
+  l = MAX (len, (signed int)op1len);
+  /* We have already checked the top block so skip down.  */
+  l = (l == (signed int)blocks_needed) ? l - 2 : l - 1;
+
+  while (l >= 0)
+    {
+      u0 = l < len ? val [l] : op0mask;
+      u1 = l < (signed int)op1len ? op1[l] : op1mask;
+
+      if (u0 < u1)
+	return -1;
+      if (u0 > u1)
+	return 1;
+      l--;
+    }
+
+  return 0;
+}
+
+/* Return true if OP0 < OP1 using unsigned comparisons.  */
+
+bool
+wide_int::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, 
+		       const HOST_WIDE_INT *op1, unsigned int op1len)
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int l0 = op0len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = SIGN_MASK (op0[op0len - 1]);
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    {
+      x0 = op0[l0--];
+      x1 = op1mask;
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  while (l1 > l0)
+    {
+      x0 = op0mask;
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  while (l0 >= 0)
+    {
+      x0 = op0[l0--];
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  return false;
+}
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   unsigned compares.  */
+
+int
+wide_int::cmpu_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    {
+      x0 = val[l0--];
+      x1 = op1mask;
+      if (x0 < x1)
+	return -1;
+      else if (x0 > x1)
+	return 1;
+    }
+
+  while (l1 > l0)
+    {
+      x0 = op0mask;
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return -1;
+      if (x0 > x1)
+	return 1;
+    }
+
+  while (l0 >= 0)
+    {
+      x0 = val[l0--];
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return -1;
+      if (x0 > x1)
+	return 1;
+    }
+
+  return 0;
+}
+
+/* Return true if THIS has the sign bit set to 1 and all other bits are
+   zero.  */
+
+bool
+wide_int::only_sign_bit_p (unsigned int prec) const
+{
+  int i;
+  HOST_WIDE_INT x;
+  int small_prec;
+  bool result;
+
+  if (BLOCKS_NEEDED (prec) != len)
+    {
+      result = false;
+      goto ex;
+    }
+
+  for (i=0; i < len - 1; i++)
+    if (val[i] != 0)
+      {
+	result = false;
+	goto ex;
+      }
+
+  x = val[len - 1];
+  small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec)
+    x = x << (HOST_BITS_PER_WIDE_INT - small_prec);
+
+  result = x == ((HOST_WIDE_INT)1) << (HOST_BITS_PER_WIDE_INT - 1);
+
+ ex:
+  return result;
+}
+
+/* Returns true if THIS fits into the range of TYPE.  The signedness
+   of THIS is assumed to be the same as the signedness of TYPE.  */
+
+bool
+wide_int::fits_to_tree_p (const_tree type) const
+{
+  int type_prec = TYPE_PRECISION (type);
+
+  if (TYPE_UNSIGNED (type))
+    return fits_u_p (type_prec);
+  else
+    return fits_s_p (type_prec);
+}
+
+/* Returns true if THIS fits in the signed range of PREC.  */
+
+bool
+wide_int::fits_s_p (unsigned int prec) const
+{
+  if (len < BLOCKS_NEEDED (prec))
+    return true;
+
+  if (precision <= prec)
+    return true;
+
+  return *this == sext (prec);
+}
+
+
+/* Returns true if THIS fits in the unsigned range of PREC.  */
+
+bool
+wide_int::fits_u_p (unsigned int prec) const
+{
+  if (len < BLOCKS_NEEDED (prec))
+    return true;
+
+  if (precision <= prec)
+    return true;
+
+  return *this == zext (prec);
+}
+
+/*
+ * Extension.
+ */
+
+/* Sign extend THIS starting at OFFSET.  The precision of the result
+   is the same as that of THIS.  */
+
+wide_int
+wide_int::sext (unsigned int offset) const
+{
+  wide_int result;
+  int off;
+
+  gcc_assert (precision >= offset);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = precision;
+      if (offset < precision)
+	result.val[0] = sext_hwi (val[0], offset);
+      else
+	/* If offset is greater or equal to precision there is nothing
+	   to do since the internal rep is already sign extended.  */
+	result.val[0] = val[0];
+
+      result.len = 1;
+    }
+  else if (precision == offset)
+    result = *this;
+  else
+    {
+      result = decompress (offset, precision);
+      
+      /* Now we can do the real sign extension.  */
+      off = offset & (HOST_BITS_PER_WIDE_INT - 1);
+      if (off)
+	{
+	  int block = BLOCK_OF (offset);
+	  result.val[block] = sext_hwi (val[block], off);
+	  result.len = block + 1;
+	}
+      /* We never need an extra element for sign extended values.  */
+    }    
+
+  return result;
+}
+
+/* Zero extend THIS starting at OFFSET.  The precision of the result
+   is the same as that of THIS.  */
+
+wide_int
+wide_int::zext (unsigned int offset) const
+{
+  wide_int result;
+  int off;
+  int block;
+
+  gcc_assert (precision >= offset);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = precision;
+      if (offset < precision)
+	result.val[0] = zext_hwi (val[0], offset);
+      else if (offset == precision)
+	result.val[0] = val[0];
+	/* If offset was greater than the precision we need to zero
+	   extend from the old precision since the internal rep was
+	   equivalent to sign extended.  */
+      else
+	result.val[0] = zext_hwi (val[0], precision);
+	
+      result.len = 1;
+    }
+  else if (precision == offset)
+    result = *this;
+  else
+    {
+      result = decompress (offset, precision);
+
+      /* Now we can do the real zero extension.  */
+      off = offset & (HOST_BITS_PER_WIDE_INT - 1);
+      block = BLOCK_OF (offset);
+      if (off)
+	{
+	  result.val[block] = zext_hwi (val[block], off);
+	  result.len = block + 1;
+	}
+      else
+	/* See if we need an extra zero element to satisfy the
+	   compression rule.  */
+	if (val[block - 1] < 0 && offset < precision)
+	  {
+	    result.val[block] = 0;
+	    result.len += 1;
+	  }
+    }
+  return result;
+}
+
+/*
+ * Masking, inserting, shifting, rotating.
+ */
+
+/* Return a value with a one bit inserted in THIS at BITPOS.  */
+
+wide_int
+wide_int::set_bit (unsigned int bitpos) const
+{
+  wide_int result;
+  int i, j;
+
+  if (bitpos >= precision)
+    result = copy ();
+  else
+    {
+      result = decompress (bitpos, precision);
+      j = bitpos / HOST_BITS_PER_WIDE_INT;
+      i = bitpos & (HOST_BITS_PER_WIDE_INT - 1);
+      result.val[j] |= ((HOST_WIDE_INT)1) << i;
+    }
+
+  return result;
+}
+
+/* Insert a 1 bit into 0 at BITPOS, producing a number with precision PREC.  */
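+/* For example, set_bit_in_zero (7, 8) produces the 8-bit value with
+   only the sign bit set; min_value uses this as the signed minimum.  */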
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, unsigned int prec)
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (bitpos + 1);
+  int i, j;
+
+  result.precision = prec;
+  if (bitpos >= prec)
+    {
+      result.len = 1;
+      result.val[0] = 0;
+    }
+  else
+    {
+      result.len = blocks_needed;
+      for (i = 0; i < blocks_needed; i++)
+	result.val[i] = 0;
+      
+      j = bitpos / HOST_BITS_PER_WIDE_INT;
+      i = bitpos & (HOST_BITS_PER_WIDE_INT - 1);
+      result.val[j] |= ((HOST_WIDE_INT)1) << i;
+    }
+
+  return result;
+}
+
+/* Insert WIDTH bits from OP0 into THIS starting at START.  */
+
+wide_int
+wide_int::insert (const wide_int &op0, unsigned int start, 
+		  unsigned int width) const
+{
+  wide_int result;
+  wide_int mask;
+  wide_int tmp;
+
+  if (start + width >= precision) 
+    width = precision - start;
+
+  mask = shifted_mask (start, width, false, precision);
+  tmp = op0.lshift (start, 0, precision, NONE);
+  result = tmp & mask;
+
+  tmp = and_not (mask);
+  result = result | tmp;
+
+  return result;
+}
+
+/* bswap THIS.  */
+
+wide_int
+wide_int::bswap () const
+{
+  wide_int result;
+  int i, s;
+  int end;
+  int len = BLOCKS_NEEDED (precision);
+
+  /* This is not a well defined operation if the precision is not a
+     multiple of 8.  */
+  gcc_assert ((precision & 0x7) == 0);
+
+  result.precision = precision;
+  result.len = len;
+
+  for (i = 0; i < len; i++)
+    result.val[i] = 0;
+
+  /* Only swap the bytes that are not the padding.  */
+  if ((precision & (HOST_BITS_PER_WIDE_INT - 1))
+      && (this->len == len))
+    end = precision;
+  else
+    end = this->len * HOST_BITS_PER_WIDE_INT;
+
+  for (s = 0; s < end; s += 8)
+    {
+      unsigned int d = precision - s - 8;
+      unsigned HOST_WIDE_INT byte;
+
+      int block = s / HOST_BITS_PER_WIDE_INT;
+      int offset = s & (HOST_BITS_PER_WIDE_INT - 1);
+
+      byte = (val[block] >> offset) & 0xff;
+
+      block = d / HOST_BITS_PER_WIDE_INT;
+      offset = d & (HOST_BITS_PER_WIDE_INT - 1);
+
+      result.val[block] |= byte << offset;
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/* Return a mask where the lower WIDTH bits are ones and the bits
+   above that, up to the precision, are zeros.  The result is inverted
+   if NEGATE is true.  The result has precision PREC.  */
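+/* For example, with PREC == 8, mask (4, false, 8) has 0x0f in its low
+   bits and mask (4, true, 8) has 0xf0.  */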
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, unsigned int prec)
+{
+  wide_int result;
+  unsigned int i = 0;
+  int shift;
+
+  gcc_assert (width < 2 * MAX_BITSIZE_MODE_ANY_INT);
+  gcc_assert (prec <= 2 * MAX_BITSIZE_MODE_ANY_INT);
+
+  if (width == 0)
+    {
+      if (negate)
+	result = wide_int::minus_one (prec);
+      else
+	result = wide_int::zero (prec);
+    }
+  else
+    {
+      result.precision = prec;
+      
+      while (i < width / HOST_BITS_PER_WIDE_INT)
+	result.val[i++] = negate ? 0 : (HOST_WIDE_INT)-1;
+      
+      shift = width & (HOST_BITS_PER_WIDE_INT - 1);
+      if (shift != 0)
+	{
+	  HOST_WIDE_INT last = (((HOST_WIDE_INT)1) << shift) - 1;
+	  result.val[i++] = negate ? ~last : last;
+	}
+      result.len = i;
+    }
+
+  return result;
+}
+
+/* Return a mask with WIDTH ones starting at START; the bits above
+   that, up to the precision, are zeros.  The result is inverted if
+   NEGATE is true.  */
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, unsigned int prec)
+{
+  wide_int result;
+  unsigned int i = 0;
+  unsigned int shift;
+  unsigned int end = start + width;
+  HOST_WIDE_INT block;
+
+  if (start + width > prec)
+    width = prec - start;
+ 
+  if (width == 0)
+    {
+      if (negate)
+	result = wide_int::minus_one (prec);
+      else
+	result = wide_int::zero (prec);
+      return result;
+    }
+
+  result.precision = prec;
+
+  while (i < start / HOST_BITS_PER_WIDE_INT)
+    result.val[i++] = negate ? (HOST_WIDE_INT)-1 : 0;
+
+  shift = start & (HOST_BITS_PER_WIDE_INT - 1);
+  if (shift)
+    {
+      block = (((HOST_WIDE_INT)1) << shift) - 1;
+      shift = (end) & (HOST_BITS_PER_WIDE_INT - 1);
+      if (shift)
+	{
+	  /* case 000111000 */
+	  block = (((HOST_WIDE_INT)1) << shift) - block - 1;
+	  result.val[i++] = negate ? ~block : block;
+	  result.len = i;
+	  return result;
+	}
+      else
+	/* ...111000 */
+	result.val[i++] = negate ? block : ~block;
+    }
+
+  while (i < end / HOST_BITS_PER_WIDE_INT)
+    /* 1111111 */
+    result.val[i++] = negate ? 0 : (HOST_WIDE_INT)-1;
+
+  shift = end & (HOST_BITS_PER_WIDE_INT - 1);
+  if (shift != 0)
+    {
+      /* 000011111 */
+      block = (((HOST_WIDE_INT)1) << shift) - 1;
+      result.val[i++] = negate ? ~block : block;
+    }
+
+  result.len = i;
+  return result;
+}
+
+
+/*
+ * logical operations.
+ */
+
+/* Return THIS & OP1.  */
+
+wide_int
+wide_int::and_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) == 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () == 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] & op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
+
+/* Return THIS & ~OP1.  */
+
+wide_int
+wide_int::and_not_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) != 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () == 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = ~op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] & ~op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+
+  return result;
+}
+
+/* Return THIS | OP1.  */
+
+wide_int
+wide_int::or_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) != 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () != 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] | op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+
+  return result;
+}
+
+/* Return THIS | ~OP1.  */
+
+wide_int
+wide_int::or_not_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) == 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () != 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = ~op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] | ~op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
+
+/* Return the exclusive ior (xor) of THIS and OP1.  */
+
+wide_int
+wide_int::xor_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  while (l0 > l1)
+    {
+      result.val[l0] = val[l0] ^ SIGN_MASK (op1[op1len - 1]);
+      l0--;
+    }
+
+  while (l1 > l0)
+    {
+      result.val[l1] = sign_mask () ^ op1[l1];
+      l1--;
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] ^ op1[l0];
+      l0--;
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/*
+ * math
+ */
+
+/* Absolute value of THIS.  */
+
+wide_int
+wide_int::abs () const
+{
+  if (sign_mask ())
+    return neg ();
+
+  wide_int result = copy ();
+  return result;
+}
+
+/* Add THIS and OP1.  No overflow is detected.  */
+
+wide_int
+wide_int::add_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1;
+  unsigned HOST_WIDE_INT x = 0;
+  unsigned HOST_WIDE_INT carry = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  unsigned int i, small_prec;
+
+  result.precision = precision;
+  result.len = MAX (len, op1len);
+  mask0 = sign_mask ();
+  mask1 = SIGN_MASK (op1[op1len - 1]);
+  /* Add all of the explicitly defined elements.  */
+  for (i = 0; i < result.len; i++)
+    {
+      o0 = i < len ? (unsigned HOST_WIDE_INT)val[i] : mask0;
+      o1 = i < op1len ? (unsigned HOST_WIDE_INT)op1[i] : mask1;
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      carry = x < o0;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 + mask1 + carry;
+      result.len++;
+    }
+
+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec != 0 && BLOCKS_NEEDED (precision) == result.len)
+    {
+      /* Modes with weird precisions.  */
+      i = result.len - 1;
+      result.val[i] = sext_hwi (result.val[i], small_prec);
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/* Add THIS and OP1 with overflow checking.  If the result overflows
+   within the precision, set OVERFLOW.  OVERFLOW is assumed to be
+   sticky so it should be initialized.  SGN controls if signed or
+   unsigned overflow is checked.  */
+
+wide_int
+wide_int::add (const wide_int &op1, SignOp sgn, bool *overflow) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0 = 0;
+  unsigned HOST_WIDE_INT o1 = 0;
+  unsigned HOST_WIDE_INT x = 0;
+  unsigned HOST_WIDE_INT carry = 0;
+  unsigned HOST_WIDE_INT old_carry = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  int i, small_prec;
+
+  gcc_checking_assert (precision == op1.precision);
+
+  result.precision = precision;
+  result.len = MAX (len, op1.len);
+  mask0 = sign_mask ();
+  mask1 = op1.sign_mask ();
+
+  /* Add all of the explicitly defined elements.  */
+  for (i = 0; i < MIN (len, op1.len); i++)
+    {
+      o0 = val[i];
+      o1 = op1.val[i];
+      x = o0 + o1 + carry;
+      result.val [i] = x;
+      old_carry = carry;
+      carry = x < o0;
+    }
+
+  /* Uncompress the rest.  */
+  for (i = op1.len; i < len; i++)
+    {
+      o0 = val[i];
+      o1 = mask1;
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      old_carry = carry;
+      carry = x < o0;
+    }
+  for (i = len; i < op1.len; i++)
+    {
+      o0 = mask0;
+      o1 = op1.val[i];
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      old_carry = carry;
+      carry = x < o0;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 + mask1 + carry;
+      result.len++;
+      /* If we were short, we could not have overflowed.  */
+      goto ex;
+    }
+
+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec == 0)
+    {
+      if (sgn == wide_int::SIGNED)
+	{
+	  if (((~(o0 ^ o1)) & (x ^ o0)) >> (HOST_BITS_PER_WIDE_INT - 1))
+	    *overflow = true;
+	}
+      else if (old_carry)
+	{
+	  if ((~o0) <= o1)
+	    *overflow = true;
+	}
+      else
+	{
+	  if ((~o0) < o1)
+	    *overflow = true;
+	}
+    }
+  else
+    {
+      if (sgn == wide_int::UNSIGNED)
+	{
+	  /* The caveat for unsigned is to get rid of the bits above
+	     the precision before doing the addition.  To check the
+	     overflow, clear these bits and then redo the last
+	     addition.  If there are any non zero bits above the prec,
+	     we overflowed. */
+	  o0 = zext_hwi (o0, small_prec);
+	  o1 = zext_hwi (o1, small_prec);
+	  x = o0 + o1 + old_carry;
+	  if (x >> small_prec)
+	    *overflow = true;
+	}
+      else 
+	{
+	  /* Overflow in this case is easy since we can see bits beyond
+	     the precision.  If the value computed is not the sign
+	     extended value, then we have overflow.  */
+	  unsigned HOST_WIDE_INT y = sext_hwi (x, small_prec);
+	  if (x != y)
+	    *overflow = true;
+	}
+    }
+
+ ex:
+  result.canonize ();
+  return result;
+}
+
+
+/* Count the leading zeros of THIS, counting within the full
+   precision of the value.  */
+
+wide_int
+wide_int::clz () const
+{
+  int i;
+  int start;
+  int count;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+  if (zero_p ())
+    {
+      enum machine_mode mode = mode_for_size (precision, MODE_INT, 0);
+      if (mode == BLKmode)
+	mode = mode_for_size (precision, MODE_PARTIAL_INT, 0); 
+
+      /* Even if the value at zero is undefined, we have to come up
+	 with some replacement.  Seems good enough.  */
+      if (mode == BLKmode)
+	count = precision;
+      else if (!CLZ_DEFINED_VALUE_AT_ZERO (mode, count))
+	count = precision;
+    }
+  else
+    {
+      /* The high order block is special if it is the last block and the
+	 precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+	 have to clear out any ones above the precision before doing clz
+	 on this block.  */
+      if (BLOCKS_NEEDED (precision) == len && small_prec)
+	{
+	  v = zext_hwi (val[len - 1], small_prec);
+	  count = clz_hwi (v) - (HOST_BITS_PER_WIDE_INT - small_prec);
+	  start = len - 2;
+	  if (v != 0)
+	    return from_shwi (count, precision);
+	}
+      else
+	{
+	  count = HOST_BITS_PER_WIDE_INT * (BLOCKS_NEEDED (precision) - len);
+	  start = len - 1;
+	}
+      
+      for (i = start; i >= 0; i--)
+	{
+	  v = elt (i);
+	  count += clz_hwi (v);
+	  if (v != 0)
+	    break;
+	}
+    }
+  return from_shwi (count, precision);
+}
+
+/* Count the number of redundant leading sign bits of THIS.  Return
+   the result as a wide_int.  */
+
+wide_int
+wide_int::clrsb () const
+{
+  if (neg_p ())
+    return operator ~ ().clz () - 1;
+
+  return clz () - 1;
+}
+
+/* Count the trailing zeros of THIS.  */
+
+wide_int
+wide_int::ctz () const
+{
+  int i;
+  unsigned int count = 0;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  int end;
+  bool more_to_do;
+
+  if (zero_p ())
+    {
+      enum machine_mode mode = mode_for_size (precision, MODE_INT, 0);
+      if (mode == BLKmode)
+	mode = mode_for_size (precision, MODE_PARTIAL_INT, 0); 
+
+      /* Even if the value at zero is undefined, we have to come up
+	 with some replacement.  Seems good enough.  */
+      if (mode == BLKmode)
+	count = precision;
+      else if (!CTZ_DEFINED_VALUE_AT_ZERO (mode, count))
+	count = precision;
+    }
+  else
+    {
+      /* The high order block is special if it is the last block and the
+	 precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+	 have to clear out any ones above the precision before counting
+	 the zeros in this block.  */
+      if (BLOCKS_NEEDED (precision) == len && small_prec)
+	{
+	  end = len - 1;
+	  more_to_do = true;
+	}
+      else
+	{
+	  end = len;
+	  more_to_do = false;
+	}
+      
+      for (i = 0; i < end; i++)
+	{
+	  v = val[i];
+	  count += ctz_hwi (v);
+	  if (v != 0)
+	    return wide_int::from_shwi (count, precision);
+	}
+      
+      if (more_to_do)
+	{
+	  v = zext_hwi (val[len - 1], small_prec);
+	  count += ctz_hwi (v);
+	  /* Guard against counting zeros that sit above the
+	     interesting part: the count must never exceed the
+	     precision.  */
+	  if (count > precision)
+	    count = precision;
+	}
+      else
+	/* Skip over the blocks that are not represented.  They must be
+	   all zeros at this point.  */
+	count = precision;
+    }
+
+  return wide_int::from_shwi (count, precision);
+}
+
+/* ffs of THIS.  */
+
+wide_int
+wide_int::ffs () const
+{
+  HOST_WIDE_INT count = ctz ().to_shwi ();
+  if (count == precision)
+    count = 0;
+  else
+    count += 1;
+  return wide_int::from_shwi (count, precision);
+}
+
+/* Subroutines of the multiplication and division operations.  Unpack
+   the first IN_LEN HOST_WIDE_INTs in INPUT into 2 * IN_LEN
+   HOST_HALF_WIDE_INTs of RESULT.  The rest of RESULT is filled by
+   uncompressing the top bit of INPUT[IN_LEN - 1].  */
+
+static void
+wi_unpack (unsigned HOST_HALF_WIDE_INT *result, 
+	   const unsigned HOST_WIDE_INT *input,
+	   int in_len, int out_len)
+{
+  int i;
+  int j = 0;
+  HOST_WIDE_INT mask;
+
+  for (i = 0; i < in_len; i++)
+    {
+      result[j++] = input[i];
+      result[j++] = input[i] >> HOST_BITS_PER_HALF_WIDE_INT;
+    }
+  mask = SIGN_MASK (input[in_len - 1]);
+  mask &= HALF_INT_MASK;
+
+  /* Smear the sign bit.  */
+  while (j < out_len)
+    result[j++] = mask;
+}
+
+/* The inverse of wi_unpack.  IN_LEN is the number of input
+   blocks.  The number of output blocks will be half this amount.  */
+
+static void
+wi_pack (unsigned HOST_WIDE_INT *result, 
+	 const unsigned HOST_HALF_WIDE_INT *input, 
+	 int in_len)
+{
+  int i = 0;
+  int j = 0;
+
+  while (i < in_len - 2)
+    {
+      result[j++] = (unsigned HOST_WIDE_INT)input[i] 
+	| ((unsigned HOST_WIDE_INT)input[i + 1] << HOST_BITS_PER_HALF_WIDE_INT);
+      i += 2;
+    }
+
+  /* Handle the case where in_len is odd.   For this we zero extend.  */
+  if (in_len & 1)
+    result[j++] = (unsigned HOST_WIDE_INT)input[i];
+  else
+    result[j++] = (unsigned HOST_WIDE_INT)input[i] 
+      | ((unsigned HOST_WIDE_INT)input[i + 1] << HOST_BITS_PER_HALF_WIDE_INT);
+}
+
+/* Return an integer that is the exact log2 of THIS.  */
+
+wide_int
+wide_int::exact_log2 () const
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  wide_int count;
+  wide_int result;
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT v;
+      if (small_prec)
+	v = zext_hwi (val[0], small_prec);
+      else
+	v = val[0];
+      result = wide_int::from_shwi (::exact_log2 (v), precision);
+    }
+  else
+    {
+      count = ctz ();
+      if (clz () + count + 1 == precision)
+	result = count;
+      else
+	result = wide_int::from_shwi (-1, precision);
+    }
+  return result;
+}
+
+/* Return an integer that is the floor log2 of THIS.  */
+
+wide_int
+wide_int::floor_log2 () const
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  wide_int result;
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT v;
+      if (small_prec)
+	v = zext_hwi (val[0], small_prec);
+      else
+	v = val[0];
+      result = wide_int::from_shwi (::floor_log2 (v), precision);
+    }
+  else
+    result = wide_int::from_shwi (precision, precision) - 1 - clz ();
+  return result;
+}
+
+
+/* Multiply OP1 by OP2.  If HIGH is set, only the upper half of the
+   result is returned.  If FULL is set, the entire result is returned
+   in a mode that is twice the width of the inputs.  However, that
+   mode needs to exist if the value is to be usable.  Clients that use
+   FULL need to check for this.
+
+   If neither HIGH nor FULL is set, throw away the upper half after
+   the check is made to see if it overflows.  Unfortunately there is
+   no better way to check for overflow than to do this.  OVERFLOW is
+   assumed to be sticky so it should be initialized.  SGN controls the
+   signedness and is used to check overflow or if HIGH or FULL is set.  */
+
+wide_int
+wide_int::mul_internal (bool high, bool full, 
+			const wide_int *op1, 
+			const HOST_WIDE_INT *op2, unsigned int op2len,
+			wide_int::SignOp sgn,  bool *overflow, 
+			bool needs_overflow)
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1, k, t;
+  unsigned int i;
+  unsigned int j;
+  unsigned int prec = op1->get_precision ();
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  /* The sizes here are scaled to support a 2x largest mode by 2x
+     largest mode yielding a 4x largest mode result.  This is what is
+     needed by VRP.  */
+
+  unsigned HOST_HALF_WIDE_INT 
+    u[2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT 
+    v[2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The extra factor of 2 in the size of R is because we internally
+     do a full multiply.  */
+  unsigned HOST_HALF_WIDE_INT 
+    r[2 * 2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+
+  /* If the top level routine did not really pass in an overflow, then
+     just make sure that we never attempt to set it.  */
+  if (overflow == 0)
+    needs_overflow = false;
+  result.precision = op1->precision;
+
+  if (high || full || needs_overflow)
+    {
+      /* If we need to check for overflow, we can only do half wide
+	 multiplies quickly because we need to look at the top bits to
+	 check for the overflow.  */
+      if (prec <= HOST_BITS_PER_HALF_WIDE_INT)
+	{
+	  HOST_WIDE_INT t, r;
+	  result.len = 1;
+	  o0 = op1->elt (0);
+	  o1 = op2[0];
+	  r = o0 * o1;
+	  /* Signed shift down will leave 0 or -1 if there was no
+	     overflow for signed or 0 for unsigned.  */
+	  t = SIGN_MASK (r);
+	  if (needs_overflow)
+	    {
+	      if (sgn == wide_int::SIGNED)
+		{
+		  if (t != (HOST_WIDE_INT)-1 && t != 0)
+		    *overflow = true;
+		}
+	      else
+		{
+		  if (t != 0)
+		    *overflow = true;
+		}
+	    }
+	  if (full)
+	    {
+	      result.val[0] = sext_hwi (r, prec * 2);
+	      result.precision = op1->precision * 2;
+	    }
+	  else if (high)
+	    result.val[0] = r >> prec;
+	  else
+	    result.val[0] = sext_hwi (r, prec);
+	  return result;
+	}
+    }
+
+  wi_unpack (u, (const unsigned HOST_WIDE_INT*)op1->val, op1->len,
+	     half_blocks_needed);
+  wi_unpack (v, (const unsigned HOST_WIDE_INT*)op2, op2len,
+	     half_blocks_needed);
+
+  /* The 2 is for a full mult.  */
+  memset (r, 0, half_blocks_needed * 2 
+	  * HOST_BITS_PER_HALF_WIDE_INT / BITS_PER_UNIT);
+
+  for (j = 0; j < half_blocks_needed; j++)
+    {
+      k = 0;
+      for (i = 0; i < half_blocks_needed; i++)
+	{
+	  t = ((unsigned HOST_WIDE_INT)u[i] * (unsigned HOST_WIDE_INT)v[j]
+	       + r[i + j] + k);
+	  r[i + j] = t & HALF_INT_MASK;
+	  k = t >> HOST_BITS_PER_HALF_WIDE_INT;
+	}
+      r[j + half_blocks_needed] = k;
+    }
+
+  /* We did unsigned math above.  For signed we must adjust the
+     product (assuming we need to see that).  */
+  if (sgn == wide_int::SIGNED && (full || high || needs_overflow))
+    {
+      unsigned HOST_WIDE_INT b;
+      if ((*op1).neg_p ())
+	{
+	  b = 0;
+	  for (i = 0; i < half_blocks_needed; i++)
+	    {
+	      t = (unsigned HOST_WIDE_INT)r[i + half_blocks_needed]
+		- (unsigned HOST_WIDE_INT)v[i] - b;
+	      r[i + half_blocks_needed] = t & HALF_INT_MASK;
+	      b = t >> (HOST_BITS_PER_WIDE_INT - 1);
+	    }
+	}
+      if (op2[op2len-1] < 0)
+	{
+	  b = 0;
+	  for (i = 0; i < half_blocks_needed; i++)
+	    {
+	      t = (unsigned HOST_WIDE_INT)r[i + half_blocks_needed]
+		- (unsigned HOST_WIDE_INT)u[i] - b;
+	      r[i + half_blocks_needed] = t & HALF_INT_MASK;
+	      b = t >> (HOST_BITS_PER_WIDE_INT - 1);
+	    }
+	}
+    }
+
+  if (needs_overflow)
+    {
+      HOST_WIDE_INT top;
+
+      /* For unsigned, overflow is true if any of the top bits are set.
+	 For signed, overflow is true if any of the top bits are not equal
+	 to the sign bit.  */
+      if (sgn == wide_int::UNSIGNED)
+	top = 0;
+      else
+	{
+	  top = r[(half_blocks_needed) - 1];
+	  top = SIGN_MASK (top << (HOST_BITS_PER_WIDE_INT / 2));
+	  top &= mask;
+	}
+      
+      for (i = half_blocks_needed; i < half_blocks_needed * 2; i++)
+	if (((HOST_WIDE_INT)(r[i] & mask)) != top)
+	  *overflow = true; 
+    }
+
+  if (full)
+    {
+      /* compute [2prec] <- [prec] * [prec] */
+      wi_pack ((unsigned HOST_WIDE_INT*)result.val, r, 2 * half_blocks_needed);
+      result.len = blocks_needed * 2;
+      result.precision = op1->precision * 2;
+    }
+  else if (high)
+    {
+      /* compute [prec] <- ([prec] * [prec]) >> [prec] */
+      wi_pack ((unsigned HOST_WIDE_INT*)&result.val [blocks_needed >> 1],
+	       r, half_blocks_needed);
+      result.len = blocks_needed;
+    }
+  else
+    {
+      /* compute [prec] <- ([prec] * [prec]) && ((1 << [prec]) - 1) */
+      wi_pack ((unsigned HOST_WIDE_INT*)result.val, r, half_blocks_needed);
+      result.len = blocks_needed;
+    }
+      
+  result.canonize ();
+  return result;
+}
+
+/* Compute the parity of THIS.  */
+
+wide_int
+wide_int::parity () const
+{
+  wide_int count = popcount ();
+  return count & 1;
+}
+
+/* Compute the population count of THIS.  */
+
+wide_int
+wide_int::popcount () const
+{
+  int i;
+  int start;
+  int count;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+  /* The high order block is special if it is the last block and the
+     precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+     have to clear out any ones above the precision before counting
+     the bits in this block.  */
+  if (BLOCKS_NEEDED (precision) == len && small_prec)
+    {
+      v = zext_hwi (val[len - 1], small_prec);
+      count = popcount_hwi (v);
+      start = len - 2;
+    }
+  else
+    {
+      if (sign_mask ())
+	count = HOST_BITS_PER_WIDE_INT * (BLOCKS_NEEDED (precision) - len);
+      else
+	count = 0;
+      start = len - 1;
+    }
+
+  for (i = start; i >= 0; i--)
+    {
+      v = val[i];
+      count += popcount_hwi (v);
+    }
+  return wide_int::from_shwi (count, precision);
+}
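+
+/* A hedged example of the compressed-representation handling above:
+   with 128-bit precision and 64-bit HWIs, -1 is stored as len == 1,
+   val[0] == -1.  Since sign_mask () is nonzero, popcount counts the
+   64 explicit one bits plus HOST_BITS_PER_WIDE_INT ones for the
+   implicit block, giving 128.  */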
+
+/* Subtract OP1 from THIS.  No overflow is detected.  */
+
+wide_int
+wide_int::sub_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1;
+  unsigned HOST_WIDE_INT x = 0;
+  /* Subtract element by element, computing o0 - o1 - borrow for each
+     position and propagating the borrow into the next element.  */
+  unsigned HOST_WIDE_INT borrow = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  unsigned int i, small_prec;
+
+  result.precision = precision;
+  result.len = MAX (len, op1len);
+  mask0 = sign_mask ();
+  mask1 = SIGN_MASK (op1[op1len - 1]);
+
+  /* Subtract all of the explicitly defined elements.  */
+  for (i = 0; i < result.len; i++)
+    {
+      o0 = i < len ? (unsigned HOST_WIDE_INT)val[i] : mask0;
+      o1 = i < op1len ? (unsigned HOST_WIDE_INT)op1[i] : mask1;
+      x = o0 - o1 - borrow;
+      result.val[i] = x;
+      borrow = borrow == 0 ? o0 < o1 : o0 <= o1;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 - mask1 - borrow;
+      result.len++;
+    }
+
+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec != 0 && BLOCKS_NEEDED (precision) == result.len)
+    {
+      /* Modes with weird precisions.  */
+      i = result.len - 1;
+      result.val[i] = sext_hwi (result.val[i], small_prec);
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/* Subtract OP1 from THIS with overflow checking.  If the result
+   overflows within the precision, set OVERFLOW.  OVERFLOW is assumed
+   to be sticky so it should be initialized.  SGN controls whether
+   signed or unsigned overflow is checked.  */
+
+wide_int
+wide_int::sub (const wide_int &op1, SignOp sgn, bool *overflow) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0 = 0;
+  unsigned HOST_WIDE_INT o1 = 0;
+  unsigned HOST_WIDE_INT x = 0;
+  unsigned HOST_WIDE_INT borrow = 0;
+  unsigned HOST_WIDE_INT old_borrow = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  int i, small_prec;
+
+  gcc_checking_assert (precision == op1.precision);
+
+  result.precision = precision;
+  result.len = MAX (len, op1.len);
+  mask0 = sign_mask ();
+  mask1 = op1.sign_mask ();
+
+  /* Subtract the elements that are explicitly defined in both
+     operands.  */
+  for (i = 0; i < MIN (len, op1.len); i++)
+    {
+      o0 = val[i];
+      o1 = op1.val[i];
+      x = o0 - o1 - borrow;
+      result.val[i] = x;
+      old_borrow = borrow;
+      borrow = borrow == 0 ? o0 < o1 : o0 <= o1;
+    }
+
+  /* Uncompress the rest of THIS.  */
+  for (i = len; i < op1.len; i++)
+    {
+      o0 = mask0;
+      o1 = op1.val[i];
+      x = o0 - o1 - borrow;
+      result.val[i] = x;
+      old_borrow = borrow;
+      borrow = borrow == 0 ? o0 < o1 : o0 <= o1;
+    }
+
+  /* Uncompress the rest of OP1.  */
+  for (i = op1.len; i < len; i++)
+    {
+      o0 = val[i];
+      o1 = mask1;
+      x = o0 - o1 - borrow;
+      result.val[i] = x;
+      old_borrow = borrow;
+      borrow = borrow == 0 ? o0 < o1 : o0 <= o1;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 - mask1 - borrow;
+      result.len++;
+      /* If we were short, we could not have overflowed.  */
+    }
+  else
+    {
+      small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+      if (small_prec == 0)
+	{
+	  if (sgn == wide_int::SIGNED)
+	    {
+	      if (((x ^ o0) & (o0 ^ o1)) >> (HOST_BITS_PER_WIDE_INT - 1))
+		*overflow = true;
+	    }
+	  else if (old_borrow)
+	    {
+	      if (o0 <= o1)
+		*overflow = true;
+	    }
+	  else
+	    {
+	      if (o0 < o1)
+		*overflow = true;
+	    }
+	}
+      else
+	{
+	  if (sgn == wide_int::UNSIGNED)
+	    {
+	      /* The caveat for unsigned is to get rid of the bits above
+		 the precision before doing the subtraction.  To check the
+		 overflow, clear these bits and then redo the last
+		 subtraction.  If there are any nonzero bits above the prec,
+		 we overflowed. */
+	      o0 = zext_hwi (o0, small_prec);
+	      o1 = zext_hwi (o1, small_prec);
+	      x = o0 - o1 - old_borrow;
+	      if (x >> small_prec)
+		*overflow = true;
+	    }
+	  else 
+	    {
+	      /* Overflow in this case is easy since we can see bits beyond
+		 the precision.  If the value computed is not the sign
+		 extended value, then we have overflow.  */
+	      unsigned HOST_WIDE_INT y = sext_hwi (x, small_prec);
+	      if (x != y)
+		*overflow = true;
+	    }
+	}
+    }
+
+  result.canonize ();
+  return result;
+}
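+
+/* Sketch of the signed overflow rule used above (exposition only):
+   for x = o0 - o1, signed overflow occurred iff the operands had
+   different signs and the result's sign differs from the minuend's,
+   i.e. iff the top bit of (x ^ o0) & (o0 ^ o1) is set.  For example,
+   in an 8-bit analogue, (-128) - 1 wraps to 127: the signs of o0 and
+   o1 differ and x's sign differs from o0's, so overflow is
+   flagged.  */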
+
+/*
+ * Division and Mod
+ */
+
+/* Compute B_QUOTIENT and B_REMAINDER from B_DIVIDEND/B_DIVISOR.  The
+   algorithm is a small modification of the algorithm in Hacker's
+   Delight by Warren, which itself is a small modification of Knuth's
+   algorithm.  M is the number of significant elements of B_DIVIDEND;
+   however, there needs to be at least one extra element of
+   B_DIVIDEND allocated.  N is the number of elements of B_DIVISOR.  */
+
+void
+wide_int::divmod_internal_2 (unsigned HOST_HALF_WIDE_INT *b_quotient, 
+			     unsigned HOST_HALF_WIDE_INT *b_remainder,
+			     unsigned HOST_HALF_WIDE_INT *b_dividend, 
+			     unsigned HOST_HALF_WIDE_INT *b_divisor, 
+			     int m, int n)
+{
+  /* The "digits" are HOST_HALF_WIDE_INTs, each half the size of a
+     HOST_WIDE_INT, held in the lower bits of each word.  This
+     algorithm should work properly on both 32 and 64 bit
+     machines.  */
+  unsigned HOST_WIDE_INT b
+    = (unsigned HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT;
+  unsigned HOST_WIDE_INT qhat;   /* Estimate of quotient digit.  */
+  unsigned HOST_WIDE_INT rhat;   /* A remainder.  */
+  unsigned HOST_WIDE_INT p;      /* Product of two digits.  */
+  HOST_WIDE_INT s, i, j, t, k;
+
+  /* Single digit divisor.  */
+  if (n == 1)
+    {
+      k = 0;
+      for (j = m - 1; j >= 0; j--)
+	{
+	  b_quotient[j] = (k * b + b_dividend[j])/b_divisor[0];
+	  k = ((k * b + b_dividend[j])
+	       - ((unsigned HOST_WIDE_INT)b_quotient[j]
+		  * (unsigned HOST_WIDE_INT)b_divisor[0]));
+	}
+      b_remainder[0] = k;
+      return;
+    }
+
+  s = clz_hwi (b_divisor[n-1]) - HOST_BITS_PER_HALF_WIDE_INT; /* CHECK clz */
+
+  /* Normalize B_DIVIDEND and B_DIVISOR.  Unlike the published
+     algorithm, we can overwrite b_dividend and b_divisor, so we do
+     that.  */
+  for (i = n - 1; i > 0; i--)
+    b_divisor[i] = (b_divisor[i] << s)
+      | (b_divisor[i-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s));
+  b_divisor[0] = b_divisor[0] << s;
+
+  b_dividend[m] = b_dividend[m-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s);
+  for (i = m - 1; i > 0; i--)
+    b_dividend[i] = (b_dividend[i] << s)
+      | (b_dividend[i-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s));
+  b_dividend[0] = b_dividend[0] << s;
+
+  /* Main loop.  */
+  for (j = m - n; j >= 0; j--)
+    {
+      qhat = (b_dividend[j+n] * b + b_dividend[j+n-1]) / b_divisor[n-1];
+      rhat = (b_dividend[j+n] * b + b_dividend[j+n-1]) - qhat * b_divisor[n-1];
+    again:
+      if (qhat >= b || qhat * b_divisor[n-2] > b * rhat + b_dividend[j+n-2])
+	{
+	  qhat -= 1;
+	  rhat += b_divisor[n-1];
+	  if (rhat < b)
+	    goto again;
+	}
+
+      /* Multiply and subtract.  */
+      k = 0;
+      for (i = 0; i < n; i++)
+	{
+	  p = qhat * b_divisor[i];
+	  t = b_dividend[i+j] - k - (p & HALF_INT_MASK);
+	  b_dividend[i + j] = t;
+	  k = ((p >> HOST_BITS_PER_HALF_WIDE_INT)
+	       - (t >> HOST_BITS_PER_HALF_WIDE_INT));
+	}
+      t = b_dividend[j+n] - k;
+      b_dividend[j+n] = t;
+
+      b_quotient[j] = qhat;
+      if (t < 0)
+	{
+	  b_quotient[j] -= 1;
+	  k = 0;
+	  for (i = 0; i < n; i++)
+	    {
+	      t = (HOST_WIDE_INT)b_dividend[i+j] + b_divisor[i] + k;
+	      b_dividend[i+j] = t;
+	      k = t >> HOST_BITS_PER_HALF_WIDE_INT;
+	    }
+	  b_dividend[j+n] += k;
+	}
+    }
+  for (i = 0; i < n; i++)
+    b_remainder[i] = (b_dividend[i] >> s) 
+      | (b_dividend[i+1] << (HOST_BITS_PER_HALF_WIDE_INT - s));
+}
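+
+/* Worked example for the routine above (exposition only, with
+   HOST_BITS_PER_HALF_WIDE_INT == 16): dividing b_dividend =
+   { 0x0000, 0x0001 } (the value 0x10000, least significant digit
+   first, m == 2) by the single digit b_divisor = { 0x0002 } takes
+   the n == 1 fast path and yields b_quotient = { 0x8000, 0x0000 }
+   and b_remainder = { 0x0000 }, i.e. 0x10000 / 2 == 0x8000 with
+   remainder 0.  */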
+
+
+/* Divide DIVIDEND by DIVISOR, truncating toward zero.  The result is
+   the same size as the operands.  SGN is either wide_int::SIGNED or
+   wide_int::UNSIGNED.  */
+
+wide_int
+wide_int::divmod_internal (bool compute_quotient, 
+			   const wide_int *dividend, 
+			   const HOST_WIDE_INT *divisor,
+			   unsigned int divisorlen,
+			   wide_int::SignOp sgn, wide_int *remainder,
+			   bool compute_remainder, 
+			   bool *oflow)
+{
+  wide_int quotient, u0, u1;
+  unsigned int prec = dividend->get_precision();
+  int blocks_needed = 2 * BLOCKS_NEEDED (prec);
+  /* The '2' in the next 4 vars are because they are built on half
+     sized wide ints.  */
+  unsigned HOST_HALF_WIDE_INT 
+    b_quotient[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_remainder[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend[(MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  int m, n;
+  bool dividend_neg = false;
+  bool divisor_neg = false;
+  bool overflow = false;
+
+  if (divisor[0] == 0 && divisorlen == 1)
+    overflow = true;
+
+  /* The smallest signed number / -1 causes overflow.  */
+  if (sgn == wide_int::SIGNED)
+    {
+      wide_int t = wide_int::set_bit_in_zero (prec - 1, prec);
+      if (*dividend == t && divisor[0] == -1 && divisorlen == 1)
+	overflow = true;
+    }
+
+  quotient.precision = prec;
+  remainder->precision = prec;
+
+  /* If overflow is set, just get out.  There will only be grief by
+     continuing.  */
+  if (overflow)
+    {
+      if (compute_remainder)
+	{
+	  remainder->len = 1;
+	  remainder->val[0] = 0;
+	}
+      if (oflow != 0)
+	*oflow = true;
+      return wide_int::zero (prec);
+    }
+
+  /* Do it on the host if you can.  */
+  if (prec <= HOST_BITS_PER_WIDE_INT)
+    {
+      quotient.len = 1;
+      remainder->len = 1;
+      if (sgn == wide_int::SIGNED)
+	{
+	  quotient.val[0] 
+	    = sext_hwi (dividend->val[0] / divisor[0], prec);
+	  remainder->val[0] 
+	    = sext_hwi (dividend->val[0] % divisor[0], prec);
+	}
+      else
+	{
+	  unsigned HOST_WIDE_INT o0 = dividend->val[0];
+	  unsigned HOST_WIDE_INT o1 = divisor[0];
+
+	  if (prec < HOST_BITS_PER_WIDE_INT)
+	    {
+	      o0 = zext_hwi (o0, prec);
+	      o1 = zext_hwi (o1, prec);
+	    }
+	  quotient.val[0] = sext_hwi (o0 / o1, prec);
+	  remainder->val[0] = sext_hwi (o0 % o1, prec);
+	}
+      return quotient;
+    }
+
+  /* Make the divisor and dividend positive and remember what we
+     did.  */
+  if (sgn == wide_int::SIGNED)
+    {
+      if (dividend->sign_mask ())
+	{
+	  u0 = dividend->neg ();
+	  dividend = &u0;
+	  dividend_neg = true;
+	}
+      if (divisor[divisorlen - 1] < 0)
+	{
+	  u1 = wide_int::zero (dividend->precision).sub_large (divisor, divisorlen);
+	  divisor = u1.val;
+	  divisorlen = u1.len;
+	  divisor_neg = true;
+	}
+    }
+
+  wi_unpack (b_dividend, (const unsigned HOST_WIDE_INT*)dividend->val,
+	     dividend->len, blocks_needed);
+  wi_unpack (b_divisor, (const unsigned HOST_WIDE_INT*)divisor, 
+	     divisorlen, blocks_needed);
+
+  if (dividend->sign_mask ())
+    m = blocks_needed;
+  else
+    m = 2 * dividend->get_len ();
+
+  if (SIGN_MASK (divisor[divisorlen - 1]))
+    n = blocks_needed;
+  else
+    n = 2 * divisorlen;
+
+  /* It is known that the top input block of the divisor is nonzero,
+     but when this block is split into two half blocks, it may be that
+     the top half block is zero.  Skip over this half block.  */
+  if (b_divisor[n - 1] == 0)
+    n--;
+
+  divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
+
+  if (compute_quotient)
+    {
+      wi_pack ((unsigned HOST_WIDE_INT*)quotient.val, b_quotient, m);
+      quotient.len = m / 2;
+      quotient.canonize ();
+      /* The quotient is neg if exactly one of the divisor or dividend is
+	 neg.  */
+      if (dividend_neg != divisor_neg)
+	quotient = quotient.neg ();
+    }
+  else
+    quotient = wide_int::zero (dividend->precision);
+
+  if (compute_remainder)
+    {
+      wi_pack ((unsigned HOST_WIDE_INT*)remainder->val, b_remainder, n);
+      if (n & 1)
+	n++;
+      remainder->len = n / 2;
+      (*remainder).canonize ();
+      /* The remainder is always the same sign as the dividend.  */
+      if (dividend_neg)
+	*remainder = (*remainder).neg ();
+    }
+  else
+    *remainder = wide_int::zero (dividend->precision);
+  return quotient;
+}
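+
+/* Hedged usage sketch: because the routine above divides magnitudes
+   and fixes up the signs afterwards, division truncates toward zero;
+   e.g. a signed -7 / 2 yields quotient -3 with remainder -1, the
+   remainder taking the sign of the dividend as noted above.  */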
+
+
+/*
+ * Shifting, rotating and extraction.
+ */
+
+/* Extract WIDTH bits from THIS starting at OFFSET.  The result is
+   assumed to fit in a HOST_WIDE_INT.  This function is safe in that
+   it can properly access elements that may not be explicitly
+   represented.  */
+
+HOST_WIDE_INT
+wide_int::extract_to_hwi (int offset, int width) const
+{
+  int start_elt, end_elt, shift;
+  HOST_WIDE_INT x;
+
+  /* Get rid of the easy cases first.   */
+  if (offset >= len * HOST_BITS_PER_WIDE_INT)
+    return sign_mask ();
+  if (offset + width <= 0)
+    return 0;
+
+  shift = offset & (HOST_BITS_PER_WIDE_INT - 1);
+  if (offset < 0)
+    {
+      start_elt = -1;
+      end_elt = 0;
+      x = 0;
+    }
+  else
+    {
+      start_elt = offset / HOST_BITS_PER_WIDE_INT;
+      end_elt = (offset + width - 1) / HOST_BITS_PER_WIDE_INT;
+      x = start_elt >= len 
+	? sign_mask ()
+	: (unsigned HOST_WIDE_INT)val[start_elt] >> shift;
+    }
+
+  if (start_elt != end_elt)
+    {
+      HOST_WIDE_INT y = end_elt == len
+	? sign_mask () : val[end_elt];
+
+      x |= y << (HOST_BITS_PER_WIDE_INT - shift);
+    }
+
+  if (width != HOST_BITS_PER_WIDE_INT)
+    x &= ((HOST_WIDE_INT)1 << width) - 1;
+
+  return x;
+}
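+
+/* Illustrative example: for a value with len == 1 and val[0] == 0xff,
+   extract_to_hwi (4, 8) shifts element 0 right by 4 and masks to 8
+   bits, returning 0x0f; a negative OFFSET instead shifts the value
+   up, so extract_to_hwi (-4, 8) returns 0xf0 with the low 4 bits
+   zero-filled.  */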
+
+
+/* Left shift THIS by CNT bits.  Since this is used internally, it
+   has the ability to specify the result precision RES_PREC
+   independently of the input precision.  This is useful when
+   inserting a small value into a larger one.  */
+
+wide_int
+wide_int::lshift_large (unsigned int cnt, unsigned int res_prec) const
+{
+  wide_int result;
+  unsigned int i;
+
+  result.precision = res_prec;
+
+  if (cnt >= res_prec)
+    {
+      result.val[0] = 0;
+      result.len = 1;
+      return result;
+    }
+
+  for (i = 0; i < res_prec; i += HOST_BITS_PER_WIDE_INT)
+    result.val[i / HOST_BITS_PER_WIDE_INT]
+      = extract_to_hwi (i - cnt, HOST_BITS_PER_WIDE_INT);
+
+  result.len = BLOCKS_NEEDED (res_prec);
+  result.canonize ();
+
+  return result;
+}
+
+/* Unsigned (logical) right shift THIS by CNT bits.  */
+
+wide_int
+wide_int::rshiftu_large (unsigned int cnt) const
+{
+  wide_int result;
+  int stop_block, offset, i;
+
+  result.precision = precision;
+
+  if (cnt >= precision)
+    {
+      result.val[0] = 0;
+      result.len = 1;
+      return result;
+    }
+
+  stop_block = BLOCKS_NEEDED (precision - cnt);
+  for (i = 0; i < stop_block; i++)
+    result.val[i]
+      = extract_to_hwi ((i * HOST_BITS_PER_WIDE_INT) + cnt,
+			HOST_BITS_PER_WIDE_INT);
+
+  result.len = stop_block;
+
+  offset = (precision - cnt) & (HOST_BITS_PER_WIDE_INT - 1);
+  if (offset)
+    result.val[stop_block - 1] = zext_hwi (result.val[stop_block - 1], offset);
+  else
+    /* The top block had a 1 at the top position so it will decompress
+       wrong unless a zero block is added.  This only works because we
+       know the shift was greater than 0.  */
+    if (result.val[stop_block - 1] < 0)
+      result.val[result.len++] = 0;
+  return result;
+}
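+
+/* Example of the guard above (exposition only): shifting a 128-bit
+   value with bit 127 set right by 64 leaves a single block whose top
+   bit is 1; without the appended zero block the compressed form
+   would be read back sign extended, i.e. as a negative value.  */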
+
+/* Signed (arithmetic) right shift THIS by CNT bits.  */
+
+wide_int
+wide_int::rshifts_large (unsigned int cnt) const
+{
+  wide_int result;
+  int stop_block, i;
+
+  result.precision = precision;
+
+  if (cnt >= precision)
+    {
+      HOST_WIDE_INT m = sign_mask ();
+      result.val[0] = m;
+      result.len = 1;
+      return result;
+    }
+
+  stop_block = BLOCKS_NEEDED (precision - cnt);
+  for (i = 0; i < stop_block; i++)
+    result.val[i]
+      = extract_to_hwi ((i * HOST_BITS_PER_WIDE_INT) + cnt,
+			HOST_BITS_PER_WIDE_INT);
+
+  result.len = stop_block;
+
+  /* No need to sign extend the last block, since extract_to_hwi
+     already did that.  */
+  return result;
+}
+
+/*
+ * Private utilities.
+ */
+/* Decompress THIS for at least TARGET bits into a result with
+   precision PREC.  */
+
+wide_int
+wide_int::decompress (unsigned int target, unsigned int prec) const
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (target);
+  HOST_WIDE_INT mask;
+  int len, i;
+
+  result.precision = prec;
+  result.len = blocks_needed;
+
+  for (i = 0; i < this->len; i++)
+    result.val[i] = val[i];
+
+  len = this->len;
+
+  if (target > result.precision)
+    return result;
+
+  /* The extension that we are doing here is not sign extension, it is
+     decompression.  */
+  mask = sign_mask ();
+  while (len < blocks_needed)
+    result.val[len++] = mask;
+
+  return result;
+}
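+
+/* Illustrative example: a 128-bit -1 is stored compressed as
+   len == 1, val[0] == -1; decompressing it for a 128-bit target
+   materializes the implicit block, yielding len == 2 with both
+   elements equal to the sign mask.  */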
+
diff --git a/gcc/wide-int.h b/gcc/wide-int.h
new file mode 100644
index 0000000..7fec321
--- /dev/null
+++ b/gcc/wide-int.h
@@ -0,0 +1,2464 @@ 
+/* Operations with very long integers.
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef WIDE_INT_H
+#define WIDE_INT_H
+
+/* A wide integer is currently represented as a vector of
+   HOST_WIDE_INTs.  The vector contains enough elements to hold a
+   value of MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT, which
+   is derived for each host/target combination.  The values are stored
+   in the vector with the least significant HOST_BITS_PER_WIDE_INT
+   bits of the value stored in element 0.
+
+   A wide_int contains three fields: the vector (VAL), the precision
+   (PRECISION) and the length (LEN).  The length is the number of HWIs
+   needed to represent the value.
+
+   Since most integers used in a compiler are small values, it is
+   generally profitable to use a representation of the value that is
+   shorter than the mode's precision.  LEN is used to indicate the
+   number of elements of the vector that are in use.  When LEN *
+   HOST_BITS_PER_WIDE_INT < the precision, the value has been
+   compressed.  The elements of the vector at indices greater than
+   LEN - 1 are all equal to the sign bit of element LEN - 1.
+
+   The representation does not contain any inherent information about
+   the signedness of the represented value, so it can be used to
+   represent both signed and unsigned numbers.  For operations where
+   the results depend on signedness (division, comparisons), the
+   signedness must be specified separately.  For operations where the
+   signedness matters, one of the operands to the operation specifies
+   either wide_int::SIGNED or wide_int::UNSIGNED.
+
+   The numbers are stored as sign extended numbers as a means of
+   compression.  Leading HOST_WIDE_INTs that contain strings of either
+   -1 or 0 are removed as long as they can be reconstructed from the
+   top bit that is being represented.
+
+   All constructors for wide_int take either a precision, an enum
+   machine_mode or a tree_type.  */
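+
+/* For example (illustrative only, assuming 64-bit HWIs): the 128-bit
+   value 1 is stored as LEN == 1, VAL[0] == 1, and the 128-bit value
+   -1 is also stored as LEN == 1, VAL[0] == -1.  In both cases the
+   missing upper element is reconstructed by smearing the sign bit of
+   VAL[LEN - 1].  */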
+
+
+#ifndef GENERATOR_FILE
+#include "tree.h"
+#include "system.h"
+#include "hwint.h"
+#include "options.h"
+#include "tm.h"
+#include "insn-modes.h"
+#include "machmode.h"
+#include "double-int.h"
+#include <gmp.h>
+#include "dumpfile.h"
+#include "real.h"
+
+#define WIDE_INT_MAX_ELTS \
+  ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+
+class wide_int {
+  /* Internal representation.  */
+  
+  /* VAL is set to a size that is capable of computing a full
+     multiplication on the largest mode that is represented on the
+     target.  The full multiplication is used by tree-vrp, which
+     currently does a 2x largest mode by 2x largest mode multiply
+     yielding a 4x largest mode result.  If operations are added that
+     require larger buffers, then VAL needs to be changed.  */
+  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  unsigned short len;
+  unsigned int precision;
+
+ public:
+  enum ShiftOp {
+    NONE,
+    /* There are two uses for the wide-int shifting functions.  The
+       first use is as an emulation of the target hardware.  The
+       second use is as service routines for other optimizations.  The
+       first case needs to be identified by passing TRUNC as the value
+       of ShiftOp so that the shift amount is properly handled
+       according to the SHIFT_COUNT_TRUNCATED flag.  For the second
+       case, the shift amount is always truncated by the bytesize of
+       the mode of THIS.  */
+    TRUNC
+  };
+
+  enum SignOp {
+    /* Many of the math functions produce different results depending
+       on if they are SIGNED or UNSIGNED.  In general, there are two
+       different functions, whose names are prefixed with an 'S' or
+       a 'U'.  However, for some math functions there is also a
+       routine that does not have the prefix and takes a SignOp
+       parameter of SIGNED or UNSIGNED.  */
+    SIGNED,
+    UNSIGNED
+  };
+
+  /* Conversions.  */
+
+  static wide_int from_shwi (HOST_WIDE_INT op0, 
+			     unsigned int precision, bool *overflow = 0);
+  static wide_int from_uhwi (unsigned HOST_WIDE_INT op0, 
+			     unsigned int precision, bool *overflow = 0);
+
+  inline static wide_int from_hwi (HOST_WIDE_INT op0, const_tree type, 
+				   bool *overflow = 0);
+  inline static wide_int from_shwi (HOST_WIDE_INT op0, enum machine_mode mode, 
+				    bool *overflow = 0);
+  inline static wide_int from_uhwi (unsigned HOST_WIDE_INT op0, 
+				    enum machine_mode mode, 
+				    bool *overflow = 0);
+  static wide_int from_array (const HOST_WIDE_INT* op0,
+			      unsigned int len, 
+			      unsigned int precision, bool need_canon = true); 
+  inline static wide_int from_array (const HOST_WIDE_INT* op0,
+				     unsigned int len,
+				     enum machine_mode mode);
+  inline static wide_int from_array (const HOST_WIDE_INT* op0,
+				     unsigned int len,
+				     const_tree type);
+
+  static wide_int from_double_int (double_int, 
+				   unsigned int precision);
+  inline static wide_int from_double_int (double_int, enum machine_mode);
+  static wide_int from_tree (const_tree);
+  static wide_int from_rtx (const_rtx, enum machine_mode);
+  static wide_int from_buffer (const unsigned char*, int);
+
+  inline HOST_WIDE_INT to_shwi (unsigned int prec = 0) const;
+  inline unsigned HOST_WIDE_INT to_uhwi (unsigned int prec = 0) const;
+
+  /* Largest and smallest values that are represented in modes or precisions.  */
+
+  static wide_int max_value (unsigned int prec, 
+			     unsigned int precision, SignOp sgn);
+  inline static wide_int max_value (unsigned int precision, SignOp sgn);
+  inline static wide_int max_value (const_tree type);
+  inline static wide_int max_value (enum machine_mode mode, SignOp sgn);
+  
+  static wide_int min_value (unsigned int prec, 
+			     unsigned int precision, SignOp sgn);
+  inline static wide_int min_value (unsigned int precision, SignOp sgn);
+  inline static wide_int min_value (const_tree type);
+  inline static wide_int min_value (enum machine_mode mode, SignOp sgn);
+  
+  /* Small constants */
+
+  inline static wide_int minus_one (unsigned int prec);
+  inline static wide_int zero (unsigned int prec);
+  inline static wide_int one (unsigned int prec);
+  inline static wide_int two (unsigned int prec);
+
+  /* Accessors.  */
+
+  inline unsigned short get_len () const;
+  inline unsigned int get_precision () const;
+  inline HOST_WIDE_INT elt (unsigned int i) const;
+
+  /* Printing functions.  */
+
+  void print_dec (char *buf, SignOp sgn) const;
+  void print_dec (FILE *file, SignOp sgn) const;
+  void print_decs (char *buf) const;
+  void print_decs (FILE *file) const;
+  void print_decu (char *buf) const;
+  void print_decu (FILE *file) const;
+  void print_hex (char *buf) const;
+  void print_hex (FILE *file) const;
+
+  /* Comparative functions.  */
+
+  inline bool minus_one_p () const;
+  inline bool zero_p () const;
+  inline bool one_p () const;
+  inline bool neg_p () const;
+
+  template <typename T>
+    inline bool operator == (T c) const;
+  
+  template <typename T>
+    inline bool operator != (T c) const;
+  
+  template <typename T>
+    inline bool gt_p (T c, SignOp sgn) const;
+  template <typename T>
+    inline bool gts_p (T c) const;
+  template <typename T>
+    inline bool gtu_p (T c) const;
+  
+  template <typename T>
+    inline bool lt_p (T c, SignOp sgn) const;
+  template <typename T>
+    inline bool lts_p (T c) const;
+  template <typename T>
+    inline bool ltu_p (T c) const;
+  
+  template <typename T>
+    inline int cmp (T c, SignOp sgn) const;
+  template <typename T>
+    inline int cmps (T c) const;
+  template <typename T>
+    inline int cmpu (T c) const;
+  
+  bool only_sign_bit_p (unsigned int prec) const;
+  inline bool only_sign_bit_p () const;
+  inline bool fits_uhwi_p () const;
+  inline bool fits_shwi_p () const;
+  bool fits_to_tree_p (const_tree type) const;
+  bool fits_u_p (unsigned int prec) const;
+  bool fits_s_p (unsigned int prec) const;
+
+  /* Min and max */
+
+  template <typename T>
+    inline wide_int min (T c, SignOp sgn) const;
+    inline wide_int min (const wide_int &op1, SignOp sgn) const;
+  template <typename T>
+    inline wide_int max (T c, SignOp sgn) const;
+    inline wide_int max (const wide_int &op1, SignOp sgn) const;
+  template <typename T>
+    inline wide_int smin (T c) const;
+    inline wide_int smin (const wide_int &op1) const;
+  template <typename T>
+    inline wide_int smax (T c) const;
+    inline wide_int smax (const wide_int &op1) const;
+  template <typename T>
+    inline wide_int umin (T c) const;
+    inline wide_int umin (const wide_int &op1) const;
+  template <typename T>
+    inline wide_int umax (T c) const;
+    inline wide_int umax (const wide_int &op1) const;
+
+  /* Extension, these do not change the precision.  */
+
+  inline wide_int ext (unsigned int offset, SignOp sgn) const;
+  wide_int sext (unsigned int offset) const;
+  wide_int zext (unsigned int offset) const;
+
+  /* Make a fast copy.  */
+
+  wide_int copy () const;
+
+  /* These change the underlying precision.  */
+  
+  wide_int force_to_size (unsigned int precision, SignOp sgn) const;
+  inline wide_int force_to_size (enum machine_mode mode, SignOp sgn) const;
+  inline wide_int force_to_size (const_tree type) const;
+  inline wide_int force_to_size (const_tree type, SignOp sgn) const;
+
+  inline wide_int sforce_to_size (enum machine_mode mode) const;
+  inline wide_int sforce_to_size (const_tree type) const;
+  inline wide_int zforce_to_size (enum machine_mode mode) const;
+  inline wide_int zforce_to_size (const_tree type) const;
+
+  /* Masking, and Insertion  */
+
+  wide_int set_bit (unsigned int bitpos) const;
+  static wide_int set_bit_in_zero (unsigned int bitpos, unsigned int prec);
+  inline static wide_int set_bit_in_zero (unsigned int, 
+					  enum machine_mode mode);
+  inline static wide_int set_bit_in_zero (unsigned int, const_tree type);
+  wide_int insert (const wide_int &op0, unsigned int offset,
+		   unsigned int width) const;
+
+  wide_int bswap () const;
+
+  static wide_int mask (unsigned int start, bool negate, 
+			unsigned int prec);
+  inline static wide_int mask (unsigned int start, bool negate, 
+			       enum machine_mode mode);
+  inline static wide_int mask (unsigned int start, bool negate,
+			       const_tree type);
+
+  static wide_int shifted_mask (unsigned int start, unsigned int width,
+				bool negate, unsigned int prec);
+  inline static wide_int shifted_mask (unsigned int start, unsigned int width, 
+				       bool negate, enum machine_mode mode);
+  inline static wide_int shifted_mask (unsigned int start, unsigned int width, 
+				       bool negate, const_tree type);
+
+  inline HOST_WIDE_INT sign_mask () const;
+
+  /* Logicals */
+
+  template <typename T>
+    inline wide_int operator & (T c) const;
+  template <typename T>
+    inline wide_int and_not (T c) const;
+  inline wide_int operator ~ () const;
+  template <typename T>
+    inline wide_int operator | (T c) const;
+  template <typename T>
+    inline wide_int or_not (T c) const;
+  template <typename T>
+    inline wide_int operator ^ (T c) const;
+  
+  /* Arithmetic operation functions, alpha sorted.  */
+  
+  wide_int abs () const;
+  template <typename T>
+    inline wide_int operator + (T c) const;
+  wide_int add (const wide_int &x, SignOp sgn, bool *overflow) const;
+  
+  wide_int clz () const;
+  wide_int clrsb () const;
+  wide_int ctz () const;
+  wide_int exact_log2 () const;
+  wide_int floor_log2 () const;
+  wide_int ffs () const;
+  
+  template <typename T>
+    inline wide_int operator * (T c) const;
+  template <typename T>
+    inline wide_int mul (T c, SignOp sgn, bool *overflow) const;
+  template <typename T>
+    inline wide_int smul (T c, bool *overflow) const;
+  template <typename T>
+    inline wide_int umul (T c, bool *overflow) const;
+  template <typename T>
+    inline wide_int mul_full (T c, SignOp sgn) const;
+  template <typename T>
+    inline wide_int smul_full (T c) const;
+  template <typename T>
+    inline wide_int umul_full (T c) const;
+  template <typename T>
+    inline wide_int mul_high (T c, SignOp sgn) const;
+  
+  inline wide_int neg () const;
+  inline wide_int neg (bool *overflow) const;
+  
+  wide_int parity () const;
+  wide_int popcount () const;
+  
+  template <typename T>
+    inline wide_int operator - (T c) const;
+  wide_int sub (const wide_int &x, SignOp sgn, bool *overflow) const;
+  
+  /* Divison and mod.  These are the ones that are actually used, but
+     there are a lot of them.  */
+  
+  template <typename T>
+    inline wide_int div_trunc (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int sdiv_trunc (T c) const;
+  template <typename T>
+    inline wide_int udiv_trunc (T c) const;
+  
+  template <typename T>
+    inline wide_int div_floor (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int udiv_floor (T c) const;
+  template <typename T>
+    inline wide_int sdiv_floor (T c) const;
+  template <typename T>
+    inline wide_int div_ceil (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int div_round (T c, SignOp sgn, bool *overflow = 0) const;
+  
+  template <typename T>
+    inline wide_int divmod_trunc (T c, wide_int *mod, SignOp sgn) const;
+  template <typename T>
+    inline wide_int sdivmod_trunc (T c, wide_int *mod) const;
+  template <typename T>
+    inline wide_int udivmod_trunc (T c, wide_int *mod) const;
+  
+  template <typename T>
+    inline wide_int divmod_floor (T c, wide_int *mod, SignOp sgn) const;
+  template <typename T>
+    inline wide_int sdivmod_floor (T c, wide_int *mod) const;
+  
+  template <typename T>
+    inline wide_int mod_trunc (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int smod_trunc (T c) const;
+  template <typename T>
+    inline wide_int umod_trunc (T c) const;
+  
+  template <typename T>
+    inline wide_int mod_floor (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int umod_floor (T c) const;
+  template <typename T>
+    inline wide_int mod_ceil (T c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+    inline wide_int mod_round (T c, SignOp sgn, bool *overflow = 0) const;
+  
+  /* Shifting rotating and extracting.  */
+  HOST_WIDE_INT extract_to_hwi (int offset, int width) const;
+  
+  template <typename T>
+    inline wide_int lshift (T c, unsigned int bitsize = 0,
+			    unsigned int precision = 0,
+			    ShiftOp z = NONE) const;
+
+  template <typename T>
+    inline wide_int lrotate (T c) const;
+  inline wide_int lrotate (unsigned HOST_WIDE_INT y) const;
+
+  template <typename T>
+    inline wide_int rshift (T c, SignOp sgn, 
+			    unsigned int bitsize = 0,
+			    ShiftOp z = NONE) const;
+  template <typename T>
+    inline wide_int rshiftu (T c, unsigned int bitsize = 0,
+			     ShiftOp z = NONE) const;
+  template <typename T>
+    inline wide_int rshifts (T c, unsigned int bitsize = 0,
+			     ShiftOp z = NONE) const;
+
+  template <typename T>
+    inline wide_int rrotate (T c) const;
+    inline wide_int rrotate (unsigned HOST_WIDE_INT y) const;
+
+  static const int DUMP_MAX = (2 * (MAX_BITSIZE_MODE_ANY_INT / 4
+			       + MAX_BITSIZE_MODE_ANY_INT 
+				    / HOST_BITS_PER_WIDE_INT + 32));
+  char *dump (char* buf) const;
+ private:
+
+  /* 
+   * Internal versions that do the work if the values do not fit in a
+   * HWI.
+   */ 
+
+  /* Comparisons */
+  bool eq_p_large (const HOST_WIDE_INT *, unsigned int) const;
+  static bool lts_p_large (const HOST_WIDE_INT *, unsigned int, 
+			   const HOST_WIDE_INT *, unsigned int,
+			   unsigned int);
+  int cmps_large (const HOST_WIDE_INT *, unsigned int) const;
+  static bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, 
+			   const HOST_WIDE_INT *, unsigned int);
+  int cmpu_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  /* Logicals.  */
+  wide_int and_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int and_not_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int or_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int or_not_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int xor_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  /* Arithmetic */
+  wide_int add_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int sub_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  wide_int lshift_large (unsigned int cnt, unsigned int res_prec) const;
+  wide_int rshiftu_large (unsigned int cnt) const;
+  wide_int rshifts_large (unsigned int cnt) const;
+
+  static wide_int
+    mul_internal (bool high, bool full, 
+		  const wide_int *op1, const HOST_WIDE_INT *op2, unsigned int op2len,
+		  wide_int::SignOp sgn,  bool *overflow, bool needs_overflow);
+  static void
+    divmod_internal_2 (unsigned HOST_HALF_WIDE_INT *b_quotient, 
+		       unsigned HOST_HALF_WIDE_INT *b_remainder,
+		       unsigned HOST_HALF_WIDE_INT *b_dividend, 
+		       unsigned HOST_HALF_WIDE_INT *b_divisor, 
+		       int m, int n);
+  static wide_int
+    divmod_internal (bool compute_quotient, 
+		     const wide_int *dividend, const HOST_WIDE_INT *, unsigned int,
+		     wide_int::SignOp sgn, wide_int *remainder,
+		     bool compute_remainder, 
+		     bool *overflow);
+
+
+  /* Private utility routines.  */
+  wide_int decompress (unsigned int target, unsigned int precision) const;
+  void canonize ();
+  static inline int trunc_shift (const HOST_WIDE_INT *cnt, unsigned int len, 
+				 unsigned int bitsize, ShiftOp z);
+
+  /* The following template and its overloads are used for the second
+     operand of binary functions.  They access the value and package
+     it so that the operation can be done.  These have been
+     implemented so that pointer copying is done from the rep of the
+     second operand rather than actual data copying.  This is safe
+     even for garbage collected objects since the value is immediately
+     thrown away.
+
+     The first two templates match all integers.  */
+
+  template <typename T>
+  static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s, int *l, T x)
+  { 
+    s[0] = x;
+    if (~(T)0 < (T)0
+	|| sizeof (T) < sizeof (HOST_WIDE_INT))
+      {
+	*l = 1;
+      }
+    else
+      {
+	s[1] = 0;
+	*l = 2;	  
+      }
+    return s;
+  }
+  
+  static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
+					int *l, const wide_int &y)
+  {
+    *l = y.len;
+    return y.val;
+  }
+
+  /* This overload may just return the pointer to the value and set
+     the length.  */
+  static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s, int *l, const_tree tcst)
+  {
+    /* This is rather complex now; in the future, when the rep for
+       tree cst is changed, it will be just a pointer and length
+       copy.  */
+    tree type = TREE_TYPE (tcst);
+    unsigned int prec = TYPE_PRECISION (type);
+
+    if (prec == HOST_BITS_PER_DOUBLE_INT 
+	&& TYPE_UNSIGNED (type)
+	&& TREE_INT_CST_HIGH (tcst) < 0)
+      {
+	/* Copy the result because it does not fit in two HWIs.  */
+	s[0] = TREE_INT_CST_LOW (tcst);
+	s[1] = TREE_INT_CST_HIGH (tcst);
+	s[2] = 0;
+	*l = 3;
+	return s;
+      }
+    else
+      {
+	if (prec > HOST_BITS_PER_WIDE_INT)
+	  *l = 2;
+	else
+	  {
+	    if (prec == HOST_BITS_PER_WIDE_INT
+		&& TYPE_UNSIGNED (type)
+		&& (HOST_WIDE_INT)TREE_INT_CST_LOW (tcst) < 0)
+	      *l = 2;
+	    else
+	      *l = 1;
+	  }
+	return (const HOST_WIDE_INT*)&TREE_INT_CST_LOW (tcst);
+      }
+  }
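+
+  /* Hedged usage sketch: in an expression such as a + t, where A is
+     a wide_int and T a tree constant, the overload above lets the
+     operation read the constant's HWIs in place; only the rare case
+     that needs an explicit extra zero block copies into S.  */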
+
+  /* There should logically be an overload for rtl here, but it cannot
+     be here because of circular include issues.  It is in rtl.h.  */
+  static inline const HOST_WIDE_INT* to_shwi2 
+    (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, int *l, rtx rcst);
+
+};
+
+/* Insert a 1 bit into 0 at BITPOS producing a number with precision
+   taken from MODE.  */
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, enum machine_mode mode)
+{
+  return wide_int::set_bit_in_zero (bitpos, GET_MODE_PRECISION (mode));
+}
+
+/* Insert a 1 bit into 0 at BITPOS producing a number with precision
+   taken from TYPE.  */
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, const_tree type)
+{
+  return wide_int::set_bit_in_zero (bitpos, TYPE_PRECISION (type));
+}
+
+/* Return a result mask where the lower WIDTH bits are ones and the
+   bits above that up to the precision are zeros.  The result is
+   inverted if NEGATE is true.  The result is made with precision
+   taken from MODE.  */
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, enum machine_mode mode)
+{
+  return wide_int::mask (width, negate, GET_MODE_PRECISION (mode));
+}
+
+/* Return a result mask where the lower WIDTH bits are ones and the
+   bits above that up to the precision are zeros.  The result is
+   inverted if NEGATE is true.  The result is made with precision
+   taken from TYPE.  */
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, const_tree type)
+{
+  return wide_int::mask (width, negate, TYPE_PRECISION (type));
+}
+
+/* Return a result mask of WIDTH ones starting at START and the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  The result is made with precision taken from
+   MODE.  */
+
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, enum machine_mode mode)
+{
+  return wide_int::shifted_mask (start, width, negate,
+				 GET_MODE_PRECISION (mode));
+}
+
+/* Return a result mask of WIDTH ones starting at START and the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  The result is made with precision taken from
+   TYPE.  */
+
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, const_tree type)
+{
+  return wide_int::shifted_mask (start, width, negate,
+				 TYPE_PRECISION (type));
+}
+
+/* Produce 0 or -1 that is the smear of the sign bit.  */
+
+HOST_WIDE_INT
+wide_int::sign_mask () const
+{
+  int i = len - 1;
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    return ((val[0] << (HOST_BITS_PER_WIDE_INT - precision))
+	    >> (HOST_BITS_PER_WIDE_INT - 1));
+
+  /* VRP appears to be badly broken and this is a very ugly fix.  */
+  if (i >= 0)
+    return val[i] >> (HOST_BITS_PER_WIDE_INT - 1);
+
+  gcc_unreachable ();
+#if 0
+  return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
+#endif
+}
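+
+/* A small example of the smear above (exposition only): for an 8-bit
+   wide_int, val[0] is shifted up by HOST_BITS_PER_WIDE_INT - 8 and
+   arithmetically back down by HOST_BITS_PER_WIDE_INT - 1, so a value
+   with bit 7 set yields -1 and any other value yields 0.  */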
+
+/* Conversions */
+
+/* Convert OP0 into a wide_int with parameters taken from TYPE.  If
+   the value does not fit, set OVERFLOW.  */
+
+wide_int
+wide_int::from_hwi (HOST_WIDE_INT op0, const_tree type, 
+		    bool *overflow)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  if (TYPE_UNSIGNED (type))
+    return wide_int::from_uhwi (op0, prec, overflow);
+  else
+    return wide_int::from_shwi (op0, prec, overflow);
+}
+
+/* Convert signed OP0 into a wide_int with parameters taken from
+   MODE. If the value does not fit, set OVERFLOW. */
+
+wide_int
+wide_int::from_shwi (HOST_WIDE_INT op0, enum machine_mode mode, 
+		     bool *overflow)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_shwi (op0, prec, overflow);
+}
+
+/* Convert unsigned OP0 into a wide_int with parameters taken from
+   MODE. If the value does not fit, set OVERFLOW. */
+
+wide_int
+wide_int::from_uhwi (unsigned HOST_WIDE_INT op0, enum machine_mode mode, 
+		     bool *overflow)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_uhwi (op0, prec, overflow);
+}
+
+/* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
+   PREC, the information is lost. */
+
+HOST_WIDE_INT 
+wide_int::to_shwi (unsigned int prec) const
+{
+  HOST_WIDE_INT result;
+
+  if (prec == 0)
+    prec = precision;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result = sext_hwi (val[0], prec);
+  else
+    result = val[0];
+
+  return result;
+}
+
+/* Return THIS as an unsigned HOST_WIDE_INT.  If THIS does not fit in
+   PREC, the information is lost. */
+
+unsigned HOST_WIDE_INT 
+wide_int::to_uhwi (unsigned int prec) const
+{
+  HOST_WIDE_INT result;
+
+  if (prec == 0)
+    prec = precision;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result = zext_hwi (val[0], prec);
+  else
+    result = val[0];
+
+  return result;
+}
+
+
+/* Convert OP0 of LEN into a wide_int with the precision taken from
+   MODE.  */
+
+wide_int
+wide_int::from_array (const HOST_WIDE_INT* op0, unsigned int len, 
+		      enum machine_mode mode)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_array (op0, len, prec);
+}
+
+/* Convert OP0 of LEN into a wide_int with the precision taken from
+   TYPE.  */
+
+wide_int
+wide_int::from_array (const HOST_WIDE_INT* op0, unsigned int len, 
+		      const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  return wide_int::from_array (op0, len, prec);
+}
+
+/* Convert double_int OP0 into a wide_int with parameters taken from
+   MODE.  */
+
+wide_int
+wide_int::from_double_int (double_int op0, enum machine_mode mode)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_double_int (op0, prec);
+}
+
+/* Min and Max value helpers.  */
+
+/* Produce the largest SGNed number that is represented in PRECISION.
+   The result is represented in PRECISION.  SGN must be SIGNED or
+   UNSIGNED.  */
+
+wide_int
+wide_int::max_value (unsigned int precision, SignOp sgn)
+{
+  return max_value (precision, precision, sgn);
+}
+  
+/* Produce the largest number that is represented in MODE.  The
+   precision is taken from MODE.  SGN must be SIGNED or
+   UNSIGNED.  */
+
+wide_int
+wide_int::max_value (enum machine_mode mode, SignOp sgn)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+  return max_value (prec, prec, sgn);
+}
+
+/* Produce the largest number that is represented in TYPE. The
+   precision and sign are taken from TYPE.  */
+
+wide_int
+wide_int::max_value (const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+  return max_value (prec, prec, TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED);
+}
+
+/* Produce the smallest SGNed number that is represented in PRECISION.
+   The result is represented in PRECISION.  SGN must be SIGNED or
+   UNSIGNED.  */
+
+wide_int
+wide_int::min_value (unsigned int precision, SignOp sgn)
+{
+  return min_value (precision, precision, sgn);
+}
+  
+/* Produce the smallest number that is represented in MODE.  The
+   precision is taken from MODE.  SGN must be SIGNED or UNSIGNED.  */
+
+wide_int
+wide_int::min_value (enum machine_mode mode, SignOp sgn)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+  return min_value (prec, prec, sgn);
+}
+
+/* Produce the smallest number that is represented in TYPE. The
+   precision and sign are taken from TYPE.  */
+
+wide_int
+wide_int::min_value (const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+  return min_value (prec, prec, TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED);
+}
+
+
+/* Small constants.  */
+
+/* Return a wide int of -1 with precision PREC.  */
+
+wide_int
+wide_int::minus_one (unsigned int prec)
+{
+  return wide_int::from_shwi (-1, prec);
+}
+
+/* Return a wide int of 0 with precision PREC.  */
+
+wide_int
+wide_int::zero (unsigned int prec)
+{
+  return wide_int::from_shwi (0, prec);
+}
+
+/* Return a wide int of 1 with precision PREC.  */
+
+wide_int
+wide_int::one (unsigned int prec)
+{
+  return wide_int::from_shwi (1, prec);
+}
+
+/* Return a wide int of 2 with precision PREC.  */
+
+wide_int
+wide_int::two (unsigned int prec)
+{
+  return wide_int::from_shwi (2, prec);
+}
+
+/* Public accessors for the interior of a wide int.  */
+
+/* Get the number of host wide ints actually represented within the
+   wide int.  */
+
+unsigned short
+wide_int::get_len () const
+{
+  return len;
+}
+
+/* Get precision of the value represented within the wide int.  */
+
+unsigned int
+wide_int::get_precision () const
+{
+  return precision;
+}
+
+/* Get a particular element of the wide int.  */
+
+HOST_WIDE_INT
+wide_int::elt (unsigned int i) const
+{
+  return i >= len ? sign_mask () : val[i];
+}
+
+/* Return true if THIS is -1.  */
+
+bool
+wide_int::minus_one_p () const
+{
+  return len == 1 && val[0] == (HOST_WIDE_INT)-1;
+}
+
+/* Return true if THIS is 0.  */
+
+bool
+wide_int::zero_p () const
+{
+  return len == 1 && val[0] == 0;
+}
+
+/* Return true if THIS is 1.  */
+
+bool
+wide_int::one_p () const
+{
+  return len == 1 && val[0] == 1;
+}
+
+/* Return true if THIS is negative.  */
+
+bool
+wide_int::neg_p () const
+{
+  return sign_mask () != 0;
+}
+
+/*
+ * Comparisons, note that only equality is an operator.  The other
+ * comparisons cannot be operators since they are inherently signed or
+ * unsigned and C++ has no such operators.
+ */
+
+/* Return true if THIS == OP1.  */
+
+template <typename T>
+bool
+wide_int::operator == (T c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    {
+      unsigned HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << precision) - 1;
+      result = (val[0] & mask) == (s[0] & mask);
+    }
+  else if (precision == HOST_BITS_PER_WIDE_INT)
+      result = val[0] == s[0];
+  else
+    result = eq_p_large (s, cl);
+
+  return result;
+}
+
+/* Return true if THIS is not equal to OP1. */ 
+
+template <typename T>
+bool
+wide_int::operator != (T c) const
+{
+  return !(*this == c);
+}  
+
+/* Return true if THIS is less than OP1.  Signedness is indicated by
+   OP.  */
+
+template <typename T>
+bool
+wide_int::lt_p (T c, SignOp op) const
+{
+  if (op == SIGNED)
+    return lts_p (c);
+  else
+    return ltu_p (c);
+}  
+
+/* Return true if THIS < OP1 using signed comparisons.  */
+
+template <typename T>
+bool
+wide_int::lts_p (T c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    /* The values are already logically sign extended.  */
+    result = val[0] < s[0];
+  else
+    result = lts_p_large (val, len, s, cl, precision);
+  return result;
+}
+
+/* Return true if THIS < OP1 using unsigned comparisons.  */
+
+template <typename T>
+bool
+wide_int::ltu_p (T c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      result = x0 < x1;
+    }
+  else
+    result = ltu_p_large (val, len, s, cl);
+  return result;
+}
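+
+/* Hedged usage sketch showing why both orderings exist: in an 8-bit
+   precision, the bit pattern 0x80 is -128 when read signed but 128
+   when read unsigned, so for a == 0x80 and b == 1, a.lts_p (b) is
+   true while a.ltu_p (b) is false.  */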
+
+/* Return true if THIS is greater than OP1.  Signedness is indicated
+   by OP.  */
+
+template <typename T>
+bool
+wide_int::gt_p (T c, SignOp op) const
+{
+  if (op == SIGNED)
+    return gts_p (c);
+  else
+    return gtu_p (c);
+}  
+
+/* Return true if THIS > OP1 using signed comparisons.  */
+
+template <typename T>
+bool
+wide_int::gts_p (T c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* The values are already logically sign extended.  */
+      result = val[0] > s[0];
+    }
+  else
+    /* Reverse the parms and use lts.  */
+    result = lts_p_large (s, cl, val, len, precision);
+  return result;
+}
+
+/* Return true if THIS > OP1 using unsigned comparisons.  */
+
+template <typename T>
+bool
+wide_int::gtu_p (T c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      result = x0 > x1;
+    }
+  else 
+    /* Reverse the parms and use ltu.  */
+    result = ltu_p_large (s, cl, val, len);
+  return result;
+}
+
+
+/* Return -1, 0 or 1 depending on how THIS compares with OP1.
+   Signedness is indicated by OP.  */
+
+template <typename T>
+int
+wide_int::cmp (T c, SignOp op) const
+{
+  if (op == SIGNED)
+    return cmps (c);
+  else
+    return cmpu (c);
+}  
+
+
+/* Return -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1
+   using signed compares.  */
+
+template <typename T>
+int
+wide_int::cmps (T c) const
+{
+  int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* The values are already logically sign extended.  */
+      if (val[0] < s[0])
+	result = -1;
+      else if (val[0] > s[0])
+	result = 1;
+      else 
+	result = 0;
+    }
+  else
+    result = cmps_large (s, cl);
+  return result;
+}
+
+/* Return -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1
+   using unsigned compares.  */
+
+template <typename T>
+int
+wide_int::cmpu (T c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      if (x0 < x1)
+	result = -1;
+      else if (x0 == x1)
+	result = 0;
+      else
+	result = 1;
+    }
+  else
+    result = cmpu_large (s, cl);
+  return result;
+}
+
+
+/* Return the signed or unsigned min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::min (T c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (sgn == SIGNED)
+    return lts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+  else
+    return ltu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed or unsigned min of THIS and OP1. */
+
+wide_int
+wide_int::min (const wide_int &op1, SignOp sgn) const
+{
+  if (sgn == SIGNED)
+    return lts_p (op1) ? (*this) : op1;
+  else
+    return ltu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed or unsigned max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::max (T c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (sgn == SIGNED)
+    return gts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+  else
+    return gtu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed or unsigned max of THIS and OP1. */
+
+wide_int
+wide_int::max (const wide_int &op1, SignOp sgn) const
+{
+  if (sgn == SIGNED)
+    return gts_p (op1) ? (*this) : op1;
+  else
+    return gtu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::smin (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return lts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed min of THIS and OP1. */
+
+wide_int
+wide_int::smin (const wide_int &op1) const
+{
+  return lts_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::smax (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return gts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed max of THIS and OP1. */
+
+wide_int
+wide_int::smax (const wide_int &op1) const
+{
+  return gts_p (op1) ? (*this) : op1;
+}  
+
+/* Return the unsigned min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::umin (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return ltu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the unsigned min of THIS and OP1. */
+
+wide_int
+wide_int::umin (const wide_int &op1) const
+{
+  return ltu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the unsigned max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::umax (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return gtu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the unsigned max of THIS and OP1. */
+
+wide_int
+wide_int::umax (const wide_int &op1) const
+{
+  return gtu_p (op1) ? (*this) : op1;
+}  
+
+/* Return true if THIS has the sign bit set to 1 and all other bits are
+   zero.  */
+
+bool
+wide_int::only_sign_bit_p () const
+{
+  return only_sign_bit_p (precision);
+}
+
+/* Return true if THIS fits in a HOST_WIDE_INT with no loss of
+   precision.  */
+
+bool
+wide_int::fits_shwi_p () const
+{
+  return len == 1;
+}
+
+/* Return true if THIS fits in an unsigned HOST_WIDE_INT with no loss
+   of precision.  */
+
+bool
+wide_int::fits_uhwi_p () const
+{
+  return len == 1 
+    || (len == 2 && val[1] == 0);
+}
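+
+/* Example (illustrative): at 128-bit precision the value 2^64 - 1 is
+   stored as len == 2 with val[0] == -1 and val[1] == 0 (the zero
+   block keeps the value positive), so it satisfies fits_uhwi_p but
+   not fits_shwi_p.  */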
+
+/* Return THIS extended to PREC.  The signedness of the extension is
+   specified by Z.  */
+
+wide_int 
+wide_int::ext (unsigned int prec, SignOp z) const
+{
+  if (z == UNSIGNED)
+    return zext (prec);
+  else
+    return sext (prec);
+}
+
+/* Return THIS forced to the precision of MODE.  If this is an
+   extension, the sign is set by SGN.  */
+
+wide_int 
+wide_int::force_to_size (enum machine_mode mode, SignOp sgn) const
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return force_to_size (prec, sgn);
+}
+
+/* Return THIS forced to the precision of TYPE.  If this is an
+   extension, the signedness comes from TYPE.  */
+
+wide_int 
+wide_int::force_to_size (const_tree type) const
+{
+  unsigned int prec = TYPE_PRECISION (type);
+  SignOp sgn = TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED;
+
+  return force_to_size (prec, sgn);
+}
+
+/* Return THIS forced to the precision of TYPE.  If this is an
+   extension, the sign is set by SGN.  */
+
+wide_int 
+wide_int::force_to_size (const_tree type, SignOp sgn) const
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  return force_to_size (prec, sgn);
+}
+
+/* Return THIS forced to the precision of MODE.  If this is an
+   extension, it is signed.  */
+
+wide_int 
+wide_int::sforce_to_size (enum machine_mode mode) const
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return force_to_size (prec, SIGNED);
+}
+
+/* Return THIS forced to the precision of TYPE.  If this is an
+   extension, it is signed.  */
+
+wide_int 
+wide_int::sforce_to_size (const_tree type) const
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  return force_to_size (prec, SIGNED);
+}
+
+/* Return THIS forced to the precision of MODE.  If this is an
+   extension, it is unsigned.  */
+
+wide_int 
+wide_int::zforce_to_size (enum machine_mode mode) const
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return force_to_size (prec, UNSIGNED);
+}
+
+/* Return THIS forced to the precision of TYPE.  If this is an
+   extension, it is unsigned.  */
+
+wide_int 
+wide_int::zforce_to_size (const_tree type) const
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  return force_to_size (prec, UNSIGNED);
+}
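+
+/* For illustration (a sketch only, assuming an 8-bit input value and
+   a 16-bit target mode):
+
+     wide_int w = wide_int::from_shwi (-1, 8);   // 0xff
+     w.sforce_to_size (HImode);   // 0xffff: the extension is signed
+     w.zforce_to_size (HImode);   // 0x00ff: the extension is unsigned  */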
+
+/*
+ * Logicals.
+ */
+
+/* Return THIS & OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator & (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] & s[0];
+    }
+  else
+    result = and_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS & ~OP1.  */
+
+template <typename T>
+wide_int
+wide_int::and_not (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] & ~s[0];
+    }
+  else
+    result = and_not_large (s, cl);
+  return result;
+}
+
+
+/* Return the bitwise complement of THIS.  */
+
+wide_int
+wide_int::operator ~ () const
+{
+  wide_int result;
+  int l0 = len - 1;
+
+  result.len = len;
+  result.precision = precision;
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = ~val[l0];
+      l0--;
+    }
+  return result;
+}
+
+/* Return THIS | OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator | (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] | s[0];
+    }
+  else
+    result = or_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS | ~OP1.  */
+
+template <typename T>
+wide_int
+wide_int::or_not (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] | ~s[0];
+    }
+  else
+    result = or_not_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS ^ OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator ^ (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] ^ s[0];
+    }
+  else
+    result = xor_large (s, cl);
+  return result;
+}
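+
+/* The logicals combine the two representations pointwise; a common
+   idiom is clearing a mask with and_not.  A sketch, assuming 32-bit
+   operands built with from_uhwi:
+
+     wide_int x = wide_int::from_uhwi (0xf0f0, 32);
+     wide_int m = wide_int::from_uhwi (0x00f0, 32);
+     x.and_not (m);   // 0xf000: the bits of M are cleared from X  */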
+
+/*
+ * Integer arithmetic
+ */
+
+/* Return THIS + OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator + (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] + s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = add_large (s, cl);
+  return result;
+}
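+
+/* The arithmetic is finite: a result wraps at the precision of the
+   first operand.  For example, with 8-bit operands:
+
+     wide_int m = wide_int::from_shwi (127, 8);
+     m + 1;   // -128: the sum is sign extended from bit 7  */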
+
+/* Multiply THIS and OP1.  The result is the same precision as the
+   operands, so there is no reason for signed or unsigned
+   versions.  */
+
+template <typename T>
+wide_int
+wide_int::operator * (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+  bool overflow = false;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] * s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = mul_internal (false, false, this, s, cl, UNSIGNED, &overflow, false);
+  return result;
+}
+
+/* Multiply THIS and OP1.  The signedness is specified with SGN.
+   *OVERFLOW is set to true if the result overflows.  */
+
+template <typename T>
+wide_int 
+wide_int::mul (T c, SignOp sgn, bool *overflow) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return mul_internal (false, false, this, s, cl, sgn, overflow, true);
+}
+
+/* Signed multiply THIS and OP1.  The result is the same precision as
+   the operands.  OVERFLOW is set true if the result overflows.  */
+
+template <typename T>
+wide_int
+wide_int::smul (T c, bool *overflow) const
+{
+  return mul (c, SIGNED, overflow);
+}
+
+/* Unsigned multiply THIS and OP1.  The result is the same precision
+   as the operands.  OVERFLOW is set true if the result overflows.  */
+
+template <typename T>
+wide_int
+wide_int::umul (T c, bool *overflow) const
+{
+  return mul (c, UNSIGNED, overflow);
+}
+
+/* Multiply THIS and OP1.  The result is twice the precision of the
+   operands.  The signedness is specified with SGN.  */
+
+template <typename T>
+wide_int
+wide_int::mul_full (T c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return mul_internal (false, true, this, s, cl, sgn, 0, false);
+}
+
+/* Signed multiply THIS and OP1.  The result is twice the precision as
+   the operands.  */
+
+template <typename T>
+wide_int
+wide_int::smul_full (T c) const
+{
+  return mul_full (c, SIGNED);
+}
+
+/* Unsigned multiply THIS and OP1.  The result is twice the precision
+   as the operands.  */
+
+template <typename T>
+wide_int
+wide_int::umul_full (T c) const
+{
+  return mul_full (c, UNSIGNED);
+}
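+
+/* A sketch of the widening multiply, assuming 32-bit operands:
+
+     wide_int a = wide_int::from_shwi (0x10000, 32);   // 2^16
+     a.umul_full (a);   // 2^32, returned as a 64-bit wide_int  */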
+
+/* Multiply THIS and OP1 and return the high part of that result.
+   The result is the same precision as the operands.  The signedness
+   is specified with SGN.  */
+
+template <typename T>
+wide_int
+wide_int::mul_high (T c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  return mul_internal (true, false, this, s, cl, sgn, 0, false);
+}
+
+/* Negate THIS.  */
+
+wide_int
+wide_int::neg () const
+{
+  wide_int z = wide_int::from_shwi (0, precision);
+  return z - *this;
+}
+
+/* Negate THIS.  Set *OVERFLOW to true if the value cannot be negated.  */
+
+wide_int
+wide_int::neg (bool *overflow) const
+{
+  wide_int z = wide_int::from_shwi (0, precision);
+  if (only_sign_bit_p ())
+    *overflow = true;
+
+  return z - *this;
+}
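+
+/* Negation overflows only for the most negative value, which has no
+   positive counterpart at the same precision.  A sketch, assuming
+   8-bit precision:
+
+     bool ovf = false;
+     wide_int m = wide_int::from_shwi (-128, 8);
+     m.neg (&ovf);   // the result wraps back to -128 and OVF is set  */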
+
+/* Return THIS - OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator - (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] - s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = sub_large (s, cl);
+  return result;
+}
+
+
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The quotient is rounded
+   toward zero.  *OVERFLOW is set to true if the result overflows,
+   otherwise it is not set.  */
+template <typename T>
+wide_int
+wide_int::div_trunc (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+  
+  return divmod_internal (true, this, s, cl, sgn, 
+			  &remainder, false, overflow);
+}
+
+/* Signed divide with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdiv_trunc (T c) const
+{
+  return div_trunc (c, SIGNED);
+}
+
+/* Unsigned divide with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udiv_trunc (T c) const
+{
+  return div_trunc (c, UNSIGNED);
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The quotient is rounded
+   toward negative infinity (floor).  *OVERFLOW is set to true if the
+   result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_floor (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+  /* A nonzero remainder pulls the quotient down iff the operand
+     signs differ; test the operands rather than the quotient, which
+     may be zero.  */
+  if (sgn == SIGNED && !remainder.zero_p () && neg_p () != (s[cl - 1] < 0))
+    return quotient - 1;
+  return quotient;
+}
+
+/* Unsigned divide with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udiv_floor (T c) const
+{
+  bool overflow;
+
+  return div_floor (c, UNSIGNED, &overflow);
+}
+
+/* Signed divide with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdiv_floor (T c) const
+{
+  bool overflow;
+
+  return div_floor (c, SIGNED, &overflow);
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The quotient is rounded
+   toward positive infinity (ceiling).  *OVERFLOW is set to true if
+   the result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_ceil (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+
+  /* A nonzero remainder pushes the quotient up iff the operand signs
+     agree; for unsigned division that is always the case.  */
+  if (!remainder.zero_p ()
+      && (sgn == UNSIGNED || neg_p () == (s[cl - 1] < 0)))
+    return quotient + 1;
+  return quotient;
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The quotient is rounded
+   to the nearest integer.  *OVERFLOW is set to true if the result
+   overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_round (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+  if (!remainder.zero_p ())
+    {
+      wide_int divisor = wide_int::from_array (s, cl, precision);
+      if (sgn == SIGNED)
+	{
+	  wide_int p_remainder = remainder.neg_p () ? remainder.neg () : remainder;
+	  wide_int p_divisor = divisor.neg_p () ? divisor.neg () : divisor;
+	  p_divisor = p_divisor.rshiftu_large (1);
+
+	  /* Adjust only when the remainder is more than half the
+	     divisor in magnitude, moving toward the operand signs
+	     rather than the sign of the quotient, which may be zero.  */
+	  if (p_remainder.gts_p (p_divisor))
+	    {
+	      if (neg_p () != (s[cl - 1] < 0))
+		return quotient - 1;
+	      else
+		return quotient + 1;
+	    }
+	}
+      else
+	{
+	  wide_int p_divisor = divisor.rshiftu_large (1);
+	  if (remainder.gtu_p (p_divisor))
+	    return quotient + 1;
+	}
+    }
+  return quotient;
+}
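+
+/* The four quotient variants differ only in how a nonzero remainder
+   moves the result.  For example, dividing -7 by 3 with SIGNED:
+
+     div_trunc:  -2   (toward zero)
+     div_floor:  -3   (toward negative infinity)
+     div_ceil:   -2   (toward positive infinity)
+     div_round:  -2   (toward the nearest integer; -7/3 is -2.33)  */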
+
+/* Divide DIVISOR into THIS producing both the quotient and remainder.
+   The result is the same size as the operands.  The sign is specified
+   in SGN.  The quotient is rounded toward zero.  */
+
+template <typename T>
+wide_int
+wide_int::divmod_trunc (T c, wide_int *remainder, SignOp sgn) const
+{
+  bool overflow = false;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return divmod_internal (true, this, s, cl, sgn, 
+			  remainder, true, &overflow);
+}
+
+/* Signed divide/mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdivmod_trunc (T c, wide_int *mod) const
+{
+  return divmod_trunc (c, mod, SIGNED);
+}
+
+/* Unsigned divide/mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udivmod_trunc (T c, wide_int *mod) const
+{
+  return divmod_trunc (c, mod, UNSIGNED);
+}
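+
+/* A sketch of recovering both parts in one call, assuming 32-bit
+   values:
+
+     wide_int r;
+     wide_int n = wide_int::from_shwi (7, 32);
+     wide_int q = n.udivmod_trunc (3, &r);   // q == 2, r == 1  */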
+
+/* Divide DIVISOR into THIS.  The remainder is also produced in
+   REMAINDER.  The result is the same size as the operands.  The sign
+   is specified in SGN.  The quotient is rounded toward negative
+   infinity (floor).  */
+
+template <typename T>
+wide_int
+wide_int::divmod_floor (T c, wide_int *remainder, SignOp sgn) const
+{
+  wide_int quotient;
+  bool overflow = false;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      remainder, true, &overflow);
+  if (sgn == SIGNED && !(*remainder).zero_p ()
+      && neg_p () != (s[cl - 1] < 0))
+    {
+      /* The floor remainder is the truncating remainder plus the
+	 divisor, paired with a quotient that is one smaller.  */
+      *remainder = *remainder + wide_int::from_array (s, cl, precision);
+      return quotient - 1;
+    }
+  return quotient;
+}
+
+/* Signed divide/mod with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdivmod_floor (T c, wide_int *mod) const
+{
+  return divmod_floor (c, mod, SIGNED);
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   remainder is that of truncating division.  *OVERFLOW is set to true
+   if the result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_trunc (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  divmod_internal (true, this, s, cl, sgn,
+		   &remainder, true, overflow);
+  return remainder;
+}
+
+/* Signed mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::smod_trunc (T c) const
+{
+  return mod_trunc (c, SIGNED);
+}
+
+/* Unsigned mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::umod_trunc (T c) const
+{
+  return mod_trunc (c, UNSIGNED);
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   remainder is that of floor division.  *OVERFLOW is set to true if
+   the result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_floor (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  divmod_internal (true, this, s, cl, sgn,
+		   &remainder, true, overflow);
+
+  /* The floor remainder is the truncating remainder plus the divisor
+     whenever the operand signs differ.  */
+  if (sgn == SIGNED && !remainder.zero_p () && neg_p () != (s[cl - 1] < 0))
+    return remainder + wide_int::from_array (s, cl, precision);
+  return remainder;
+}
+
+/* Unsigned mod with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::umod_floor (T c) const
+{
+  bool overflow;
+
+  return mod_floor (c, UNSIGNED, &overflow);
+}
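+
+/* The floor remainder follows the sign of the divisor rather than
+   the dividend.  A sketch, assuming 32-bit operands:
+
+     bool ovf = false;
+     wide_int n = wide_int::from_shwi (-7, 32);
+     n.mod_trunc (3, wide_int::SIGNED, &ovf);   // -1
+     n.mod_floor (3, wide_int::SIGNED, &ovf);   //  2  */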
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is the
+   same size as the operands.  The sign is specified in SGN.  The
+   remainder is that of ceiling division.  *OVERFLOW is set to true
+   if the result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_ceil (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  divmod_internal (true, this, s, cl, sgn,
+		   &remainder, true, overflow);
+
+  /* The quotient moves up iff the operand signs agree (always, for
+     unsigned division), taking one divisor out of the remainder.  */
+  if (!remainder.zero_p ()
+      && (sgn == UNSIGNED || neg_p () == (s[cl - 1] < 0)))
+    return remainder - wide_int::from_array (s, cl, precision);
+  return remainder;
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   remainder is that of round-to-nearest division.  *OVERFLOW is set
+   to true if the result overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_round (T c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  divmod_internal (true, this, s, cl, sgn,
+		   &remainder, true, overflow);
+
+  if (!remainder.zero_p ())
+    {
+      wide_int divisor = wide_int::from_array (s, cl, precision);
+      if (sgn == SIGNED)
+	{
+	  wide_int p_remainder = remainder.neg_p () ? remainder.neg () : remainder;
+	  wide_int p_divisor = divisor.neg_p () ? divisor.neg () : divisor;
+	  p_divisor = p_divisor.rshiftu_large (1);
+
+	  /* Adjust only when the remainder is more than half the
+	     divisor in magnitude, moving toward the operand signs
+	     rather than the sign of the quotient, which may be zero.  */
+	  if (p_remainder.gts_p (p_divisor))
+	    {
+	      if (neg_p () != (s[cl - 1] < 0))
+		return remainder + divisor;
+	      else
+		return remainder - divisor;
+	    }
+	}
+      else
+	{
+	  wide_int p_divisor = divisor.rshiftu_large (1);
+	  if (remainder.gtu_p (p_divisor))
+	    return remainder - divisor;
+	}
+    }
+  return remainder;
+}
+
+
+/* If SHIFT_COUNT_TRUNCATED is defined, truncate CNT.   
+
+   At first look, the shift truncation code does not look right.
+   Shifts (and rotates) are done according to the precision of the
+   mode but the shift count is truncated according to the bitsize of
+   the mode.  This is how real hardware works (Knuth's MIX machine is
+   the only known exception to this rule, but it was never real).
+
+   On an ideal machine, like Knuth's MIX machine, a shift count is a
+   word long and all of the bits of that word are examined to compute
+   the shift amount.  But on real hardware, especially on machines
+   with fast (single-cycle) shifts, that takes too long.  On these
+   machines, the amount of time to perform a shift dictates the cycle
+   time of the machine so corners are cut to keep this fast.  A
+   comparison of an entire 64 bit word would take something like 6
+   gate delays before the shifting can even start.
+
+   So real hardware only looks at a small part of the shift amount.
+   On IBM machines, this tends to be 1 more than what is necessary to
+   encode the shift amount.  The rest of the world looks at only the
+   minimum number of bits.  This means that only 3 gate delays are
+   necessary to set up the shifter.
+
+   On the other hand, right shifts and rotates must be according to
+   the precision or the operation does not make any sense. 
+
+   This function is called in two contexts.  If OP == TRUNC, this
+   function provides a count that matches the semantics of the target
+   machine depending on the value of SHIFT_COUNT_TRUNCATED.  Note that
+   if SHIFT_COUNT_TRUNCATED is not defined, this function may produce
+   -1 as a value if the shift amount is greater than the bitsize of
+   the mode.  -1 is a surrogate for a very large amount.
+
+   If OP == NONE, then this function always truncates the shift value
+   to the bitsize because this shifting operation is a function that
+   is internal to GCC.  */
+
+inline int
+wide_int::trunc_shift (const HOST_WIDE_INT *cnt, unsigned int len ATTRIBUTE_UNUSED,
+		       unsigned int bitsize, ShiftOp trunc_op)
+{
+  if (trunc_op == TRUNC)
+    {
+      gcc_checking_assert (bitsize != 0);
+#ifdef SHIFT_COUNT_TRUNCATED
+      return cnt[0] & (bitsize - 1);
+#else
+      if (cnt[0] < bitsize && cnt[0] >= 0 && len == 1)
+	return cnt[0];
+      else 
+	return -1;
+#endif
+    }
+  else if (bitsize == 0)
+    return cnt[0];
+  else
+    return cnt[0] & (bitsize - 1);
+}
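+
+/* For example, on a 64-bit target that defines SHIFT_COUNT_TRUNCATED,
+   a requested shift of 65 with TRUNC is reduced to 65 & 63 == 1;
+   without SHIFT_COUNT_TRUNCATED the same request yields -1, which the
+   callers below treat as "shift everything out".  */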
+
+/* Left shift THIS by C.  See the definition of Op.TRUNC for how to
+   set TRUNC_OP.  Since this is used internally, it has the ability to
+   specify the BITSIZE and RES_PREC independently.  This is useful
+   when inserting a small value into a larger one.  */
+
+template <typename T>
+wide_int
+wide_int::lshift (T c, unsigned int bitsize, 
+		  unsigned int res_prec, ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  if (res_prec == 0)
+    res_prec = precision;
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (shift == 0 && res_prec == precision)
+    result = *this;
+  /* Handle the simple case quickly.   */
+  else if (res_prec <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = res_prec;
+      result.len = 1;
+      result.val[0] = val[0] << shift;
+      /* Keep the representation canonical by sign extending from the
+	 result precision, as the other fast paths do.  */
+      if (res_prec < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], res_prec);
+    }
+  else
+    result = lshift_large (shift, res_prec);
+  return result;
+}
+
+/* Rotate THIS left by OP1 within its precision.  */
+
+template <typename T>
+wide_int
+wide_int::lrotate (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return lrotate ((unsigned HOST_WIDE_INT) s[0]);
+}
+
+
+/* Rotate THIS left by CNT within its precision.  */
+
+wide_int
+wide_int::lrotate (unsigned HOST_WIDE_INT cnt) const
+{
+  wide_int left, right, result;
+
+  left = lshift (cnt);
+  right = rshiftu (precision - cnt);
+  result = left | right;
+
+  return result;
+}
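+
+/* A sketch, assuming an 8-bit value:
+
+     wide_int b = wide_int::from_uhwi (0x81, 8);
+     b.lrotate ((unsigned HOST_WIDE_INT) 1);   // 0x03: the high bit
+                                               // wraps around to bit 0  */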
+
+
+/* Right shift THIS by OP1.  SGN indicates whether the shift is
+   arithmetic (SIGNED) or logical (UNSIGNED).  TRUNC_OP indicates the
+   truncation option.  */
+
+template <typename T>
+wide_int
+wide_int::rshift (T c, SignOp sgn, 
+		  unsigned int bitsize, ShiftOp trunc_op) const
+{
+  if (sgn == UNSIGNED)
+    return rshiftu (c, bitsize, trunc_op);
+  else
+    return rshifts (c, bitsize, trunc_op);
+}
+
+
+/* Unsigned (logical) right shift THIS by OP1.  See the definition of
+   Op.TRUNC for how to set TRUNC_OP.  */
+
+template <typename T>
+wide_int
+wide_int::rshiftu (T c, unsigned int bitsize,
+		   ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  
+  if (shift == 0)
+    result = copy ();
+  else if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* Handle the simple case quickly.   */
+      unsigned HOST_WIDE_INT x = val[0];
+
+      result.precision = precision;
+      result.len = 1;
+
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	x = zext_hwi (x, precision);
+
+      result.val[0] = x >> shift;
+    }
+  else 
+    result = rshiftu_large (shift);
+  return result;
+}
+
+/* Signed (arithmetic) right shift THIS by OP1.  See the definition
+   of Op.TRUNC for how to set TRUNC_OP.  */
+
+template <typename T>
+wide_int
+wide_int::rshifts (T c, unsigned int bitsize,
+		   ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  
+  if (shift == 0)
+    result = copy ();
+  else if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* Handle the simple case quickly.   */
+      HOST_WIDE_INT x = val[0];
+
+      result.precision = precision;
+      result.len = 1;
+      result.val[0] = x >> shift;
+    }
+  else 
+    result = rshifts_large (shift);
+  return result;
+}
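+
+/* The two right shifts differ on negative values.  For example, with
+   8-bit precision, -2 (0xfe) shifted right by 1 is -1 (0xff) under
+   rshifts but 127 (0x7f) under rshiftu.  */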
+
+/* Rotate THIS right by OP1 within its precision.  */
+
+template <typename T>
+wide_int
+wide_int::rrotate (T c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+
+  return rrotate ((unsigned HOST_WIDE_INT) s[0]);
+}
+
+
+/* Rotate THIS right by CNT within its precision.  */
+
+wide_int
+wide_int::rrotate (unsigned HOST_WIDE_INT cnt) const
+{
+  wide_int left, right, result;
+
+  left = lshift (precision - cnt);
+  right = rshiftu (cnt);
+  result = left | right;
+
+  return result;
+}
+
+/* tree related routines.  */
+
+extern tree wide_int_to_tree (tree type, const wide_int &cst);
+extern tree wide_int_to_infinite_tree (tree type, const wide_int &cst, 
+				       unsigned int prec);
+extern tree force_fit_type_wide (tree, const wide_int &, int, bool);
+
+/* real related routines.  */
+extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
+extern wide_int decimal_real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
+
+
+/* Conversion to and from GMP integer representations.  */
+
+void mpz_set_wide_int (mpz_t, wide_int, bool);
+wide_int mpz_get_wide_int (const_tree, mpz_t, bool);
+#endif /* GENERATOR FILE */
+
+#endif /* WIDE_INT_H */