
[Pointer Bounds Checker 14/x] Passes [4/n] Memory accesses instrumentation

Message ID 20141008190138.GD13454@msticlxl57.ims.intel.com
State New

Commit Message

Ilya Enkovich Oct. 8, 2014, 7:01 p.m. UTC
Hi,

This is the main chunk of the instrumentation code.  This patch introduces the instrumentation pass which instruments memory accesses.
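
To illustrate, here is a source-level sketch of what the pass emits for a
simple dereference (the real transformation happens on GIMPLE; the builtin
names below are the ones used in this patch, everything else is illustrative):

    int get (int *p)
    {
      /* Bounds of p come from the caller, bndret, or bndldx.  */
      __builtin___chkp_bndcl (__bnd_of_p, p);               /* lower bound check */
      __builtin___chkp_bndcu (__bnd_of_p,
                              (char *) p + sizeof (int) - 1); /* upper bound check */
      return *p;
    }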

Thanks,
Ilya
--
2014-10-08  Ilya Enkovich  <ilya.enkovich@intel.com>

	* tree-chkp.c (chkp_may_complete_phi_bounds): New.
	(chkp_may_finish_incomplete_bounds): New.
	(chkp_recompute_phi_bounds): New.
	(chkp_find_valid_phi_bounds): New.
	(chkp_finish_incomplete_bounds): New.
	(chkp_maybe_copy_and_register_bounds): New.
	(chkp_build_returned_bound): New.
	(chkp_get_bound_for_parm): New.
	(chkp_compute_bounds_for_assignment): New.
	(chkp_get_bounds_by_definition): New.
	(chkp_get_bounds_for_decl_addr): New.
	(chkp_get_bounds_for_string_cst): New.
	(chkp_parse_array_and_component_ref): New.
	(chkp_make_addressed_object_bounds): New.
	(chkp_find_bounds_1): New.
	(chkp_find_bounds): New.
	(chkp_find_bounds_loaded): New.
	(chkp_copy_bounds_for_elem): New.
	(chkp_process_stmt): New.
	(chkp_fix_cfg): New.
	(chkp_instrument_function): New.
	(chkp_fini): New.
	(chkp_execute): New.
	(chkp_gate): New.
	(pass_data_chkp): New.
	(pass_chkp): New.
	(make_pass_chkp): New.

Comments

Jeff Law Oct. 13, 2014, 8:52 p.m. UTC | #1
On 10/08/14 13:01, Ilya Enkovich wrote:
> Hi,
>
> This is the main chunk of the instrumentation code.  This patch introduces the instrumentation pass which instruments memory accesses.
>
> Thanks,
> Ilya
> --
> 2014-10-08  Ilya Enkovich<ilya.enkovich@intel.com>
>
> 	* tree-chkp.c (chkp_may_complete_phi_bounds): New.
> 	(chkp_may_finish_incomplete_bounds): New.
> 	(chkp_recompute_phi_bounds): New.
> 	(chkp_find_valid_phi_bounds): New.
> 	(chkp_finish_incomplete_bounds): New.
> 	(chkp_maybe_copy_and_register_bounds): New.
> 	(chkp_build_returned_bound): New.
> 	(chkp_get_bound_for_parm): New.
> 	(chkp_compute_bounds_for_assignment): New.
> 	(chkp_get_bounds_by_definition): New.
> 	(chkp_get_bounds_for_decl_addr): New.
> 	(chkp_get_bounds_for_string_cst): New.
> 	(chkp_parse_array_and_component_ref): New.
> 	(chkp_make_addressed_object_bounds): New.
> 	(chkp_find_bounds_1): New.
> 	(chkp_find_bounds): New.
> 	(chkp_find_bounds_loaded): New.
> 	(chkp_copy_bounds_for_elem): New.
> 	(chkp_process_stmt): New.
> 	(chkp_fix_cfg): New.
> 	(chkp_instrument_function): New.
> 	(chkp_fini): New.
> 	(chkp_execute): New.
> 	(chkp_gate): New.
> 	(pass_data_chkp): New.
> 	(pass_chkp): New.
> 	(make_pass_chkp): New.
>
>
> @@ -491,6 +910,129 @@ chkp_get_bounds_var (tree ptr_var)
>     return bnd_var;
>   }
>
> +
> +
> +/* Register bounds BND for object PTR in global bounds table.
> +   A copy of bounds may be created for abnormal ssa names.
> +   Returns bounds to use for PTR.  */
> +static tree
> +chkp_maybe_copy_and_register_bounds (tree ptr, tree bnd)
> +{
> +  bool abnormal_ptr;
> +
> +  if (!chkp_reg_bounds)
> +    return bnd;
> +
> +  /* Do nothing if bounds are incomplete_bounds
> +     because it means bounds will be recomputed.  */
> +  if (bnd == incomplete_bounds)
> +    return bnd;
> +
> +  abnormal_ptr = (TREE_CODE (ptr) == SSA_NAME
> +		  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ptr)
> +		  && gimple_code (SSA_NAME_DEF_STMT (ptr)) != GIMPLE_PHI);
> +
> +  /* A single bounds value may be reused multiple times for
> +     different pointer values.  It may cause coalescing issues
> +     for abnormal SSA names.  To avoid it we create a bounds
> +     copy in case it is copmputed for abnormal SSA name.
s/copmputed/computed/

> +  if (!bounds)
> +    {
> +      tree orig_decl = cgraph_node::get (cfun->decl)->orig_decl;
> +
> +      /* For static chain param we return zero bounds
> +	 because currently we do not check dereferences
> +	 of this pointer.  */
> +      /* ?? Is it a correct way to identify such parm?  */
> +      if (cfun->decl && DECL_STATIC_CHAIN (cfun->decl)
> +	  && DECL_ARTIFICIAL (decl))
> +	bounds = chkp_get_zero_bounds ();
Are you just looking for the parameter in which we pass the static 
chain?   Look at get_chain_decl for how we set it up.  You may actually 
have to peek at more fields.  I don't think there's a single magic bit 
that says "this is the static chain".  Though it may always appear in 
the same location on the parameter list.   Nested functions aren't 
something I've poked at much.  Richard Henderson might know more since 
he wrote tree-nested a while back.
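
To make the discussion concrete, the test in the patch amounts to something
like the helper below (a sketch only; as said above there may be no single
magic bit, so treat the DECL_ARTIFICIAL test as a heuristic and check
get_chain_decl in tree-nested.c for the flags that are actually set):

  static bool
  chkp_static_chain_parm_p (tree decl)
  {
    /* The current function must take a static chain at all...  */
    if (!cfun->decl || !DECL_STATIC_CHAIN (cfun->decl))
      return false;
    /* ...and DECL must be the compiler-generated artificial parm.
       A heuristic, not a definitive test.  */
    return TREE_CODE (decl) == PARM_DECL && DECL_ARTIFICIAL (decl);
  }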

> @@ -1107,6 +1821,323 @@ chkp_build_bndstx (tree addr, tree ptr, tree bounds,
>       }
>   }
>
> +/* Compute bounds for pointer NODE which was assigned in
> +   assignment statement ASSIGN.  Return computed bounds.  */
> +static tree
> +chkp_compute_bounds_for_assignment (tree node, gimple assign)
Ugh.  Note how this introduces another place that anyone who might add a 
new RHS gimple statement needs to edit.  We need a pointer back to this 
code so that folks will know it needs updating.  The question is where 
to put it.

Basically we want a place where anyone adding a new code that can appear 
on the RHS of an assignment must change already.  Thoughts on a good 
location?

I realize there are probably many other places that need these 
kinds of documentation back links; I'm not asking you to address all of 
them.
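
Just to illustrate the kind of back link I mean (where to put it is exactly
the open question), something along these lines next to wherever RHS codes
get defined:

  /* NOTE: when adding a new code that can appear on the RHS of an
     assignment, the switch in chkp_compute_bounds_for_assignment in
     tree-chkp.c must be taught about it as well.  */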



> +/* Compute and returne bounds for address of OBJ.  */
s/returne/return


> +
> +/* Some code transformation made during instrumentation pass
> +   may put code into inconsistent state.  Here we find and fix
> +   such flaws.  */
> +static void
> +chkp_fix_cfg ()
Presumably none of the code you're inserting that causes these problems 
is ever supposed to be executed on the non-fallthru edge?  Else your 
"creative" method of hiding the abnormal nature of the edge for a period 
of time, then recreating it won't work.

I'm a bit worried by this code and while I'll approve it, it's something 
we may have to come back and revisit if it causes problems.


So I think there are a couple of typo nits to fix, one backlink doc issue 
to address, and possibly a tweak to the code that identifies static chains. 
With those fixed, this should be good to go onto the trunk.

jeff

Patch

diff --git a/gcc/tree-chkp.c b/gcc/tree-chkp.c
index c65334c..d297171 100644
--- a/gcc/tree-chkp.c
+++ b/gcc/tree-chkp.c
@@ -65,9 +65,278 @@  along with GCC; see the file COPYING3.  If not see
 #include "rtl.h" /* For MEM_P, assign_temp.  */
 #include "tree-dfa.h"
 
+/*  Pointer Bounds Checker instruments code with memory checks to find
+    out-of-bounds memory accesses.  Checks are performed by computing
+    bounds for each pointer and then comparing address of accessed
+    memory before pointer dereferencing.
+
+    1. Function clones.
+
+    See ipa-chkp.c.
+
+    2. Instrumentation.
+
+    There are a few things to instrument:
+
+    a) Memory accesses - add checker calls to check the address of accessed
+    memory against the bounds of the dereferenced pointer.  Obviously safe
+    memory accesses, such as static variable accesses, do not have to be
+    instrumented with checks.
+
+    Example:
+
+      val_2 = *p_1;
+
+      with a 4-byte access is transformed into:
+
+      __builtin___chkp_bndcl (__bound_tmp.1_3, p_1);
+      D.1_4 = p_1 + 3;
+      __builtin___chkp_bndcu (__bound_tmp.1_3, D.1_4);
+      val_2 = *p_1;
+
+      where __bound_tmp.1_3 are bounds computed for pointer p_1,
+      __builtin___chkp_bndcl is a lower bound check and
+      __builtin___chkp_bndcu is an upper bound check.
+
+    b) Pointer stores.
+
+    When a pointer is stored in memory we need to store its bounds.  To
+    achieve compatibility of instrumented code with regular code
+    we have to keep the data layout and store bounds in special bounds
+    tables via a special checker call.  The implementation of the bounds
+    table may vary for different platforms.  It has to associate a pointer
+    value and its location (required because we may have two equal pointers
+    with different bounds stored in different places) with bounds.
+    Another checker builtin allows us to get bounds for a specified pointer
+    loaded from a specified location.
+
+    Example:
+
+      buf1[i_1] = &buf2;
+
+      is transformed into:
+
+      buf1[i_1] = &buf2;
+      D.1_2 = &buf1[i_1];
+      __builtin___chkp_bndstx (D.1_2, &buf2, __bound_tmp.1_2);
+
+      where __bound_tmp.1_2 are bounds of &buf2.
+
+    c) Static initialization.
+
+    A special case of a pointer store is static pointer initialization.
+    Bounds initialization is performed in a few steps:
+      - register all static initializations in the front-end using
+      chkp_register_var_initializer
+      - when file compilation finishes we create functions with the special
+      attribute 'chkp ctor' and put there explicit initialization code
+      (assignments) for all statically initialized pointers.
+      - when the checker constructor is compiled the checker pass adds the
+      required bounds initialization for all statically initialized pointers
+      - since we do not actually need the extra pointer initialization
+      in the checker constructor we remove such assignments from it
+
+    d) Calls.
+
+    For each call in the code we add additional arguments to pass
+    bounds for pointer arguments.  We determine the type of call arguments
+    using the argument list from the function declaration; if the function
+    declaration is not available we use the function type; otherwise
+    (e.g. for unnamed arguments) we use the type of the passed value.  The
+    function declaration/type is replaced with the instrumented one.
+
+    Example:
+
+      val_1 = foo (&buf1, &buf2, &buf1, 0);
+
+      is translated into:
+
+      val_1 = foo.chkp (&buf1, __bound_tmp.1_2, &buf2, __bound_tmp.1_3,
+                        &buf1, __bound_tmp.1_2, 0);
+
+    e) Returns.
+
+    If a function returns a pointer value we have to return its bounds too.
+    A new operand was added to the return statement to hold the returned bounds.
+
+    Example:
+
+      return &_buf1;
+
+      is transformed into
+
+      return &_buf1, __bound_tmp.1_1;
+
+    3. Bounds computation.
+
+    The compiler is fully responsible for computing the bounds to be used for
+    each memory access.  The first step of bounds computation is to find the
+    origin of the pointer dereferenced by the memory access.  Based on the
+    pointer origin we choose a way to compute its bounds.  There are just a
+    few possible cases:
+
+    a) Pointer is returned by call.
+
+    In this case we use the corresponding checker builtin to obtain the
+    returned bounds.
+
+    Example:
+
+      buf_1 = malloc (size_2);
+      foo (buf_1);
+
+      is translated into:
+
+      buf_1 = malloc (size_2);
+      __bound_tmp.1_3 = __builtin___chkp_bndret (buf_1);
+      foo (buf_1, __bound_tmp.1_3);
+
+    b) Pointer is an address of an object.
+
+    In this case the compiler tries to compute the object's size and create
+    corresponding bounds.  If the object has an incomplete type then a special
+    checker builtin is used to obtain its size at runtime.
+
+    Example:
+
+      foo ()
+      {
+        <unnamed type> __bound_tmp.3;
+	static int buf[100];
+
+	<bb 3>:
+	__bound_tmp.3_2 = __builtin___chkp_bndmk (&buf, 400);
+
+	<bb 2>:
+	return &buf, __bound_tmp.3_2;
+      }
+
+    Example:
+
+      Address of an object 'extern int buf[]' with incomplete type is
+      returned.
+
+      foo ()
+      {
+        <unnamed type> __bound_tmp.4;
+	long unsigned int __size_tmp.3;
+
+	<bb 3>:
+	__size_tmp.3_4 = __builtin_ia32_sizeof (buf);
+	__bound_tmp.4_3 = __builtin_ia32_bndmk (&buf, __size_tmp.3_4);
+
+	<bb 2>:
+	return &buf, __bound_tmp.4_3;
+      }
+
+    c) Pointer is the result of object narrowing.
+
+    This happens when we use a pointer to an object to compute a pointer to a
+    part of that object, e.g. we take a pointer to a field of a structure.  In
+    this case we perform a bounds intersection using the bounds of the original
+    object and the bounds of the object's part (computed based on its type).
+
+    There may be some debatable questions about when narrowing should occur
+    and when it should not.  To avoid false bound violations in correct
+    programs we do not perform narrowing when the address of an array element
+    is obtained (it gets the bounds of the whole array) and when the address of
+    the first structure field is obtained (because it is guaranteed to be equal
+    to the address of the whole structure and it is legal to cast it back).
+
+    Default narrowing behavior may be changed using compiler flags.
+
+    Example:
+
+      In this example the address of the second structure field is returned.
+
+      foo (struct A * p, __bounds_type __bounds_of_p)
+      {
+        <unnamed type> __bound_tmp.3;
+	int * _2;
+	int * _5;
+
+	<bb 2>:
+	_5 = &p_1(D)->second_field;
+	__bound_tmp.3_6 = __builtin___chkp_bndmk (_5, 4);
+	__bound_tmp.3_8 = __builtin___chkp_intersect (__bound_tmp.3_6,
+	                                              __bounds_of_p_3(D));
+	_2 = &p_1(D)->second_field;
+	return _2, __bound_tmp.3_8;
+      }
+
+    Example:
+
+      In this example the address of the first field of an array element is returned.
+
+      foo (struct A * p, __bounds_type __bounds_of_p, int i)
+      {
+	long unsigned int _3;
+	long unsigned int _4;
+	struct A * _6;
+	int * _7;
+
+	<bb 2>:
+	_3 = (long unsigned int) i_1(D);
+	_4 = _3 * 8;
+	_6 = p_5(D) + _4;
+	_7 = &_6->first_field;
+	return _7, __bounds_of_p_2(D);
+      }
+
+
+    d) Pointer is the result of pointer arithmetic or type cast.
+
+    In this case the bounds of the base pointer are used.  In the case of
+    a binary operation producing a pointer we analyze the data flow
+    further, looking for the operands' bounds.  One operand is considered
+    a base if it has some valid bounds.  If we hit a case where none of
+    the operands (or both of them) have valid bounds, a default bounds
+    value is used.
+
+    While searching for bounds of binary operations we may run into
+    cyclic dependencies between pointers.  To avoid infinite recursion all
+    visited phi nodes immediately obtain corresponding bounds, but the
+    created bounds are marked as incomplete.  This lets us stop the DF
+    walk during the bounds search.
+
+    When we reach a pointer source, some args of the incomplete bounds phi
+    obtain valid bounds and those values are propagated further through phi
+    nodes.  If no valid bounds were found for a phi node then we mark its
+    result as invalid bounds.  The process stops when all incomplete bounds
+    become either valid or invalid and we are able to choose a pointer base.
+
+    e) Pointer is loaded from the memory.
+
+    In this case we just need to load bounds from the bounds table.
+
+    Example:
+
+      foo ()
+      {
+        <unnamed type> __bound_tmp.3;
+	static int * buf;
+	int * _2;
+
+	<bb 2>:
+	_2 = buf;
+	__bound_tmp.3_4 = __builtin___chkp_bndldx (&buf, _2);
+	return _2, __bound_tmp.3_4;
+      }
+
+*/
+
 typedef void (*assign_handler)(tree, tree, void *);
 
 static tree chkp_get_zero_bounds ();
+static tree chkp_find_bounds (tree ptr, gimple_stmt_iterator *iter);
+static tree chkp_find_bounds_loaded (tree ptr, tree ptr_src,
+				     gimple_stmt_iterator *iter);
+static void chkp_parse_array_and_component_ref (tree node, tree *ptr,
+						tree *elt, bool *safe,
+						bool *bitfield,
+						tree *bounds,
+						gimple_stmt_iterator *iter,
+						bool innermost_bounds);
 
 #define chkp_bndldx_fndecl \
   (targetm.builtin_chkp_function (BUILT_IN_CHKP_BNDLDX))
@@ -345,6 +614,83 @@  chkp_make_bounds_for_struct_addr (tree ptr)
 			  2, ptr, size);
 }
 
+/* Traversal function for chkp_may_finish_incomplete_bounds.
+   Set RES to 0 if at least one argument of phi statement
+   defining bounds (passed in KEY arg) is unknown.
+   Traversal stops when first unknown phi argument is found.  */
+bool
+chkp_may_complete_phi_bounds (tree const &bounds, tree *slot ATTRIBUTE_UNUSED,
+			      bool *res)
+{
+  gimple phi;
+  unsigned i;
+
+  gcc_assert (TREE_CODE (bounds) == SSA_NAME);
+
+  phi = SSA_NAME_DEF_STMT (bounds);
+
+  gcc_assert (phi && gimple_code (phi) == GIMPLE_PHI);
+
+  for (i = 0; i < gimple_phi_num_args (phi); i++)
+    {
+      tree phi_arg = gimple_phi_arg_def (phi, i);
+      if (!phi_arg)
+	{
+	  *res = false;
+	  /* Do not need to traverse further.  */
+	  return false;
+	}
+    }
+
+  return true;
+}
+
+/* Return 1 if all phi nodes created for bounds have their
+   arguments computed.  */
+static bool
+chkp_may_finish_incomplete_bounds (void)
+{
+  bool res = true;
+
+  chkp_incomplete_bounds_map
+    ->traverse<bool *, chkp_may_complete_phi_bounds> (&res);
+
+  return res;
+}
+
+/* Helper function for chkp_finish_incomplete_bounds.
+   Recompute args for bounds phi node.  */
+bool
+chkp_recompute_phi_bounds (tree const &bounds, tree *slot,
+			   void *res ATTRIBUTE_UNUSED)
+{
+  tree ptr = *slot;
+  gimple bounds_phi;
+  gimple ptr_phi;
+  unsigned i;
+
+  gcc_assert (TREE_CODE (bounds) == SSA_NAME);
+  gcc_assert (TREE_CODE (ptr) == SSA_NAME);
+
+  bounds_phi = SSA_NAME_DEF_STMT (bounds);
+  ptr_phi = SSA_NAME_DEF_STMT (ptr);
+
+  gcc_assert (bounds_phi && gimple_code (bounds_phi) == GIMPLE_PHI);
+  gcc_assert (ptr_phi && gimple_code (ptr_phi) == GIMPLE_PHI);
+
+  for (i = 0; i < gimple_phi_num_args (bounds_phi); i++)
+    {
+      tree ptr_arg = gimple_phi_arg_def (ptr_phi, i);
+      tree bound_arg = chkp_find_bounds (ptr_arg, NULL);
+
+      add_phi_arg (bounds_phi, bound_arg,
+		   gimple_phi_arg_edge (ptr_phi, i),
+		   UNKNOWN_LOCATION);
+    }
+
+  return true;
+}
+
 /* Mark BOUNDS as invalid.  */
 static void
 chkp_mark_invalid_bounds (tree bounds)
@@ -370,6 +716,45 @@  chkp_valid_bounds (tree bounds)
 }
 
 /* Helper function for chkp_finish_incomplete_bounds.
+   Check all arguments of phi nodes trying to find
+   valid completed bounds.  If there is at least one
+   such arg then bounds produced by phi node are marked
+   as valid completed bounds and all phi args are
+   recomputed.  */
+bool
+chkp_find_valid_phi_bounds (tree const &bounds, tree *slot, bool *res)
+{
+  gimple phi;
+  unsigned i;
+
+  gcc_assert (TREE_CODE (bounds) == SSA_NAME);
+
+  if (chkp_completed_bounds (bounds))
+    return true;
+
+  phi = SSA_NAME_DEF_STMT (bounds);
+
+  gcc_assert (phi && gimple_code (phi) == GIMPLE_PHI);
+
+  for (i = 0; i < gimple_phi_num_args (phi); i++)
+    {
+      tree phi_arg = gimple_phi_arg_def (phi, i);
+
+      gcc_assert (phi_arg);
+
+      if (chkp_valid_bounds (phi_arg) && !chkp_incomplete_bounds (phi_arg))
+	{
+	  *res = true;
+	  chkp_mark_completed_bounds (bounds);
+	  chkp_recompute_phi_bounds (bounds, slot, NULL);
+	  return true;
+	}
+    }
+
+  return true;
+}
+
+/* Helper function for chkp_finish_incomplete_bounds.
    Marks all incompleted bounds as invalid.  */
 bool
 chkp_mark_invalid_bounds_walker (tree const &bounds,
@@ -384,6 +769,40 @@  chkp_mark_invalid_bounds_walker (tree const &bounds,
   return true;
 }
 
+/* When all bound phi nodes have all their args computed
+   we have enough info to find valid bounds.  We iterate
+   through all incomplete bounds searching for valid
+   bounds.  Found valid bounds are marked as completed
+   and all remaining incomplete bounds are recomputed.
+   The process continues until no new valid bounds may be
+   found.  All remaining incomplete bounds are then marked
+   as invalid (i.e. they have no valid source of bounds).  */
+static void
+chkp_finish_incomplete_bounds (void)
+{
+  bool found_valid = true;
+
+  while (found_valid)
+    {
+      found_valid = false;
+
+      chkp_incomplete_bounds_map->
+	traverse<bool *, chkp_find_valid_phi_bounds> (&found_valid);
+
+      if (found_valid)
+	chkp_incomplete_bounds_map->
+	  traverse<void *, chkp_recompute_phi_bounds> (NULL);
+    }
+
+  chkp_incomplete_bounds_map->
+    traverse<void *, chkp_mark_invalid_bounds_walker> (NULL);
+  chkp_incomplete_bounds_map->
+    traverse<void *, chkp_recompute_phi_bounds> (NULL);
+
+  chkp_erase_completed_bounds ();
+  chkp_erase_incomplete_bounds ();
+}
+
 /* Return 1 if type TYPE is a pointer type or a
    structure having a pointer type as one of its fields.
    Otherwise return 0.  */
@@ -491,6 +910,129 @@  chkp_get_bounds_var (tree ptr_var)
   return bnd_var;
 }
 
+
+
+/* Register bounds BND for object PTR in global bounds table.
+   A copy of bounds may be created for abnormal ssa names.
+   Returns bounds to use for PTR.  */
+static tree
+chkp_maybe_copy_and_register_bounds (tree ptr, tree bnd)
+{
+  bool abnormal_ptr;
+
+  if (!chkp_reg_bounds)
+    return bnd;
+
+  /* Do nothing if bounds are incomplete_bounds
+     because it means bounds will be recomputed.  */
+  if (bnd == incomplete_bounds)
+    return bnd;
+
+  abnormal_ptr = (TREE_CODE (ptr) == SSA_NAME
+		  && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ptr)
+		  && gimple_code (SSA_NAME_DEF_STMT (ptr)) != GIMPLE_PHI);
+
+  /* A single bounds value may be reused multiple times for
+     different pointer values.  It may cause coalescing issues
+     for abnormal SSA names.  To avoid this we create a bounds
+     copy in case it is computed for an abnormal SSA name.
+
+     We also cannot reuse such created copies for other pointers.  */
+  if (abnormal_ptr
+      || bitmap_bit_p (chkp_abnormal_copies, SSA_NAME_VERSION (bnd)))
+    {
+      tree bnd_var;
+
+      if (abnormal_ptr)
+	bnd_var = chkp_get_bounds_var (SSA_NAME_VAR (ptr));
+      else
+	bnd_var = chkp_get_tmp_var ();
+
+      /* For abnormal copies we may just find original
+	 bounds and use them.  */
+      if (!abnormal_ptr && !SSA_NAME_IS_DEFAULT_DEF (bnd))
+	{
+	  gimple bnd_def = SSA_NAME_DEF_STMT (bnd);
+	  gcc_checking_assert (gimple_code (bnd_def) == GIMPLE_ASSIGN);
+	  bnd = gimple_assign_rhs1 (bnd_def);
+	}
+      /* For undefined values we usually use the none bounds
+	 value, but in the case of an abnormal edge it may cause
+	 coalescing failures.  Use the default definition of the
+	 bounds variable instead to avoid this.  */
+      else if (SSA_NAME_IS_DEFAULT_DEF (ptr)
+	       && TREE_CODE (SSA_NAME_VAR (ptr)) != PARM_DECL)
+	{
+	  bnd = get_or_create_ssa_default_def (cfun, bnd_var);
+
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	    {
+	      fprintf (dump_file, "Using default def bounds ");
+	      print_generic_expr (dump_file, bnd, 0);
+	      fprintf (dump_file, " for abnormal default def SSA name ");
+	      print_generic_expr (dump_file, ptr, 0);
+	      fprintf (dump_file, "\n");
+	    }
+	}
+      else
+	{
+	  tree copy = make_ssa_name (bnd_var, gimple_build_nop ());
+	  gimple def = SSA_NAME_DEF_STMT (ptr);
+	  gimple assign = gimple_build_assign (copy, bnd);
+	  gimple_stmt_iterator gsi;
+
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	    {
+	      fprintf (dump_file, "Creating a copy of bounds ");
+	      print_generic_expr (dump_file, bnd, 0);
+	      fprintf (dump_file, " for abnormal SSA name ");
+	      print_generic_expr (dump_file, ptr, 0);
+	      fprintf (dump_file, "\n");
+	    }
+
+	  if (gimple_code (def) == GIMPLE_NOP)
+	    {
+	      gsi = gsi_last_bb (chkp_get_entry_block ());
+	      if (!gsi_end_p (gsi) && is_ctrl_stmt (gsi_stmt (gsi)))
+		gsi_insert_before (&gsi, assign, GSI_CONTINUE_LINKING);
+	      else
+		gsi_insert_after (&gsi, assign, GSI_CONTINUE_LINKING);
+	    }
+	  else
+	    {
+	      gimple bnd_def = SSA_NAME_DEF_STMT (bnd);
+	      /* Sometimes (e.g. when we load a pointer from
+		 memory) bounds are produced later than the pointer.
+		 We need to insert the bounds copy appropriately.  */
+	      if (gimple_code (bnd_def) != GIMPLE_NOP
+		  && stmt_dominates_stmt_p (def, bnd_def))
+		gsi = gsi_for_stmt (bnd_def);
+	      else
+		gsi = gsi_for_stmt (def);
+	      gsi_insert_after (&gsi, assign, GSI_CONTINUE_LINKING);
+	    }
+
+	  bnd = copy;
+	}
+
+      if (abnormal_ptr)
+	bitmap_set_bit (chkp_abnormal_copies, SSA_NAME_VERSION (bnd));
+    }
+
+  chkp_reg_bounds->put (ptr, bnd);
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Registered bound ");
+      print_generic_expr (dump_file, bnd, 0);
+      fprintf (dump_file, " for pointer ");
+      print_generic_expr (dump_file, ptr, 0);
+      fprintf (dump_file, "\n");
+    }
+
+  return bnd;
+}
+
 /* Get bounds registered for object PTR in global bounds table.  */
 static tree
 chkp_get_registered_bounds (tree ptr)
@@ -977,6 +1519,112 @@  chkp_get_nonpointer_load_bounds (void)
   return chkp_get_zero_bounds ();
 }
 
+/* Build bounds returned by CALL.  */
+static tree
+chkp_build_returned_bound (gimple call)
+{
+  gimple_stmt_iterator gsi;
+  tree bounds;
+  gimple stmt;
+  tree fndecl = gimple_call_fndecl (call);
+
+  /* To avoid fixing alloca expands in targets we handle
+     it separately.  */
+  if (fndecl
+      && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL
+      && (DECL_FUNCTION_CODE (fndecl) == BUILT_IN_ALLOCA
+	  || DECL_FUNCTION_CODE (fndecl) == BUILT_IN_ALLOCA_WITH_ALIGN))
+    {
+      tree size = gimple_call_arg (call, 0);
+      tree lb = gimple_call_lhs (call);
+      gimple_stmt_iterator iter = gsi_for_stmt (call);
+      bounds = chkp_make_bounds (lb, size, &iter, true);
+    }
+  /* We know bounds returned by set_bounds builtin call.  */
+  else if (fndecl
+	   && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL
+	   && DECL_FUNCTION_CODE (fndecl) == BUILT_IN_CHKP_SET_PTR_BOUNDS)
+    {
+      tree lb = gimple_call_arg (call, 0);
+      tree size = gimple_call_arg (call, 1);
+      gimple_stmt_iterator iter = gsi_for_stmt (call);
+      bounds = chkp_make_bounds (lb, size, &iter, true);
+    }
+  /* Detect bounds initialization calls.  */
+  else if (fndecl
+      && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL
+      && DECL_FUNCTION_CODE (fndecl) == BUILT_IN_CHKP_INIT_PTR_BOUNDS)
+    bounds = chkp_get_zero_bounds ();
+  /* Detect bounds nullification calls.  */
+  else if (fndecl
+      && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL
+      && DECL_FUNCTION_CODE (fndecl) == BUILT_IN_CHKP_NULL_PTR_BOUNDS)
+    bounds = chkp_get_none_bounds ();
+  /* Detect bounds copy calls.  */
+  else if (fndecl
+      && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL
+      && DECL_FUNCTION_CODE (fndecl) == BUILT_IN_CHKP_COPY_PTR_BOUNDS)
+    {
+      gimple_stmt_iterator iter = gsi_for_stmt (call);
+      bounds = chkp_find_bounds (gimple_call_arg (call, 1), &iter);
+    }
+  /* Do not use retbnd when returned bounds are equal to some
+     of passed bounds.  */
+  else if ((gimple_call_return_flags (call) & ERF_RETURNS_ARG)
+	   || gimple_call_builtin_p (call, BUILT_IN_STRCHR))
+    {
+      gimple_stmt_iterator iter = gsi_for_stmt (call);
+      unsigned int retarg = 0, argno;
+      if (gimple_call_return_flags (call) & ERF_RETURNS_ARG)
+	retarg = gimple_call_return_flags (call) & ERF_RETURN_ARG_MASK;
+      if (gimple_call_with_bounds_p (call))
+	{
+	  for (argno = 0; argno < gimple_call_num_args (call); argno++)
+	    if (!POINTER_BOUNDS_P (gimple_call_arg (call, argno)))
+	      {
+		if (retarg)
+		  retarg--;
+		else
+		  break;
+	      }
+	}
+      else
+	argno = retarg;
+
+      bounds = chkp_find_bounds (gimple_call_arg (call, argno), &iter);
+    }
+  else
+    {
+      gcc_assert (TREE_CODE (gimple_call_lhs (call)) == SSA_NAME);
+
+      /* In general case build checker builtin call to
+	 obtain returned bounds.  */
+      stmt = gimple_build_call (chkp_ret_bnd_fndecl, 1,
+				gimple_call_lhs (call));
+      chkp_mark_stmt (stmt);
+
+      gsi = gsi_for_stmt (call);
+      gsi_insert_after (&gsi, stmt, GSI_SAME_STMT);
+
+      bounds = chkp_get_tmp_reg (stmt);
+      gimple_call_set_lhs (stmt, bounds);
+
+      update_stmt (stmt);
+    }
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Built returned bounds (");
+      print_generic_expr (dump_file, bounds, 0);
+      fprintf (dump_file, ") for call: ");
+      print_gimple_stmt (dump_file, call, 0, TDF_VOPS|TDF_MEMSYMS);
+    }
+
+  bounds = chkp_maybe_copy_and_register_bounds (gimple_call_lhs (call), bounds);
+
+  return bounds;
+}
+
 /* Return bounds used as returned by call
    which produced SSA name VAL.  */
 gimple
@@ -1013,33 +1661,99 @@  chkp_get_next_bounds_parm (tree parm)
   return bounds;
 }
 
-/* Build and return CALL_EXPR for bndstx builtin with specified
-   arguments.  */
-tree
-chkp_build_bndldx_call (tree addr, tree ptr)
-{
-  tree fn = build1 (ADDR_EXPR,
-		    build_pointer_type (TREE_TYPE (chkp_bndldx_fndecl)),
-		    chkp_bndldx_fndecl);
-  tree call = build_call_nary (TREE_TYPE (TREE_TYPE (chkp_bndldx_fndecl)),
-			       fn, 2, addr, ptr);
-  CALL_WITH_BOUNDS_P (call) = true;
-  return call;
-}
-
-/* Insert code to load bounds for PTR located by ADDR.
-   Code is inserted after position pointed by GSI.
-   Loaded bounds are returned.  */
+/* Return bounds to be used for input argument PARM.  */
 static tree
-chkp_build_bndldx (tree addr, tree ptr, gimple_stmt_iterator *gsi)
+chkp_get_bound_for_parm (tree parm)
 {
-  gimple_seq seq;
-  gimple stmt;
+  tree decl = SSA_NAME_VAR (parm);
   tree bounds;
 
-  seq = NULL;
+  gcc_assert (TREE_CODE (decl) == PARM_DECL);
 
-  addr = chkp_force_gimple_call_op (addr, &seq);
+  bounds = chkp_get_registered_bounds (parm);
+
+  if (!bounds)
+    bounds = chkp_get_registered_bounds (decl);
+
+  if (!bounds)
+    {
+      tree orig_decl = cgraph_node::get (cfun->decl)->orig_decl;
+
+      /* For static chain param we return zero bounds
+	 because currently we do not check dereferences
+	 of this pointer.  */
+      /* ?? Is it a correct way to identify such parm?  */
+      if (cfun->decl && DECL_STATIC_CHAIN (cfun->decl)
+	  && DECL_ARTIFICIAL (decl))
+	bounds = chkp_get_zero_bounds ();
+      /* If non instrumented runtime is used then it may be useful
+	 to use zero bounds for input arguments of main
+	 function.  */
+      else if (flag_chkp_zero_input_bounds_for_main
+	       && strcmp (IDENTIFIER_POINTER (DECL_ASSEMBLER_NAME (orig_decl)),
+			  "main") == 0)
+	bounds = chkp_get_zero_bounds ();
+      else if (BOUNDED_P (parm))
+	{
+	  bounds = chkp_get_next_bounds_parm (decl);
+	  bounds = chkp_maybe_copy_and_register_bounds (decl, bounds);
+
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	    {
+	      fprintf (dump_file, "Built arg bounds (");
+	      print_generic_expr (dump_file, bounds, 0);
+	      fprintf (dump_file, ") for arg: ");
+	      print_node (dump_file, "", decl, 0);
+	    }
+	}
+      else
+	bounds = chkp_get_zero_bounds ();
+    }
+
+  if (!chkp_get_registered_bounds (parm))
+    bounds = chkp_maybe_copy_and_register_bounds (parm, bounds);
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Using bounds ");
+      print_generic_expr (dump_file, bounds, 0);
+      fprintf (dump_file, " for parm ");
+      print_generic_expr (dump_file, parm, 0);
+      fprintf (dump_file, " of type ");
+      print_generic_expr (dump_file, TREE_TYPE (parm), 0);
+      fprintf (dump_file, ".\n");
+    }
+
+  return bounds;
+}
+
+/* Build and return CALL_EXPR for bndldx builtin with specified
+   arguments.  */
+tree
+chkp_build_bndldx_call (tree addr, tree ptr)
+{
+  tree fn = build1 (ADDR_EXPR,
+		    build_pointer_type (TREE_TYPE (chkp_bndldx_fndecl)),
+		    chkp_bndldx_fndecl);
+  tree call = build_call_nary (TREE_TYPE (TREE_TYPE (chkp_bndldx_fndecl)),
+			       fn, 2, addr, ptr);
+  CALL_WITH_BOUNDS_P (call) = true;
+  return call;
+}
+
+/* Insert code to load bounds for PTR located by ADDR.
+   Code is inserted after position pointed by GSI.
+   Loaded bounds are returned.  */
+static tree
+chkp_build_bndldx (tree addr, tree ptr, gimple_stmt_iterator *gsi)
+{
+  gimple_seq seq;
+  gimple stmt;
+  tree bounds;
+
+  seq = NULL;
+
+  addr = chkp_force_gimple_call_op (addr, &seq);
   ptr = chkp_force_gimple_call_op (ptr, &seq);
 
   stmt = gimple_build_call (chkp_bndldx_fndecl, 2, addr, ptr);
@@ -1107,6 +1821,323 @@  chkp_build_bndstx (tree addr, tree ptr, tree bounds,
     }
 }
 
+/* Compute bounds for pointer NODE which was assigned in
+   assignment statement ASSIGN.  Return computed bounds.  */
+static tree
+chkp_compute_bounds_for_assignment (tree node, gimple assign)
+{
+  enum tree_code rhs_code = gimple_assign_rhs_code (assign);
+  tree rhs1 = gimple_assign_rhs1 (assign);
+  tree bounds = NULL_TREE;
+  gimple_stmt_iterator iter = gsi_for_stmt (assign);
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Computing bounds for assignment: ");
+      print_gimple_stmt (dump_file, assign, 0, TDF_VOPS|TDF_MEMSYMS);
+    }
+
+  switch (rhs_code)
+    {
+    case MEM_REF:
+    case TARGET_MEM_REF:
+    case COMPONENT_REF:
+    case ARRAY_REF:
+      /* We need to load bounds from the bounds table.  */
+      bounds = chkp_find_bounds_loaded (node, rhs1, &iter);
+      break;
+
+    case VAR_DECL:
+    case SSA_NAME:
+    case ADDR_EXPR:
+    case POINTER_PLUS_EXPR:
+    case NOP_EXPR:
+    case CONVERT_EXPR:
+    case INTEGER_CST:
+      /* Bounds are just propagated from RHS.  */
+      bounds = chkp_find_bounds (rhs1, &iter);
+      break;
+
+    case VIEW_CONVERT_EXPR:
+      /* Bounds are just propagated from RHS.  */
+      bounds = chkp_find_bounds (TREE_OPERAND (rhs1, 0), &iter);
+      break;
+
+    case PARM_DECL:
+      if (BOUNDED_P (rhs1))
+	{
+	  /* We need to load bounds from the bounds table.  */
+	  bounds = chkp_build_bndldx (chkp_build_addr_expr (rhs1),
+				      node, &iter);
+	  TREE_ADDRESSABLE (rhs1) = 1;
+	}
+      else
+	bounds = chkp_get_nonpointer_load_bounds ();
+      break;
+
+    case MINUS_EXPR:
+    case PLUS_EXPR:
+    case BIT_AND_EXPR:
+    case BIT_IOR_EXPR:
+    case BIT_XOR_EXPR:
+      {
+	tree rhs2 = gimple_assign_rhs2 (assign);
+	tree bnd1 = chkp_find_bounds (rhs1, &iter);
+	tree bnd2 = chkp_find_bounds (rhs2, &iter);
+
+	/* First we try to check types of operands.  If it
+	   does not help then look at bound values.
+
+	   If some bounds are incomplete and other are
+	   not proven to be valid (i.e. also incomplete
+	   or invalid because value is not pointer) then
+	   resulting value is incomplete and will be
+	   recomputed later in chkp_finish_incomplete_bounds.  */
+	if (BOUNDED_P (rhs1)
+	    && !BOUNDED_P (rhs2))
+	  bounds = bnd1;
+	else if (BOUNDED_P (rhs2)
+		 && !BOUNDED_P (rhs1)
+		 && rhs_code != MINUS_EXPR)
+	  bounds = bnd2;
+	else if (chkp_incomplete_bounds (bnd1))
+	  if (chkp_valid_bounds (bnd2) && rhs_code != MINUS_EXPR
+	      && !chkp_incomplete_bounds (bnd2))
+	    bounds = bnd2;
+	  else
+	    bounds = incomplete_bounds;
+	else if (chkp_incomplete_bounds (bnd2))
+	  if (chkp_valid_bounds (bnd1)
+	      && !chkp_incomplete_bounds (bnd1))
+	    bounds = bnd1;
+	  else
+	    bounds = incomplete_bounds;
+	else if (!chkp_valid_bounds (bnd1))
+	  if (chkp_valid_bounds (bnd2) && rhs_code != MINUS_EXPR)
+	    bounds = bnd2;
+	  else if (bnd2 == chkp_get_zero_bounds ())
+	    bounds = bnd2;
+	  else
+	    bounds = bnd1;
+	else if (!chkp_valid_bounds (bnd2))
+	  bounds = bnd1;
+	else
+	  /* Seems both operands may have valid bounds
+	     (e.g. pointer minus pointer).  In such case
+	     use default invalid op bounds.  */
+	  bounds = chkp_get_invalid_op_bounds ();
+      }
+      break;
+
+    case BIT_NOT_EXPR:
+    case NEGATE_EXPR:
+    case LSHIFT_EXPR:
+    case RSHIFT_EXPR:
+    case LROTATE_EXPR:
+    case RROTATE_EXPR:
+    case EQ_EXPR:
+    case NE_EXPR:
+    case LT_EXPR:
+    case LE_EXPR:
+    case GT_EXPR:
+    case GE_EXPR:
+    case MULT_EXPR:
+    case RDIV_EXPR:
+    case TRUNC_DIV_EXPR:
+    case FLOOR_DIV_EXPR:
+    case CEIL_DIV_EXPR:
+    case ROUND_DIV_EXPR:
+    case TRUNC_MOD_EXPR:
+    case FLOOR_MOD_EXPR:
+    case CEIL_MOD_EXPR:
+    case ROUND_MOD_EXPR:
+    case EXACT_DIV_EXPR:
+    case FIX_TRUNC_EXPR:
+    case FLOAT_EXPR:
+    case REALPART_EXPR:
+    case IMAGPART_EXPR:
+      /* No valid bounds may be produced by these exprs.  */
+      bounds = chkp_get_invalid_op_bounds ();
+      break;
+
+    case COND_EXPR:
+      {
+	tree val1 = gimple_assign_rhs2 (assign);
+	tree val2 = gimple_assign_rhs3 (assign);
+	tree bnd1 = chkp_find_bounds (val1, &iter);
+	tree bnd2 = chkp_find_bounds (val2, &iter);
+	gimple stmt;
+
+	if (chkp_incomplete_bounds (bnd1) || chkp_incomplete_bounds (bnd2))
+	  bounds = incomplete_bounds;
+	else if (bnd1 == bnd2)
+	  bounds = bnd1;
+	else
+	  {
+	    if (!chkp_can_be_shared (rhs1))
+	      rhs1 = unshare_expr (rhs1);
+
+	    bounds = chkp_get_tmp_reg (assign);
+	    stmt = gimple_build_assign_with_ops (COND_EXPR, bounds,
+						  rhs1, bnd1, bnd2);
+	    gsi_insert_after (&iter, stmt, GSI_SAME_STMT);
+
+	    if (!chkp_valid_bounds (bnd1) && !chkp_valid_bounds (bnd2))
+	      chkp_mark_invalid_bounds (bounds);
+	  }
+      }
+      break;
+
+    case MAX_EXPR:
+    case MIN_EXPR:
+      {
+	tree rhs2 = gimple_assign_rhs2 (assign);
+	tree bnd1 = chkp_find_bounds (rhs1, &iter);
+	tree bnd2 = chkp_find_bounds (rhs2, &iter);
+
+	if (chkp_incomplete_bounds (bnd1) || chkp_incomplete_bounds (bnd2))
+	  bounds = incomplete_bounds;
+	else if (bnd1 == bnd2)
+	  bounds = bnd1;
+	else
+	  {
+	    gimple stmt;
+	    tree cond = build2 (rhs_code == MAX_EXPR ? GT_EXPR : LT_EXPR,
+				boolean_type_node, rhs1, rhs2);
+	    bounds = chkp_get_tmp_reg (assign);
+	    stmt = gimple_build_assign_with_ops (COND_EXPR, bounds,
+						  cond, bnd1, bnd2);
+
+	    gsi_insert_after (&iter, stmt, GSI_SAME_STMT);
+
+	    if (!chkp_valid_bounds (bnd1) && !chkp_valid_bounds (bnd2))
+	      chkp_mark_invalid_bounds (bounds);
+	  }
+      }
+      break;
+
+    default:
+      internal_error ("chkp_compute_bounds_for_assignment: "
+		      "Unexpected RHS code %s",
+		      get_tree_code_name (rhs_code));
+    }
+
+  gcc_assert (bounds);
+
+  if (node)
+    bounds = chkp_maybe_copy_and_register_bounds (node, bounds);
+
+  return bounds;
+}
+
+/* Compute bounds for ssa name NODE defined by DEF_STMT pointed by ITER.
+
+   There are just few statement codes allowed: NOP (for default ssa names),
+   ASSIGN, CALL, PHI, ASM.
+
+   Return computed bounds.  */
+static tree
+chkp_get_bounds_by_definition (tree node, gimple def_stmt,
+			       gimple_stmt_iterator *iter)
+{
+  tree var, bounds;
+  enum gimple_code code = gimple_code (def_stmt);
+  gimple stmt;
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Searching for bounds for node: ");
+      print_generic_expr (dump_file, node, 0);
+
+      fprintf (dump_file, " using its definition: ");
+      print_gimple_stmt (dump_file, def_stmt, 0, TDF_VOPS|TDF_MEMSYMS);
+    }
+
+  switch (code)
+    {
+    case GIMPLE_NOP:
+      var = SSA_NAME_VAR (node);
+      switch (TREE_CODE (var))
+	{
+	case PARM_DECL:
+	  bounds = chkp_get_bound_for_parm (node);
+	  break;
+
+	case VAR_DECL:
+	  /* For uninitialized pointers use none bounds.  */
+	  bounds = chkp_get_none_bounds ();
+	  bounds = chkp_maybe_copy_and_register_bounds (node, bounds);
+	  break;
+
+	case RESULT_DECL:
+	  {
+	    tree base_type;
+
+	    gcc_assert (TREE_CODE (TREE_TYPE (node)) == REFERENCE_TYPE);
+
+	    base_type = TREE_TYPE (TREE_TYPE (node));
+
+	    gcc_assert (TYPE_SIZE (base_type)
+			&& TREE_CODE (TYPE_SIZE (base_type)) == INTEGER_CST
+			&& tree_to_uhwi (TYPE_SIZE (base_type)) != 0);
+
+	    bounds = chkp_make_bounds (node, TYPE_SIZE_UNIT (base_type),
+				       NULL, false);
+	    bounds = chkp_maybe_copy_and_register_bounds (node, bounds);
+	  }
+	  break;
+
+	default:
+	  if (dump_file && (dump_flags & TDF_DETAILS))
+	    {
+	      fprintf (dump_file, "Unexpected var with no definition\n");
+	      print_generic_expr (dump_file, var, 0);
+	    }
+	  internal_error ("chkp_get_bounds_by_definition: Unexpected var of type %s",
+			  get_tree_code_name (TREE_CODE (var)));
+	}
+      break;
+
+    case GIMPLE_ASSIGN:
+      bounds = chkp_compute_bounds_for_assignment (node, def_stmt);
+      break;
+
+    case GIMPLE_CALL:
+      bounds = chkp_build_returned_bound (def_stmt);
+      break;
+
+    case GIMPLE_PHI:
+      if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (node))
+	var = chkp_get_bounds_var (SSA_NAME_VAR (node));
+      else
+	var = chkp_get_tmp_var ();
+      stmt = create_phi_node (var, gimple_bb (def_stmt));
+      bounds = gimple_phi_result (stmt);
+      *iter = gsi_for_stmt (stmt);
+
+      bounds = chkp_maybe_copy_and_register_bounds (node, bounds);
+
+      /* The created bounds do not have all phi args computed, so
+	 we do not know yet whether there is a valid source
+	 of bounds for that node.  Therefore we mark the bounds
+	 as incomplete and recompute them once all phi
+	 args are computed.  */
+      chkp_register_incomplete_bounds (bounds, node);
+      break;
+
+    case GIMPLE_ASM:
+      bounds = chkp_get_zero_bounds ();
+      bounds = chkp_maybe_copy_and_register_bounds (node, bounds);
+      break;
+
+    default:
+      internal_error ("chkp_get_bounds_by_definition: Unexpected GIMPLE code %s",
+		      gimple_code_name[code]);
+    }
+
+  return bounds;
+}
+
 /* Return CALL_EXPR for bndmk with specified LOWER_BOUND and SIZE.  */
 tree
 chkp_build_make_bounds_call (tree lower_bound, tree size)
@@ -1289,6 +2320,110 @@  chkp_variable_size_type (tree type)
   return res;
 }
 
+/* Compute and return bounds for address of DECL which is
+   one of VAR_DECL, PARM_DECL, RESULT_DECL.  */
+static tree
+chkp_get_bounds_for_decl_addr (tree decl)
+{
+  tree bounds;
+
+  gcc_assert (TREE_CODE (decl) == VAR_DECL
+	      || TREE_CODE (decl) == PARM_DECL
+	      || TREE_CODE (decl) == RESULT_DECL);
+
+  bounds = chkp_get_registered_addr_bounds (decl);
+
+  if (bounds)
+    return bounds;
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "Building bounds for address of decl ");
+      print_generic_expr (dump_file, decl, 0);
+      fprintf (dump_file, "\n");
+    }
+
+  /* Use zero bounds if size is unknown and checks for
+     unknown sizes are restricted.  */
+  if ((!DECL_SIZE (decl)
+       || (chkp_variable_size_type (TREE_TYPE (decl))
+	   && (TREE_STATIC (decl)
+	       || DECL_EXTERNAL (decl)
+	       || TREE_PUBLIC (decl))))
+      && !flag_chkp_incomplete_type)
+      return chkp_get_zero_bounds ();
+
+  if (flag_chkp_use_static_bounds
+      && TREE_CODE (decl) == VAR_DECL
+      && (TREE_STATIC (decl)
+	      || DECL_EXTERNAL (decl)
+	      || TREE_PUBLIC (decl))
+      && !DECL_THREAD_LOCAL_P (decl))
+    {
+      tree bnd_var = chkp_make_static_bounds (decl);
+      gimple_stmt_iterator gsi = gsi_start_bb (chkp_get_entry_block ());
+      gimple stmt;
+
+      bounds = chkp_get_tmp_reg (gimple_build_nop ());
+      stmt = gimple_build_assign (bounds, bnd_var);
+      gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
+    }
+  else if (!DECL_SIZE (decl)
+      || (chkp_variable_size_type (TREE_TYPE (decl))
+	  && (TREE_STATIC (decl)
+	      || DECL_EXTERNAL (decl)
+	      || TREE_PUBLIC (decl))))
+    {
+      gcc_assert (TREE_CODE (decl) == VAR_DECL);
+      bounds = chkp_generate_extern_var_bounds (decl);
+    }
+  else
+    {
+      tree lb = chkp_build_addr_expr (decl);
+      bounds = chkp_make_bounds (lb, DECL_SIZE_UNIT (decl), NULL, false);
+    }
+
+  return bounds;
+}
+
+/* Compute and return bounds for constant string.  */
+static tree
+chkp_get_bounds_for_string_cst (tree cst)
+{
+  tree bounds;
+  tree lb;
+  tree size;
+
+  gcc_assert (TREE_CODE (cst) == STRING_CST);
+
+  bounds = chkp_get_registered_bounds (cst);
+
+  if (bounds)
+    return bounds;
+
+  if ((flag_chkp_use_static_bounds && flag_chkp_use_static_const_bounds)
+      || flag_chkp_use_static_const_bounds > 0)
+    {
+      tree bnd_var = chkp_make_static_bounds (cst);
+      gimple_stmt_iterator gsi = gsi_start_bb (chkp_get_entry_block ());
+      gimple stmt;
+
+      bounds = chkp_get_tmp_reg (gimple_build_nop ());
+      stmt = gimple_build_assign (bounds, bnd_var);
+      gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
+    }
+  else
+    {
+      lb = chkp_build_addr_expr (cst);
+      size = build_int_cst (chkp_uintptr_type, TREE_STRING_LENGTH (cst));
+      bounds = chkp_make_bounds (lb, size, NULL, false);
+    }
+
+  bounds = chkp_maybe_copy_and_register_bounds (cst, bounds);
+
+  return bounds;
+}
+
 /* Generate code to instersect bounds BOUNDS1 and BOUNDS2 and
    return the result.  if ITER is not NULL then Code is inserted
    before position pointed by ITER.  Otherwise code is added to
@@ -1397,6 +2532,390 @@  chkp_narrow_bounds_to_field (tree bounds, tree component,
   return chkp_intersect_bounds (field_bounds, bounds, iter);
 }
 
+/* Parse field or array access NODE.
+
+   PTR output parameter holds a pointer to the outermost
+   object.
+
+   BITFIELD output parameter is set to 1 if a bitfield is
+   accessed and to 0 otherwise.  If it is 1 then ELT holds
+   the outer component for the accessed bit field.
+
+   SAFE output parameter is set to 1 if access is safe and
+   checks are not required.
+
+   BOUNDS output parameter holds bounds to be used to check
+   access (may be NULL).
+
+   If INNERMOST_BOUNDS is 1 then try to narrow bounds to the
+   innermost accessed component.  */
+static void
+chkp_parse_array_and_component_ref (tree node, tree *ptr,
+				    tree *elt, bool *safe,
+				    bool *bitfield,
+				    tree *bounds,
+				    gimple_stmt_iterator *iter,
+				    bool innermost_bounds)
+{
+  tree comp_to_narrow = NULL_TREE;
+  tree last_comp = NULL_TREE;
+  bool array_ref_found = false;
+  tree *nodes;
+  tree var;
+  int len;
+  int i;
+
+  /* Compute tree height for expression.  */
+  var = node;
+  len = 1;
+  while (TREE_CODE (var) == COMPONENT_REF
+	 || TREE_CODE (var) == ARRAY_REF
+	 || TREE_CODE (var) == VIEW_CONVERT_EXPR)
+    {
+      var = TREE_OPERAND (var, 0);
+      len++;
+    }
+
+  gcc_assert (len > 1);
+
+  /* It is more convenient for us to scan left-to-right,
+     so walk the tree again and put all nodes into the nodes
+     vector in reversed order.  */
+  nodes = XALLOCAVEC (tree, len);
+  nodes[len - 1] = node;
+  for (i = len - 2; i >= 0; i--)
+    nodes[i] = TREE_OPERAND (nodes[i + 1], 0);
+
+  if (bounds)
+    *bounds = NULL;
+  *safe = true;
+  *bitfield = (TREE_CODE (node) == COMPONENT_REF
+	       && DECL_BIT_FIELD_TYPE (TREE_OPERAND (node, 1)));
+  /* To get the bitfield address we will need the outer element.  */
+  if (*bitfield)
+    *elt = nodes[len - 2];
+  else
+    *elt = NULL_TREE;
+
+  /* If we have indirection in expression then compute
+     outermost structure bounds.  Computed bounds may be
+     narrowed later.  */
+  if (TREE_CODE (nodes[0]) == MEM_REF || INDIRECT_REF_P (nodes[0]))
+    {
+      *safe = false;
+      *ptr = TREE_OPERAND (nodes[0], 0);
+      if (bounds)
+	*bounds = chkp_find_bounds (*ptr, iter);
+    }
+  else
+    {
+      gcc_assert (TREE_CODE (var) == VAR_DECL
+		  || TREE_CODE (var) == PARM_DECL
+		  || TREE_CODE (var) == RESULT_DECL
+		  || TREE_CODE (var) == STRING_CST
+		  || TREE_CODE (var) == SSA_NAME);
+
+      *ptr = chkp_build_addr_expr (var);
+    }
+
+  /* In this loop we are trying to find a field access
+     requiring narrowing.  There are two simple rules
+     for search:
+     1.  Leftmost array_ref is chosen if any.
+     2.  Rightmost suitable component_ref is chosen if innermost
+     bounds are required and no array_ref exists.  */
+  for (i = 1; i < len; i++)
+    {
+      var = nodes[i];
+
+      if (TREE_CODE (var) == ARRAY_REF)
+	{
+	  *safe = false;
+	  array_ref_found = true;
+	  if (flag_chkp_narrow_bounds
+	      && !flag_chkp_narrow_to_innermost_arrray
+	      && (!last_comp
+		  || chkp_may_narrow_to_field (TREE_OPERAND (last_comp, 1))))
+	    {
+	      comp_to_narrow = last_comp;
+	      break;
+	    }
+	}
+      else if (TREE_CODE (var) == COMPONENT_REF)
+	{
+	  tree field = TREE_OPERAND (var, 1);
+
+	  if (innermost_bounds
+	      && !array_ref_found
+	      && chkp_narrow_bounds_for_field (field))
+	    comp_to_narrow = var;
+	  last_comp = var;
+
+	  if (flag_chkp_narrow_bounds
+	      && flag_chkp_narrow_to_innermost_arrray
+	      && TREE_CODE (TREE_TYPE (field)) == ARRAY_TYPE)
+	    {
+	      if (bounds)
+		*bounds = chkp_narrow_bounds_to_field (*bounds, var, iter);
+	      comp_to_narrow = NULL;
+	    }
+	}
+      else if (TREE_CODE (var) == VIEW_CONVERT_EXPR)
+	/* Nothing to do for it.  */
+	;
+      else
+	gcc_unreachable ();
+    }
+
+  if (comp_to_narrow && DECL_SIZE (TREE_OPERAND (comp_to_narrow, 1)) && bounds)
+    *bounds = chkp_narrow_bounds_to_field (*bounds, comp_to_narrow, iter);
+
+  if (innermost_bounds && bounds && !*bounds)
+    *bounds = chkp_find_bounds (*ptr, iter);
+}
+
+/* Compute and return bounds for the address of OBJ.  */
+static tree
+chkp_make_addressed_object_bounds (tree obj, gimple_stmt_iterator *iter)
+{
+  tree bounds = chkp_get_registered_addr_bounds (obj);
+
+  if (bounds)
+    return bounds;
+
+  switch (TREE_CODE (obj))
+    {
+    case VAR_DECL:
+    case PARM_DECL:
+    case RESULT_DECL:
+      bounds = chkp_get_bounds_for_decl_addr (obj);
+      break;
+
+    case STRING_CST:
+      bounds = chkp_get_bounds_for_string_cst (obj);
+      break;
+
+    case ARRAY_REF:
+    case COMPONENT_REF:
+      {
+	tree elt;
+	tree ptr;
+	bool safe;
+	bool bitfield;
+
+	chkp_parse_array_and_component_ref (obj, &ptr, &elt, &safe,
+					    &bitfield, &bounds, iter, true);
+
+	gcc_assert (bounds);
+      }
+      break;
+
+    case FUNCTION_DECL:
+    case LABEL_DECL:
+      bounds = chkp_get_zero_bounds ();
+      break;
+
+    case MEM_REF:
+      bounds = chkp_find_bounds (TREE_OPERAND (obj, 0), iter);
+      break;
+
+    case REALPART_EXPR:
+    case IMAGPART_EXPR:
+      bounds = chkp_make_addressed_object_bounds (TREE_OPERAND (obj, 0), iter);
+      break;
+
+    default:
+      if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, "chkp_make_addressed_object_bounds: "
+		   "unexpected object of type %s\n",
+		   get_tree_code_name (TREE_CODE (obj)));
+	  print_node (dump_file, "", obj, 0);
+	}
+      internal_error ("chkp_make_addressed_object_bounds: "
+		      "Unexpected tree code %s",
+		      get_tree_code_name (TREE_CODE (obj)));
+    }
+
+  chkp_register_addr_bounds (obj, bounds);
+
+  return bounds;
+}
+
+/* Compute bounds for pointer PTR loaded from PTR_SRC.  Generate statements
+   to compute bounds if required.  Computed bounds should be available at
+   position pointed by ITER.
+
+   If PTR_SRC is NULL_TREE then pointer definition is identified.
+
+   If PTR_SRC is not NULL_TREE then ITER points to the statement which loads
+   PTR.  If PTR is any memory reference then ITER points to a statement
+   after which bndldx will be inserted.  In both cases ITER will be updated
+   to point to the inserted bndldx statement.  */
+
+static tree
+chkp_find_bounds_1 (tree ptr, tree ptr_src, gimple_stmt_iterator *iter)
+{
+  tree addr = NULL_TREE;
+  tree bounds = NULL_TREE;
+
+  if (!ptr_src)
+    ptr_src = ptr;
+
+  bounds = chkp_get_registered_bounds (ptr_src);
+
+  if (bounds)
+    return bounds;
+
+  switch (TREE_CODE (ptr_src))
+    {
+    case MEM_REF:
+    case VAR_DECL:
+      if (BOUNDED_P (ptr_src))
+	if (TREE_CODE (ptr) == VAR_DECL && DECL_REGISTER (ptr))
+	  bounds = chkp_get_zero_bounds ();
+	else
+	  {
+	    addr = chkp_build_addr_expr (ptr_src);
+	    bounds = chkp_build_bndldx (addr, ptr, iter);
+	  }
+      else
+	bounds = chkp_get_nonpointer_load_bounds ();
+      break;
+
+    case ARRAY_REF:
+    case COMPONENT_REF:
+      addr = get_base_address (ptr_src);
+      if (DECL_P (addr)
+	  || TREE_CODE (addr) == MEM_REF
+	  || TREE_CODE (addr) == TARGET_MEM_REF)
+	{
+	  if (BOUNDED_P (ptr_src))
+	    if (TREE_CODE (ptr) == VAR_DECL && DECL_REGISTER (ptr))
+	      bounds = chkp_get_zero_bounds ();
+	    else
+	      {
+		addr = chkp_build_addr_expr (ptr_src);
+		bounds = chkp_build_bndldx (addr, ptr, iter);
+	      }
+	  else
+	    bounds = chkp_get_nonpointer_load_bounds ();
+	}
+      else
+	{
+	  gcc_assert (TREE_CODE (addr) == SSA_NAME);
+	  bounds = chkp_find_bounds (addr, iter);
+	}
+      break;
+
+    case PARM_DECL:
+      gcc_unreachable ();
+      bounds = chkp_get_bound_for_parm (ptr_src);
+      break;
+
+    case TARGET_MEM_REF:
+      addr = chkp_build_addr_expr (ptr_src);
+      bounds = chkp_build_bndldx (addr, ptr, iter);
+      break;
+
+    case SSA_NAME:
+      bounds = chkp_get_registered_bounds (ptr_src);
+      if (!bounds)
+	{
+	  gimple def_stmt = SSA_NAME_DEF_STMT (ptr_src);
+	  gimple_stmt_iterator phi_iter;
+
+	  bounds = chkp_get_bounds_by_definition (ptr_src, def_stmt, &phi_iter);
+
+	  gcc_assert (bounds);
+
+	  if (gimple_code (def_stmt) == GIMPLE_PHI)
+	    {
+	      unsigned i;
+
+	      for (i = 0; i < gimple_phi_num_args (def_stmt); i++)
+		{
+		  tree arg = gimple_phi_arg_def (def_stmt, i);
+		  tree arg_bnd;
+		  gimple phi_bnd;
+
+		  arg_bnd = chkp_find_bounds (arg, NULL);
+
+		  /* chkp_get_bounds_by_definition created new phi
+		     statement and phi_iter points to it.
+
+		     Previous call to chkp_find_bounds could create
+		     new basic block and therefore change phi statement
+		     phi_iter points to.  */
+		  phi_bnd = gsi_stmt (phi_iter);
+
+		  add_phi_arg (phi_bnd, arg_bnd,
+			       gimple_phi_arg_edge (def_stmt, i),
+			       UNKNOWN_LOCATION);
+		}
+
+	      /* If all bound phi nodes have their arg computed
+		 then we may finish its computation.  See
+		 chkp_finish_incomplete_bounds for more details.  */
+	      if (chkp_may_finish_incomplete_bounds ())
+		chkp_finish_incomplete_bounds ();
+	    }
+
+	  gcc_assert (bounds == chkp_get_registered_bounds (ptr_src)
+		      || chkp_incomplete_bounds (bounds));
+	}
+      break;
+
+    case ADDR_EXPR:
+      bounds = chkp_make_addressed_object_bounds (TREE_OPERAND (ptr_src, 0), iter);
+      break;
+
+    case INTEGER_CST:
+      if (integer_zerop (ptr_src))
+	bounds = chkp_get_none_bounds ();
+      else
+	bounds = chkp_get_invalid_op_bounds ();
+      break;
+
+    default:
+      if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (dump_file, "chkp_find_bounds: unexpected ptr of type %s\n",
+		   get_tree_code_name (TREE_CODE (ptr_src)));
+	  print_node (dump_file, "", ptr_src, 0);
+	}
+      internal_error ("chkp_find_bounds: Unexpected tree code %s",
+		      get_tree_code_name (TREE_CODE (ptr_src)));
+    }
+
+  if (!bounds)
+    {
+      if (dump_file && (dump_flags & TDF_DETAILS))
+	{
+	  fprintf (stderr, "chkp_find_bounds: cannot find bounds for pointer\n");
+	  print_node (dump_file, "", ptr_src, 0);
+	}
+      internal_error ("chkp_find_bounds: Cannot find bounds for pointer");
+    }
+
+  return bounds;
+}
+
+/* Normal case for bounds search without forced narrowing.  */
+static tree
+chkp_find_bounds (tree ptr, gimple_stmt_iterator *iter)
+{
+  return chkp_find_bounds_1 (ptr, NULL_TREE, iter);
+}
+
+/* Search bounds for pointer PTR loaded from PTR_SRC
+   by statement *ITER points to.  */
+static tree
+chkp_find_bounds_loaded (tree ptr, tree ptr_src, gimple_stmt_iterator *iter)
+{
+  return chkp_find_bounds_1 (ptr, ptr_src, iter);
+}
+
 /* Helper function which checks type of RHS and finds all pointers in
    it.  For each found pointer we build it's accesses in LHS and RHS
    objects and then call HANDLER for them.  Function is used to copy
@@ -1496,6 +3015,320 @@  chkp_walk_pointer_assignments (tree lhs, tree rhs, void *arg,
 		   get_tree_code_name (TREE_CODE (type)));
 }
 
+/* Add code to copy bounds for an assignment of RHS to LHS.
+   ARG is an iterator pointing to the code position.  */
+static void
+chkp_copy_bounds_for_elem (tree lhs, tree rhs, void *arg)
+{
+  gimple_stmt_iterator *iter = (gimple_stmt_iterator *)arg;
+  tree bounds = chkp_find_bounds (rhs, iter);
+  tree addr = chkp_build_addr_expr (lhs);
+
+  chkp_build_bndstx (addr, rhs, bounds, iter);
+}
+
+/* An instrumentation function which is called for each statement
+   having a memory access we want to instrument.  It inserts check
+   code and bounds copy code.
+
+   ITER points to statement to instrument.
+
+   NODE holds memory access in statement to check.
+
+   LOC holds the location information for statement.
+
+   DIRFLAG determines whether the access is a read or a write.
+
+   ACCESS_OFFS should be added to address used in NODE
+   before check.
+
+   ACCESS_SIZE holds size of checked access.
+
+   SAFE indicates if NODE access is safe and should not be
+   checked.  */
+static void
+chkp_process_stmt (gimple_stmt_iterator *iter, tree node,
+		   location_t loc, tree dirflag,
+		   tree access_offs, tree access_size,
+		   bool safe)
+{
+  tree node_type = TREE_TYPE (node);
+  tree size = access_size ? access_size : TYPE_SIZE_UNIT (node_type);
+  tree addr_first = NULL_TREE; /* address of the first accessed byte */
+  tree addr_last = NULL_TREE; /* address of the last accessed byte */
+  tree ptr = NULL_TREE; /* a pointer used for dereference */
+  tree bounds = NULL_TREE;
+
+  /* We do not need instrumentation for clobbers.  */
+  if (dirflag == integer_one_node
+      && gimple_code (gsi_stmt (*iter)) == GIMPLE_ASSIGN
+      && TREE_CLOBBER_P (gimple_assign_rhs1 (gsi_stmt (*iter))))
+    return;
+
+  switch (TREE_CODE (node))
+    {
+    case ARRAY_REF:
+    case COMPONENT_REF:
+      {
+	bool bitfield;
+	tree elt;
+
+	if (safe)
+	  {
+	    /* We are not going to generate any checks, so do not
+	       generate bounds either.  */
+	    addr_first = chkp_build_addr_expr (node);
+	    break;
+	  }
+
+	chkp_parse_array_and_component_ref (node, &ptr, &elt, &safe,
+					    &bitfield, &bounds, iter, false);
+
+	/* Break if there is no dereference and operation is safe.  */
+
+	if (bitfield)
+	  {
+	    tree field = TREE_OPERAND (node, 1);
+
+	    if (TREE_CODE (DECL_SIZE_UNIT (field)) == INTEGER_CST)
+	      size = DECL_SIZE_UNIT (field);
+
+	    if (elt)
+	      elt = chkp_build_addr_expr (elt);
+	    addr_first = fold_convert_loc (loc, ptr_type_node,
+					   elt ? elt : ptr);
+	    addr_first = fold_build_pointer_plus_loc (loc, addr_first,
+						      byte_position (field));
+	  }
+	else
+	  addr_first = chkp_build_addr_expr (node);
+      }
+      break;
+
+    case INDIRECT_REF:
+      ptr = TREE_OPERAND (node, 0);
+      addr_first = ptr;
+      break;
+
+    case MEM_REF:
+      ptr = TREE_OPERAND (node, 0);
+      addr_first = chkp_build_addr_expr (node);
+      break;
+
+    case TARGET_MEM_REF:
+      ptr = TMR_BASE (node);
+      addr_first = chkp_build_addr_expr (node);
+      break;
+
+    case ARRAY_RANGE_REF:
+      /* We do not expect to see ARRAY_RANGE_REF here.  */
+      gcc_unreachable ();
+      break;
+
+    case BIT_FIELD_REF:
+      {
+	tree offs, rem, bpu;
+
+	gcc_assert (!access_offs);
+	gcc_assert (!access_size);
+
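+	/* Convert the access's bit offset and bit size into a byte
+	   offset and a byte size, e.g. for a 5-bit field at bit
+	   offset 12 (BITS_PER_UNIT == 8): OFFS = 12 / 8 = 1,
+	   REM = 12 % 8 = 4, and SIZE = CEIL (5 + 4, 8) = 2 bytes
+	   starting at byte 1.  */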
+	bpu = fold_convert (size_type_node, bitsize_int (BITS_PER_UNIT));
+	offs = fold_convert (size_type_node, TREE_OPERAND (node, 2));
+	rem = size_binop_loc (loc, TRUNC_MOD_EXPR, offs, bpu);
+	offs = size_binop_loc (loc, TRUNC_DIV_EXPR, offs, bpu);
+
+	size = fold_convert (size_type_node, TREE_OPERAND (node, 1));
+	size = size_binop_loc (loc, PLUS_EXPR, size, rem);
+	size = size_binop_loc (loc, CEIL_DIV_EXPR, size, bpu);
+	size = fold_convert (size_type_node, size);
+
+	chkp_process_stmt (iter, TREE_OPERAND (node, 0), loc,
+			   dirflag, offs, size, safe);
+	return;
+      }
+      break;
+
+    case VAR_DECL:
+    case RESULT_DECL:
+    case PARM_DECL:
+      if (dirflag != integer_one_node
+	  || DECL_REGISTER (node))
+	return;
+
+      safe = true;
+      addr_first = chkp_build_addr_expr (node);
+      break;
+
+    default:
+      return;
+    }
+
+  /* If addr_last was not computed then use (addr_first + size - 1)
+     expression to compute it.  */
+  if (!addr_last)
+    {
+      addr_last = fold_build_pointer_plus_loc (loc, addr_first, size);
+      addr_last = fold_build_pointer_plus_hwi_loc (loc, addr_last, -1);
+    }
+
+  /* Shift both ADDR_FIRST and ADDR_LAST by ACCESS_OFFS if specified.  */
+  if (access_offs)
+    {
+      addr_first = fold_build_pointer_plus_loc (loc, addr_first, access_offs);
+      addr_last = fold_build_pointer_plus_loc (loc, addr_last, access_offs);
+    }
+
+  /* Generate bndcl/bndcu checks if memory access is not safe.  */
+  if (!safe)
+    {
+      gimple_stmt_iterator stmt_iter = *iter;
+
+      if (!bounds)
+	bounds = chkp_find_bounds (ptr, iter);
+
+      chkp_check_mem_access (addr_first, addr_last, bounds,
+			     stmt_iter, loc, dirflag);
+    }
+
+  /* We need to store bounds if a pointer value is stored.  */
+  if (dirflag == integer_one_node
+      && chkp_type_has_pointer (node_type)
+      && flag_chkp_store_bounds)
+    {
+      gimple stmt = gsi_stmt (*iter);
+      tree rhs1 = gimple_assign_rhs1 (stmt);
+      enum tree_code rhs_code = gimple_assign_rhs_code (stmt);
+
+      if (get_gimple_rhs_class (rhs_code) == GIMPLE_SINGLE_RHS)
+	chkp_walk_pointer_assignments (node, rhs1, iter,
+				       chkp_copy_bounds_for_elem);
+      else
+	{
+	  bounds = chkp_compute_bounds_for_assignment (NULL_TREE, stmt);
+	  chkp_build_bndstx (addr_first, rhs1, bounds, iter);
+	}
+    }
+}
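+
+/* For illustration (a sketch of the intended output, assuming the
+   __builtin___chkp_* check builtins; temporary names are invented):
+   a store through a pointer
+
+     *p = 1;
+
+   is preceded by checks of the first and last accessed bytes:
+
+     bndcl (__bound_tmp, p);
+     bndcu (__bound_tmp, p + sizeof (*p) - 1);
+     *p = 1;
+
+   where __bound_tmp holds the bounds chkp_find_bounds returned
+   for P.  */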
+
+/* Some code transformations made during the instrumentation pass
+   may leave the IL in an inconsistent state.  Here we find and
+   fix such flaws.  */
+static void
+chkp_fix_cfg ()
+{
+  basic_block bb;
+  gimple_stmt_iterator i;
+
+  /* We may have inserted some code right after a statement which
+     ends its basic block.  Such code belongs on the fallthru edge,
+     but we did not split edges during instrumentation because that
+     could create new PHI nodes which would be incorrect while bounds
+     PHI nodes were still incomplete.  Move such code now.  */
+  FOR_ALL_BB_FN (bb, cfun)
+    for (i = gsi_start_bb (bb); !gsi_end_p (i); gsi_next (&i))
+      {
+	gimple stmt = gsi_stmt (i);
+	gimple_stmt_iterator next = i;
+
+	gsi_next (&next);
+
+	if (stmt_ends_bb_p (stmt)
+	    && !gsi_end_p (next))
+	  {
+	    edge fall = find_fallthru_edge (bb->succs);
+	    basic_block dest = NULL;
+	    int flags = 0;
+
+	    gcc_assert (fall);
+
+	    /* We cannot split an abnormal edge.  Therefore we
+	       store its parameters, make it regular and then
+	       re-create the abnormal edge after the split.  */
+	    if (fall->flags & EDGE_ABNORMAL)
+	      {
+		flags = fall->flags & ~EDGE_FALLTHRU;
+		dest = fall->dest;
+
+		fall->flags &= ~EDGE_COMPLEX;
+	      }
+
+	    while (!gsi_end_p (next))
+	      {
+		gimple next_stmt = gsi_stmt (next);
+		gsi_remove (&next, false);
+		gsi_insert_on_edge (fall, next_stmt);
+	      }
+
+	    gsi_commit_edge_inserts ();
+
+	    /* Re-create abnormal edge.  */
+	    if (dest)
+	      make_edge (bb, dest, flags);
+	  }
+      }
+}
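+
+/* For illustration (a sketch): if instrumentation appended a bounds
+   store right after a call which ends its basic block,
+
+     <bb 2>:
+       p = f ();        ends the block (e.g. EH or abnormal successor)
+       bndstx ...       inserted by instrumentation
+
+   chkp_fix_cfg moves the trailing statements onto the fallthru edge
+   so that each block again ends with its control statement.  */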
+
+/* This function instruments all memory accesses in the current
+   function.  */
+static void
+chkp_instrument_function (void)
+{
+  basic_block bb, next;
+  gimple_stmt_iterator i;
+  enum gimple_rhs_class grhs_class;
+  bool safe = lookup_attribute ("chkp ctor", DECL_ATTRIBUTES (cfun->decl));
+
+  bb = ENTRY_BLOCK_PTR_FOR_FN (cfun)->next_bb;
+  do
+    {
+      next = bb->next_bb;
+      for (i = gsi_start_bb (bb); !gsi_end_p (i); )
+	{
+	  gimple s = gsi_stmt (i);
+
+	  /* Skip statements marked as not to be instrumented.  */
+	  if (chkp_marked_stmt_p (s))
+	    {
+	      gsi_next (&i);
+	      continue;
+	    }
+
+	  switch (gimple_code (s))
+	    {
+	    case GIMPLE_ASSIGN:
+	      chkp_process_stmt (&i, gimple_assign_lhs (s),
+				 gimple_location (s), integer_one_node,
+				 NULL_TREE, NULL_TREE, safe);
+	      chkp_process_stmt (&i, gimple_assign_rhs1 (s),
+				 gimple_location (s), integer_zero_node,
+				 NULL_TREE, NULL_TREE, safe);
+	      grhs_class = get_gimple_rhs_class (gimple_assign_rhs_code (s));
+	      if (grhs_class == GIMPLE_BINARY_RHS)
+		chkp_process_stmt (&i, gimple_assign_rhs2 (s),
+				   gimple_location (s), integer_zero_node,
+				   NULL_TREE, NULL_TREE, safe);
+	      break;
+
+	    case GIMPLE_RETURN:
+	      if (gimple_return_retval (s) != NULL_TREE)
+		chkp_process_stmt (&i, gimple_return_retval (s),
+				   gimple_location (s),
+				   integer_zero_node,
+				   NULL_TREE, NULL_TREE, safe);
+	      break;
+
+	    default:
+	      ;
+	    }
+
+	  gsi_next (&i);
+	}
+      bb = next;
+    }
+  while (bb);
+}
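+
+/* For illustration (a sketch): for an assignment
+
+     x = *p;
+
+   the LHS is processed with DIRFLAG = integer_one_node (write) and
+   RHS1 with DIRFLAG = integer_zero_node (read); here only the *p
+   access produces checks, while the store to the local X is
+   handled as safe.  */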
+
 /* Initialize pass.  */
 static void
 chkp_init (void)
@@ -1540,4 +3373,93 @@  chkp_init (void)
   calculate_dominance_info (CDI_POST_DOMINATORS);
 }
 
+/* Finalize instrumentation pass.  */
+static void
+chkp_fini (void)
+{
+  in_chkp_pass = false;
+
+  delete chkp_invalid_bounds;
+  delete chkp_completed_bounds_set;
+  delete chkp_reg_addr_bounds;
+  delete chkp_incomplete_bounds_map;
+
+  free_dominance_info (CDI_DOMINATORS);
+  free_dominance_info (CDI_POST_DOMINATORS);
+}
+
+/* Main instrumentation pass function.  */
+static unsigned int
+chkp_execute (void)
+{
+  chkp_init ();
+
+  chkp_instrument_function ();
+
+  chkp_function_mark_instrumented (cfun->decl);
+
+  chkp_fix_cfg ();
+
+  chkp_fini ();
+
+  return 0;
+}
+
+/* Instrumentation pass gate.  */
+static bool
+chkp_gate (void)
+{
+  return cgraph_node::get (cfun->decl)->instrumentation_clone
+    || lookup_attribute ("chkp ctor", DECL_ATTRIBUTES (cfun->decl));
+}
+
+namespace {
+
+const pass_data pass_data_chkp =
+{
+  GIMPLE_PASS, /* type */
+  "chkp", /* name */
+  OPTGROUP_NONE, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_ssa | PROP_cfg, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_verify_il
+  | TODO_update_ssa /* todo_flags_finish */
+};
+
+class pass_chkp : public gimple_opt_pass
+{
+public:
+  pass_chkp (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_chkp, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual opt_pass * clone ()
+    {
+      return new pass_chkp (m_ctxt);
+    }
+
+  virtual bool gate (function *)
+    {
+      return chkp_gate ();
+    }
+
+  virtual unsigned int execute (function *)
+    {
+      return chkp_execute ();
+    }
+
+}; // class pass_chkp
+
+} // anon namespace
+
+gimple_opt_pass *
+make_pass_chkp (gcc::context *ctxt)
+{
+  return new pass_chkp (ctxt);
+}
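+
+/* For illustration (an assumption about wiring done elsewhere in
+   this series, not part of this file): the pass is expected to be
+   listed in passes.def, e.g.
+
+     NEXT_PASS (pass_chkp);
+
+   so the pass manager instantiates it via make_pass_chkp.  */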
+
 #include "gt-tree-chkp.h"