
Patch ping

Message ID 20170726134715.GW2123@tucnak
State New

Commit Message

Jakub Jelinek July 26, 2017, 1:47 p.m. UTC
On Wed, Jul 26, 2017 at 12:34:10PM +0200, Richard Biener wrote:
> On Tue, 25 Jul 2017, Jakub Jelinek wrote:
> 
> > Hi!
> > 
> > I'd like to ping 2 patches:
> > 
> > - UBSAN -fsanitize=pointer-overflow support
> >   - http://gcc.gnu.org/ml/gcc-patches/2017-06/msg01365.html
> 
> The probability stuff might need updating?

Yes, done in my copy.

> Can you put the TYPE_PRECISION (sizetype) != POINTER_SIZE check
> in option processing and inform people that pointer overflow sanitizing
> is not done instead?

That is problematic, because during the option processing sizetype
is NULL; it is only set up much later.  And that setup depends on
the options being finalized, the backend being initialized, etc.
I guess I could emit a warning in the sanopt pass once, or something.
Or do it very late in do_compile, after lang_dependent_init (we don't
handle any other option there though).
But it is still just a theoretical case, because libubsan is supported
only on a selected subset of targets and none of those have such a weird
sizetype vs. pointer size setup.  And without libubsan, the only thing
one could do would be -fsanitize=undefined -fsanitize-undefined-trap-on-error
or -fsanitize=pointer-overflow -fsanitize-undefined-trap-on-error;
otherwise one would run into the missing libubsan.
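
(A minimal sketch of that trap-on-error mode, not part of the patch, written
in the style of the new tests below; per ubsan_expand_ptr_ifn in the patch the
check expands to __builtin_trap (), so nothing from libubsan is referenced:)

/* Compile with -fsanitize=pointer-overflow -fsanitize-undefined-trap-on-error.  */
char *volatile p;

int
main ()
{
  __UINTPTR_TYPE__ u = ~(__UINTPTR_TYPE__) 0;
  p = (char *) u;
  p = p + 1;   /* base 0xff...f plus 1 wraps around to 0, so this should trap.  */
  return 0;
}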

> Where you handle DECL_BIT_FIELD_REPRESENTATIVE in 
> maybe_instrument_pointer_overflow you could instead of building
> a new COMPONENT_REF strip the bitfield ref and just remember
> DECL_FIELD_OFFSET/BIT_OFFSET to be added to the get_inner_reference
> result?

That is not enough; the bitfield could be in a variable-length structure etc.
Furthermore, I need that COMPONENT_REF with the representative later
in the function, so that I can compute the ptr and base_addr.
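
(For illustration only, not from the patch: the case where the representative
matters is a plain load or store of a bit-field, e.g.:)

struct B { int x : 3; int y : 29; };

/* Bit-field store: &p->y cannot be formed directly, so the COMPONENT_REF
   is rebuilt with DECL_BIT_FIELD_REPRESENTATIVE before computing ptr and
   base_addr.  */
void
f (struct B *p)
{
  p->y = 1;
}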

>  You don't seem to use 'size' anywhere.

I thought about size, but then decided not to do anything with it.
There are two cases: one is where there is no ADDR_EXPR and it is actually
a memory reference.
In that case the size could in theory be used, but it would need
to be used only for the positive offsets, so something like:
if (off > 0) {
  if (ptr + off + size < ptr)
    runtime_fail;
} else if (ptr + off > ptr)
  runtime_fail;
but when it is actually a memory reference, I suppose it will fail
at runtime anyway when performing such an access, so I think it is
unnecessary.  And for the ADDR_EXPR case, the size is irrelevant; we
are just taking the address of the start of the object.
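
(A hedged illustration of the two cases, not from the patch:)

struct S { int a[16]; };

/* ADDR_EXPR case: only the address is formed, so 'size' is irrelevant.  */
int *f (struct S *p, int i) { return &p->a[i]; }

/* Memory reference case: an out-of-bounds access is expected to fail at
   runtime anyway, so checking ptr + off + size is not considered worth it.  */
int g (struct S *p, int i) { return p->a[i]; }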

> You fail to allow other handled components -- for no good reason?

I was trying to have a quick bail out.  What other handled components might
be relevant?  I guess IMAGPART_EXPR.  For say BIT_FIELD_REF I don't think
I can
  tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);

>  You fail to handle
> &MEM[ptr + CST], a canonical gimple invariant way of ptr +p CST;
> the early out bitpos == 0 will cause non-instrumentation here.

Guess I could use:
  if ((offset == NULL_TREE
       && bitpos == 0
       && (TREE_CODE (inner) != MEM_REF
	   || integer_zerop (TREE_OPERAND (inner, 1))))
The rest of the code will handle it.

> (I'd just round down in the case of bitpos % BITS_PER_UNIT != 0)

But then the
  tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
won't work again.

> > - noipa attribute addition
> >   http://gcc.gnu.org/ml/gcc-patches/2016-12/msg01501.html
> 
> Ok.

Thanks, will retest it now.

Here is the -fsanitize=pointer-overflow patch untested, updated
for the probability and other stuff mentioned above.

	Jakub
2017-07-26  Jakub Jelinek  <jakub@redhat.com>

	PR sanitizer/80998
	* sanopt.c (pass_sanopt::execute): Handle IFN_UBSAN_PTR.
	* tree-ssa-alias.c (call_may_clobber_ref_p_1): Likewise.
	* flag-types.h (enum sanitize_code): Add SANITIZE_POINTER_OVERFLOW.
	Or it into SANITIZER_UNDEFINED.
	* ubsan.c: Include gimple-fold.h and varasm.h.
	(ubsan_expand_ptr_ifn): New function.
	(instrument_pointer_overflow): New function.
	(maybe_instrument_pointer_overflow): New function.
	(instrument_object_size): Formatting fix.
	(pass_ubsan::execute): Call instrument_pointer_overflow
	and maybe_instrument_pointer_overflow.
	* internal-fn.c (expand_UBSAN_PTR): New function.
	* ubsan.h (ubsan_expand_ptr_ifn): Declare.
	* sanitizer.def (__ubsan_handle_pointer_overflow,
	__ubsan_handle_pointer_overflow_abort): New builtins.
	* tree-ssa-tail-merge.c (merge_stmts_p): Handle IFN_UBSAN_PTR.
	* internal-fn.def (UBSAN_PTR): New internal function.
	* opts.c (sanitizer_opts): Add pointer-overflow.
	* lto-streamer-in.c (input_function): Handle IFN_UBSAN_PTR.
	* fold-const.c (build_range_check): Compute pointer range check in
	integral type if pointer arithmetics would be needed.  Formatting
	fixes.
gcc/testsuite/
	* c-c++-common/ubsan/ptr-overflow-1.c: New test.
	* c-c++-common/ubsan/ptr-overflow-2.c: New test.
libsanitizer/
	* ubsan/ubsan_handlers.cc: Cherry-pick upstream r304461.
	* ubsan/ubsan_checks.inc: Likewise.
	* ubsan/ubsan_handlers.h: Likewise.

Comments

Richard Biener July 26, 2017, 2:13 p.m. UTC | #1
On Wed, 26 Jul 2017, Jakub Jelinek wrote:

> On Wed, Jul 26, 2017 at 12:34:10PM +0200, Richard Biener wrote:
> > On Tue, 25 Jul 2017, Jakub Jelinek wrote:
> > 
> > > Hi!
> > > 
> > > I'd like to ping 2 patches:
> > > 
> > > - UBSAN -fsanitize=pointer-overflow support
> > >   - http://gcc.gnu.org/ml/gcc-patches/2017-06/msg01365.html
> > 
> > The probability stuff might need updating?
> 
> Yes, done in my copy.
> 
> > Can you put the TYPE_PRECISION (sizetype) != POINTER_SIZE check
> > in option processing and inform people that pointer overflow sanitizing
> > is not done instead?
> 
> That is problematic, because during the option processing sizetype
> is NULL, it is set up only much later.  And that processing depends on

Ah, ok - fine then as-is.

> the options being finalized and backend initialized etc.
> I guess I could emit a warning in the sanopt pass once or something.
> Or do it very late in do_compile, after lang_dependent_init (we don't
> handle any other option there though). 
> But it is still just a theoretical case, because libubsan is supported
> only on selected subset of targets and none of those have such weird
> sizetype vs. pointer size setup.  And without libubsan, the only thing
> one could do would be -fsanitize=undefined -fsanitize-undefined-trap-on-error
> or -fsanitize=pointer-overflow -fsanitize-undefined-trap-on-error,
> otherwise one would run into missing libubsan.
>
> > Where you handle DECL_BIT_FIELD_REPRESENTATIVE in 
> > maybe_instrument_pointer_overflow you could instead of building
> > a new COMPONENT_REF strip the bitfield ref and just remember
> > DECL_FIELD_OFFSET/BIT_OFFSET to be added to the get_inner_reference
> > result?
> 
> That is not enough, the bitfield could be in variable length structure etc.
> Furthermore, I need that COMPONENT_REF with the representative later
> in the function, so that I can compute the ptr and base_addr.

Ok.
 
> >  You don't seem to use 'size' anywhere.
> 
> size I thought about but then decided not to do anything with it.
> There are two cases, one is where there is no ADDR_EXPR and it actually
> a memory reference.  
> In that case in theory the size could be used, but it would need
> to be used only for the positive offsets, so like:
> if (off > 0) {
>   if (ptr + off + size < ptr)
>     runtime_fail;
> } else if (ptr + off > ptr)
>   runtime_fail;
> but when it is actually a memory reference, I suppose it will fail
> at runtime anyway when performing such an access, so I think it is
> unnecessary.  And for the ADDR_EXPR case, the size is irrelevant, we
> are just taking address of the start of the object.
> 
> > You fail to allow other handled components -- for no good reason?
> 
> I was trying to have a quick bail out.  What other handled components might
> be relevant?  I guess IMAGPART_EXPR.  For say BIT_FIELD_REF I don't think
> I can
>   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);

REALPART/IMAGPART_EXPR, yes.  You can't address a BIT_FIELD_REF
apart from those on a byte boundary (&vector[4] is eventually folded to
a BIT_FIELD_REF).  Similarly for VIEW_CONVERT_EXPR, but you are
only building the address on the base?
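
(For instance, illustrative only, using the GNU __imag__ extension:)

/* A store to the imaginary part is an IMAGPART_EXPR on the LHS, i.e. one of
   the handled components discussed above.  */
void
f (_Complex double *p)
{
  __imag__ *p = 0.0;
}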

> >  You fail to handle
> > &MEM[ptr + CST] a canonical gimple invariant way of ptr +p CST,
> > the early out bitpos == 0 will cause non-instrumentation here.
> 
> Guess I could use:
>   if ((offset == NULL_TREE
>        && bitpos == 0
>        && (TREE_CODE (inner) != MEM_REF
> 	   || integer_zerop (TREE_OPERAND (inner, 1))))
> The rest of the code will handle it.

Yeah.

> 
> > (I'd just round down in the case of bitpos % BITS_PER_UNIT != 0)
> 
> But then the
>   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
> won't work again.

Hmm.  So instead of building the address on the original tree you
could build the difference based on what get_inner_reference returns
in bitpos/offset?

> > > - noipa attribute addition
> > >   http://gcc.gnu.org/ml/gcc-patches/2016-12/msg01501.html
> > 
> > Ok.
> 
> Thanks, will retest it now.
> 
> Here is the -fsanitize=pointer-overflow patch untested, updated
> for the probability and other stuff mentioned above.
> 
> 	Jakub
>
Jakub Jelinek July 26, 2017, 5:31 p.m. UTC | #2
On Wed, Jul 26, 2017 at 04:13:30PM +0200, Richard Biener wrote:
> > >  You don't seem to use 'size' anywhere.
> > 
> > size I thought about but then decided not to do anything with it.
> > There are two cases, one is where there is no ADDR_EXPR and it actually
> > a memory reference.  
> > In that case in theory the size could be used, but it would need
> > to be used only for the positive offsets, so like:
> > if (off > 0) {
> >   if (ptr + off + size < ptr)
> >     runtime_fail;
> > } else if (ptr + off > ptr)
> >   runtime_fail;
> > but when it is actually a memory reference, I suppose it will fail
> > at runtime anyway when performing such an access, so I think it is
> > unnecessary.  And for the ADDR_EXPR case, the size is irrelevant, we
> > are just taking address of the start of the object.
> > 
> > > You fail to allow other handled components -- for no good reason?
> > 
> > I was trying to have a quick bail out.  What other handled components might
> > be relevant?  I guess IMAGPART_EXPR.  For say BIT_FIELD_REF I don't think
> > I can
> >   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
> 
> REALPART/IMAGPART_EXPR, yes.  You can't address BIT_FIELD_REF
> apart those on byte boundary (&vector[4] is eventually folded to
> a BIT_FIELD_REF).  Similar for VIEW_CONVERT_EXPR, but you are
> only building the address on the base?
> 
> > >  You fail to handle
> > > &MEM[ptr + CST] a canonical gimple invariant way of ptr +p CST,
> > > the early out bitpos == 0 will cause non-instrumentation here.
> > 
> > Guess I could use:
> >   if ((offset == NULL_TREE
> >        && bitpos == 0
> >        && (TREE_CODE (inner) != MEM_REF
> > 	   || integer_zerop (TREE_OPERAND (inner, 1))))
> > The rest of the code will handle it.
> 
> Yeah.
> 
> > 
> > > (I'd just round down in the case of bitpos % BITS_PER_UNIT != 0)
> > 
> > But then the
> >   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
> > won't work again.
> 
> Hmm.  So instead of building the address on the original tree you
> could build the difference based on what get_inner_reference returns
> in bitpos/offset?

I'm building both addresses and subtracting them to get the offset.
I guess the other option is to compute just the address of the base
(i.e. base_addr), and add offset (if non-NULL) plus bitpos / BITS_PER_UNIT
plus offset from the MEM_REF (if any).  In that case it would probably
handle any handled_component_p and bitfields too.

	Jakub
Richard Biener July 27, 2017, 7:19 a.m. UTC | #3
On Wed, 26 Jul 2017, Jakub Jelinek wrote:

> On Wed, Jul 26, 2017 at 04:13:30PM +0200, Richard Biener wrote:
> > > >  You don't seem to use 'size' anywhere.
> > > 
> > > size I thought about but then decided not to do anything with it.
> > > There are two cases, one is where there is no ADDR_EXPR and it actually
> > > a memory reference.  
> > > In that case in theory the size could be used, but it would need
> > > to be used only for the positive offsets, so like:
> > > if (off > 0) {
> > >   if (ptr + off + size < ptr)
> > >     runtime_fail;
> > > } else if (ptr + off > ptr)
> > >   runtime_fail;
> > > but when it is actually a memory reference, I suppose it will fail
> > > at runtime anyway when performing such an access, so I think it is
> > > unnecessary.  And for the ADDR_EXPR case, the size is irrelevant, we
> > > are just taking address of the start of the object.
> > > 
> > > > You fail to allow other handled components -- for no good reason?
> > > 
> > > I was trying to have a quick bail out.  What other handled components might
> > > be relevant?  I guess IMAGPART_EXPR.  For say BIT_FIELD_REF I don't think
> > > I can
> > >   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
> > 
> > REALPART/IMAGPART_EXPR, yes.  You can't address BIT_FIELD_REF
> > apart those on byte boundary (&vector[4] is eventually folded to
> > a BIT_FIELD_REF).  Similar for VIEW_CONVERT_EXPR, but you are
> > only building the address on the base?
> > 
> > > >  You fail to handle
> > > > &MEM[ptr + CST] a canonical gimple invariant way of ptr +p CST,
> > > > the early out bitpos == 0 will cause non-instrumentation here.
> > > 
> > > Guess I could use:
> > >   if ((offset == NULL_TREE
> > >        && bitpos == 0
> > >        && (TREE_CODE (inner) != MEM_REF
> > > 	   || integer_zerop (TREE_OPERAND (inner, 1))))
> > > The rest of the code will handle it.
> > 
> > Yeah.
> > 
> > > 
> > > > (I'd just round down in the case of bitpos % BITS_PER_UNIT != 0)
> > > 
> > > But then the
> > >   tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
> > > won't work again.
> > 
> > Hmm.  So instead of building the address on the original tree you
> > could build the difference based on what get_inner_reference returns
> > in bitpos/offset?
> 
> I'm building both addresses and subtracting them to get the offset.
> I guess the other option is to compute just the address of the base
> (i.e. base_addr), and add offset (if non-NULL) plus bitpos / BITS_PER_UNIT
> plus offset from the MEM_REF (if any).  In that case it would probably
> handle any handled_component_p and bitfields too.

Yes.  Can you try sth along this route?  Should be a matter of
adding offset and bitpos / BITS_PER_UNIT (thus rounded down) plus
any MEM_REF offset on the base.

Thanks,
Richard.
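
(A rough sketch of that route, reusing the offset, bitpos and inner variables
from maybe_instrument_pointer_overflow in the patch below; this is only an
illustration, not the actual follow-up change:)

  /* Byte offset from the base: offset (if any) plus bitpos rounded down
     to a byte boundary, plus any constant offset carried by a MEM_REF.  */
  tree moff = size_int (bitpos / BITS_PER_UNIT);
  if (offset != NULL_TREE)
    moff = size_binop (PLUS_EXPR, fold_convert (sizetype, offset), moff);
  if (TREE_CODE (inner) == MEM_REF)
    moff = size_binop (PLUS_EXPR, moff,
                       fold_convert (sizetype, TREE_OPERAND (inner, 1)));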

Patch

--- gcc/sanopt.c.jj	2017-07-04 13:51:47.781815329 +0200
+++ gcc/sanopt.c	2017-07-26 13:44:13.833204640 +0200
@@ -1062,6 +1062,9 @@  pass_sanopt::execute (function *fun)
 		case IFN_UBSAN_OBJECT_SIZE:
 		  no_next = ubsan_expand_objsize_ifn (&gsi);
 		  break;
+		case IFN_UBSAN_PTR:
+		  no_next = ubsan_expand_ptr_ifn (&gsi);
+		  break;
 		case IFN_UBSAN_VPTR:
 		  no_next = ubsan_expand_vptr_ifn (&gsi);
 		  break;
--- gcc/tree-ssa-alias.c.jj	2017-06-19 08:26:17.274597722 +0200
+++ gcc/tree-ssa-alias.c	2017-07-26 13:44:13.834204628 +0200
@@ -1991,6 +1991,7 @@  call_may_clobber_ref_p_1 (gcall *call, a
       case IFN_UBSAN_BOUNDS:
       case IFN_UBSAN_VPTR:
       case IFN_UBSAN_OBJECT_SIZE:
+      case IFN_UBSAN_PTR:
       case IFN_ASAN_CHECK:
 	return false;
       default:
--- gcc/flag-types.h.jj	2017-06-19 08:26:17.593593662 +0200
+++ gcc/flag-types.h	2017-07-26 13:44:13.834204628 +0200
@@ -238,6 +238,7 @@  enum sanitize_code {
   SANITIZE_OBJECT_SIZE = 1UL << 21,
   SANITIZE_VPTR = 1UL << 22,
   SANITIZE_BOUNDS_STRICT = 1UL << 23,
+  SANITIZE_POINTER_OVERFLOW = 1UL << 24,
   SANITIZE_SHIFT = SANITIZE_SHIFT_BASE | SANITIZE_SHIFT_EXPONENT,
   SANITIZE_UNDEFINED = SANITIZE_SHIFT | SANITIZE_DIVIDE | SANITIZE_UNREACHABLE
 		       | SANITIZE_VLA | SANITIZE_NULL | SANITIZE_RETURN
@@ -245,7 +246,8 @@  enum sanitize_code {
 		       | SANITIZE_BOUNDS | SANITIZE_ALIGNMENT
 		       | SANITIZE_NONNULL_ATTRIBUTE
 		       | SANITIZE_RETURNS_NONNULL_ATTRIBUTE
-		       | SANITIZE_OBJECT_SIZE | SANITIZE_VPTR,
+		       | SANITIZE_OBJECT_SIZE | SANITIZE_VPTR
+		       | SANITIZE_POINTER_OVERFLOW,
   SANITIZE_UNDEFINED_NONDEFAULT = SANITIZE_FLOAT_DIVIDE | SANITIZE_FLOAT_CAST
 				  | SANITIZE_BOUNDS_STRICT
 };
--- gcc/ubsan.c.jj	2017-06-30 09:49:32.306609364 +0200
+++ gcc/ubsan.c	2017-07-26 15:37:11.649228991 +0200
@@ -45,6 +45,8 @@  along with GCC; see the file COPYING3.
 #include "builtins.h"
 #include "tree-object-size.h"
 #include "tree-cfg.h"
+#include "gimple-fold.h"
+#include "varasm.h"
 
 /* Map from a tree to a VAR_DECL tree.  */
 
@@ -1029,6 +1031,170 @@  ubsan_expand_objsize_ifn (gimple_stmt_it
   return true;
 }
 
+/* Expand UBSAN_PTR internal call.  */
+
+bool
+ubsan_expand_ptr_ifn (gimple_stmt_iterator *gsip)
+{
+  gimple_stmt_iterator gsi = *gsip;
+  gimple *stmt = gsi_stmt (gsi);
+  location_t loc = gimple_location (stmt);
+  gcc_assert (gimple_call_num_args (stmt) == 2);
+  tree ptr = gimple_call_arg (stmt, 0);
+  tree off = gimple_call_arg (stmt, 1);
+
+  if (integer_zerop (off))
+    {
+      gsi_remove (gsip, true);
+      unlink_stmt_vdef (stmt);
+      return true;
+    }
+
+  basic_block cur_bb = gsi_bb (gsi);
+  tree ptrplusoff = make_ssa_name (pointer_sized_int_node);
+  tree ptri = make_ssa_name (pointer_sized_int_node);
+  int pos_neg = get_range_pos_neg (off);
+
+  /* Split the original block holding the pointer dereference.  */
+  edge e = split_block (cur_bb, stmt);
+
+  /* Get a hold on the 'condition block', the 'then block' and the
+     'else block'.  */
+  basic_block cond_bb = e->src;
+  basic_block fallthru_bb = e->dest;
+  basic_block then_bb = create_empty_bb (cond_bb);
+  basic_block cond_pos_bb = NULL, cond_neg_bb = NULL;
+  add_bb_to_loop (then_bb, cond_bb->loop_father);
+  loops_state_set (LOOPS_NEED_FIXUP);
+
+  /* Set up the fallthrough basic block.  */
+  e->flags = EDGE_FALSE_VALUE;
+  if (pos_neg != 3)
+    {
+      e->count = cond_bb->count;
+      e->probability = profile_probability::very_likely ();
+
+      /* Connect 'then block' with the 'else block'.  This is needed
+	 as the ubsan routines we call in the 'then block' are not noreturn.
+	 The 'then block' only has one outcoming edge.  */
+      make_single_succ_edge (then_bb, fallthru_bb, EDGE_FALLTHRU);
+
+      /* Make an edge coming from the 'cond block' into the 'then block';
+	 this edge is unlikely taken, so set up the probability
+	 accordingly.  */
+      e = make_edge (cond_bb, then_bb, EDGE_TRUE_VALUE);
+      e->probability = profile_probability::very_unlikely ();
+    }
+  else
+    {
+      profile_count count = cond_bb->count.apply_probability (PROB_EVEN);
+      e->count = count;
+      e->probability = profile_probability::even ();
+
+      e = split_block (fallthru_bb, (gimple *) NULL);
+      cond_neg_bb = e->src;
+      fallthru_bb = e->dest;
+      e->count = count;
+      e->probability = profile_probability::very_likely ();
+      e->flags = EDGE_FALSE_VALUE;
+
+      e = make_edge (cond_neg_bb, then_bb, EDGE_TRUE_VALUE);
+      e->probability = profile_probability::very_unlikely ();
+
+      cond_pos_bb = create_empty_bb (cond_bb);
+      add_bb_to_loop (cond_pos_bb, cond_bb->loop_father);
+
+      e = make_edge (cond_bb, cond_pos_bb, EDGE_TRUE_VALUE);
+      e->count = count;
+      e->probability = profile_probability::even ();
+
+      e = make_edge (cond_pos_bb, then_bb, EDGE_TRUE_VALUE);
+      e->probability = profile_probability::very_unlikely ();
+
+      e = make_edge (cond_pos_bb, fallthru_bb, EDGE_FALSE_VALUE);
+      e->count = count;
+      e->probability = profile_probability::very_likely ();
+
+      make_single_succ_edge (then_bb, fallthru_bb, EDGE_FALLTHRU);
+    }
+
+  gimple *g = gimple_build_assign (ptri, NOP_EXPR, ptr);
+  gimple_set_location (g, loc);
+  gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+  g = gimple_build_assign (ptrplusoff, PLUS_EXPR, ptri, off);
+  gimple_set_location (g, loc);
+  gsi_insert_before (&gsi, g, GSI_SAME_STMT);
+
+  /* Update dominance info for the newly created then_bb; note that
+     fallthru_bb's dominance info has already been updated by
+     split_block.  */
+  if (dom_info_available_p (CDI_DOMINATORS))
+    {
+      set_immediate_dominator (CDI_DOMINATORS, then_bb, cond_bb);
+      if (pos_neg == 3)
+	{
+	  set_immediate_dominator (CDI_DOMINATORS, cond_pos_bb, cond_bb);
+	  set_immediate_dominator (CDI_DOMINATORS, fallthru_bb, cond_bb);
+	}
+    }
+
+  /* Put the ubsan builtin call into the newly created BB.  */
+  if (flag_sanitize_undefined_trap_on_error)
+    g = gimple_build_call (builtin_decl_implicit (BUILT_IN_TRAP), 0);
+  else
+    {
+      enum built_in_function bcode
+	= (flag_sanitize_recover & SANITIZE_POINTER_OVERFLOW)
+	  ? BUILT_IN_UBSAN_HANDLE_POINTER_OVERFLOW
+	  : BUILT_IN_UBSAN_HANDLE_POINTER_OVERFLOW_ABORT;
+      tree fn = builtin_decl_implicit (bcode);
+      tree data
+	= ubsan_create_data ("__ubsan_ptrovf_data", 1, &loc,
+			     NULL_TREE, NULL_TREE);
+      data = build_fold_addr_expr_loc (loc, data);
+      g = gimple_build_call (fn, 3, data, ptr, ptrplusoff);
+    }
+  gimple_stmt_iterator gsi2 = gsi_start_bb (then_bb);
+  gimple_set_location (g, loc);
+  gsi_insert_after (&gsi2, g, GSI_NEW_STMT);
+
+  /* Unlink the UBSAN_PTRs vops before replacing it.  */
+  unlink_stmt_vdef (stmt);
+
+  if (TREE_CODE (off) == INTEGER_CST)
+    g = gimple_build_cond (wi::neg_p (off) ? LT_EXPR : GE_EXPR, ptri,
+			   fold_build1 (NEGATE_EXPR, sizetype, off),
+			   NULL_TREE, NULL_TREE);
+  else if (pos_neg != 3)
+    g = gimple_build_cond (pos_neg == 1 ? LT_EXPR : GT_EXPR,
+			   ptrplusoff, ptri, NULL_TREE, NULL_TREE);
+  else
+    {
+      gsi2 = gsi_start_bb (cond_pos_bb);
+      g = gimple_build_cond (LT_EXPR, ptrplusoff, ptri, NULL_TREE, NULL_TREE);
+      gimple_set_location (g, loc);
+      gsi_insert_after (&gsi2, g, GSI_NEW_STMT);
+
+      gsi2 = gsi_start_bb (cond_neg_bb);
+      g = gimple_build_cond (GT_EXPR, ptrplusoff, ptri, NULL_TREE, NULL_TREE);
+      gimple_set_location (g, loc);
+      gsi_insert_after (&gsi2, g, GSI_NEW_STMT);
+
+      gimple_seq seq = NULL;
+      tree t = gimple_build (&seq, loc, NOP_EXPR, ssizetype, off);
+      t = gimple_build (&seq, loc, GE_EXPR, boolean_type_node,
+			t, ssize_int (0));
+      gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
+      g = gimple_build_cond (NE_EXPR, t, boolean_false_node,
+			     NULL_TREE, NULL_TREE);
+    }
+  gimple_set_location (g, loc);
+  /* Replace the UBSAN_PTR with a GIMPLE_COND stmt.  */
+  gsi_replace (&gsi, g, false);
+  return false;
+}
+
+
 /* Cached __ubsan_vptr_type_cache decl.  */
 static GTY(()) tree ubsan_vptr_type_cache_decl;
 
@@ -1234,6 +1400,107 @@  instrument_null (gimple_stmt_iterator gs
     instrument_mem_ref (t, base, &gsi, is_lhs);
 }
 
+/* Instrument pointer arithmetics PTR p+ OFF.  */
+
+static void
+instrument_pointer_overflow (gimple_stmt_iterator *gsi, tree ptr, tree off)
+{
+  if (TYPE_PRECISION (sizetype) != POINTER_SIZE)
+    return;
+  gcall *g = gimple_build_call_internal (IFN_UBSAN_PTR, 2, ptr, off);
+  gimple_set_location (g, gimple_location (gsi_stmt (*gsi)));
+  gsi_insert_before (gsi, g, GSI_SAME_STMT);
+}
+
+/* Instrument pointer arithmetics if any.  */
+
+static void
+maybe_instrument_pointer_overflow (gimple_stmt_iterator *gsi, tree t)
+{
+  if (TYPE_PRECISION (sizetype) != POINTER_SIZE)
+    return;
+  /* Handle also e.g. &s->i.  */
+  if (TREE_CODE (t) == ADDR_EXPR)
+    t = TREE_OPERAND (t, 0);
+
+  switch (TREE_CODE (t))
+    {
+    case COMPONENT_REF:
+      if (TREE_CODE (t) == COMPONENT_REF
+	  && DECL_BIT_FIELD_REPRESENTATIVE (TREE_OPERAND (t, 1)) != NULL_TREE)
+	{
+	  tree repr = DECL_BIT_FIELD_REPRESENTATIVE (TREE_OPERAND (t, 1));
+	  t = build3 (COMPONENT_REF, TREE_TYPE (repr), TREE_OPERAND (t, 0),
+		      repr, TREE_OPERAND (t, 2));
+	}
+      break;
+    case ARRAY_REF:
+    case MEM_REF:
+    case REALPART_EXPR:
+    case IMAGPART_EXPR:
+      break;
+    default:
+      return;
+    }
+
+  HOST_WIDE_INT bitsize, bitpos;
+  tree offset;
+  machine_mode mode;
+  int volatilep = 0, reversep, unsignedp = 0;
+  tree inner = get_inner_reference (t, &bitsize, &bitpos, &offset, &mode,
+				    &unsignedp, &reversep, &volatilep);
+
+  if ((offset == NULL_TREE
+       && bitpos == 0
+       && (TREE_CODE (inner) != MEM_REF
+           || integer_zerop (TREE_OPERAND (inner, 1))))
+      || bitpos % BITS_PER_UNIT != 0)
+    return;
+
+  bool decl_p = DECL_P (inner);
+  tree base;
+  if (decl_p)
+    {
+      if (DECL_REGISTER (inner))
+	return;
+      base = inner;
+      /* If BASE is a fixed size automatic variable or
+	 global variable defined in the current TU and bitpos
+	 fits, don't instrument anything.  */
+      if (offset == NULL_TREE
+	  && bitpos > 0
+	  && (VAR_P (base)
+	   || TREE_CODE (base) == PARM_DECL
+	   || TREE_CODE (base) == RESULT_DECL)
+	  && DECL_SIZE (base)
+	  && TREE_CODE (DECL_SIZE (base)) == INTEGER_CST
+	  && compare_tree_int (DECL_SIZE (base), bitpos) >= 0
+	  && (!is_global_var (base) || decl_binds_to_current_def_p (base)))
+	return;
+    }
+  else if (TREE_CODE (inner) == MEM_REF)
+    base = TREE_OPERAND (inner, 0);
+  else
+    return;
+  tree ptr = build1 (ADDR_EXPR, build_pointer_type (TREE_TYPE (t)), t);
+
+  if (!POINTER_TYPE_P (TREE_TYPE (base)) && !DECL_P (base))
+    return;
+
+  tree base_addr = base;
+  if (decl_p)
+    base_addr = build1 (ADDR_EXPR,
+			build_pointer_type (TREE_TYPE (base)), base);
+  t = fold_build2 (MINUS_EXPR, sizetype,
+		   fold_convert (pointer_sized_int_node, ptr),
+		   fold_convert (pointer_sized_int_node, base_addr));
+  t = force_gimple_operand_gsi (gsi, t, true, NULL_TREE, true,
+				GSI_SAME_STMT);
+  base_addr = force_gimple_operand_gsi (gsi, base_addr, true, NULL_TREE, true,
+					GSI_SAME_STMT);
+  instrument_pointer_overflow (gsi, base_addr, t);
+}
+
 /* Build an ubsan builtin call for the signed-integer-overflow
    sanitization.  CODE says what kind of builtin are we building,
    LOC is a location, LHSTYPE is the type of LHS, OP0 and OP1
@@ -1849,7 +2116,7 @@  instrument_object_size (gimple_stmt_iter
 	{
 	  tree rhs1 = gimple_assign_rhs1 (def_stmt);
 	  if (TREE_CODE (rhs1) == SSA_NAME
-	    && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (rhs1))
+	      && SSA_NAME_OCCURS_IN_ABNORMAL_PHI (rhs1))
 	    break;
 	  else
 	    base = rhs1;
@@ -1973,7 +2240,8 @@  public:
 				| SANITIZE_ALIGNMENT
 				| SANITIZE_NONNULL_ATTRIBUTE
 				| SANITIZE_RETURNS_NONNULL_ATTRIBUTE
-				| SANITIZE_OBJECT_SIZE));
+				| SANITIZE_OBJECT_SIZE
+				| SANITIZE_POINTER_OVERFLOW));
     }
 
   virtual unsigned int execute (function *);
@@ -2064,6 +2332,32 @@  pass_ubsan::execute (function *fun)
 		    }
 		}
 	    }
+
+	  if (sanitize_flags_p (SANITIZE_POINTER_OVERFLOW, fun->decl))
+	    {
+	      if (is_gimple_assign (stmt)
+		  && gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
+		instrument_pointer_overflow (&gsi,
+					     gimple_assign_rhs1 (stmt),
+					     gimple_assign_rhs2 (stmt));
+	      if (gimple_store_p (stmt))
+		maybe_instrument_pointer_overflow (&gsi,
+						   gimple_get_lhs (stmt));
+	      if (gimple_assign_single_p (stmt))
+		maybe_instrument_pointer_overflow (&gsi,
+						   gimple_assign_rhs1 (stmt));
+	      if (is_gimple_call (stmt))
+		{
+		  unsigned args_num = gimple_call_num_args (stmt);
+		  for (unsigned i = 0; i < args_num; ++i)
+		    {
+		      tree arg = gimple_call_arg (stmt, i);
+		      if (is_gimple_reg (arg))
+			continue;
+		      maybe_instrument_pointer_overflow (&gsi, arg);
+		    }
+		}
+	    }
 
 	  gsi_next (&gsi);
 	}
--- gcc/internal-fn.c.jj	2017-07-17 10:08:34.923647976 +0200
+++ gcc/internal-fn.c	2017-07-26 13:44:13.837204592 +0200
@@ -402,6 +402,14 @@  expand_UBSAN_VPTR (internal_fn, gcall *)
 /* This should get expanded in the sanopt pass.  */
 
 static void
+expand_UBSAN_PTR (internal_fn, gcall *)
+{
+  gcc_unreachable ();
+}
+
+/* This should get expanded in the sanopt pass.  */
+
+static void
 expand_UBSAN_OBJECT_SIZE (internal_fn, gcall *)
 {
   gcc_unreachable ();
--- gcc/ubsan.h.jj	2017-06-20 09:05:22.498654128 +0200
+++ gcc/ubsan.h	2017-07-26 13:44:13.837204592 +0200
@@ -52,6 +52,7 @@  enum ubsan_encode_value_phase {
 extern bool ubsan_expand_bounds_ifn (gimple_stmt_iterator *);
 extern bool ubsan_expand_null_ifn (gimple_stmt_iterator *);
 extern bool ubsan_expand_objsize_ifn (gimple_stmt_iterator *);
+extern bool ubsan_expand_ptr_ifn (gimple_stmt_iterator *);
 extern bool ubsan_expand_vptr_ifn (gimple_stmt_iterator *);
 extern bool ubsan_instrument_unreachable (gimple_stmt_iterator *);
 extern tree ubsan_create_data (const char *, int, const location_t *, ...);
--- gcc/sanitizer.def.jj	2017-07-06 20:31:32.835082221 +0200
+++ gcc/sanitizer.def	2017-07-26 13:44:13.837204592 +0200
@@ -448,6 +448,10 @@  DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HAN
 		      "__ubsan_handle_load_invalid_value",
 		      BT_FN_VOID_PTR_PTR,
 		      ATTR_COLD_NOTHROW_LEAF_LIST)
+DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HANDLE_POINTER_OVERFLOW,
+		      "__ubsan_handle_pointer_overflow",
+		      BT_FN_VOID_PTR_PTR_PTR,
+		      ATTR_COLD_NOTHROW_LEAF_LIST)
 DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HANDLE_DIVREM_OVERFLOW_ABORT,
 		      "__ubsan_handle_divrem_overflow_abort",
 		      BT_FN_VOID_PTR_PTR_PTR,
@@ -484,6 +488,10 @@  DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HAN
 		      "__ubsan_handle_load_invalid_value_abort",
 		      BT_FN_VOID_PTR_PTR,
 		      ATTR_COLD_NORETURN_NOTHROW_LEAF_LIST)
+DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HANDLE_POINTER_OVERFLOW_ABORT,
+		      "__ubsan_handle_pointer_overflow_abort",
+		      BT_FN_VOID_PTR_PTR_PTR,
+		      ATTR_COLD_NORETURN_NOTHROW_LEAF_LIST)
 DEF_SANITIZER_BUILTIN(BUILT_IN_UBSAN_HANDLE_FLOAT_CAST_OVERFLOW,
 		      "__ubsan_handle_float_cast_overflow",
 		      BT_FN_VOID_PTR_PTR,
--- gcc/tree-ssa-tail-merge.c.jj	2017-07-03 19:03:29.467756294 +0200
+++ gcc/tree-ssa-tail-merge.c	2017-07-26 13:44:13.838204580 +0200
@@ -1241,6 +1241,7 @@  merge_stmts_p (gimple *stmt1, gimple *st
       case IFN_UBSAN_CHECK_SUB:
       case IFN_UBSAN_CHECK_MUL:
       case IFN_UBSAN_OBJECT_SIZE:
+      case IFN_UBSAN_PTR:
       case IFN_ASAN_CHECK:
 	/* For these internal functions, gimple_location is an implicit
 	   parameter, which will be used explicitly after expansion.
--- gcc/internal-fn.def.jj	2017-07-06 20:31:43.930946892 +0200
+++ gcc/internal-fn.def	2017-07-26 13:44:13.838204580 +0200
@@ -166,6 +166,7 @@  DEF_INTERNAL_FN (UBSAN_VPTR, ECF_LEAF |
 DEF_INTERNAL_FN (UBSAN_CHECK_ADD, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
+DEF_INTERNAL_FN (UBSAN_PTR, ECF_LEAF | ECF_NOTHROW, ".R.")
 DEF_INTERNAL_FN (UBSAN_OBJECT_SIZE, ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
--- gcc/opts.c.jj	2017-07-26 13:37:44.955908409 +0200
+++ gcc/opts.c	2017-07-26 13:44:13.839204567 +0200
@@ -1521,6 +1521,7 @@  const struct sanitizer_opts_s sanitizer_
 		 true),
   SANITIZER_OPT (object-size, SANITIZE_OBJECT_SIZE, true),
   SANITIZER_OPT (vptr, SANITIZE_VPTR, true),
+  SANITIZER_OPT (pointer-overflow, SANITIZE_POINTER_OVERFLOW, true),
   SANITIZER_OPT (all, ~0U, true),
 #undef SANITIZER_OPT
   { NULL, 0U, 0UL, false }
--- gcc/lto-streamer-in.c.jj	2017-06-30 09:49:28.006662098 +0200
+++ gcc/lto-streamer-in.c	2017-07-26 13:44:13.840204555 +0200
@@ -1143,6 +1143,10 @@  input_function (tree fn_decl, struct dat
 		      if ((flag_sanitize & SANITIZE_OBJECT_SIZE) == 0)
 			remove = true;
 		      break;
+		    case IFN_UBSAN_PTR:
+		      if ((flag_sanitize & SANITIZE_POINTER_OVERFLOW) == 0)
+			remove = true;
+		      break;
 		    case IFN_ASAN_MARK:
 		      if ((flag_sanitize & SANITIZE_ADDRESS) == 0)
 			remove = true;
--- gcc/fold-const.c.jj	2017-07-25 12:19:45.423601159 +0200
+++ gcc/fold-const.c	2017-07-26 14:59:51.908708150 +0200
@@ -4859,21 +4859,21 @@  build_range_check (location_t loc, tree
 
   if (low == 0)
     return fold_build2_loc (loc, LE_EXPR, type, exp,
-			fold_convert_loc (loc, etype, high));
+			    fold_convert_loc (loc, etype, high));
 
   if (high == 0)
     return fold_build2_loc (loc, GE_EXPR, type, exp,
-			fold_convert_loc (loc, etype, low));
+			    fold_convert_loc (loc, etype, low));
 
   if (operand_equal_p (low, high, 0))
     return fold_build2_loc (loc, EQ_EXPR, type, exp,
-			fold_convert_loc (loc, etype, low));
+			    fold_convert_loc (loc, etype, low));
 
   if (TREE_CODE (exp) == BIT_AND_EXPR
       && maskable_range_p (low, high, etype, &mask, &value))
     return fold_build2_loc (loc, EQ_EXPR, type,
 			    fold_build2_loc (loc, BIT_AND_EXPR, etype,
-					      exp, mask),
+					     exp, mask),
 			    value);
 
   if (integer_zerop (low))
@@ -4905,7 +4905,7 @@  build_range_check (location_t loc, tree
 	      exp = fold_convert_loc (loc, etype, exp);
 	    }
 	  return fold_build2_loc (loc, GT_EXPR, type, exp,
-			      build_int_cst (etype, 0));
+				  build_int_cst (etype, 0));
 	}
     }
 
@@ -4915,25 +4915,15 @@  build_range_check (location_t loc, tree
   if (etype == NULL_TREE)
     return NULL_TREE;
 
+  if (POINTER_TYPE_P (etype))
+    etype = unsigned_type_for (etype);
+
   high = fold_convert_loc (loc, etype, high);
   low = fold_convert_loc (loc, etype, low);
   exp = fold_convert_loc (loc, etype, exp);
 
   value = const_binop (MINUS_EXPR, high, low);
 
-
-  if (POINTER_TYPE_P (etype))
-    {
-      if (value != 0 && !TREE_OVERFLOW (value))
-	{
-	  low = fold_build1_loc (loc, NEGATE_EXPR, TREE_TYPE (low), low);
-          return build_range_check (loc, type,
-			     	    fold_build_pointer_plus_loc (loc, exp, low),
-			            1, build_int_cst (etype, 0), value);
-	}
-      return 0;
-    }
-
   if (value != 0 && !TREE_OVERFLOW (value))
     return build_range_check (loc, type,
 			      fold_build2_loc (loc, MINUS_EXPR, etype, exp, low),
--- gcc/testsuite/c-c++-common/ubsan/ptr-overflow-1.c.jj	2017-07-26 13:44:13.840204555 +0200
+++ gcc/testsuite/c-c++-common/ubsan/ptr-overflow-1.c	2017-07-26 13:44:13.840204555 +0200
@@ -0,0 +1,65 @@ 
+/* PR sanitizer/80998 */
+/* { dg-do run } */
+/* { dg-options "-fsanitize=pointer-overflow -fno-sanitize-recover=pointer-overflow -Wall" } */
+
+struct S { int a; int b; int c[64]; };
+__attribute__((noinline, noclone)) char *f1 (char *p) { return p + 1; }
+__attribute__((noinline, noclone)) char *f2 (char *p) { return p - 1; }
+__attribute__((noinline, noclone)) char *f3 (char *p, int i) { return p + i; }
+__attribute__((noinline, noclone)) char *f4 (char *p, int i) { return p - i; }
+__attribute__((noinline, noclone)) char *f5 (char *p, unsigned long int i) { return p + i; }
+__attribute__((noinline, noclone)) char *f6 (char *p, unsigned long int i) { return p - i; }
+__attribute__((noinline, noclone)) int *f7 (struct S *p) { return &p->a; }
+__attribute__((noinline, noclone)) int *f8 (struct S *p) { return &p->b; }
+__attribute__((noinline, noclone)) int *f9 (struct S *p) { return &p->c[64]; }
+__attribute__((noinline, noclone)) int *f10 (struct S *p, int i) { return &p->c[i]; }
+
+char *volatile p;
+struct S *volatile q;
+char a[64];
+struct S s;
+int *volatile r;
+
+int
+main ()
+{
+  struct S t;
+  p = &a[32];
+  p = f1 (p);
+  p = f1 (p);
+  p = f2 (p);
+  p = f3 (p, 1);
+  p = f3 (p, -1);
+  p = f3 (p, 3);
+  p = f3 (p, -6);
+  p = f4 (p, 1);
+  p = f4 (p, -1);
+  p = f4 (p, 3);
+  p = f4 (p, -6);
+  p = f5 (p, 1);
+  p = f5 (p, 3);
+  p = f6 (p, 1);
+  p = f6 (p, 3);
+  if (sizeof (unsigned long) >= sizeof (char *))
+    {
+      p = f5 (p, -1);
+      p = f5 (p, -6);
+      p = f6 (p, -1);
+      p = f6 (p, -6);
+    }
+  q = &s;
+  r = f7 (q);
+  r = f8 (q);
+  r = f9 (q);
+  r = f10 (q, 0);
+  r = f10 (q, 10);
+  r = f10 (q, 64);
+  q = &t;
+  r = f7 (q);
+  r = f8 (q);
+  r = f9 (q);
+  r = f10 (q, 0);
+  r = f10 (q, 10);
+  r = f10 (q, 64);
+  return 0;
+}
--- gcc/testsuite/c-c++-common/ubsan/ptr-overflow-2.c.jj	2017-07-26 13:44:13.840204555 +0200
+++ gcc/testsuite/c-c++-common/ubsan/ptr-overflow-2.c	2017-07-26 13:44:13.840204555 +0200
@@ -0,0 +1,113 @@ 
+/* PR sanitizer/80998 */
+/* { dg-do run } */
+/* { dg-options "-fsanitize=pointer-overflow -fsanitize-recover=pointer-overflow -fno-ipa-icf -Wall" } */
+
+__attribute__((noinline, noclone)) char * f1 (char *p) { return p + 1; }
+__attribute__((noinline, noclone)) char * f2 (char *p) { return p - 1; }
+__attribute__((noinline, noclone)) char * f3 (char *p, int i) { return p + i; }
+__attribute__((noinline, noclone)) char * f4 (char *p, int i) { return p + i; }
+__attribute__((noinline, noclone)) char * f5 (char *p, int i) { return p - i; }
+__attribute__((noinline, noclone)) char * f6 (char *p, int i) { return p - i; }
+__attribute__((noinline, noclone)) char * f7 (char *p, unsigned long int i) { return p + i; }
+__attribute__((noinline, noclone)) char * f8 (char *p, unsigned long int i) { return p + i; }
+__attribute__((noinline, noclone)) char * f9 (char *p, unsigned long int i) { return p - i; }
+__attribute__((noinline, noclone)) char * f10 (char *p, unsigned long int i) { return p - i; }
+struct S { int a; int b; int c[64]; };
+__attribute__((noinline, noclone)) int *f11 (struct S *p) { return &p->a; }
+__attribute__((noinline, noclone)) int *f12 (struct S *p) { return &p->b; }
+__attribute__((noinline, noclone)) int *f13 (struct S *p) { return &p->c[64]; }
+__attribute__((noinline, noclone)) int *f14 (struct S *p, int i) { return &p->c[i]; }
+__attribute__((noinline, noclone)) int *f15 (struct S *p, int i) { return &p->c[i]; }
+__attribute__((noinline, noclone)) int *f16 (struct S *p) { return &p->a; }
+__attribute__((noinline, noclone)) int *f17 (struct S *p) { return &p->b; }
+__attribute__((noinline, noclone)) int *f18 (struct S *p) { return &p->c[64]; }
+__attribute__((noinline, noclone)) int *f19 (struct S *p, int i) { return &p->c[i]; }
+__attribute__((noinline, noclone)) int *f20 (struct S *p, int i) { return &p->c[i]; }
+__attribute__((noinline, noclone)) int *f21 (struct S *p) { return &p->a; }
+__attribute__((noinline, noclone)) int *f22 (struct S *p) { return &p->b; }
+__attribute__((noinline, noclone)) int *f23 (struct S *p) { return &p->c[64]; }
+__attribute__((noinline, noclone)) int *f24 (struct S *p, int i) { return &p->c[i]; }
+__attribute__((noinline, noclone)) int *f25 (struct S *p, int i) { return &p->c[i]; }
+
+char *volatile p;
+__UINTPTR_TYPE__ volatile u;
+struct S *volatile q;
+int *volatile r;
+
+int
+main ()
+{
+  u = ~(__UINTPTR_TYPE__) 0;
+  p = (char *) u;
+  p = f1 (p);
+  u = 0;
+  p = (char *) u;
+  p = f2 (p);
+  u = -(__UINTPTR_TYPE__) 7;
+  p = (char *) u;
+  p = f3 (p, 7);
+  u = 3;
+  p = (char *) u;
+  p = f4 (p, -4);
+  u = 23;
+  p = (char *) u;
+  p = f5 (p, 27);
+  u = -(__UINTPTR_TYPE__) 15;
+  p = (char *) u;
+  p = f6 (p, -15);
+  u = -(__UINTPTR_TYPE__) 29;
+  p = (char *) u;
+  p = f7 (p, 31);
+  u = 23;
+  p = (char *) u;
+  p = f9 (p, 24);
+  if (sizeof (unsigned long) < sizeof (char *))
+    return 0;
+  u = 7;
+  p = (char *) u;
+  p = f8 (p, -8);
+  u = -(__UINTPTR_TYPE__) 25;
+  p = (char *) u;
+  p = f10 (p, -25);
+  u = ~(__UINTPTR_TYPE__) 0;
+  q = (struct S *) u;
+  r = f11 (q);
+  r = f12 (q);
+  r = f13 (q);
+  r = f14 (q, 0);
+  r = f15 (q, 63);
+  u = ~(__UINTPTR_TYPE__) 0 - (17 * sizeof (int));
+  q = (struct S *) u;
+  r = f16 (q);
+  r = f17 (q);
+  r = f18 (q);
+  r = f19 (q, 0);
+  r = f20 (q, 63);
+  u = 3 * sizeof (int);
+  q = (struct S *) u;
+  r = f21 (q);
+  r = f22 (q);
+  r = f23 (q);
+  r = f24 (q, -2);
+  r = f25 (q, -6);
+  return 0;
+}
+
+/* { dg-output ":5:6\[79]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+ overflowed to (0\[xX])?0\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:6:6\[79]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+ overflowed to (0\[xX])?\[fF]\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:7:7\[46]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+9 overflowed to (0\[xX])?0\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:8:7\[46]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+3 overflowed to (0\[xX])?\[fF]\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:9:7\[46]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+17 overflowed to (0\[xX])?\[fF]\+\[cC](\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:10:7\[46]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+1 overflowed to (0\[xX])?0\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:11:\[89]\[80]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+\[eE]3 overflowed to (0\[xX])?0\+2(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:13:\[89]\[80]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+17 overflowed to (0\[xX])?\[fF]\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:12:\[89]\[80]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+7 overflowed to (0\[xX])?\[fF]\+(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*:14:\[89]\[91]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+\[eE]7 overflowed to (0\[xX])?0\+" } */
+/* { dg-output "(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:17:\[67]\[82]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+ overflowed to (0\[xX])?0\+3(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:18:\[67]\[86]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+ overflowed to (0\[xX])?0\+107(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:19:\[78]\[52]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+ overflowed to (0\[xX])?0\+7(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:20:\[78]\[52]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+ overflowed to (0\[xX])?0\+103(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:23:\[67]\[86]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+\[bB]\[bB] overflowed to (0\[xX])?0\+\[cC]3(\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:25:\[78]\[52]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?\[fF]\+\[bB]\[bB] overflowed to (0\[xX])?0\+\[bB]\[fF](\n|\r\n|\r)" { target int32 } } */
+/* { dg-output "\[^\n\r]*:30:\[78]\[52]\[^\n\r]*runtime error: pointer index expression with base (0\[xX])?0\+\[cC] overflowed to (0\[xX])?\[fF]\+\[cC]" { target int32 } } */
--- libsanitizer/ubsan/ubsan_handlers.cc.jj	2017-06-19 08:26:17.073600279 +0200
+++ libsanitizer/ubsan/ubsan_handlers.cc	2017-07-26 13:44:14.261199463 +0200
@@ -521,6 +521,37 @@  void __ubsan::__ubsan_handle_nonnull_arg
   Die();
 }
 
+static void handlePointerOverflowImpl(PointerOverflowData *Data,
+                                      ValueHandle Base,
+                                      ValueHandle Result,
+                                      ReportOptions Opts) {
+  SourceLocation Loc = Data->Loc.acquire();
+  ErrorType ET = ErrorType::PointerOverflow;
+
+  if (ignoreReport(Loc, Opts, ET))
+    return;
+
+  ScopedReport R(Opts, Loc, ET);
+
+  Diag(Loc, DL_Error, "pointer index expression with base %0 overflowed to %1")
+    << (void *)Base << (void*)Result;
+}
+
+void __ubsan::__ubsan_handle_pointer_overflow(PointerOverflowData *Data,
+                                              ValueHandle Base,
+                                              ValueHandle Result) {
+  GET_REPORT_OPTIONS(false);
+  handlePointerOverflowImpl(Data, Base, Result, Opts);
+}
+
+void __ubsan::__ubsan_handle_pointer_overflow_abort(PointerOverflowData *Data,
+                                                    ValueHandle Base,
+                                                    ValueHandle Result) {
+  GET_REPORT_OPTIONS(true);
+  handlePointerOverflowImpl(Data, Base, Result, Opts);
+  Die();
+}
+
 static void handleCFIBadIcall(CFICheckFailData *Data, ValueHandle Function,
                               ReportOptions Opts) {
   if (Data->CheckKind != CFITCK_ICall)
--- libsanitizer/ubsan/ubsan_checks.inc.jj	2017-06-19 08:26:17.061600432 +0200
+++ libsanitizer/ubsan/ubsan_checks.inc	2017-07-26 13:44:14.275199294 +0200
@@ -17,6 +17,7 @@ 
 
 UBSAN_CHECK(GenericUB, "undefined-behavior", "undefined")
 UBSAN_CHECK(NullPointerUse, "null-pointer-use", "null")
+UBSAN_CHECK(PointerOverflow, "pointer-overflow", "pointer-overflow")
 UBSAN_CHECK(MisalignedPointerUse, "misaligned-pointer-use", "alignment")
 UBSAN_CHECK(InsufficientObjectSize, "insufficient-object-size", "object-size")
 UBSAN_CHECK(SignedIntegerOverflow, "signed-integer-overflow",
--- libsanitizer/ubsan/ubsan_handlers.h.jj	2017-06-19 08:26:17.073600279 +0200
+++ libsanitizer/ubsan/ubsan_handlers.h	2017-07-26 13:44:14.282199209 +0200
@@ -146,6 +146,13 @@  struct NonNullArgData {
 /// \brief Handle passing null pointer to function with nonnull attribute.
 RECOVERABLE(nonnull_arg, NonNullArgData *Data)
 
+struct PointerOverflowData {
+  SourceLocation Loc;
+};
+
+RECOVERABLE(pointer_overflow, PointerOverflowData *Data, ValueHandle Base,
+            ValueHandle Result)
+
 /// \brief Known CFI check kinds.
 /// Keep in sync with the enum of the same name in CodeGenFunction.h
 enum CFITypeCheckKind : unsigned char {