
add access warning pass

Message ID bfd140bd-66e3-0642-992b-1d975265fdb1@gmail.com
State New
Series add access warning pass

Commit Message

Martin Sebor July 15, 2021, 10:39 p.m. UTC
A number of access warnings as well as their supporting
infrastructure (compute_objsize et al.) are implemented in
builtins.{c,h} where they (mostly) operate on trees and run
just before RTL expansion.

This setup may have made sense initially when the warnings were
very simple and didn't perform any CFG analysis, but it's becoming
a liability.  The code has grown both in size and in complexity,
might need to examine the CFG to improve detection, and in some
cases might achieve a better signal-to-noise ratio if run earlier.  Running
the warning code on trees is also slower because it doesn't
benefit from the SSA_NAME caching provided by the pointer_query
class.  Finally, having the code there is also an impediment to
maintainability as warnings and builtin expansion are unrelated
to each other and contributors to one area shouldn't need to wade
through unrelated code (the same applies to patch reviewers).
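The pointer_query cache mentioned above keys lookups by SSA name version, so repeated queries for the same pointer become cheap.  A standalone C++ sketch of that two-level cache shape (toy types only; the real class is defined by this patch in pointer-query.{h,cc}):

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for access_ref: a cached result for one pointer.
struct result
{
  int ref = 0;        // nonzero means "populated"
  long size = -1;
  bool valid () const { return ref != 0; }
};

// Two-level cache: indices[] maps (SSA version, ostype) to a 1-based
// slot in results[]; zero means "not cached yet".
struct toy_pointer_query
{
  std::vector<unsigned> indices;
  std::vector<result> results;
  unsigned hits = 0, misses = 0;

  const result *get (unsigned version, int ostype)
  {
    unsigned idx = version << 1 | (ostype & 1);
    if (idx >= indices.size () || indices[idx] == 0)
      { ++misses; return nullptr; }
    const result &r = results[indices[idx] - 1];
    if (!r.valid ()) { ++misses; return nullptr; }
    ++hits;
    return &r;
  }

  void put (unsigned version, int ostype, const result &r)
  {
    if (!r.valid ())
      return;   // only cache populated entries
    unsigned idx = version << 1 | (ostype & 1);
    if (indices.size () <= idx)
      indices.resize (idx + 1);
    if (indices[idx] == 0)
      {
	results.push_back (r);
	indices[idx] = results.size ();
      }
  }
};
```

The real cache additionally tracks failures and recursion depth; the point here is only the constant-time lookup by SSA version that tree-level callers don't get.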

The attached change introduces a new warning pass and a couple of
new source files and headers and, as the first step, moves the warning
code from builtins.{c,h} there.  To keep the initial changes as
simple as possible the pass only runs a subset of existing
warnings: -Wfree-nonheap-object, -Wmismatched-dealloc, and
-Wmismatched-new-delete.  The others (-Wstringop-overflow and
-Wstringop-overread) still run on the tree representation and
are still invoked from builtins.c or elsewhere.

The changes have no functional impact either on codegen or on
warnings.  I tested them on x86_64-linux.

As the next step I plan to change the -Wstringop-overflow and
-Wstringop-overread code to run on the GIMPLE IL in the new pass
instead of on trees in builtins.c.

Martin

PS The builtins.c diff produced by git diff was much bigger than
the changes justify.  It seems that the code removal somehow
confused it.  To make review easy I replaced it with a plain
unified diff of builtins.c that doesn't suffer from the problem.
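One general git tip for this situation (not something the message itself suggests): the default Myers diff algorithm sometimes pairs unrelated removed and added lines across a large deletion, and an alternative algorithm can produce a much more readable patch:

```shell
# Histogram and patience diffs often handle large code removals better
# than the default Myers algorithm.
git diff --diff-algorithm=histogram -- gcc/builtins.c
git diff --patience -- gcc/builtins.c
```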

Comments

Martin Sebor July 23, 2021, 4:48 p.m. UTC | #1
Ping: https://gcc.gnu.org/pipermail/gcc-patches/2021-July/575377.html

On 7/15/21 4:39 PM, Martin Sebor wrote:
> [...]
Richard Biener July 28, 2021, 9:23 a.m. UTC | #2
On Fri, Jul 16, 2021 at 12:42 AM Martin Sebor via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> [...]
>
> As the next step I plan to change the -Wstringop-overflow and
> -Wstringop-overread code to run on the GIMPLE IL in the new pass
> instead of on trees in builtins.c.

That's the maybe_warn_rdwr_sizes thing?

+      gimple *stmt = gsi_stmt (si);
+      if (!is_gimple_call (stmt))
+       continue;
+
+      check (as_a <gcall *>(stmt));


     if (gcall *call = dyn_cast <gcall *> (gsi_stmt (si)))
       check (call);

might be more C++-ish.
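The suggested idiom uses GCC's is-a.h dyn_cast, which returns null when the statement isn't a call, and declaring the result in the if condition scopes it to the branch.  The same pattern with standard C++ dynamic_cast and toy classes standing in for gimple/gcall:

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct gimple_stmt { virtual ~gimple_stmt () = default; };
struct call_stmt : gimple_stmt
{
  int nargs;
  explicit call_stmt (int n) : nargs (n) {}
};
struct assign_stmt : gimple_stmt {};

// Count call statements, skipping everything else, using the
// declare-in-condition pattern from the review comment.
int
count_calls (const std::vector<std::unique_ptr<gimple_stmt>> &stmts)
{
  int n = 0;
  for (const auto &s : stmts)
    if (auto *call = dynamic_cast<const call_stmt *> (s.get ()))
      {
	(void) call->nargs;  // the cast result is only visible here
	++n;
      }
  return n;
}
```

GCC's dyn_cast avoids RTTI by dispatching on gimple codes, but the control-flow shape, one declaration plus one branch instead of an explicit is_gimple_call test followed by as_a, is identical.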

The patch looks OK - I skimmed it as mostly moving things
around plus adding a new pass.

Thanks,
Richard.

Martin Sebor July 28, 2021, 10:09 p.m. UTC | #3
On 7/28/21 3:23 AM, Richard Biener wrote:
> On Fri, Jul 16, 2021 at 12:42 AM Martin Sebor via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
>>
>> [...]
>> As the next step I plan to change the -Wstringop-overflow and
>> -Wstringop-overread code to run on the GIMPLE IL in the new pass
>> instead of on trees in builtins.c.
> 
> That's the maybe_warn_rdwr_sizes thing?

Among others, yes.  It includes most buffer overflow and overread
warnings.

> 
> +      gimple *stmt = gsi_stmt (si);
> +      if (!is_gimple_call (stmt))
> +       continue;
> +
> +      check (as_a <gcall *>(stmt));
> 
> 
>       if (gcall *call = dyn_cast <gcall *> (gsi_stmt (si)))
>         check (call);
> 
> might be more C++-ish.

Sure, I can do that.

> 
> The patch looks OK - I skimmed it as mostly moving things
> around plus adding a new pass.

Okay, thanks.  I have retested the updated change and pushed it in
r12-2581.  As a reminder, the git show output for builtins.c looks
considerably different from the patch I posted because of what I
mentioned below.

Martin


Patch

Add new gimple-ssa-warn-access pass.

gcc/ChangeLog:

	* Makefile.in (OBJS): Add gimple-ssa-warn-access.o and pointer-query.o.
	* attribs.h (fndecl_dealloc_argno): Move fndecl_dealloc_argno to tree.h.
	* builtins.c (compute_objsize_r): Move to pointer-query.cc.
	(access_ref::access_ref): Same.
	(access_ref::phi): Same.
	(access_ref::get_ref): Same.
	(access_ref::size_remaining): Same.
	(access_ref::offset_in_range): Same.
	(access_ref::add_offset): Same.
	(access_ref::inform_access): Same.
	(ssa_name_limit_t::visit_phi): Same.
	(ssa_name_limit_t::leave_phi): Same.
	(ssa_name_limit_t::next): Same.
	(ssa_name_limit_t::next_phi): Same.
	(ssa_name_limit_t::~ssa_name_limit_t): Same.
	(pointer_query::pointer_query): Same.
	(pointer_query::get_ref): Same.
	(pointer_query::put_ref): Same.
	(pointer_query::flush_cache): Same.
	(warn_string_no_nul): Move to gimple-ssa-warn-access.cc.
	(check_nul_terminated_array): Same.
	(unterminated_array): Same.
	(maybe_warn_for_bound): Same.
	(check_read_access): Same.
	(warn_for_access): Same.
	(get_size_range): Same.
	(check_access): Same.
	(gimple_call_alloc_size): Move to tree.c.
	(gimple_parm_array_size): Move to pointer-query.cc.
	(get_offset_range): Same.
	(gimple_call_return_array): Same.
	(handle_min_max_size): Same.
	(handle_array_ref): Same.
	(handle_mem_ref): Same.
	(compute_objsize): Same.
	(gimple_call_alloc_p): Move to gimple-ssa-warn-access.cc.
	(call_dealloc_argno): Same.
	(fndecl_dealloc_argno): Same.
	(new_delete_mismatch_p): Same.
	(matching_alloc_calls_p): Same.
	(warn_dealloc_offset): Same.
	(maybe_emit_free_warning): Same.
	* builtins.h (check_nul_terminated_array): Move to
	gimple-ssa-warn-access.h.
	(check_nul_terminated_array): Same.
	(warn_string_no_nul): Same.
	(unterminated_array): Same.
	(class ssa_name_limit_t): Same.
	(class pointer_query): Same.
	(struct access_ref): Same.
	(class range_query): Same.
	(struct access_data): Same.
	(gimple_call_alloc_size): Same.
	(gimple_parm_array_size): Same.
	(compute_objsize): Same.
	(class access_data): Same.
	(maybe_emit_free_warning): Same.
	* calls.c (initialize_argument_information): Remove call to
	maybe_emit_free_warning.
	* gimple-array-bounds.cc: Include new header.
	* gimple-fold.c: Same.
	* gimple-ssa-sprintf.c: Same.
	* gimple-ssa-warn-restrict.c: Same.
	* passes.def: Add pass_warn_access.
	* tree-pass.h (make_pass_warn_access): Declare.
	* tree-ssa-strlen.c: Include new headers.
	* tree.c (fndecl_dealloc_argno): Move here from builtins.c.
	* tree.h (fndecl_dealloc_argno): Move here from attribs.h.
	* gimple-ssa-warn-access.cc: New file.
	* gimple-ssa-warn-access.h: New file.
	* pointer-query.cc: New file.
	* pointer-query.h: New file.

gcc/cp/ChangeLog:

	* init.c: Include new header.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 934b2a05327..5d5dcaf29ef 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1413,6 +1413,7 @@  OBJS = \
 	gimple-ssa-store-merging.o \
 	gimple-ssa-strength-reduction.o \
 	gimple-ssa-sprintf.o \
+	gimple-ssa-warn-access.o \
 	gimple-ssa-warn-alloca.o \
 	gimple-ssa-warn-restrict.o \
 	gimple-streamer-in.o \
@@ -1523,6 +1524,7 @@  OBJS = \
 	ordered-hash-map-tests.o \
 	passes.o \
 	plugin.o \
+	pointer-query.o \
 	postreload-gcse.o \
 	postreload.o \
 	predict.o \
diff --git a/gcc/attribs.h b/gcc/attribs.h
index df78eb152f9..87231b954c6 100644
--- a/gcc/attribs.h
+++ b/gcc/attribs.h
@@ -316,6 +316,4 @@  extern void init_attr_rdwr_indices (rdwr_map *, tree);
 extern attr_access *get_parm_access (rdwr_map &, tree,
 				     tree = current_function_decl);
 
-extern unsigned fndecl_dealloc_argno (tree fndecl);
-
 #endif // GCC_ATTRIBS_H
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 39ab139b7e1..845a8bb1201 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -80,6 +80,8 @@  along with GCC; see the file COPYING3.  If not see
 #include "attr-fnspec.h"
 #include "demangle.h"
 #include "gimple-range.h"
+#include "pointer-query.h"
+#include "gimple-ssa-warn-access.h"
 
 struct target_builtins default_target_builtins;
 #if SWITCHABLE_TARGET
@@ -185,8 +187,6 @@ 
 static void maybe_emit_sprintf_chk_warning (tree, enum built_in_function);
 static tree fold_builtin_object_size (tree, tree);
 static bool check_read_access (tree, tree, tree = NULL_TREE, int = 1);
-static bool compute_objsize_r (tree, int, access_ref *, ssa_name_limit_t &,
-			       pointer_query *);
 
 unsigned HOST_WIDE_INT target_newline;
 unsigned HOST_WIDE_INT target_percent;
@@ -199,565 +199,6 @@ 
 static tree do_mpfr_lgamma_r (tree, tree, tree);
 static void expand_builtin_sync_synchronize (void);
 
-access_ref::access_ref (tree bound /* = NULL_TREE */,
-			bool minaccess /* = false */)
-: ref (), eval ([](tree x){ return x; }), deref (), trail1special (true),
-  base0 (true), parmarray ()
-{
-  /* Set to valid.  */
-  offrng[0] = offrng[1] = 0;
-  offmax[0] = offmax[1] = 0;
-  /* Invalidate.   */
-  sizrng[0] = sizrng[1] = -1;
-
-  /* Set the default bounds of the access and adjust below.  */
-  bndrng[0] = minaccess ? 1 : 0;
-  bndrng[1] = HOST_WIDE_INT_M1U;
-
-  /* When BOUND is nonnull and a range can be extracted from it,
-     set the bounds of the access to reflect both it and MINACCESS.
-     BNDRNG[0] is the size of the minimum access.  */
-  tree rng[2];
-  if (bound && get_size_range (bound, rng, SR_ALLOW_ZERO))
-    {
-      bndrng[0] = wi::to_offset (rng[0]);
-      bndrng[1] = wi::to_offset (rng[1]);
-      bndrng[0] = bndrng[0] > 0 && minaccess ? 1 : 0;
-    }
-}
-
-/* Return the PHI node REF refers to or null if it doesn't.  */
-
-gphi *
-access_ref::phi () const
-{
-  if (!ref || TREE_CODE (ref) != SSA_NAME)
-    return NULL;
-
-  gimple *def_stmt = SSA_NAME_DEF_STMT (ref);
-  if (gimple_code (def_stmt) != GIMPLE_PHI)
-    return NULL;
-
-  return as_a <gphi *> (def_stmt);
-}
-
-/* Determine and return the largest object to which *THIS refers.  If
-   *THIS refers to a PHI and PREF is nonnull, fill *PREF with the details
-   of the object determined by compute_objsize(ARG, OSTYPE) for each
-   PHI argument ARG.  */
-
-tree
-access_ref::get_ref (vec<access_ref> *all_refs,
-		     access_ref *pref /* = NULL */,
-		     int ostype /* = 1 */,
-		     ssa_name_limit_t *psnlim /* = NULL */,
-		     pointer_query *qry /* = NULL */) const
-{
-  gphi *phi_stmt = this->phi ();
-  if (!phi_stmt)
-    return ref;
-
-  /* FIXME: Calling get_ref() with a null PSNLIM is dangerous and might
-     cause unbounded recursion.  */
-  ssa_name_limit_t snlim_buf;
-  if (!psnlim)
-    psnlim = &snlim_buf;
-
-  if (!psnlim->visit_phi (ref))
-    return NULL_TREE;
-
-  /* Reflects the range of offsets when all the PHI arguments refer
-     to the same object (i.e., have the same REF).  */
-  access_ref same_ref;
-  /* The conservative result of the PHI reflecting the offset and size
-     of the largest PHI argument, regardless of whether or not they all
-     refer to the same object.  */
-  pointer_query empty_qry;
-  if (!qry)
-    qry = &empty_qry;
-
-  access_ref phi_ref;
-  if (pref)
-    {
-      phi_ref = *pref;
-      same_ref = *pref;
-    }
-
-  /* Set if any argument is a function array (or VLA) parameter not
-     declared [static].  */
-  bool parmarray = false;
-  /* The size of the smallest object referenced by the PHI arguments.  */
-  offset_int minsize = 0;
-  const offset_int maxobjsize = wi::to_offset (max_object_size ());
-  /* The offset of the PHI, not reflecting those of its arguments.  */
-  const offset_int orng[2] = { phi_ref.offrng[0], phi_ref.offrng[1] };
-
-  const unsigned nargs = gimple_phi_num_args (phi_stmt);
-  for (unsigned i = 0; i < nargs; ++i)
-    {
-      access_ref phi_arg_ref;
-      tree arg = gimple_phi_arg_def (phi_stmt, i);
-      if (!compute_objsize_r (arg, ostype, &phi_arg_ref, *psnlim, qry)
-	  || phi_arg_ref.sizrng[0] < 0)
-	/* A PHI with all null pointer arguments.  */
-	return NULL_TREE;
-
-      /* Add PREF's offset to that of the argument.  */
-      phi_arg_ref.add_offset (orng[0], orng[1]);
-      if (TREE_CODE (arg) == SSA_NAME)
-	qry->put_ref (arg, phi_arg_ref);
-
-      if (all_refs)
-	all_refs->safe_push (phi_arg_ref);
-
-      const bool arg_known_size = (phi_arg_ref.sizrng[0] != 0
-				   || phi_arg_ref.sizrng[1] != maxobjsize);
-
-      parmarray |= phi_arg_ref.parmarray;
-
-      const bool nullp = integer_zerop (arg) && (i || i + 1 < nargs);
-
-      if (phi_ref.sizrng[0] < 0)
-	{
-	  if (!nullp)
-	    same_ref = phi_arg_ref;
-	  phi_ref = phi_arg_ref;
-	  if (arg_known_size)
-	    minsize = phi_arg_ref.sizrng[0];
-	  continue;
-	}
-
-      const bool phi_known_size = (phi_ref.sizrng[0] != 0
-				   || phi_ref.sizrng[1] != maxobjsize);
-
-      if (phi_known_size && phi_arg_ref.sizrng[0] < minsize)
-	minsize = phi_arg_ref.sizrng[0];
-
-      /* Disregard null pointers in PHIs with two or more arguments.
-	 TODO: Handle this better!  */
-      if (nullp)
-	continue;
-
-      /* Determine the amount of remaining space in the argument.  */
-      offset_int argrem[2];
-      argrem[1] = phi_arg_ref.size_remaining (argrem);
-
-      /* Determine the amount of remaining space computed so far and
-	 if the remaining space in the argument is more use it instead.  */
-      offset_int phirem[2];
-      phirem[1] = phi_ref.size_remaining (phirem);
-
-      if (phi_arg_ref.ref != same_ref.ref)
-	same_ref.ref = NULL_TREE;
-
-      if (phirem[1] < argrem[1]
-	  || (phirem[1] == argrem[1]
-	      && phi_ref.sizrng[1] < phi_arg_ref.sizrng[1]))
-	/* Use the argument with the most space remaining as the result,
-	   or the larger one if the space is equal.  */
-	phi_ref = phi_arg_ref;
-
-      /* Set SAME_REF.OFFRNG to the maximum range of all arguments.  */
-      if (phi_arg_ref.offrng[0] < same_ref.offrng[0])
-	same_ref.offrng[0] = phi_arg_ref.offrng[0];
-      if (same_ref.offrng[1] < phi_arg_ref.offrng[1])
-	same_ref.offrng[1] = phi_arg_ref.offrng[1];
-    }
-
-  if (!same_ref.ref && same_ref.offrng[0] != 0)
-    /* Clear BASE0 if not all the arguments refer to the same object and
-       if not all their offsets are zero-based.  This allows the final
-       PHI offset to be out of bounds for some arguments but not for others
-       (or negative even if all the arguments are BASE0), which is overly
-       permissive.  */
-    phi_ref.base0 = false;
-
-  if (same_ref.ref)
-    phi_ref = same_ref;
-  else
-    {
-      /* Replace the lower bound of the largest argument with the size
-	 of the smallest argument, and set PARMARRAY if any argument
-	 was one.  */
-      phi_ref.sizrng[0] = minsize;
-      phi_ref.parmarray = parmarray;
-    }
-
-  if (phi_ref.sizrng[0] < 0)
-    {
-      /* Fail if none of the PHI's arguments resulted in updating PHI_REF
-	 (perhaps because they have all been already visited by prior
-	 recursive calls).  */
-      psnlim->leave_phi (ref);
-      return NULL_TREE;
-    }
-
-  /* Avoid changing *THIS.  */
-  if (pref && pref != this)
-    *pref = phi_ref;
-
-  psnlim->leave_phi (ref);
-
-  return phi_ref.ref;
-}
-
-/* Return the maximum amount of space remaining and if non-null, set
-   argument to the minimum.  */
-
-offset_int
-access_ref::size_remaining (offset_int *pmin /* = NULL */) const
-{
-  offset_int minbuf;
-  if (!pmin)
-    pmin = &minbuf;
-
-  /* add_offset() ensures the offset range isn't inverted.  */
-  gcc_checking_assert (offrng[0] <= offrng[1]);
-
-  if (base0)
-    {
-      /* The offset into referenced object is zero-based (i.e., it's
-	 not referenced by a pointer into the middle of some unknown object).  */
-      if (offrng[0] < 0 && offrng[1] < 0)
-	{
-	  /* If the offset is negative the remaining size is zero.  */
-	  *pmin = 0;
-	  return 0;
-	}
-
-      if (sizrng[1] <= offrng[0])
-	{
-	  /* If the starting offset is greater than or equal to the upper
-	     bound on the size of the object, the space remaining is zero.
-	     As a special case, if it's equal, set *PMIN to -1 to let
-	     the caller know the offset is valid and just past the end.  */
-	  *pmin = sizrng[1] == offrng[0] ? -1 : 0;
-	  return 0;
-	}
-
-      /* Otherwise return the size minus the lower bound of the offset.  */
-      offset_int or0 = offrng[0] < 0 ? 0 : offrng[0];
-
-      *pmin = sizrng[0] - or0;
-      return sizrng[1] - or0;
-    }
-
-  /* The offset to the referenced object isn't zero-based (i.e., it may
-     refer to a byte other than the first).  The size of such an object
-     is constrained only by the size of the address space (the result
-     of max_object_size()).  */
-  if (sizrng[1] <= offrng[0])
-    {
-      *pmin = 0;
-      return 0;
-    }
-
-  offset_int or0 = offrng[0] < 0 ? 0 : offrng[0];
-
-  *pmin = sizrng[0] - or0;
-  return sizrng[1] - or0;
-}
-
-/* Return true if the offset and object size are in range for SIZE.  */
-
-bool
-access_ref::offset_in_range (const offset_int &size) const
-{
-  if (size_remaining () < size)
-    return false;
-
-  if (base0)
-    return offmax[0] >= 0 && offmax[1] <= sizrng[1];
-
-  offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-  return offmax[0] > -maxoff && offmax[1] < maxoff;
-}
-
-/* Add the range [MIN, MAX] to the offset range.  For known objects (with
-   zero-based offsets) at least one of whose offset's bounds is in range,
-   constrain the other (or both) to the bounds of the object (i.e., zero
-   and the upper bound of its size).  This improves the quality of
-   diagnostics.  */
-
-void access_ref::add_offset (const offset_int &min, const offset_int &max)
-{
-  if (min <= max)
-    {
-      /* To add an ordinary range just add it to the bounds.  */
-      offrng[0] += min;
-      offrng[1] += max;
-    }
-  else if (!base0)
-    {
-      /* To add an inverted range to an offset to an unknown object
-	 expand it to the maximum.  */
-      add_max_offset ();
-      return;
-    }
-  else
-    {
-      /* To add an inverted range to an offset to a known object set
-	 the upper bound to the maximum representable offset value
-	 (which may be greater than MAX_OBJECT_SIZE).
-	 The lower bound is either the sum of the current offset and
-	 MIN when abs(MAX) is greater than the former, or zero otherwise.
-	 Zero because then the inverted range includes the negative of
-	 the lower bound.  */
-      offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-      offrng[1] = maxoff;
-
-      if (max >= 0)
-	{
-	  offrng[0] = 0;
-	  if (offmax[0] > 0)
-	    offmax[0] = 0;
-	  return;
-	}
-
-      offset_int absmax = wi::abs (max);
-      if (offrng[0] < absmax)
-	{
-	  offrng[0] += min;
-	  /* Cap the lower bound at the upper (set to MAXOFF above)
-	     to avoid inadvertently recreating an inverted range.  */
-	  if (offrng[1] < offrng[0])
-	    offrng[0] = offrng[1];
-	}
-      else
-	offrng[0] = 0;
-    }
-
-  /* Set the minimum and maximum computed so far.  */
-  if (offrng[1] < 0 && offrng[1] < offmax[0])
-    offmax[0] = offrng[1];
-  if (offrng[0] > 0 && offrng[0] > offmax[1])
-    offmax[1] = offrng[0];
-
-  if (!base0)
-    return;
-
-  /* When referencing a known object check to see if the offset computed
-     so far is in bounds... */
-  offset_int remrng[2];
-  remrng[1] = size_remaining (remrng);
-  if (remrng[1] > 0 || remrng[0] < 0)
-    {
-      /* ...if so, constrain it so that neither bound exceeds the size of
-	 the object.  Out of bounds offsets are left unchanged, and, for
-	 better or worse, become in bounds later.  They should be detected
-	 and diagnosed at the point they first become invalid by
-	 -Warray-bounds.  */
-      if (offrng[0] < 0)
-	offrng[0] = 0;
-      if (offrng[1] > sizrng[1])
-	offrng[1] = sizrng[1];
-    }
-}
-
-/* Set a bit for the PHI in VISITED and return true if it wasn't
-   already set.  */
-
-bool
-ssa_name_limit_t::visit_phi (tree ssa_name)
-{
-  if (!visited)
-    visited = BITMAP_ALLOC (NULL);
-
-  /* Return false if SSA_NAME has already been visited.  */
-  return bitmap_set_bit (visited, SSA_NAME_VERSION (ssa_name));
-}
-
-/* Clear a bit for the PHI in VISITED.  */
-
-void
-ssa_name_limit_t::leave_phi (tree ssa_name)
-{
-  /* Clear the bit for SSA_NAME to allow it to be visited again.  */
-  bitmap_clear_bit (visited, SSA_NAME_VERSION (ssa_name));
-}
-
-/* Return false if the SSA_NAME chain length counter has reached
-   the limit, otherwise decrement the counter and return true.  */
-
-bool
-ssa_name_limit_t::next ()
-{
-  /* Return false to let the caller avoid recursing beyond
-     the specified limit.  */
-  if (ssa_def_max == 0)
-    return false;
-
-  --ssa_def_max;
-  return true;
-}
-
-/* If the SSA_NAME has already been "seen" return a positive value.
-   Otherwise add it to VISITED.  If the SSA_NAME limit has been
-   reached, return a negative value.  Otherwise return zero.  */
-
-int
-ssa_name_limit_t::next_phi (tree ssa_name)
-{
-  {
-    gimple *def_stmt = SSA_NAME_DEF_STMT (ssa_name);
-    /* Return a positive value if the PHI has already been visited.  */
-    if (gimple_code (def_stmt) == GIMPLE_PHI
-	&& !visit_phi (ssa_name))
-      return 1;
-  }
-
-  /* Return a negative value to let caller avoid recursing beyond
-     the specified limit.  */
-  if (ssa_def_max == 0)
-    return -1;
-
-  --ssa_def_max;
-
-  return 0;
-}
-
-ssa_name_limit_t::~ssa_name_limit_t ()
-{
-  if (visited)
-    BITMAP_FREE (visited);
-}
-
-/* Default ctor.  Initialize object with pointers to the range_query
-   and cache_type instances to use or null.  */
-
-pointer_query::pointer_query (range_query *qry /* = NULL */,
-			      cache_type *cache /* = NULL */)
-: rvals (qry), var_cache (cache), hits (), misses (),
-  failures (), depth (), max_depth ()
-{
-  /* No op.  */
-}
-
-/* Return a pointer to the cached access_ref instance for the SSA_NAME
-   PTR if it's there or null otherwise.  */
-
-const access_ref *
-pointer_query::get_ref (tree ptr, int ostype /* = 1 */) const
-{
-  if (!var_cache)
-    {
-      ++misses;
-      return NULL;
-    }
-
-  unsigned version = SSA_NAME_VERSION (ptr);
-  unsigned idx = version << 1 | (ostype & 1);
-  if (var_cache->indices.length () <= idx)
-    {
-      ++misses;
-      return NULL;
-    }
-
-  unsigned cache_idx = var_cache->indices[idx];
-  if (var_cache->access_refs.length () <= cache_idx)
-    {
-      ++misses;
-      return NULL;
-    }
-
-  access_ref &cache_ref = var_cache->access_refs[cache_idx];
-  if (cache_ref.ref)
-    {
-      ++hits;
-      return &cache_ref;
-    }
-
-  ++misses;
-  return NULL;
-}
-
-/* Retrieve the access_ref instance for a variable from the cache if it's
-   there or compute it and insert it into the cache if it's nonnull.  */
-
-bool
-pointer_query::get_ref (tree ptr, access_ref *pref, int ostype /* = 1 */)
-{
-  const unsigned version
-    = TREE_CODE (ptr) == SSA_NAME ? SSA_NAME_VERSION (ptr) : 0;
-
-  if (var_cache && version)
-    {
-      unsigned idx = version << 1 | (ostype & 1);
-      if (idx < var_cache->indices.length ())
-	{
-	  unsigned cache_idx = var_cache->indices[idx] - 1;
-	  if (cache_idx < var_cache->access_refs.length ()
-	      && var_cache->access_refs[cache_idx].ref)
-	    {
-	      ++hits;
-	      *pref = var_cache->access_refs[cache_idx];
-	      return true;
-	    }
-	}
-
-      ++misses;
-    }
-
-  if (!compute_objsize (ptr, ostype, pref, this))
-    {
-      ++failures;
-      return false;
-    }
-
-  return true;
-}
-
-/* Add a copy of the access_ref REF for the SSA_NAME to the cache if it's
-   nonnull.  */
-
-void
-pointer_query::put_ref (tree ptr, const access_ref &ref, int ostype /* = 1 */)
-{
-  /* Only add populated/valid entries.  */
-  if (!var_cache || !ref.ref || ref.sizrng[0] < 0)
-    return;
-
-  /* Add REF to the two-level cache.  */
-  unsigned version = SSA_NAME_VERSION (ptr);
-  unsigned idx = version << 1 | (ostype & 1);
-
-  /* Grow INDICES if necessary.  An index is valid if it's nonzero.
-     Its value minus one is the index into ACCESS_REFS.  Not all
-     entries are valid.  */
-  if (var_cache->indices.length () <= idx)
-    var_cache->indices.safe_grow_cleared (idx + 1);
-
-  if (!var_cache->indices[idx])
-    var_cache->indices[idx] = var_cache->access_refs.length () + 1;
-
-  /* Grow ACCESS_REF cache if necessary.  An entry is valid if its
-     REF member is nonnull.  All entries except for the last two
-     are valid.  Once nonnull, the REF value must stay unchanged.  */
-  unsigned cache_idx = var_cache->indices[idx];
-  if (var_cache->access_refs.length () <= cache_idx)
-    var_cache->access_refs.safe_grow_cleared (cache_idx + 1);
-
-  access_ref cache_ref = var_cache->access_refs[cache_idx - 1];
-  if (cache_ref.ref)
-  {
-    gcc_checking_assert (cache_ref.ref == ref.ref);
-    return;
-  }
-
-  cache_ref = ref;
-}
-
-/* Flush the cache if it's nonnull.  */
-
-void
-pointer_query::flush_cache ()
-{
-  if (!var_cache)
-    return;
-  var_cache->indices.release ();
-  var_cache->access_refs.release ();
-}
-
 /* Return true if NAME starts with __builtin_ or __sync_.  */
 
 static bool
@@ -1106,218 +547,6 @@ 
   return n;
 }
 
-/* For a call EXPR at LOC to a function FNAME that expects a string
-   in the argument ARG, issue a diagnostic for calling it with
-   an argument that is a character array with no terminating
-   NUL.  SIZE is the EXACT size of the array, and BNDRNG the number
-   of characters in which the NUL is expected.  Either EXPR or FNAME
-   may be null but not both.  SIZE may be null when BNDRNG is null.  */
-
-void
-warn_string_no_nul (location_t loc, tree expr, const char *fname,
-		    tree arg, tree decl, tree size /* = NULL_TREE */,
-		    bool exact /* = false */,
-		    const wide_int bndrng[2] /* = NULL */)
-{
-  const opt_code opt = OPT_Wstringop_overread;
-  if ((expr && warning_suppressed_p (expr, opt))
-      || warning_suppressed_p (arg, opt))
-    return;
-
-  loc = expansion_point_location_if_in_system_header (loc);
-  bool warned;
-
-  /* Format the bound range as a string to keep the number of messages
-     from exploding.  */
-  char bndstr[80];
-  *bndstr = 0;
-  if (bndrng)
-    {
-      if (bndrng[0] == bndrng[1])
-	sprintf (bndstr, "%llu", (unsigned long long) bndrng[0].to_uhwi ());
-      else
-	sprintf (bndstr, "[%llu, %llu]",
-		 (unsigned long long) bndrng[0].to_uhwi (),
-		 (unsigned long long) bndrng[1].to_uhwi ());
-    }
-
-  const tree maxobjsize = max_object_size ();
-  const wide_int maxsiz = wi::to_wide (maxobjsize);
-  if (expr)
-    {
-      tree func = get_callee_fndecl (expr);
-      if (bndrng)
-	{
-	  if (wi::ltu_p (maxsiz, bndrng[0]))
-	    warned = warning_at (loc, opt,
-				 "%qD specified bound %s exceeds "
-				 "maximum object size %E",
-				 func, bndstr, maxobjsize);
-	  else
-	    {
-	      bool maybe = wi::to_wide (size) == bndrng[0];
-	      warned = warning_at (loc, opt,
-				   exact
-				   ? G_("%qD specified bound %s exceeds "
-					"the size %E of unterminated array")
-				   : (maybe
-				      ? G_("%qD specified bound %s may "
-					   "exceed the size of at most %E "
-					   "of unterminated array")
-				      : G_("%qD specified bound %s exceeds "
-					   "the size of at most %E "
-					   "of unterminated array")),
-				   func, bndstr, size);
-	    }
-	}
-      else
-	warned = warning_at (loc, opt,
-			     "%qD argument missing terminating nul",
-			     func);
-    }
-  else
-    {
-      if (bndrng)
-	{
-	  if (wi::ltu_p (maxsiz, bndrng[0]))
-	    warned = warning_at (loc, opt,
-				 "%qs specified bound %s exceeds "
-				 "maximum object size %E",
-				 fname, bndstr, maxobjsize);
-	  else
-	    {
-	      bool maybe = wi::to_wide (size) == bndrng[0];
-	      warned = warning_at (loc, opt,
-				   exact
-				   ? G_("%qs specified bound %s exceeds "
-					"the size %E of unterminated array")
-				   : (maybe
-				      ? G_("%qs specified bound %s may "
-					   "exceed the size of at most %E "
-					   "of unterminated array")
-				      : G_("%qs specified bound %s exceeds "
-					   "the size of at most %E "
-					   "of unterminated array")),
-				   fname, bndstr, size);
-	    }
-	}
-      else
-	warned = warning_at (loc, opt,
-			     "%qs argument missing terminating nul",
-			     fname);
-    }
-
-  if (warned)
-    {
-      inform (DECL_SOURCE_LOCATION (decl),
-	      "referenced argument declared here");
-      suppress_warning (arg, opt);
-      if (expr)
-	suppress_warning (expr, opt);
-    }
-}
-
-/* For a call EXPR (which may be null) that expects a string argument
-   SRC, return false if SRC is a character array with no terminating
-   NUL.  When nonnull, BOUND is the number of characters in which to
-   expect the terminating NUL.  When EXPR is nonnull, also issue
-   a warning.  */
-
-bool
-check_nul_terminated_array (tree expr, tree src,
-			    tree bound /* = NULL_TREE */)
-{
-  /* The constant size of the array SRC points to.  The actual size
-     may be less if EXACT is false, but not more.  */
-  tree size;
-  /* True if SIZE is exact rather than just an upper bound.  */
-  bool exact;
-  /* The unterminated constant array SRC points to.  */
-  tree nonstr = unterminated_array (src, &size, &exact);
-  if (!nonstr)
-    return true;
-
-  /* NONSTR refers to the non-nul terminated constant array and SIZE
-     is the constant size of the array in bytes.  EXACT is true when
-     SIZE is exact.  */
-
-  wide_int bndrng[2];
-  if (bound)
-    {
-      value_range r;
-
-      get_global_range_query ()->range_of_expr (r, bound);
-
-      if (r.kind () != VR_RANGE)
-	return true;
-
-      bndrng[0] = r.lower_bound ();
-      bndrng[1] = r.upper_bound ();
-
-      if (exact)
-	{
-	  if (wi::leu_p (bndrng[0], wi::to_wide (size)))
-	    return true;
-	}
-      else if (wi::lt_p (bndrng[0], wi::to_wide (size), UNSIGNED))
-	return true;
-    }
-
-  if (expr)
-    warn_string_no_nul (EXPR_LOCATION (expr), expr, NULL, src, nonstr,
-			size, exact, bound ? bndrng : NULL);
-
-  return false;
-}
-
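The bound test in check_nul_terminated_array boils down to a one-line predicate. This is a minimal sketch with illustrative names, using `uint64_t` in place of GCC's wide_int/tree types: with an exact array size the access is in bounds when the lower bound of the caller-supplied range is at most SIZE; with an inexact (upper-bound) size it must be strictly less.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the bound check in check_nul_terminated_array (names
   illustrative, uint64_t standing in for GCC's wide_int/tree).  */
static bool
bound_within_size (uint64_t bnd_lo, uint64_t size, bool exact)
{
  /* Exact size: reading up to and including SIZE bytes is fine.
     Inexact size (an upper bound): only strictly fewer is safe.  */
  return exact ? bnd_lo <= size : bnd_lo < size;
}
```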
-/* If EXP refers to an unterminated constant character array return
-   the declaration of the object of which the array is a member or
-   element and if SIZE is not null, set *SIZE to the size of
-   the unterminated array and set *EXACT if the size is exact or
-   clear it otherwise.  Otherwise return null.  */
-
-tree
-unterminated_array (tree exp, tree *size /* = NULL */, bool *exact /* = NULL */)
-{
-  /* C_STRLEN will return NULL and set DECL in the info
-     structure if EXP references an unterminated array.  */
-  c_strlen_data lendata = { };
-  tree len = c_strlen (exp, 1, &lendata);
-  if (len == NULL_TREE && lendata.minlen && lendata.decl)
-     {
-       if (size)
-	{
-	  len = lendata.minlen;
-	  if (lendata.off)
-	    {
-	      /* Constant offsets are already accounted for in LENDATA.MINLEN,
-		 but not in an SSA_NAME + CST expression.  */
-	      if (TREE_CODE (lendata.off) == INTEGER_CST)
-		*exact = true;
-	      else if (TREE_CODE (lendata.off) == PLUS_EXPR
-		       && TREE_CODE (TREE_OPERAND (lendata.off, 1)) == INTEGER_CST)
-		{
-		  /* Subtract the offset from the size of the array.  */
-		  *exact = false;
-		  tree temp = TREE_OPERAND (lendata.off, 1);
-		  temp = fold_convert (ssizetype, temp);
-		  len = fold_build2 (MINUS_EXPR, ssizetype, len, temp);
-		}
-	      else
-		*exact = false;
-	    }
-	  else
-	    *exact = true;
-
-	  *size = len;
-	}
-       return lendata.decl;
-     }
-
-  return NULL_TREE;
-}
-
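For reference, the kind of object unterminated_array () detects is easy to construct. These declarations are illustrative, not from the patch: an initializer list with no `'\0'` produces an unterminated constant array, while a string literal of sufficient size supplies the NUL implicitly.

```c
#include <string.h>

/* An unterminated constant character array vs. a terminated one
   (illustrative declarations, not from the patch).  */
static const char unterminated[3] = { 'a', 'b', 'c' };  /* no '\0' */
static const char terminated[4] = "abc";                /* has '\0' */
```

Passing `unterminated` to a string function such as strlen would read past the end of the array, which is what -Wstringop-overread diagnoses.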
 /* Compute the length of a null-terminated character string or wide
    character string handling character sizes of 1, 2, and 4 bytes.
    TREE_STRING_LENGTH is not the right way because it evaluates to
@@ -3969,996 +3198,6 @@ 
 			  GET_MODE_MASK (GET_MODE (len_rtx)));
 }
 
-/* Issue a warning OPT for a bounded call EXP with a bound in the range
-   BNDRNG accessing an object of size SIZE.  Return true if a warning
-   was issued.  */
-
-static bool
-maybe_warn_for_bound (opt_code opt, location_t loc, tree exp, tree func,
-		      tree bndrng[2], tree size, const access_data *pad = NULL)
-{
-  if (!bndrng[0] || warning_suppressed_p (exp, opt))
-    return false;
-
-  tree maxobjsize = max_object_size ();
-
-  bool warned = false;
-
-  if (opt == OPT_Wstringop_overread)
-    {
-      bool maybe = pad && pad->src.phi ();
-
-      if (tree_int_cst_lt (maxobjsize, bndrng[0]))
-	{
-	  if (bndrng[0] == bndrng[1])
-	    warned = (func
-		      ? warning_at (loc, opt,
-				    (maybe
-				     ? G_("%qD specified bound %E may "
-					  "exceed maximum object size %E")
-				     : G_("%qD specified bound %E "
-					  "exceeds maximum object size %E")),
-				    func, bndrng[0], maxobjsize)
-		      : warning_at (loc, opt,
-				    (maybe
-				     ? G_("specified bound %E may "
-					  "exceed maximum object size %E")
-				     : G_("specified bound %E "
-					  "exceeds maximum object size %E")),
-				    bndrng[0], maxobjsize));
-	  else
-	    warned = (func
-		      ? warning_at (loc, opt,
-				    (maybe
-				     ? G_("%qD specified bound [%E, %E] may "
-					  "exceed maximum object size %E")
-				     : G_("%qD specified bound [%E, %E] "
-					  "exceeds maximum object size %E")),
-				    func, bndrng[0], bndrng[1], maxobjsize)
-		      : warning_at (loc, opt,
-				    (maybe
-				     ? G_("specified bound [%E, %E] may "
-					  "exceed maximum object size %E")
-				     : G_("specified bound [%E, %E] "
-					  "exceeds maximum object size %E")),
-				    bndrng[0], bndrng[1], maxobjsize));
-	}
-      else if (!size || tree_int_cst_le (bndrng[0], size))
-	return false;
-      else if (tree_int_cst_equal (bndrng[0], bndrng[1]))
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD specified bound %E may exceed "
-				      "source size %E")
-				 : G_("%qD specified bound %E exceeds "
-				      "source size %E")),
-				func, bndrng[0], size)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("specified bound %E may exceed "
-				      "source size %E")
-				 : G_("specified bound %E exceeds "
-				      "source size %E")),
-				bndrng[0], size));
-      else
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD specified bound [%E, %E] may "
-				      "exceed source size %E")
-				 : G_("%qD specified bound [%E, %E] exceeds "
-				      "source size %E")),
-				func, bndrng[0], bndrng[1], size)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("specified bound [%E, %E] may exceed "
-				      "source size %E")
-				 : G_("specified bound [%E, %E] exceeds "
-				      "source size %E")),
-				bndrng[0], bndrng[1], size));
-      if (warned)
-	{
-	  if (pad && pad->src.ref)
-	    {
-	      if (DECL_P (pad->src.ref))
-		inform (DECL_SOURCE_LOCATION (pad->src.ref),
-			"source object declared here");
-	      else if (EXPR_HAS_LOCATION (pad->src.ref))
-		inform (EXPR_LOCATION (pad->src.ref),
-			"source object allocated here");
-	    }
-	  suppress_warning (exp, opt);
-	}
-
-      return warned;
-    }
-
-  bool maybe = pad && pad->dst.phi ();
-  if (tree_int_cst_lt (maxobjsize, bndrng[0]))
-    {
-      if (bndrng[0] == bndrng[1])
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD specified size %E may "
-				      "exceed maximum object size %E")
-				 : G_("%qD specified size %E "
-				      "exceeds maximum object size %E")),
-				func, bndrng[0], maxobjsize)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("specified size %E may exceed "
-				      "maximum object size %E")
-				 : G_("specified size %E exceeds "
-				      "maximum object size %E")),
-				bndrng[0], maxobjsize));
-      else
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD specified size between %E and %E "
-				      "may exceed maximum object size %E")
-				 : G_("%qD specified size between %E and %E "
-				      "exceeds maximum object size %E")),
-				func, bndrng[0], bndrng[1], maxobjsize)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("specified size between %E and %E "
-				      "may exceed maximum object size %E")
-				 : G_("specified size between %E and %E "
-				      "exceeds maximum object size %E")),
-				bndrng[0], bndrng[1], maxobjsize));
-    }
-  else if (!size || tree_int_cst_le (bndrng[0], size))
-    return false;
-  else if (tree_int_cst_equal (bndrng[0], bndrng[1]))
-    warned = (func
-	      ? warning_at (loc, opt,
-			    (maybe
-			     ? G_("%qD specified bound %E may exceed "
-				  "destination size %E")
-			     : G_("%qD specified bound %E exceeds "
-				  "destination size %E")),
-			    func, bndrng[0], size)
-	      : warning_at (loc, opt,
-			    (maybe
-			     ? G_("specified bound %E may exceed "
-				  "destination size %E")
-			     : G_("specified bound %E exceeds "
-				  "destination size %E")),
-			    bndrng[0], size));
-  else
-    warned = (func
-	      ? warning_at (loc, opt,
-			    (maybe
-			     ? G_("%qD specified bound [%E, %E] may exceed "
-				  "destination size %E")
-			     : G_("%qD specified bound [%E, %E] exceeds "
-				  "destination size %E")),
-			    func, bndrng[0], bndrng[1], size)
-	      : warning_at (loc, opt,
-			    (maybe
			     ? G_("specified bound [%E, %E] may exceed "
-				  "destination size %E")
-			     : G_("specified bound [%E, %E] exceeds "
-				  "destination size %E")),
-			    bndrng[0], bndrng[1], size));
-
-  if (warned)
-    {
-      if (pad && pad->dst.ref)
-	{
-	  if (DECL_P (pad->dst.ref))
-	    inform (DECL_SOURCE_LOCATION (pad->dst.ref),
-		    "destination object declared here");
-	  else if (EXPR_HAS_LOCATION (pad->dst.ref))
-	    inform (EXPR_LOCATION (pad->dst.ref),
-		    "destination object allocated here");
-	}
-      suppress_warning (exp, opt);
-    }
-
-  return warned;
-}
-
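Stripped of the message formatting, maybe_warn_for_bound's triage is a two-step comparison. This is a rough sketch with illustrative names, `uint64_t` standing in for GCC trees: a diagnostic is due when the lower bound of the specified range exceeds either the maximum object size or the size of the accessed object.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of maybe_warn_for_bound's decision (names illustrative).  */
static bool
bound_exceeds (uint64_t bnd_lo, uint64_t size, uint64_t maxobjsize)
{
  if (bnd_lo > maxobjsize)
    return true;           /* "exceeds maximum object size" */
  return bnd_lo > size;    /* "exceeds source/destination size" */
}
```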
-/* For an expression EXP issue an access warning controlled by option OPT
-   with access to a region SIZE bytes in size in the RANGE of sizes.
-   WRITE is true for a write access, READ for a read access, and neither
-   for a call that may or may not perform an access but for which the
-   range is expected to be valid.
-   Return true when a warning has been issued.  */
-
-static bool
-warn_for_access (location_t loc, tree func, tree exp, int opt, tree range[2],
-		 tree size, bool write, bool read, bool maybe)
-{
-  bool warned = false;
-
-  if (write && read)
-    {
-      if (tree_int_cst_equal (range[0], range[1]))
-	warned = (func
-		  ? warning_n (loc, opt, tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("%qD may access %E byte in a region "
-				     "of size %E")
-				: G_("%qD accessing %E byte in a region "
-				     "of size %E")),
-				(maybe
-				 ? G_ ("%qD may access %E bytes in a region "
-				       "of size %E")
-				 : G_ ("%qD accessing %E bytes in a region "
-				       "of size %E")),
-			       func, range[0], size)
-		  : warning_n (loc, opt, tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("may access %E byte in a region "
-				     "of size %E")
-				: G_("accessing %E byte in a region "
-				     "of size %E")),
-			       (maybe
-				? G_("may access %E bytes in a region "
-				     "of size %E")
-				: G_("accessing %E bytes in a region "
-				     "of size %E")),
-			       range[0], size));
-      else if (tree_int_cst_sign_bit (range[1]))
-	{
-	  /* Avoid printing the upper bound if it's invalid.  */
-	  warned = (func
-		    ? warning_at (loc, opt,
-				  (maybe
-				   ? G_("%qD may access %E or more bytes "
-					"in a region of size %E")
-				   : G_("%qD accessing %E or more bytes "
-					"in a region of size %E")),
-				  func, range[0], size)
-		    : warning_at (loc, opt,
-				  (maybe
-				   ? G_("may access %E or more bytes "
-					"in a region of size %E")
-				   : G_("accessing %E or more bytes "
-					"in a region of size %E")),
-				  range[0], size));
-	}
-      else
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD may access between %E and %E "
-				      "bytes in a region of size %E")
-				 : G_("%qD accessing between %E and %E "
-				      "bytes in a region of size %E")),
-				func, range[0], range[1], size)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("may access between %E and %E bytes "
-				      "in a region of size %E")
-				 : G_("accessing between %E and %E bytes "
-				      "in a region of size %E")),
-				range[0], range[1], size));
-      return warned;
-    }
-
-  if (write)
-    {
-      if (tree_int_cst_equal (range[0], range[1]))
-	warned = (func
-		  ? warning_n (loc, opt, tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("%qD may write %E byte into a region "
-				     "of size %E")
-				: G_("%qD writing %E byte into a region "
-				     "of size %E overflows the destination")),
-			       (maybe
-				? G_("%qD may write %E bytes into a region "
-				     "of size %E")
-				: G_("%qD writing %E bytes into a region "
-				     "of size %E overflows the destination")),
-			       func, range[0], size)
-		  : warning_n (loc, opt, tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("may write %E byte into a region "
-				     "of size %E")
-				: G_("writing %E byte into a region "
-				     "of size %E overflows the destination")),
-			       (maybe
-				? G_("may write %E bytes into a region "
-				     "of size %E")
-				: G_("writing %E bytes into a region "
-				     "of size %E overflows the destination")),
-			       range[0], size));
-      else if (tree_int_cst_sign_bit (range[1]))
-	{
-	  /* Avoid printing the upper bound if it's invalid.  */
-	  warned = (func
-		    ? warning_at (loc, opt,
-				  (maybe
-				   ? G_("%qD may write %E or more bytes "
-					"into a region of size %E")
-				   : G_("%qD writing %E or more bytes "
-					"into a region of size %E overflows "
-					"the destination")),
-				  func, range[0], size)
-		    : warning_at (loc, opt,
-				  (maybe
-				   ? G_("may write %E or more bytes into "
-					"a region of size %E")
-				   : G_("writing %E or more bytes into "
-					"a region of size %E overflows "
-					"the destination")),
-				  range[0], size));
-	}
-      else
-	warned = (func
-		  ? warning_at (loc, opt,
-				(maybe
-				 ? G_("%qD may write between %E and %E bytes "
-				      "into a region of size %E")
-				 : G_("%qD writing between %E and %E bytes "
-				      "into a region of size %E overflows "
-				      "the destination")),
-				func, range[0], range[1], size)
-		  : warning_at (loc, opt,
-				(maybe
-				 ? G_("may write between %E and %E bytes "
-				      "into a region of size %E")
-				 : G_("writing between %E and %E bytes "
-				      "into a region of size %E overflows "
-				      "the destination")),
-				range[0], range[1], size));
-      return warned;
-    }
-
-  if (read)
-    {
-      if (tree_int_cst_equal (range[0], range[1]))
-	warned = (func
-		  ? warning_n (loc, OPT_Wstringop_overread,
-			       tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("%qD may read %E byte from a region "
-				     "of size %E")
-				: G_("%qD reading %E byte from a region "
-				     "of size %E")),
-			       (maybe
-				? G_("%qD may read %E bytes from a region "
-				     "of size %E")
-				: G_("%qD reading %E bytes from a region "
-				     "of size %E")),
-			       func, range[0], size)
-		  : warning_n (loc, OPT_Wstringop_overread,
-			       tree_to_uhwi (range[0]),
-			       (maybe
-				? G_("may read %E byte from a region "
-				     "of size %E")
-				: G_("reading %E byte from a region "
-				     "of size %E")),
-			       (maybe
-				? G_("may read %E bytes from a region "
-				     "of size %E")
-				: G_("reading %E bytes from a region "
-				     "of size %E")),
-			       range[0], size));
-      else if (tree_int_cst_sign_bit (range[1]))
-	{
-	  /* Avoid printing the upper bound if it's invalid.  */
-	  warned = (func
-		    ? warning_at (loc, OPT_Wstringop_overread,
-				  (maybe
-				   ? G_("%qD may read %E or more bytes "
-					"from a region of size %E")
-				   : G_("%qD reading %E or more bytes "
-					"from a region of size %E")),
-				  func, range[0], size)
-		    : warning_at (loc, OPT_Wstringop_overread,
-				  (maybe
-				   ? G_("may read %E or more bytes "
-					"from a region of size %E")
-				   : G_("reading %E or more bytes "
-					"from a region of size %E")),
-				  range[0], size));
-	}
-      else
-	warned = (func
-		  ? warning_at (loc, OPT_Wstringop_overread,
-				(maybe
-				 ? G_("%qD may read between %E and %E bytes "
-				      "from a region of size %E")
-				 : G_("%qD reading between %E and %E bytes "
-				      "from a region of size %E")),
-				func, range[0], range[1], size)
		  : warning_at (loc, OPT_Wstringop_overread,
-				(maybe
-				 ? G_("may read between %E and %E bytes "
-				      "from a region of size %E")
-				 : G_("reading between %E and %E bytes "
-				      "from a region of size %E")),
-				range[0], range[1], size));
-
-      if (warned)
-	suppress_warning (exp, OPT_Wstringop_overread);
-
-      return warned;
-    }
-
-  if (tree_int_cst_equal (range[0], range[1]))
-    warned = (func
-	      ? warning_n (loc, OPT_Wstringop_overread,
-			   tree_to_uhwi (range[0]),
-			   "%qD expecting %E byte in a region of size %E",
-			   "%qD expecting %E bytes in a region of size %E",
-			   func, range[0], size)
-	      : warning_n (loc, OPT_Wstringop_overread,
-			   tree_to_uhwi (range[0]),
-			   "expecting %E byte in a region of size %E",
-			   "expecting %E bytes in a region of size %E",
-			   range[0], size));
-  else if (tree_int_cst_sign_bit (range[1]))
-    {
-      /* Avoid printing the upper bound if it's invalid.  */
-      warned = (func
-		? warning_at (loc, OPT_Wstringop_overread,
-			      "%qD expecting %E or more bytes in a region "
-			      "of size %E",
-			      func, range[0], size)
-		: warning_at (loc, OPT_Wstringop_overread,
-			      "expecting %E or more bytes in a region "
-			      "of size %E",
-			      range[0], size));
-    }
-  else
-    warned = (func
-	      ? warning_at (loc, OPT_Wstringop_overread,
-			    "%qD expecting between %E and %E bytes in "
-			    "a region of size %E",
-			    func, range[0], range[1], size)
-	      : warning_at (loc, OPT_Wstringop_overread,
-			    "expecting between %E and %E bytes in "
-			    "a region of size %E",
-			    range[0], range[1], size));
-
-  if (warned)
-    suppress_warning (exp, OPT_Wstringop_overread);
-
-  return warned;
-}
-
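The bulk of warn_for_access is choosing among near-identical diagnostics. A condensed sketch of that selection (the real code formats full GCC diagnostics; this helper name and these phrases are illustrative): the verb depends on the access kind and on whether the access is certain or only possible via a PHI.

```c
#include <stdbool.h>

/* Sketch of warn_for_access's message selection (illustrative).  */
static const char *
access_phrase (bool write, bool read, bool maybe)
{
  if (write && read)
    return maybe ? "may access" : "accessing";
  if (write)
    return maybe ? "may write" : "writing";
  if (read)
    return maybe ? "may read" : "reading";
  /* Neither set: a size check only, as in the final branch above.  */
  return "expecting";
}
```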
-/* Issue one inform message describing each target of an access REF.
-   MODE indicates the kind of access: read, write, or both.  */
-
-void
-access_ref::inform_access (access_mode mode) const
-{
-  const access_ref &aref = *this;
-  if (!aref.ref)
-    return;
-
-  if (aref.phi ())
-    {
-      /* Set MAXREF to refer to the largest object and fill ALL_REFS
-	 with data for all objects referenced by the PHI arguments.  */
-      access_ref maxref;
-      auto_vec<access_ref> all_refs;
-      if (!get_ref (&all_refs, &maxref))
-	return;
-
-      /* Except for MAXREF, the rest of the arguments' offsets need not
-	 reflect one added to the PHI itself.  Determine the latter from
-	 MAXREF on which the result is based.  */
-      const offset_int orng[] =
-	{
-	  offrng[0] - maxref.offrng[0],
-	  wi::smax (offrng[1] - maxref.offrng[1], offrng[0]),
-	};
-
-      /* Add the final PHI's offset to that of each of the arguments
-	 and recurse to issue an inform message for it.  */
-      for (unsigned i = 0; i != all_refs.length (); ++i)
-	{
-	  /* Skip any PHIs; those could lead to infinite recursion.  */
-	  if (all_refs[i].phi ())
-	    continue;
-
-	  all_refs[i].add_offset (orng[0], orng[1]);
-	  all_refs[i].inform_access (mode);
-	}
-      return;
-    }
-
-  /* Convert offset range and avoid including a zero range since it
-     isn't necessarily meaningful.  */
-  HOST_WIDE_INT diff_min = tree_to_shwi (TYPE_MIN_VALUE (ptrdiff_type_node));
-  HOST_WIDE_INT diff_max = tree_to_shwi (TYPE_MAX_VALUE (ptrdiff_type_node));
-  HOST_WIDE_INT minoff;
-  HOST_WIDE_INT maxoff = diff_max;
-  if (wi::fits_shwi_p (aref.offrng[0]))
-    minoff = aref.offrng[0].to_shwi ();
-  else
-    minoff = aref.offrng[0] < 0 ? diff_min : diff_max;
-
-  if (wi::fits_shwi_p (aref.offrng[1]))
-    maxoff = aref.offrng[1].to_shwi ();
-
-  if (maxoff <= diff_min || maxoff >= diff_max)
-    /* Avoid mentioning an upper bound that's equal to or in excess
-       of the maximum of ptrdiff_t.  */
-    maxoff = minoff;
-
-  /* Convert size range and always include it since all sizes are
-     meaningful. */
-  unsigned long long minsize = 0, maxsize = 0;
-  if (wi::fits_shwi_p (aref.sizrng[0])
-      && wi::fits_shwi_p (aref.sizrng[1]))
-    {
-      minsize = aref.sizrng[0].to_shwi ();
-      maxsize = aref.sizrng[1].to_shwi ();
-    }
-
-  /* SIZRNG doesn't necessarily have the same range as the allocation
-     size determined by gimple_call_alloc_size ().  */
-  char sizestr[80];
-  if (minsize == maxsize)
-    sprintf (sizestr, "%llu", minsize);
-  else
-    sprintf (sizestr, "[%llu, %llu]", minsize, maxsize);
-
-  char offstr[80];
-  if (minoff == 0
-      && (maxoff == 0 || aref.sizrng[1] <= maxoff))
-    offstr[0] = '\0';
-  else if (minoff == maxoff)
-    sprintf (offstr, "%lli", (long long) minoff);
-  else
-    sprintf (offstr, "[%lli, %lli]", (long long) minoff, (long long) maxoff);
-
-  location_t loc = UNKNOWN_LOCATION;
-
-  tree ref = this->ref;
-  tree allocfn = NULL_TREE;
-  if (TREE_CODE (ref) == SSA_NAME)
-    {
-      gimple *stmt = SSA_NAME_DEF_STMT (ref);
-      if (is_gimple_call (stmt))
-	{
-	  loc = gimple_location (stmt);
-	  if (gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN))
-	    {
-	      /* Strip the SSA_NAME suffix from the variable name and
-		 recreate an identifier with the VLA's original name.  */
-	      ref = gimple_call_lhs (stmt);
-	      if (SSA_NAME_IDENTIFIER (ref))
-		{
-		  ref = SSA_NAME_IDENTIFIER (ref);
-		  const char *id = IDENTIFIER_POINTER (ref);
-		  size_t len = strcspn (id, ".$");
-		  if (!len)
-		    len = strlen (id);
-		  ref = get_identifier_with_length (id, len);
-		}
-	    }
-	  else
-	    {
-	      /* Except for VLAs, retrieve the allocation function.  */
-	      allocfn = gimple_call_fndecl (stmt);
-	      if (!allocfn)
-		allocfn = gimple_call_fn (stmt);
-	      if (TREE_CODE (allocfn) == SSA_NAME)
-		{
-		  /* For an ALLOC_CALL via a function pointer make a small
-		     effort to determine the destination of the pointer.  */
-		  gimple *def = SSA_NAME_DEF_STMT (allocfn);
-		  if (gimple_assign_single_p (def))
-		    {
-		      tree rhs = gimple_assign_rhs1 (def);
-		      if (DECL_P (rhs))
-			allocfn = rhs;
-		      else if (TREE_CODE (rhs) == COMPONENT_REF)
-			allocfn = TREE_OPERAND (rhs, 1);
-		    }
-		}
-	    }
-	}
-      else if (gimple_nop_p (stmt))
-	/* Handle DECL_PARM below.  */
-	ref = SSA_NAME_VAR (ref);
-    }
-
-  if (DECL_P (ref))
-    loc = DECL_SOURCE_LOCATION (ref);
-  else if (EXPR_P (ref) && EXPR_HAS_LOCATION (ref))
-    loc = EXPR_LOCATION (ref);
-  else if (TREE_CODE (ref) != IDENTIFIER_NODE
-	   && TREE_CODE (ref) != SSA_NAME)
-    return;
-
-  if (mode == access_read_write || mode == access_write_only)
-    {
-      if (allocfn == NULL_TREE)
-	{
-	  if (*offstr)
-	    inform (loc, "at offset %s into destination object %qE of size %s",
-		    offstr, ref, sizestr);
-	  else
-	    inform (loc, "destination object %qE of size %s", ref, sizestr);
-	  return;
-	}
-
-      if (*offstr)
-	inform (loc,
-		"at offset %s into destination object of size %s "
-		"allocated by %qE", offstr, sizestr, allocfn);
-      else
-	inform (loc, "destination object of size %s allocated by %qE",
-		sizestr, allocfn);
-      return;
-    }
-
-  if (mode == access_read_only)
-    {
-      if (allocfn == NULL_TREE)
-	{
-	  if (*offstr)
-	    inform (loc, "at offset %s into source object %qE of size %s",
-		    offstr, ref, sizestr);
-	  else
-	    inform (loc, "source object %qE of size %s", ref, sizestr);
-
-	  return;
-	}
-
-      if (*offstr)
-	inform (loc,
-		"at offset %s into source object of size %s allocated by %qE",
-		offstr, sizestr, allocfn);
-      else
-	inform (loc, "source object of size %s allocated by %qE",
-		sizestr, allocfn);
-      return;
-    }
-
-  if (allocfn == NULL_TREE)
-    {
-      if (*offstr)
-	inform (loc, "at offset %s into object %qE of size %s",
-		offstr, ref, sizestr);
-      else
-	inform (loc, "object %qE of size %s", ref, sizestr);
-
-      return;
-    }
-
-  if (*offstr)
-    inform (loc,
-	    "at offset %s into object of size %s allocated by %qE",
-	    offstr, sizestr, allocfn);
-  else
-    inform (loc, "object of size %s allocated by %qE",
-	    sizestr, allocfn);
-}
-
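The offset and size strings inform_access builds follow one convention. A sketch of that formatting with an illustrative helper name: a degenerate range prints as a single value, a proper range as "[LO, HI]".

```c
#include <stdio.h>

/* Sketch of the range formatting in inform_access (illustrative).  */
static void
format_range (char *buf, size_t bufsz, long long lo, long long hi)
{
  if (lo == hi)
    snprintf (buf, bufsz, "%lli", lo);
  else
    snprintf (buf, bufsz, "[%lli, %lli]", lo, hi);
}
```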
-/* Helper to set RANGE to the range of BOUND if it's nonnull, bounded
-   by BNDRNG if nonnull and valid.  */
-
-static void
-get_size_range (tree bound, tree range[2], const offset_int bndrng[2])
-{
-  if (bound)
-    get_size_range (bound, range);
-
-  if (!bndrng || (bndrng[0] == 0 && bndrng[1] == HOST_WIDE_INT_M1U))
-    return;
-
-  if (range[0] && TREE_CODE (range[0]) == INTEGER_CST)
-    {
-      offset_int r[] =
-	{ wi::to_offset (range[0]), wi::to_offset (range[1]) };
-      if (r[0] < bndrng[0])
-	range[0] = wide_int_to_tree (sizetype, bndrng[0]);
-      if (bndrng[1] < r[1])
-	range[1] = wide_int_to_tree (sizetype, bndrng[1]);
-    }
-  else
-    {
-      range[0] = wide_int_to_tree (sizetype, bndrng[0]);
-      range[1] = wide_int_to_tree (sizetype, bndrng[1]);
-    }
-}
-
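The get_size_range overload above is a range intersection. A minimal model with illustrative names, `uint64_t` standing in for GCC's trees and offset_ints: intersect the bound's range R with a valid BNDRNG by raising the lower end and lowering the upper end.

```c
#include <stdint.h>

/* Sketch of the get_size_range overload above (illustrative).  */
static void
intersect_range (uint64_t r[2], const uint64_t bndrng[2])
{
  if (r[0] < bndrng[0])
    r[0] = bndrng[0];   /* raise the lower bound */
  if (bndrng[1] < r[1])
    r[1] = bndrng[1];   /* lower the upper bound */
}
```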
-/* Try to verify that the sizes and lengths of the arguments to a string
-   manipulation function given by EXP are within valid bounds and that
-   the operation does not lead to buffer overflow or read past the end.
-   Arguments other than EXP may be null.  When non-null, the arguments
-   have the following meaning:
-   DST is the destination of a copy call or NULL otherwise.
-   SRC is the source of a copy call or NULL otherwise.
-   DSTWRITE is the number of bytes written into the destination obtained
-   from the user-supplied size argument to the function (such as in
-   memcpy(DST, SRC, DSTWRITE) or strncpy(DST, SRC, DSTWRITE)).
-   MAXREAD is the user-supplied bound on the length of the source sequence
-   (such as in strncat(d, s, N)).  It specifies the upper limit on the number
-   of bytes to write.  If NULL, it's taken to be the same as DSTWRITE.
-   SRCSTR is the source string (such as in strcpy(DST, SRC)) when the
-   expression EXP is a string function call (as opposed to a memory call
-   like memcpy).  As an exception, SRCSTR can also be an integer denoting
-   the precomputed size of the source string or object (for functions like
-   memcpy).
-   DSTSIZE is the size of the destination object.
-
-   When DSTWRITE is null, MAXREAD is checked to verify that it doesn't
-   exceed SIZE_MAX.
-
-   WRITE is true for write accesses, READ is true for reads.  Both are
-   false for simple size checks in calls to functions that neither read
-   from nor write to the region.
-
-   When nonnull, PAD points to a more detailed description of the access.
-
-   If the call is successfully verified as safe return true, otherwise
-   return false.  */
-
-bool
-check_access (tree exp, tree dstwrite,
-	      tree maxread, tree srcstr, tree dstsize,
-	      access_mode mode, const access_data *pad /* = NULL */)
-{
-  /* The size of the largest object is half the address space, or
-     PTRDIFF_MAX.  (This is way too permissive.)  */
-  tree maxobjsize = max_object_size ();
-
-  /* Either an approximate/minimum the length of the source string for
-     string functions or the size of the source object for raw memory
-     functions.  */
-  tree slen = NULL_TREE;
-
-  /* The range of the access in bytes; first set to the write access
-     for functions that write and then read for those that also (or
-     just) read.  */
-  tree range[2] = { NULL_TREE, NULL_TREE };
-
-  /* Set to true when the exact number of bytes written by a string
-     function like strcpy is not known and the only thing that is
-     known is that it must be at least one (for the terminating nul).  */
-  bool at_least_one = false;
-  if (srcstr)
-    {
-      /* SRCSTR is normally a pointer to string but as a special case
-	 it can be an integer denoting the length of a string.  */
-      if (POINTER_TYPE_P (TREE_TYPE (srcstr)))
-	{
-	  if (!check_nul_terminated_array (exp, srcstr, maxread))
-	    return false;
-	  /* Try to determine the range of lengths the source string
-	     refers to.  If it can be determined and is less than
-	     the upper bound given by MAXREAD add one to it for
-	     the terminating nul.  Otherwise, set it to one for
-	     the same reason, or to MAXREAD as appropriate.  */
-	  c_strlen_data lendata = { };
-	  get_range_strlen (srcstr, &lendata, /* eltsize = */ 1);
-	  range[0] = lendata.minlen;
-	  range[1] = lendata.maxbound ? lendata.maxbound : lendata.maxlen;
-	  if (range[0]
-	      && TREE_CODE (range[0]) == INTEGER_CST
-	      && TREE_CODE (range[1]) == INTEGER_CST
-	      && (!maxread || TREE_CODE (maxread) == INTEGER_CST))
-	    {
-	      if (maxread && tree_int_cst_le (maxread, range[0]))
-		range[0] = range[1] = maxread;
-	      else
-		range[0] = fold_build2 (PLUS_EXPR, size_type_node,
-					range[0], size_one_node);
-
-	      if (maxread && tree_int_cst_le (maxread, range[1]))
-		range[1] = maxread;
-	      else if (!integer_all_onesp (range[1]))
-		range[1] = fold_build2 (PLUS_EXPR, size_type_node,
-					range[1], size_one_node);
-
-	      slen = range[0];
-	    }
-	  else
-	    {
-	      at_least_one = true;
-	      slen = size_one_node;
-	    }
-	}
-      else
-	slen = srcstr;
-    }
-
-  if (!dstwrite && !maxread)
-    {
-      /* When the only available piece of data is the object size
-	 there is nothing to do.  */
-      if (!slen)
-	return true;
-
-      /* Otherwise, when the length of the source sequence is known
-	 (as with strlen), set DSTWRITE to it.  */
-      if (!range[0])
-	dstwrite = slen;
-    }
-
-  if (!dstsize)
-    dstsize = maxobjsize;
-
-  /* Set RANGE to that of DSTWRITE if non-null, bounded by PAD->DST.BNDRNG
-     if valid.  */
-  get_size_range (dstwrite, range, pad ? pad->dst.bndrng : NULL);
-
-  tree func = get_callee_fndecl (exp);
-  /* Read vs write access by built-ins can be determined from the const
-     qualifiers on the pointer argument.  In the absence of attribute
-     access, non-const qualified pointer arguments to user-defined
-     functions are assumed to both read and write the objects.  */
-  const bool builtin = func ? fndecl_built_in_p (func) : false;
-
-  /* First check the number of bytes to be written against the maximum
-     object size.  */
-  if (range[0]
-      && TREE_CODE (range[0]) == INTEGER_CST
-      && tree_int_cst_lt (maxobjsize, range[0]))
-    {
-      location_t loc = EXPR_LOCATION (exp);
-      maybe_warn_for_bound (OPT_Wstringop_overflow_, loc, exp, func, range,
-			    NULL_TREE, pad);
-      return false;
-    }
-
-  /* The number of bytes to write is "exact" if DSTWRITE is non-null,
-     constant, and in range of unsigned HOST_WIDE_INT.  */
-  bool exactwrite = dstwrite && tree_fits_uhwi_p (dstwrite);
-
-  /* Next check the number of bytes to be written against the destination
-     object size.  */
-  if (range[0] || !exactwrite || integer_all_onesp (dstwrite))
-    {
-      if (range[0]
-	  && TREE_CODE (range[0]) == INTEGER_CST
-	  && ((tree_fits_uhwi_p (dstsize)
-	       && tree_int_cst_lt (dstsize, range[0]))
-	      || (dstwrite
-		  && tree_fits_uhwi_p (dstwrite)
-		  && tree_int_cst_lt (dstwrite, range[0]))))
-	{
-	  const opt_code opt = OPT_Wstringop_overflow_;
-	  if (warning_suppressed_p (exp, opt)
-	      || (pad && pad->dst.ref
-		  && warning_suppressed_p (pad->dst.ref, opt)))
-	    return false;
-
-	  location_t loc = EXPR_LOCATION (exp);
-	  bool warned = false;
-	  if (dstwrite == slen && at_least_one)
-	    {
-	      /* This is a call to strcpy with a destination of 0 size
-		 and a source of unknown length.  The call will write
-		 at least one byte past the end of the destination.  */
-	      warned = (func
-			? warning_at (loc, opt,
-				      "%qD writing %E or more bytes into "
-				      "a region of size %E overflows "
-				      "the destination",
-				      func, range[0], dstsize)
-			: warning_at (loc, opt,
-				      "writing %E or more bytes into "
-				      "a region of size %E overflows "
-				      "the destination",
-				      range[0], dstsize));
-	    }
-	  else
-	    {
-	      const bool read
-		= mode == access_read_only || mode == access_read_write;
-	      const bool write
-		= mode == access_write_only || mode == access_read_write;
-	      const bool maybe = pad && pad->dst.parmarray;
-	      warned = warn_for_access (loc, func, exp,
-					OPT_Wstringop_overflow_,
-					range, dstsize,
-					write, read && !builtin, maybe);
-	    }
-
-	  if (warned)
-	    {
-	      suppress_warning (exp, OPT_Wstringop_overflow_);
-	      if (pad)
-		pad->dst.inform_access (pad->mode);
-	    }
-
-	  /* Return error when an overflow has been detected.  */
-	  return false;
-	}
-    }
-
-  /* Check the maximum length of the source sequence against the size
-     of the destination object if known, or against the maximum size
-     of an object.  */
-  if (maxread)
-    {
-      /* Set RANGE to that of MAXREAD, bounded by PAD->SRC.BNDRNG if
-	 PAD is nonnull and BNDRNG is valid.  */
-      get_size_range (maxread, range, pad ? pad->src.bndrng : NULL);
-
-      location_t loc = EXPR_LOCATION (exp);
-      tree size = dstsize;
-      if (pad && pad->mode == access_read_only)
-	size = wide_int_to_tree (sizetype, pad->src.sizrng[1]);
-
-      if (range[0] && maxread && tree_fits_uhwi_p (size))
-	{
-	  if (tree_int_cst_lt (maxobjsize, range[0]))
-	    {
-	      maybe_warn_for_bound (OPT_Wstringop_overread, loc, exp, func,
-				    range, size, pad);
-	      return false;
-	    }
-
-	  if (size != maxobjsize && tree_int_cst_lt (size, range[0]))
-	    {
-	      opt_code opt = (dstwrite || mode != access_read_only
-			      ? OPT_Wstringop_overflow_
-			      : OPT_Wstringop_overread);
-	      maybe_warn_for_bound (opt, loc, exp, func, range, size, pad);
-	      return false;
-	    }
-	}
-
-      maybe_warn_nonstring_arg (func, exp);
-    }
-
-  /* Check for reading past the end of SRC.  */
-  bool overread = (slen
-		   && slen == srcstr
-		   && dstwrite
-		   && range[0]
-		   && TREE_CODE (slen) == INTEGER_CST
-		   && tree_int_cst_lt (slen, range[0]));
-  /* If none is determined try to get a better answer based on the details
-     in PAD.  */
-  if (!overread
-      && pad
-      && pad->src.sizrng[1] >= 0
-      && pad->src.offrng[0] >= 0
-      && (pad->src.offrng[1] < 0
-	  || pad->src.offrng[0] <= pad->src.offrng[1]))
-    {
-      /* Set RANGE to that of MAXREAD, bounded by PAD->SRC.BNDRNG if
-	 PAD is nonnull and BNDRNG is valid.  */
-      get_size_range (maxread, range, pad ? pad->src.bndrng : NULL);
-      /* Set OVERREAD for reads starting just past the end of an object.  */
-      overread = pad->src.sizrng[1] - pad->src.offrng[0] < pad->src.bndrng[0];
-      range[0] = wide_int_to_tree (sizetype, pad->src.bndrng[0]);
-      slen = size_zero_node;
-    }
-
-  if (overread)
-    {
-      const opt_code opt = OPT_Wstringop_overread;
-      if (warning_suppressed_p (exp, opt)
-	  || (srcstr && warning_suppressed_p (srcstr, opt))
-	  || (pad && pad->src.ref
-	      && warning_suppressed_p (pad->src.ref, opt)))
-	return false;
-
-      location_t loc = EXPR_LOCATION (exp);
-      const bool read
-	= mode == access_read_only || mode == access_read_write;
-      const bool maybe = pad && pad->dst.parmarray;
-      if (warn_for_access (loc, func, exp, opt, range, slen, false, read,
-			   maybe))
-	{
-	  suppress_warning (exp, opt);
-	  if (pad)
-	    pad->src.inform_access (access_read_only);
-	}
-      return false;
-    }
-
-  return true;
-}
-
 /* A convenience wrapper for check_access above to check access
    by a read-only function like puts.  */
 
@@ -4978,981 +3217,6 @@ 
 		       &data);
 }
 
-/* If STMT is a call to an allocation function, returns the constant
-   maximum size of the object allocated by the call represented as
-   sizetype.  If nonnull, sets RNG1[] to the range of the size.
-   When nonnull, uses RVALS for range information, otherwise gets global
-   range info.
-   Returns null when STMT is not a call to a valid allocation function.  */
-
-tree
-gimple_call_alloc_size (gimple *stmt, wide_int rng1[2] /* = NULL */,
-			range_query * /* = NULL */)
-{
-  if (!stmt || !is_gimple_call (stmt))
-    return NULL_TREE;
-
-  tree allocfntype;
-  if (tree fndecl = gimple_call_fndecl (stmt))
-    allocfntype = TREE_TYPE (fndecl);
-  else
-    allocfntype = gimple_call_fntype (stmt);
-
-  if (!allocfntype)
-    return NULL_TREE;
-
-  unsigned argidx1 = UINT_MAX, argidx2 = UINT_MAX;
-  tree at = lookup_attribute ("alloc_size", TYPE_ATTRIBUTES (allocfntype));
-  if (!at)
-    {
-      if (!gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN))
-	return NULL_TREE;
-
-      argidx1 = 0;
-    }
-
-  unsigned nargs = gimple_call_num_args (stmt);
-
-  if (argidx1 == UINT_MAX)
-    {
-      tree atval = TREE_VALUE (at);
-      if (!atval)
-	return NULL_TREE;
-
-      argidx1 = TREE_INT_CST_LOW (TREE_VALUE (atval)) - 1;
-      if (nargs <= argidx1)
-	return NULL_TREE;
-
-      atval = TREE_CHAIN (atval);
-      if (atval)
-	{
-	  argidx2 = TREE_INT_CST_LOW (TREE_VALUE (atval)) - 1;
-	  if (nargs <= argidx2)
-	    return NULL_TREE;
-	}
-    }
-
-  tree size = gimple_call_arg (stmt, argidx1);
-
-  wide_int rng1_buf[2];
-  /* If RNG1 is not set, use the buffer.  */
-  if (!rng1)
-    rng1 = rng1_buf;
-
-  /* Use maximum precision to avoid overflow below.  */
-  const int prec = ADDR_MAX_PRECISION;
-
-  {
-    tree r[2];
-    /* Determine the largest valid range size, including zero.  */
-    if (!get_size_range (size, r, SR_ALLOW_ZERO | SR_USE_LARGEST))
-      return NULL_TREE;
-    rng1[0] = wi::to_wide (r[0], prec);
-    rng1[1] = wi::to_wide (r[1], prec);
-  }
-
-  if (argidx2 > nargs && TREE_CODE (size) == INTEGER_CST)
-    return fold_convert (sizetype, size);
-
-  /* To handle ranges do the math in wide_int and return the product
-     of the upper bounds as a constant.  Ignore anti-ranges.  */
-  tree n = argidx2 < nargs ? gimple_call_arg (stmt, argidx2) : integer_one_node;
-  wide_int rng2[2];
-  {
-    tree r[2];
-    /* As above, use the full non-negative range on failure.  */
-    if (!get_size_range (n, r, SR_ALLOW_ZERO | SR_USE_LARGEST))
-      return NULL_TREE;
-    rng2[0] = wi::to_wide (r[0], prec);
-    rng2[1] = wi::to_wide (r[1], prec);
-  }
-
-  /* Compute products of both bounds for the caller but return the lesser
-     of SIZE_MAX and the product of the upper bounds as a constant.  */
-  rng1[0] = rng1[0] * rng2[0];
-  rng1[1] = rng1[1] * rng2[1];
-
-  const tree size_max = TYPE_MAX_VALUE (sizetype);
-  if (wi::gtu_p (rng1[1], wi::to_wide (size_max, prec)))
-    {
-      rng1[1] = wi::to_wide (size_max, prec);
-      return size_max;
-    }
-
-  return wide_int_to_tree (sizetype, rng1[1]);
-}
-
-/* For an access to an object referenced by the function parameter PTR
-   of pointer type, set RNG[] to the range of sizes of the object
-   obtained from the attribute access specification for the current
-   function.  Set STATIC_ARRAY if the array parameter has been declared
-   [static].  Return the function parameter on success and null
-   otherwise.  */
-
-tree
-gimple_parm_array_size (tree ptr, wide_int rng[2],
-			bool *static_array /* = NULL */)
-{
-  /* For a function argument try to determine the byte size of the array
-     from the current function declaration (e.g., attribute access or
-     related).  */
-  tree var = SSA_NAME_VAR (ptr);
-  if (TREE_CODE (var) != PARM_DECL)
-    return NULL_TREE;
-
-  const unsigned prec = TYPE_PRECISION (sizetype);
-
-  rdwr_map rdwr_idx;
-  attr_access *access = get_parm_access (rdwr_idx, var);
-  if (!access)
-    return NULL_TREE;
-
-  if (access->sizarg != UINT_MAX)
-    {
-      /* TODO: Try to extract the range from the argument based on
-	 those of subsequent assertions or based on known calls to
-	 the current function.  */
-      return NULL_TREE;
-    }
-
-  if (!access->minsize)
-    return NULL_TREE;
-
-  /* Only consider ordinary array bound at level 2 (or above if it's
-     ever added).  */
-  if (warn_array_parameter < 2 && !access->static_p)
-    return NULL_TREE;
-
-  if (static_array)
-    *static_array = access->static_p;
-
-  rng[0] = wi::zero (prec);
-  rng[1] = wi::uhwi (access->minsize, prec);
-  /* Multiply the array bound encoded in the attribute by the size
-     of what the pointer argument to which it decays points to.  */
-  tree eltype = TREE_TYPE (TREE_TYPE (ptr));
-  tree size = TYPE_SIZE_UNIT (eltype);
-  if (!size || TREE_CODE (size) != INTEGER_CST)
-    return NULL_TREE;
-
-  rng[1] *= wi::to_wide (size, prec);
-  return var;
-}
-
-/* Wrapper around the wide_int overload of get_range that accepts
-   offset_int instead.  For middle end expressions it returns the same
-   result.  For a subset of nonconstant expressions emitted by the front
-   end it determines a more precise range than would be possible
-   otherwise.  */
-
-static bool
-get_offset_range (tree x, gimple *stmt, offset_int r[2], range_query *rvals)
-{
-  offset_int add = 0;
-  if (TREE_CODE (x) == PLUS_EXPR)
-    {
-      /* Handle constant offsets in pointer addition expressions seen
-	 in the front end IL.  */
-      tree op = TREE_OPERAND (x, 1);
-      if (TREE_CODE (op) == INTEGER_CST)
-	{
-	  op = fold_convert (signed_type_for (TREE_TYPE (op)), op);
-	  add = wi::to_offset (op);
-	  x = TREE_OPERAND (x, 0);
-	}
-    }
-
-  if (TREE_CODE (x) == NOP_EXPR)
-    /* Also handle conversions to sizetype seen in the front end IL.  */
-    x = TREE_OPERAND (x, 0);
-
-  tree type = TREE_TYPE (x);
-  if (!INTEGRAL_TYPE_P (type) && !POINTER_TYPE_P (type))
-    return false;
-
-  if (TREE_CODE (x) != INTEGER_CST
-      && TREE_CODE (x) != SSA_NAME)
-    {
-      if (TYPE_UNSIGNED (type)
-	  && TYPE_PRECISION (type) == TYPE_PRECISION (sizetype))
-	type = signed_type_for (type);
-
-      r[0] = wi::to_offset (TYPE_MIN_VALUE (type)) + add;
-      r[1] = wi::to_offset (TYPE_MAX_VALUE (type)) + add;
-      return true;
-    }
-
-  wide_int wr[2];
-  if (!get_range (x, stmt, wr, rvals))
-    return false;
-
-  signop sgn = SIGNED;
-  /* Only convert signed integers or unsigned sizetype to a signed
-     offset and avoid converting large positive values in narrower
-     types to negative offsets.  */
-  if (TYPE_UNSIGNED (type)
-      && wr[0].get_precision () < TYPE_PRECISION (sizetype))
-    sgn = UNSIGNED;
-
-  r[0] = offset_int::from (wr[0], sgn);
-  r[1] = offset_int::from (wr[1], sgn);
-  return true;
-}
-
-/* Return the argument that the call STMT to a built-in function returns
-   or null if it doesn't.  On success, set OFFRNG[] to the range of offsets
-   from the argument reflected in the value returned by the built-in if it
-   can be determined, otherwise to 0 and HWI_M1U respectively.  */
-
-static tree
-gimple_call_return_array (gimple *stmt, offset_int offrng[2],
-			  range_query *rvals)
-{
-  {
-    /* Check for attribute fn spec to see if the function returns one
-       of its arguments.  */
-    attr_fnspec fnspec = gimple_call_fnspec (as_a <gcall *>(stmt));
-    unsigned int argno;
-    if (fnspec.returns_arg (&argno))
-      {
-	offrng[0] = offrng[1] = 0;
-	return gimple_call_arg (stmt, argno);
-      }
-  }
-
-  if (gimple_call_num_args (stmt) < 1)
-    return NULL_TREE;
-
-  tree fn = gimple_call_fndecl (stmt);
-  if (!gimple_call_builtin_p (stmt, BUILT_IN_NORMAL))
-    {
-      /* See if this is a call to placement new.  */
-      if (!fn
-	  || !DECL_IS_OPERATOR_NEW_P (fn)
-	  || DECL_IS_REPLACEABLE_OPERATOR_NEW_P (fn))
-	return NULL_TREE;
-
-      /* Check the mangling, keeping in mind that operator new takes
-	 a size_t which could be unsigned int or unsigned long.  */
-      tree fname = DECL_ASSEMBLER_NAME (fn);
-      if (!id_equal (fname, "_ZnwjPv")       // ordinary form
-	  && !id_equal (fname, "_ZnwmPv")    // ordinary form
-	  && !id_equal (fname, "_ZnajPv")    // array form
-	  && !id_equal (fname, "_ZnamPv"))   // array form
-	return NULL_TREE;
-
-      if (gimple_call_num_args (stmt) != 2)
-	return NULL_TREE;
-
-      offrng[0] = offrng[1] = 0;
-      return gimple_call_arg (stmt, 1);
-    }
-
-  switch (DECL_FUNCTION_CODE (fn))
-    {
-    case BUILT_IN_MEMCPY:
-    case BUILT_IN_MEMCPY_CHK:
-    case BUILT_IN_MEMMOVE:
-    case BUILT_IN_MEMMOVE_CHK:
-    case BUILT_IN_MEMSET:
-    case BUILT_IN_STPCPY:
-    case BUILT_IN_STPCPY_CHK:
-    case BUILT_IN_STPNCPY:
-    case BUILT_IN_STPNCPY_CHK:
-    case BUILT_IN_STRCAT:
-    case BUILT_IN_STRCAT_CHK:
-    case BUILT_IN_STRCPY:
-    case BUILT_IN_STRCPY_CHK:
-    case BUILT_IN_STRNCAT:
-    case BUILT_IN_STRNCAT_CHK:
-    case BUILT_IN_STRNCPY:
-    case BUILT_IN_STRNCPY_CHK:
-      offrng[0] = offrng[1] = 0;
-      return gimple_call_arg (stmt, 0);
-
-    case BUILT_IN_MEMPCPY:
-    case BUILT_IN_MEMPCPY_CHK:
-      {
-	tree off = gimple_call_arg (stmt, 2);
-	if (!get_offset_range (off, stmt, offrng, rvals))
-	  {
-	    offrng[0] = 0;
-	    offrng[1] = HOST_WIDE_INT_M1U;
-	  }
-	return gimple_call_arg (stmt, 0);
-      }
-
-    case BUILT_IN_MEMCHR:
-      {
-	tree off = gimple_call_arg (stmt, 2);
-	if (get_offset_range (off, stmt, offrng, rvals))
-	  offrng[0] = 0;
-	else
-	  {
-	    offrng[0] = 0;
-	    offrng[1] = HOST_WIDE_INT_M1U;
-	  }
-	return gimple_call_arg (stmt, 0);
-      }
-
-    case BUILT_IN_STRCHR:
-    case BUILT_IN_STRRCHR:
-    case BUILT_IN_STRSTR:
-      {
-	offrng[0] = 0;
-	offrng[1] = HOST_WIDE_INT_M1U;
-      }
-      return gimple_call_arg (stmt, 0);
-
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* A helper of compute_objsize_r() to determine the size from an assignment
-   statement STMT whose RHS is either a MIN_EXPR or a MAX_EXPR.  */
-
-static bool
-handle_min_max_size (gimple *stmt, int ostype, access_ref *pref,
-		     ssa_name_limit_t &snlim, pointer_query *qry)
-{
-  tree_code code = gimple_assign_rhs_code (stmt);
-
-  tree ptr = gimple_assign_rhs1 (stmt);
-
-  /* In a valid MAX_/MIN_EXPR both operands must refer to the same array.
-     Determine the size/offset of each and use the one with more or less
-     space remaining, respectively.  If either fails, use the information
-     determined from the other instead, adjusted up or down as appropriate
-     for the expression.  */
-  access_ref aref[2] = { *pref, *pref };
-  if (!compute_objsize_r (ptr, ostype, &aref[0], snlim, qry))
-    {
-      aref[0].base0 = false;
-      aref[0].offrng[0] = aref[0].offrng[1] = 0;
-      aref[0].add_max_offset ();
-      aref[0].set_max_size_range ();
-    }
-
-  ptr = gimple_assign_rhs2 (stmt);
-  if (!compute_objsize_r (ptr, ostype, &aref[1], snlim, qry))
-    {
-      aref[1].base0 = false;
-      aref[1].offrng[0] = aref[1].offrng[1] = 0;
-      aref[1].add_max_offset ();
-      aref[1].set_max_size_range ();
-    }
-
-  if (!aref[0].ref && !aref[1].ref)
-    /* Fail if the identity of neither argument could be determined.  */
-    return false;
-
-  bool i0 = false;
-  if (aref[0].ref && aref[0].base0)
-    {
-      if (aref[1].ref && aref[1].base0)
-	{
-	  /* If the object referenced by both arguments has been determined
-	     set *PREF to the one with more or less space remaining,
-	     whichever is appropriate for CODE.
-	     TODO: Indicate when the objects are distinct so it can be
-	     diagnosed.  */
-	  i0 = code == MAX_EXPR;
-	  const bool i1 = !i0;
-
-	  if (aref[i0].size_remaining () < aref[i1].size_remaining ())
-	    *pref = aref[i1];
-	  else
-	    *pref = aref[i0];
-	  return true;
-	}
-
-      /* If only the object referenced by one of the arguments could be
-	 determined, use it and...  */
-      *pref = aref[0];
-      i0 = true;
-    }
-  else
-    *pref = aref[1];
-
-  const bool i1 = !i0;
-  /* ...see if the offset obtained from the other pointer can be used
-     to tighten up the bound on the offset obtained from the first.  */
-  if ((code == MAX_EXPR && aref[i1].offrng[1] < aref[i0].offrng[0])
-      || (code == MIN_EXPR && aref[i0].offrng[0] < aref[i1].offrng[1]))
-    {
-      pref->offrng[0] = aref[i0].offrng[0];
-      pref->offrng[1] = aref[i0].offrng[1];
-    }
-  return true;
-}
-
-/* A helper of compute_objsize_r() to determine the size from ARRAY_REF
-   AREF.  ADDR is true if AREF is the operand of an ADDR_EXPR.  Return true
-   on success and false on failure.  */
-
-static bool
-handle_array_ref (tree aref, bool addr, int ostype, access_ref *pref,
-		  ssa_name_limit_t &snlim, pointer_query *qry)
-{
-  gcc_assert (TREE_CODE (aref) == ARRAY_REF);
-
-  ++pref->deref;
-
-  tree arefop = TREE_OPERAND (aref, 0);
-  tree reftype = TREE_TYPE (arefop);
-  if (!addr && TREE_CODE (TREE_TYPE (reftype)) == POINTER_TYPE)
-    /* Avoid arrays of pointers.  FIXME: Handle pointers to arrays
-       of known bound.  */
-    return false;
-
-  if (!compute_objsize_r (arefop, ostype, pref, snlim, qry))
-    return false;
-
-  offset_int orng[2];
-  tree off = pref->eval (TREE_OPERAND (aref, 1));
-  range_query *const rvals = qry ? qry->rvals : NULL;
-  if (!get_offset_range (off, NULL, orng, rvals))
-    {
-      /* Set ORNG to the maximum offset representable in ptrdiff_t.  */
-      orng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-      orng[0] = -orng[1] - 1;
-    }
-
-  /* Convert the array index range determined above to a byte
-     offset.  */
-  tree lowbnd = array_ref_low_bound (aref);
-  if (!integer_zerop (lowbnd) && tree_fits_uhwi_p (lowbnd))
-    {
-      /* Adjust the index by the low bound of the array domain
-	 (normally zero but 1 in Fortran).  */
-      unsigned HOST_WIDE_INT lb = tree_to_uhwi (lowbnd);
-      orng[0] -= lb;
-      orng[1] -= lb;
-    }
-
-  tree eltype = TREE_TYPE (aref);
-  tree tpsize = TYPE_SIZE_UNIT (eltype);
-  if (!tpsize || TREE_CODE (tpsize) != INTEGER_CST)
-    {
-      pref->add_max_offset ();
-      return true;
-    }
-
-  offset_int sz = wi::to_offset (tpsize);
-  orng[0] *= sz;
-  orng[1] *= sz;
-
-  if (ostype && TREE_CODE (eltype) == ARRAY_TYPE)
-    {
-      /* Except for the permissive raw memory functions which use
-	 the size of the whole object determined above, use the size
-	 of the referenced array.  Because the overall offset is from
-	 the beginning of the complete array object add this overall
-	 offset to the size of array.  */
-      offset_int sizrng[2] =
-	{
-	 pref->offrng[0] + orng[0] + sz,
-	 pref->offrng[1] + orng[1] + sz
-	};
-      if (sizrng[1] < sizrng[0])
-	std::swap (sizrng[0], sizrng[1]);
-      if (sizrng[0] >= 0 && sizrng[0] <= pref->sizrng[0])
-	pref->sizrng[0] = sizrng[0];
-      if (sizrng[1] >= 0 && sizrng[1] <= pref->sizrng[1])
-	pref->sizrng[1] = sizrng[1];
-    }
-
-  pref->add_offset (orng[0], orng[1]);
-  return true;
-}
-
-/* A helper of compute_objsize_r() to determine the size from MEM_REF
-   MREF.  Return true on success and false on failure.  */
-
-static bool
-handle_mem_ref (tree mref, int ostype, access_ref *pref,
-		ssa_name_limit_t &snlim, pointer_query *qry)
-{
-  gcc_assert (TREE_CODE (mref) == MEM_REF);
-
-  ++pref->deref;
-
-  if (VECTOR_TYPE_P (TREE_TYPE (mref)))
-    {
-      /* Hack: Handle MEM_REFs of vector types as those to complete
-	 objects; those may be synthesized from multiple assignments
-	 to consecutive data members (see PR 93200 and 96963).
-	 FIXME: Vectorized assignments should only be present after
-	 vectorization so this hack is only necessary after it has
-	 run and could be avoided in calls from prior passes (e.g.,
-	 tree-ssa-strlen.c).
-	 FIXME: Deal with this more generally, e.g., by marking up
-	 such MEM_REFs at the time they're created.  */
-      ostype = 0;
-    }
-
-  tree mrefop = TREE_OPERAND (mref, 0);
-  if (!compute_objsize_r (mrefop, ostype, pref, snlim, qry))
-    return false;
-
-  offset_int orng[2];
-  tree off = pref->eval (TREE_OPERAND (mref, 1));
-  range_query *const rvals = qry ? qry->rvals : NULL;
-  if (!get_offset_range (off, NULL, orng, rvals))
-    {
-      /* Set ORNG to the maximum offset representable in ptrdiff_t.  */
-      orng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-      orng[0] = -orng[1] - 1;
-    }
-
-  pref->add_offset (orng[0], orng[1]);
-  return true;
-}
-
-/* Helper to compute the size of the object referenced by the PTR
-   expression which must have pointer type, using Object Size type
-   OSTYPE (only the least significant 2 bits are used).
-   On success, sets PREF->REF to the DECL of the referenced object
-   if it's unique, otherwise to null, PREF->OFFRNG to the range of
-   offsets into it, and PREF->SIZRNG to the range of sizes of
-   the object(s).
-   SNLIM is used to avoid visiting the same PHI operand multiple
-   times, and, when nonnull, RVALS to determine range information.
-   Returns true on success, false when a meaningful size (or range)
-   cannot be determined.
-
-   The function is intended for diagnostics and should not be used
-   to influence code generation or optimization.  */
-
-static bool
-compute_objsize_r (tree ptr, int ostype, access_ref *pref,
-		   ssa_name_limit_t &snlim, pointer_query *qry)
-{
-  STRIP_NOPS (ptr);
-
-  const bool addr = TREE_CODE (ptr) == ADDR_EXPR;
-  if (addr)
-    {
-      --pref->deref;
-      ptr = TREE_OPERAND (ptr, 0);
-    }
-
-  if (DECL_P (ptr))
-    {
-      pref->ref = ptr;
-
-      if (!addr && POINTER_TYPE_P (TREE_TYPE (ptr)))
-	{
-	  /* Set the maximum size if the reference is to the pointer
-	     itself (as opposed to what it points to), and clear
-	     BASE0 since the offset isn't necessarily zero-based.  */
-	  pref->set_max_size_range ();
-	  pref->base0 = false;
-	  return true;
-	}
-
-      if (tree size = decl_init_size (ptr, false))
-	if (TREE_CODE (size) == INTEGER_CST)
-	  {
-	    pref->sizrng[0] = pref->sizrng[1] = wi::to_offset (size);
-	    return true;
-	  }
-
-      pref->set_max_size_range ();
-      return true;
-    }
-
-  const tree_code code = TREE_CODE (ptr);
-  range_query *const rvals = qry ? qry->rvals : NULL;
-
-  if (code == BIT_FIELD_REF)
-    {
-      tree ref = TREE_OPERAND (ptr, 0);
-      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
-	return false;
-
-      offset_int off = wi::to_offset (pref->eval (TREE_OPERAND (ptr, 2)));
-      pref->add_offset (off / BITS_PER_UNIT);
-      return true;
-    }
-
-  if (code == COMPONENT_REF)
-    {
-      tree ref = TREE_OPERAND (ptr, 0);
-      if (TREE_CODE (TREE_TYPE (ref)) == UNION_TYPE)
-	/* In accesses through union types consider the entire unions
-	   rather than just their members.  */
-	ostype = 0;
-      tree field = TREE_OPERAND (ptr, 1);
-
-      if (ostype == 0)
-	{
-	  /* In OSTYPE zero (for raw memory functions like memcpy), use
-	     the maximum size instead if the identity of the enclosing
-	     object cannot be determined.  */
-	  if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
-	    return false;
-
-	  /* Otherwise, use the size of the enclosing object and add
-	     the offset of the member to the offset computed so far.  */
-	  tree offset = byte_position (field);
-	  if (TREE_CODE (offset) == INTEGER_CST)
-	    pref->add_offset (wi::to_offset (offset));
-	  else
-	    pref->add_max_offset ();
-
-	  if (!pref->ref)
-	    /* REF may have been already set to an SSA_NAME earlier
-	       to provide better context for diagnostics.  In that case,
-	       leave it unchanged.  */
-	    pref->ref = ref;
-	  return true;
-	}
-
-      pref->ref = field;
-
-      if (!addr && POINTER_TYPE_P (TREE_TYPE (field)))
-	{
-	  /* Set maximum size if the reference is to the pointer member
-	     itself (as opposed to what it points to).  */
-	  pref->set_max_size_range ();
-	  return true;
-	}
-
-      /* SAM is set for array members that might need special treatment.  */
-      special_array_member sam;
-      tree size = component_ref_size (ptr, &sam);
-      if (sam == special_array_member::int_0)
-	pref->sizrng[0] = pref->sizrng[1] = 0;
-      else if (!pref->trail1special && sam == special_array_member::trail_1)
-	pref->sizrng[0] = pref->sizrng[1] = 1;
-      else if (size && TREE_CODE (size) == INTEGER_CST)
-	pref->sizrng[0] = pref->sizrng[1] = wi::to_offset (size);
-      else
-	{
-	  /* When the size of the member is unknown it's either a flexible
-	     array member or a trailing special array member (either zero
-	     length or one-element).  Set the size to the maximum minus
-	     the constant size of the type.  */
-	  pref->sizrng[0] = 0;
-	  pref->sizrng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-	  if (tree recsize = TYPE_SIZE_UNIT (TREE_TYPE (ref)))
-	    if (TREE_CODE (recsize) == INTEGER_CST)
-	      pref->sizrng[1] -= wi::to_offset (recsize);
-	}
-      return true;
-    }
-
-  if (code == ARRAY_REF)
-    return handle_array_ref (ptr, addr, ostype, pref, snlim, qry);
-
-  if (code == MEM_REF)
-    return handle_mem_ref (ptr, ostype, pref, snlim, qry);
-
-  if (code == TARGET_MEM_REF)
-    {
-      tree ref = TREE_OPERAND (ptr, 0);
-      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
-	return false;
-
-      /* TODO: Handle remaining operands.  Until then, add maximum offset.  */
-      pref->ref = ptr;
-      pref->add_max_offset ();
-      return true;
-    }
-
-  if (code == INTEGER_CST)
-    {
-      /* Pointer constants other than null are most likely the result
-	 of erroneous null pointer addition/subtraction.  Set size to
-	 zero.  For null pointers, set size to the maximum for now
-	 since those may be the result of jump threading.  */
-      if (integer_zerop (ptr))
-	pref->set_max_size_range ();
-      else
-	pref->sizrng[0] = pref->sizrng[1] = 0;
-      pref->ref = ptr;
-
-      return true;
-    }
-
-  if (code == STRING_CST)
-    {
-      pref->sizrng[0] = pref->sizrng[1] = TREE_STRING_LENGTH (ptr);
-      pref->ref = ptr;
-      return true;
-    }
-
-  if (code == POINTER_PLUS_EXPR)
-    {
-      tree ref = TREE_OPERAND (ptr, 0);
-      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
-	return false;
-
-      /* Clear DEREF since the offset is being applied to the target
-	 of the dereference.  */
-      pref->deref = 0;
-
-      offset_int orng[2];
-      tree off = pref->eval (TREE_OPERAND (ptr, 1));
-      if (get_offset_range (off, NULL, orng, rvals))
-	pref->add_offset (orng[0], orng[1]);
-      else
-	pref->add_max_offset ();
-      return true;
-    }
-
-  if (code == VIEW_CONVERT_EXPR)
-    {
-      ptr = TREE_OPERAND (ptr, 0);
-      return compute_objsize_r (ptr, ostype, pref, snlim, qry);
-    }
-
-  if (code == SSA_NAME)
-    {
-      if (!snlim.next ())
-	return false;
-
-      /* Only process an SSA_NAME if the recursion limit has not yet
-	 been reached.  */
-      if (qry)
-	{
-	  if (++qry->depth)
-	    qry->max_depth = qry->depth;
-	  if (const access_ref *cache_ref = qry->get_ref (ptr))
-	    {
-	      /* If the pointer is in the cache set *PREF to what it refers
-		 to and return success.  */
-	      *pref = *cache_ref;
-	      return true;
-	    }
-	}
-
-      gimple *stmt = SSA_NAME_DEF_STMT (ptr);
-      if (is_gimple_call (stmt))
-	{
-	  /* If STMT is a call to an allocation function get the size
-	     from its argument(s).  If successful, also set *PREF->REF
-	     to PTR for the caller to include in diagnostics.  */
-	  wide_int wr[2];
-	  if (gimple_call_alloc_size (stmt, wr, rvals))
-	    {
-	      pref->ref = ptr;
-	      pref->sizrng[0] = offset_int::from (wr[0], UNSIGNED);
-	      pref->sizrng[1] = offset_int::from (wr[1], UNSIGNED);
-	      /* Constrain both bounds to a valid size.  */
-	      offset_int maxsize = wi::to_offset (max_object_size ());
-	      if (pref->sizrng[0] > maxsize)
-		pref->sizrng[0] = maxsize;
-	      if (pref->sizrng[1] > maxsize)
-		pref->sizrng[1] = maxsize;
-	    }
-	  else
-	    {
-	      /* For functions known to return one of their pointer arguments
-		 try to determine what the returned pointer points to, and on
-		 success add OFFRNG which was set to the offset added by
-		 the function (e.g., memchr) to the overall offset.  */
-	      offset_int offrng[2];
-	      if (tree ret = gimple_call_return_array (stmt, offrng, rvals))
-		{
-		  if (!compute_objsize_r (ret, ostype, pref, snlim, qry))
-		    return false;
-
-		  /* Cap OFFRNG[1] to at most the remaining size of
-		     the object.  */
-		  offset_int remrng[2];
-		  remrng[1] = pref->size_remaining (remrng);
-		  if (remrng[1] < offrng[1])
-		    offrng[1] = remrng[1];
-		  pref->add_offset (offrng[0], offrng[1]);
-		}
-	      else
-		{
-		  /* For other calls that might return arbitrary pointers
-		     including into the middle of objects set the size
-		     range to maximum, clear PREF->BASE0, and also set
-		     PREF->REF to include in diagnostics.  */
-		  pref->set_max_size_range ();
-		  pref->base0 = false;
-		  pref->ref = ptr;
-		}
-	    }
-	  qry->put_ref (ptr, *pref);
-	  return true;
-	}
-
-      if (gimple_nop_p (stmt))
-	{
-	  /* For a function argument try to determine the byte size
-	     of the array from the current function declaration
-	     (e.g., attribute access or related).  */
-	  wide_int wr[2];
-	  bool static_array = false;
-	  if (tree ref = gimple_parm_array_size (ptr, wr, &static_array))
-	    {
-	      pref->parmarray = !static_array;
-	      pref->sizrng[0] = offset_int::from (wr[0], UNSIGNED);
-	      pref->sizrng[1] = offset_int::from (wr[1], UNSIGNED);
-	      pref->ref = ref;
-	      qry->put_ref (ptr, *pref);
-	      return true;
-	    }
-
-	  pref->set_max_size_range ();
-	  pref->base0 = false;
-	  pref->ref = ptr;
-	  qry->put_ref (ptr, *pref);
-	  return true;
-	}
-
-      if (gimple_code (stmt) == GIMPLE_PHI)
-	{
-	  pref->ref = ptr;
-	  access_ref phi_ref = *pref;
-	  if (!pref->get_ref (NULL, &phi_ref, ostype, &snlim, qry))
-	    return false;
-	  *pref = phi_ref;
-	  pref->ref = ptr;
-	  qry->put_ref (ptr, *pref);
-	  return true;
-	}
-
-      if (!is_gimple_assign (stmt))
-	{
-	  /* Clear BASE0 since the assigned pointer might point into
-	     the middle of the object, set the maximum size range and,
-	     if the SSA_NAME refers to a function argument, set
-	     PREF->REF to it.  */
-	  pref->base0 = false;
-	  pref->set_max_size_range ();
-	  pref->ref = ptr;
-	  return true;
-	}
-
-      tree_code code = gimple_assign_rhs_code (stmt);
-
-      if (code == MAX_EXPR || code == MIN_EXPR)
-	{
-	  if (!handle_min_max_size (stmt, ostype, pref, snlim, qry))
-	    return false;
-	  qry->put_ref (ptr, *pref);
-	  return true;
-	}
-
-      tree rhs = gimple_assign_rhs1 (stmt);
-
-      if (code == ASSERT_EXPR)
-	{
-	  rhs = TREE_OPERAND (rhs, 0);
-	  return compute_objsize_r (rhs, ostype, pref, snlim, qry);
-	}
-
-      if (code == POINTER_PLUS_EXPR
-	  && TREE_CODE (TREE_TYPE (rhs)) == POINTER_TYPE)
-	{
-	  /* Compute the size of the object first. */
-	  if (!compute_objsize_r (rhs, ostype, pref, snlim, qry))
-	    return false;
-
-	  offset_int orng[2];
-	  tree off = gimple_assign_rhs2 (stmt);
-	  if (get_offset_range (off, stmt, orng, rvals))
-	    pref->add_offset (orng[0], orng[1]);
-	  else
-	    pref->add_max_offset ();
-	  qry->put_ref (ptr, *pref);
-	  return true;
-	}
-
-      if (code == ADDR_EXPR
-	  || code == SSA_NAME)
-	return compute_objsize_r (rhs, ostype, pref, snlim, qry);
-
-      /* (This could also be an assignment from a nonlocal pointer.)  Save
-	 PTR to mention in diagnostics but otherwise treat it as a pointer
-	 to an unknown object.  */
-      pref->ref = rhs;
-      pref->base0 = false;
-      pref->set_max_size_range ();
-      return true;
-    }
-
-  /* Assume all other expressions point into an unknown object
-     of the maximum valid size.  */
-  pref->ref = ptr;
-  pref->base0 = false;
-  pref->set_max_size_range ();
-  if (TREE_CODE (ptr) == SSA_NAME)
-    qry->put_ref (ptr, *pref);
-  return true;
-}
-
-/* A "public" wrapper around the above.  Clients should use this overload
-   instead.  */
-
-tree
-compute_objsize (tree ptr, int ostype, access_ref *pref,
-		 range_query *rvals /* = NULL */)
-{
-  pointer_query qry;
-  qry.rvals = rvals;
-  ssa_name_limit_t snlim;
-  if (!compute_objsize_r (ptr, ostype, pref, snlim, &qry))
-    return NULL_TREE;
-
-  offset_int maxsize = pref->size_remaining ();
-  if (pref->base0 && pref->offrng[0] < 0 && pref->offrng[1] >= 0)
-    pref->offrng[0] = 0;
-  return wide_int_to_tree (sizetype, maxsize);
-}
-
-/* Transitional wrapper.  The function should be removed once callers
-   transition to the pointer_query API.  */
-
-tree
-compute_objsize (tree ptr, int ostype, access_ref *pref, pointer_query *ptr_qry)
-{
-  pointer_query qry;
-  if (ptr_qry)
-    ptr_qry->depth = 0;
-  else
-    ptr_qry = &qry;
-
-  ssa_name_limit_t snlim;
-  if (!compute_objsize_r (ptr, ostype, pref, snlim, ptr_qry))
-    return NULL_TREE;
-
-  offset_int maxsize = pref->size_remaining ();
-  if (pref->base0 && pref->offrng[0] < 0 && pref->offrng[1] >= 0)
-    pref->offrng[0] = 0;
-  return wide_int_to_tree (sizetype, maxsize);
-}
-
-/* Legacy wrapper around the above.  The function should be removed
-   once callers transition to one of the two above.  */
-
-tree
-compute_objsize (tree ptr, int ostype, tree *pdecl /* = NULL */,
-		 tree *poff /* = NULL */, range_query *rvals /* = NULL */)
-{
-  /* Set the initial offsets to zero and size to negative to indicate
-     none has been computed yet.  */
-  access_ref ref;
-  tree size = compute_objsize (ptr, ostype, &ref, rvals);
-  if (!size || !ref.base0)
-    return NULL_TREE;
-
-  if (pdecl)
-    *pdecl = ref.ref;
-
-  if (poff)
-    *poff = wide_int_to_tree (ptrdiff_type_node, ref.offrng[ref.offrng[0] < 0]);
-
-  return size;
-}
-
 /* Helper to determine and check the sizes of the source and the destination
    of calls to __builtin_{bzero,memcpy,mempcpy,memset} calls.  EXP is the
    call expression, DEST is the destination argument, SRC is the source
@@ -13283,716 +10547,6 @@ 
 		access_write_only);
 }
 
-/* Return true if FNDECL declares an allocation function.  Unless
-   ALL_ALLOC is set, consider only functions that return dynamically
-   allocated objects.  Otherwise return true even for all forms of
-   alloca (including VLA).  */
-
-static bool
-fndecl_alloc_p (tree fndecl, bool all_alloc)
-{
-  if (!fndecl)
-    return false;
-
-  /* A call to operator new isn't recognized as one to a built-in.  */
-  if (DECL_IS_OPERATOR_NEW_P (fndecl))
-    return true;
-
-  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL))
-    {
-      switch (DECL_FUNCTION_CODE (fndecl))
-	{
-	case BUILT_IN_ALLOCA:
-	case BUILT_IN_ALLOCA_WITH_ALIGN:
-	  return all_alloc;
-	case BUILT_IN_ALIGNED_ALLOC:
-	case BUILT_IN_CALLOC:
-	case BUILT_IN_GOMP_ALLOC:
-	case BUILT_IN_MALLOC:
-	case BUILT_IN_REALLOC:
-	case BUILT_IN_STRDUP:
-	case BUILT_IN_STRNDUP:
-	  return true;
-	default:
-	  break;
-	}
-    }
-
-  /* A function is considered an allocation function if it's declared
-     with attribute malloc with an argument naming its associated
-     deallocation function.  */
-  tree attrs = DECL_ATTRIBUTES (fndecl);
-  if (!attrs)
-    return false;
-
-  for (tree allocs = attrs;
-       (allocs = lookup_attribute ("malloc", allocs));
-       allocs = TREE_CHAIN (allocs))
-    {
-      tree args = TREE_VALUE (allocs);
-      if (!args)
-	continue;
-
-      if (TREE_VALUE (args))
-	return true;
-    }
-
-  return false;
-}
-
-/* Return true if STMT is a call to an allocation function.  A wrapper
-   around fndecl_alloc_p.  */
-
-static bool
-gimple_call_alloc_p (gimple *stmt, bool all_alloc = false)
-{
-  return fndecl_alloc_p (gimple_call_fndecl (stmt), all_alloc);
-}
-
-/* Return the zero-based number corresponding to the argument being
-   deallocated if EXP is a call to a deallocation function or UINT_MAX
-   if it isn't.  */
-
-static unsigned
-call_dealloc_argno (tree exp)
-{
-  tree fndecl = get_callee_fndecl (exp);
-  if (!fndecl)
-    return UINT_MAX;
-
-  return fndecl_dealloc_argno (fndecl);
-}
-
-/* Return the zero-based number corresponding to the argument being
-   deallocated if FNDECL is a deallocation function or UINT_MAX
-   if it isn't.  */
-
-unsigned
-fndecl_dealloc_argno (tree fndecl)
-{
-  /* A call to operator delete isn't recognized as one to a built-in.  */
-  if (DECL_IS_OPERATOR_DELETE_P (fndecl))
-    {
-      if (DECL_IS_REPLACEABLE_OPERATOR (fndecl))
-	return 0;
-
-      /* Avoid placement delete that's not been inlined.  */
-      tree fname = DECL_ASSEMBLER_NAME (fndecl);
-      if (id_equal (fname, "_ZdlPvS_")       // ordinary form
-	  || id_equal (fname, "_ZdaPvS_"))   // array form
-	return UINT_MAX;
-      return 0;
-    }
-
-  /* TODO: Handle user-defined functions with attribute malloc?  Handle
-     known non-built-ins like fopen?  */
-  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL))
-    {
-      switch (DECL_FUNCTION_CODE (fndecl))
-	{
-	case BUILT_IN_FREE:
-	case BUILT_IN_REALLOC:
-	  return 0;
-	default:
-	  break;
-	}
-      return UINT_MAX;
-    }
-
-  tree attrs = DECL_ATTRIBUTES (fndecl);
-  if (!attrs)
-    return UINT_MAX;
-
-  for (tree atfree = attrs;
-       (atfree = lookup_attribute ("*dealloc", atfree));
-       atfree = TREE_CHAIN (atfree))
-    {
-      tree alloc = TREE_VALUE (atfree);
-      if (!alloc)
-	continue;
-
-      tree pos = TREE_CHAIN (alloc);
-      if (!pos)
-	return 0;
-
-      pos = TREE_VALUE (pos);
-      return TREE_INT_CST_LOW (pos) - 1;
-    }
-
-  return UINT_MAX;
-}
-
-/* Return true if DELC doesn't refer to an operator delete that's
-   suitable to call with a pointer returned from the operator new
-   described by NEWC.  */
-
-static bool
-new_delete_mismatch_p (const demangle_component &newc,
-		       const demangle_component &delc)
-{
-  if (newc.type != delc.type)
-    return true;
-
-  switch (newc.type)
-    {
-    case DEMANGLE_COMPONENT_NAME:
-      {
-	int len = newc.u.s_name.len;
-	const char *news = newc.u.s_name.s;
-	const char *dels = delc.u.s_name.s;
-	if (len != delc.u.s_name.len || memcmp (news, dels, len))
-	  return true;
-
-	if (news[len] == 'n')
-	  {
-	    if (news[len + 1] == 'a')
-	      return dels[len] != 'd' || dels[len + 1] != 'a';
-	    if (news[len + 1] == 'w')
-	      return dels[len] != 'd' || dels[len + 1] != 'l';
-	  }
-	return false;
-      }
-
-    case DEMANGLE_COMPONENT_OPERATOR:
-      /* Operator mismatches are handled above.  */
-      return false;
-
-    case DEMANGLE_COMPONENT_EXTENDED_OPERATOR:
-      if (newc.u.s_extended_operator.args != delc.u.s_extended_operator.args)
-	return true;
-      return new_delete_mismatch_p (*newc.u.s_extended_operator.name,
-				    *delc.u.s_extended_operator.name);
-
-    case DEMANGLE_COMPONENT_FIXED_TYPE:
-      if (newc.u.s_fixed.accum != delc.u.s_fixed.accum
-	  || newc.u.s_fixed.sat != delc.u.s_fixed.sat)
-	return true;
-      return new_delete_mismatch_p (*newc.u.s_fixed.length,
-				    *delc.u.s_fixed.length);
-
-    case DEMANGLE_COMPONENT_CTOR:
-      if (newc.u.s_ctor.kind != delc.u.s_ctor.kind)
-	return true;
-      return new_delete_mismatch_p (*newc.u.s_ctor.name,
-				    *delc.u.s_ctor.name);
-
-    case DEMANGLE_COMPONENT_DTOR:
-      if (newc.u.s_dtor.kind != delc.u.s_dtor.kind)
-	return true;
-      return new_delete_mismatch_p (*newc.u.s_dtor.name,
-				    *delc.u.s_dtor.name);
-
-    case DEMANGLE_COMPONENT_BUILTIN_TYPE:
-      {
-	/* The demangler API provides no better way to compare built-in
-	   types than by comparing their demangled names.  */
-	size_t nsz, dsz;
-	demangle_component *pnc = const_cast<demangle_component *>(&newc);
-	demangle_component *pdc = const_cast<demangle_component *>(&delc);
-	char *nts = cplus_demangle_print (0, pnc, 16, &nsz);
-	char *dts = cplus_demangle_print (0, pdc, 16, &dsz);
-	if (!nts != !dts)
-	  return true;
-	bool mismatch = strcmp (nts, dts);
-	free (nts);
-	free (dts);
-	return mismatch;
-      }
-
-    case DEMANGLE_COMPONENT_SUB_STD:
-      if (newc.u.s_string.len != delc.u.s_string.len)
-	return true;
-      return memcmp (newc.u.s_string.string, delc.u.s_string.string,
-		     newc.u.s_string.len);
-
-    case DEMANGLE_COMPONENT_FUNCTION_PARAM:
-    case DEMANGLE_COMPONENT_TEMPLATE_PARAM:
-      return newc.u.s_number.number != delc.u.s_number.number;
-
-    case DEMANGLE_COMPONENT_CHARACTER:
-      return newc.u.s_character.character != delc.u.s_character.character;
-
-    case DEMANGLE_COMPONENT_DEFAULT_ARG:
-    case DEMANGLE_COMPONENT_LAMBDA:
-      if (newc.u.s_unary_num.num != delc.u.s_unary_num.num)
-	return true;
-      return new_delete_mismatch_p (*newc.u.s_unary_num.sub,
-				    *delc.u.s_unary_num.sub);
-    default:
-      break;
-    }
-
-  if (!newc.u.s_binary.left != !delc.u.s_binary.left)
-    return true;
-
-  if (!newc.u.s_binary.left)
-    return false;
-
-  if (new_delete_mismatch_p (*newc.u.s_binary.left, *delc.u.s_binary.left)
-      || !newc.u.s_binary.right != !delc.u.s_binary.right)
-    return true;
-
-  if (newc.u.s_binary.right)
-    return new_delete_mismatch_p (*newc.u.s_binary.right,
-				  *delc.u.s_binary.right);
-  return false;
-}
-
-/* Return true if DELETE_DECL is an operator delete that's not suitable
-   to call with a pointer returned from NEW_DECL.  */
-
-static bool
-new_delete_mismatch_p (tree new_decl, tree delete_decl)
-{
-  tree new_name = DECL_ASSEMBLER_NAME (new_decl);
-  tree delete_name = DECL_ASSEMBLER_NAME (delete_decl);
-
-  /* valid_new_delete_pair_p() returns a conservative result (currently
-     it only handles global operators).  A true result is reliable but
-     a false result doesn't necessarily mean the operators don't match.  */
-  if (valid_new_delete_pair_p (new_name, delete_name))
-    return false;
-
-  /* For anything not handled by valid_new_delete_pair_p() such as member
-     operators compare the individual demangled components of the mangled
-     name.  */
-  const char *new_str = IDENTIFIER_POINTER (new_name);
-  const char *del_str = IDENTIFIER_POINTER (delete_name);
-
-  void *np = NULL, *dp = NULL;
-  demangle_component *ndc = cplus_demangle_v3_components (new_str, 0, &np);
-  demangle_component *ddc = cplus_demangle_v3_components (del_str, 0, &dp);
-  bool mismatch = new_delete_mismatch_p (*ndc, *ddc);
-  free (np);
-  free (dp);
-  return mismatch;
-}
-
-/* ALLOC_DECL and DEALLOC_DECL are pair of allocation and deallocation
-   functions.  Return true if the latter is suitable to deallocate objects
-   allocated by calls to the former.  */
-
-static bool
-matching_alloc_calls_p (tree alloc_decl, tree dealloc_decl)
-{
-  /* Set to alloc_kind_t::builtin if ALLOC_DECL is associated with
-     a built-in deallocator.  */
-  enum class alloc_kind_t { none, builtin, user }
-  alloc_dealloc_kind = alloc_kind_t::none;
-
-  if (DECL_IS_OPERATOR_NEW_P (alloc_decl))
-    {
-      if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl))
-	/* Return true iff both functions are of the same array or
-	   singleton form and false otherwise.  */
-	return !new_delete_mismatch_p (alloc_decl, dealloc_decl);
-
-      /* Return false for deallocation functions that are known not
-	 to match.  */
-      if (fndecl_built_in_p (dealloc_decl, BUILT_IN_FREE)
-	  || fndecl_built_in_p (dealloc_decl, BUILT_IN_REALLOC))
-	return false;
-      /* Otherwise proceed below to check the deallocation function's
-	 "*dealloc" attributes to look for one that mentions this operator
-	 new.  */
-    }
-  else if (fndecl_built_in_p (alloc_decl, BUILT_IN_NORMAL))
-    {
-      switch (DECL_FUNCTION_CODE (alloc_decl))
-	{
-	case BUILT_IN_ALLOCA:
-	case BUILT_IN_ALLOCA_WITH_ALIGN:
-	  return false;
-
-	case BUILT_IN_ALIGNED_ALLOC:
-	case BUILT_IN_CALLOC:
-	case BUILT_IN_GOMP_ALLOC:
-	case BUILT_IN_MALLOC:
-	case BUILT_IN_REALLOC:
-	case BUILT_IN_STRDUP:
-	case BUILT_IN_STRNDUP:
-	  if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl))
-	    return false;
-
-	  if (fndecl_built_in_p (dealloc_decl, BUILT_IN_FREE)
-	      || fndecl_built_in_p (dealloc_decl, BUILT_IN_REALLOC))
-	    return true;
-
-	  alloc_dealloc_kind = alloc_kind_t::builtin;
-	  break;
-
-	default:
-	  break;
-	}
-    }
-
-  /* Set if DEALLOC_DECL both allocates and deallocates.  */
-  alloc_kind_t realloc_kind = alloc_kind_t::none;
-
-  if (fndecl_built_in_p (dealloc_decl, BUILT_IN_NORMAL))
-    {
-      built_in_function dealloc_code = DECL_FUNCTION_CODE (dealloc_decl);
-      if (dealloc_code == BUILT_IN_REALLOC)
-	realloc_kind = alloc_kind_t::builtin;
-
-      for (tree amats = DECL_ATTRIBUTES (alloc_decl);
-	   (amats = lookup_attribute ("malloc", amats));
-	   amats = TREE_CHAIN (amats))
-	{
-	  tree args = TREE_VALUE (amats);
-	  if (!args)
-	    continue;
-
-	  tree fndecl = TREE_VALUE (args);
-	  if (!fndecl || !DECL_P (fndecl))
-	    continue;
-
-	  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL)
-	      && dealloc_code == DECL_FUNCTION_CODE (fndecl))
-	    return true;
-	}
-    }
-
-  const bool alloc_builtin = fndecl_built_in_p (alloc_decl, BUILT_IN_NORMAL);
-  alloc_kind_t realloc_dealloc_kind = alloc_kind_t::none;
-
-  /* If DEALLOC_DECL has an internal "*dealloc" attribute scan the list
-     of its associated allocation functions for ALLOC_DECL.
-     If the corresponding ALLOC_DECL is found they're a matching pair,
-     otherwise they're not.
-     With DDATS set to the Deallocator's *Dealloc ATtributes...  */
-  for (tree ddats = DECL_ATTRIBUTES (dealloc_decl);
-       (ddats = lookup_attribute ("*dealloc", ddats));
-       ddats = TREE_CHAIN (ddats))
-    {
-      tree args = TREE_VALUE (ddats);
-      if (!args)
-	continue;
-
-      tree alloc = TREE_VALUE (args);
-      if (!alloc)
-	continue;
-
-      if (alloc == DECL_NAME (dealloc_decl))
-	realloc_kind = alloc_kind_t::user;
-
-      if (DECL_P (alloc))
-	{
-	  gcc_checking_assert (fndecl_built_in_p (alloc, BUILT_IN_NORMAL));
-
-	  switch (DECL_FUNCTION_CODE (alloc))
-	    {
-	    case BUILT_IN_ALIGNED_ALLOC:
-	    case BUILT_IN_CALLOC:
-	    case BUILT_IN_GOMP_ALLOC:
-	    case BUILT_IN_MALLOC:
-	    case BUILT_IN_REALLOC:
-	    case BUILT_IN_STRDUP:
-	    case BUILT_IN_STRNDUP:
-	      realloc_dealloc_kind = alloc_kind_t::builtin;
-	      break;
-	    default:
-	      break;
-	    }
-
-	  if (!alloc_builtin)
-	    continue;
-
-	  if (DECL_FUNCTION_CODE (alloc) != DECL_FUNCTION_CODE (alloc_decl))
-	    continue;
-
-	  return true;
-	}
-
-      if (alloc == DECL_NAME (alloc_decl))
-	return true;
-    }
-
-  if (realloc_kind == alloc_kind_t::none)
-    return false;
-
-  hash_set<tree> common_deallocs;
-  /* Special handling for deallocators.  Iterate over both the allocator's
-     and the reallocator's associated deallocator functions looking for
-     the first one in common.  If one is found, the de/reallocator is
-     a match for the allocator even though the latter isn't directly
-     associated with the former.  This simplifies declarations in system
-     headers.
-     With AMATS set to the Allocator's Malloc ATtributes,
-     and  RMATS set to Reallocator's Malloc ATtributes...  */
-  for (tree amats = DECL_ATTRIBUTES (alloc_decl),
-	 rmats = DECL_ATTRIBUTES (dealloc_decl);
-       (amats = lookup_attribute ("malloc", amats))
-	 || (rmats = lookup_attribute ("malloc", rmats));
-       amats = amats ? TREE_CHAIN (amats) : NULL_TREE,
-	 rmats = rmats ? TREE_CHAIN (rmats) : NULL_TREE)
-    {
-      if (tree args = amats ? TREE_VALUE (amats) : NULL_TREE)
-	if (tree adealloc = TREE_VALUE (args))
-	  {
-	    if (DECL_P (adealloc)
-		&& fndecl_built_in_p (adealloc, BUILT_IN_NORMAL))
-	      {
-		built_in_function fncode = DECL_FUNCTION_CODE (adealloc);
-		if (fncode == BUILT_IN_FREE || fncode == BUILT_IN_REALLOC)
-		  {
-		    if (realloc_kind == alloc_kind_t::builtin)
-		      return true;
-		    alloc_dealloc_kind = alloc_kind_t::builtin;
-		  }
-		continue;
-	      }
-
-	    common_deallocs.add (adealloc);
-	  }
-
-      if (tree args = rmats ? TREE_VALUE (rmats) : NULL_TREE)
-	if (tree ddealloc = TREE_VALUE (args))
-	  {
-	    if (DECL_P (ddealloc)
-		&& fndecl_built_in_p (ddealloc, BUILT_IN_NORMAL))
-	      {
-		built_in_function fncode = DECL_FUNCTION_CODE (ddealloc);
-		if (fncode == BUILT_IN_FREE || fncode == BUILT_IN_REALLOC)
-		  {
-		    if (alloc_dealloc_kind == alloc_kind_t::builtin)
-		      return true;
-		    realloc_dealloc_kind = alloc_kind_t::builtin;
-		  }
-		continue;
-	      }
-
-	    if (common_deallocs.add (ddealloc))
-	      return true;
-	  }
-    }
-
-  /* Succeed only if ALLOC_DECL and the reallocator DEALLOC_DECL share
-     a built-in deallocator.  */
-  return (alloc_dealloc_kind == alloc_kind_t::builtin
-	  && realloc_dealloc_kind == alloc_kind_t::builtin);
-}
-
-/* Return true if DEALLOC_DECL is a function suitable to deallocate
-   objects allocated by the ALLOC call.  */
-
-static bool
-matching_alloc_calls_p (gimple *alloc, tree dealloc_decl)
-{
-  tree alloc_decl = gimple_call_fndecl (alloc);
-  if (!alloc_decl)
-    return true;
-
-  return matching_alloc_calls_p (alloc_decl, dealloc_decl);
-}
-
-/* Diagnose a call EXP to deallocate a pointer referenced by AREF if it
-   includes a nonzero offset.  Such a pointer cannot refer to the beginning
-   of an allocated object.  A negative offset may refer to it only if
-   the target pointer is unknown.  */
-
-static bool
-warn_dealloc_offset (location_t loc, tree exp, const access_ref &aref)
-{
-  if (aref.deref || aref.offrng[0] <= 0 || aref.offrng[1] <= 0)
-    return false;
-
-  tree dealloc_decl = get_callee_fndecl (exp);
-  if (!dealloc_decl)
-    return false;
-
-  if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl)
-      && !DECL_IS_REPLACEABLE_OPERATOR (dealloc_decl))
-    {
-      /* A call to a user-defined operator delete with a pointer plus offset
-	 may be valid if it's returned from an unknown function (i.e., one
-	 that's not operator new).  */
-      if (TREE_CODE (aref.ref) == SSA_NAME)
-	{
-	  gimple *def_stmt = SSA_NAME_DEF_STMT (aref.ref);
-	  if (is_gimple_call (def_stmt))
-	    {
-	      tree alloc_decl = gimple_call_fndecl (def_stmt);
-	      if (!alloc_decl || !DECL_IS_OPERATOR_NEW_P (alloc_decl))
-		return false;
-	    }
-	}
-    }
-
-  char offstr[80];
-  offstr[0] = '\0';
-  if (wi::fits_shwi_p (aref.offrng[0]))
-    {
-      if (aref.offrng[0] == aref.offrng[1]
-	  || !wi::fits_shwi_p (aref.offrng[1]))
-	sprintf (offstr, " %lli",
-		 (long long)aref.offrng[0].to_shwi ());
-      else
-	sprintf (offstr, " [%lli, %lli]",
-		 (long long)aref.offrng[0].to_shwi (),
-		 (long long)aref.offrng[1].to_shwi ());
-    }
-
-  if (!warning_at (loc, OPT_Wfree_nonheap_object,
-		   "%qD called on pointer %qE with nonzero offset%s",
-		   dealloc_decl, aref.ref, offstr))
-    return false;
-
-  if (DECL_P (aref.ref))
-    inform (DECL_SOURCE_LOCATION (aref.ref), "declared here");
-  else if (TREE_CODE (aref.ref) == SSA_NAME)
-    {
-      gimple *def_stmt = SSA_NAME_DEF_STMT (aref.ref);
-      if (is_gimple_call (def_stmt))
-	{
-	  location_t def_loc = gimple_location (def_stmt);
-	  tree alloc_decl = gimple_call_fndecl (def_stmt);
-	  if (alloc_decl)
-	    inform (def_loc,
-		    "returned from %qD", alloc_decl);
-	  else if (tree alloc_fntype = gimple_call_fntype (def_stmt))
-	    inform (def_loc,
-		    "returned from %qT", alloc_fntype);
-	  else
-	    inform (def_loc,  "obtained here");
-	}
-    }
-
-  return true;
-}
-
-/* Issue a warning if a deallocation function such as free, realloc,
-   or C++ operator delete is called with an argument not returned by
-   a matching allocation function such as malloc or the corresponding
-   form of C++ operator new.  */
-
-void
-maybe_emit_free_warning (tree exp)
-{
-  tree fndecl = get_callee_fndecl (exp);
-  if (!fndecl)
-    return;
-
-  unsigned argno = call_dealloc_argno (exp);
-  if ((unsigned) call_expr_nargs (exp) <= argno)
-    return;
-
-  tree ptr = CALL_EXPR_ARG (exp, argno);
-  if (integer_zerop (ptr))
-    return;
-
-  access_ref aref;
-  if (!compute_objsize (ptr, 0, &aref))
-    return;
-
-  tree ref = aref.ref;
-  if (integer_zerop (ref))
-    return;
-
-  tree dealloc_decl = get_callee_fndecl (exp);
-  location_t loc = EXPR_LOCATION (exp);
-
-  if (DECL_P (ref) || EXPR_P (ref))
-    {
-      /* Diagnose freeing a declared object.  */
-      if (aref.ref_declared ()
-	  && warning_at (loc, OPT_Wfree_nonheap_object,
-			 "%qD called on unallocated object %qD",
-			 dealloc_decl, ref))
-	{
-	  loc = (DECL_P (ref)
-		 ? DECL_SOURCE_LOCATION (ref)
-		 : EXPR_LOCATION (ref));
-	  inform (loc, "declared here");
-	  return;
-	}
-
-      /* Diagnose freeing a pointer that includes a positive offset.
-	 Such a pointer cannot refer to the beginning of an allocated
-	 object.  A negative offset may refer to it.  */
-      if (aref.sizrng[0] != aref.sizrng[1]
-	  && warn_dealloc_offset (loc, exp, aref))
-	return;
-    }
-  else if (CONSTANT_CLASS_P (ref))
-    {
-      if (warning_at (loc, OPT_Wfree_nonheap_object,
-		      "%qD called on a pointer to an unallocated "
-		      "object %qE", dealloc_decl, ref))
-	{
-	  if (TREE_CODE (ptr) == SSA_NAME)
-	    {
-	      gimple *def_stmt = SSA_NAME_DEF_STMT (ptr);
-	      if (is_gimple_assign (def_stmt))
-		{
-		  location_t loc = gimple_location (def_stmt);
-		  inform (loc, "assigned here");
-		}
-	    }
-	  return;
-	}
-    }
-  else if (TREE_CODE (ref) == SSA_NAME)
-    {
-      /* Also warn if the pointer argument refers to the result
-	 of an allocation call like alloca or VLA.  */
-      gimple *def_stmt = SSA_NAME_DEF_STMT (ref);
-      if (is_gimple_call (def_stmt))
-	{
-	  bool warned = false;
-	  if (gimple_call_alloc_p (def_stmt))
-	    {
-	      if (matching_alloc_calls_p (def_stmt, dealloc_decl))
-		{
-		  if (warn_dealloc_offset (loc, exp, aref))
-		    return;
-		}
-	      else
-		{
-		  tree alloc_decl = gimple_call_fndecl (def_stmt);
-		  const opt_code opt =
-		    (DECL_IS_OPERATOR_NEW_P (alloc_decl)
-		     || DECL_IS_OPERATOR_DELETE_P (dealloc_decl)
-		     ? OPT_Wmismatched_new_delete
-		     : OPT_Wmismatched_dealloc);
-		  warned = warning_at (loc, opt,
-				       "%qD called on pointer returned "
-				       "from a mismatched allocation "
-				       "function", dealloc_decl);
-		}
-	    }
-	  else if (gimple_call_builtin_p (def_stmt, BUILT_IN_ALLOCA)
-	    	   || gimple_call_builtin_p (def_stmt,
-	    				     BUILT_IN_ALLOCA_WITH_ALIGN))
-	    warned = warning_at (loc, OPT_Wfree_nonheap_object,
-				 "%qD called on pointer to "
-				 "an unallocated object",
-				 dealloc_decl);
-	  else if (warn_dealloc_offset (loc, exp, aref))
-	    return;
-
-	  if (warned)
-	    {
-	      tree fndecl = gimple_call_fndecl (def_stmt);
-	      inform (gimple_location (def_stmt),
-		      "returned from %qD", fndecl);
-	      return;
-	    }
-	}
-      else if (gimple_nop_p (def_stmt))
-	{
-	  ref = SSA_NAME_VAR (ref);
-	  /* Diagnose freeing a pointer that includes a positive offset.  */
-	  if (TREE_CODE (ref) == PARM_DECL
-	      && !aref.deref
-	      && aref.sizrng[0] != aref.sizrng[1]
-	      && aref.offrng[0] > 0 && aref.offrng[1] > 0
-	      && warn_dealloc_offset (loc, exp, aref))
-	    return;
-	}
-    }
-}
-
 /* Fold a call to __builtin_object_size with arguments PTR and OST,
    if possible.  */
diff --git a/gcc/builtins.h b/gcc/builtins.h
index a64ece3f1cd..b580635a09f 100644
--- a/gcc/builtins.h
+++ b/gcc/builtins.h
@@ -149,223 +149,10 @@  extern bool target_char_cst_p (tree t, char *p);
 extern internal_fn associated_internal_fn (tree);
 extern internal_fn replacement_internal_fn (gcall *);
 
-extern bool check_nul_terminated_array (tree, tree, tree = NULL_TREE);
-extern void warn_string_no_nul (location_t, tree, const char *, tree,
-				tree, tree = NULL_TREE, bool = false,
-				const wide_int[2] = NULL);
-extern tree unterminated_array (tree, tree * = NULL, bool * = NULL);
 extern bool builtin_with_linkage_p (tree);
 
-/* Describes recursion limits used by functions that follow use-def
-   chains of SSA_NAMEs.  */
-
-class ssa_name_limit_t
-{
-  bitmap visited;         /* Bitmap of visited SSA_NAMEs.  */
-  unsigned ssa_def_max;   /* Longest chain of SSA_NAMEs to follow.  */
-
-  /* Not copyable or assignable.  */
-  DISABLE_COPY_AND_ASSIGN (ssa_name_limit_t);
-
-public:
-
-  ssa_name_limit_t ()
-    : visited (),
-      ssa_def_max (param_ssa_name_def_chain_limit) { }
-
-  /* Set a bit for the PHI in VISITED and return true if it wasn't
-     already set.  */
-  bool visit_phi (tree);
-  /* Clear a bit for the PHI in VISITED.  */
-  void leave_phi (tree);
-  /* Return false if the SSA_NAME chain length counter has reached
-     the limit, otherwise increment the counter and return true.  */
-  bool next ();
-
-  /* If the SSA_NAME has already been "seen" return a positive value.
-     Otherwise add it to VISITED.  If the SSA_NAME limit has been
-     reached, return a negative value.  Otherwise return zero.  */
-  int next_phi (tree);
-
-  ~ssa_name_limit_t ();
-};
-
-class pointer_query;
-
-/* Describes a reference to an object used in an access.  */
-struct access_ref
-{
-  /* Set the bounds of the reference to at most as many bytes
-     as the first argument or unknown when null, and at least
-     one when the second argument is true unless the first one
-     is a constant zero.  */
-  access_ref (tree = NULL_TREE, bool = false);
-
-  /* Return the PHI node REF refers to or null if it doesn't.  */
-  gphi *phi () const;
-
-  /* Return the object to which REF refers.  */
-  tree get_ref (vec<access_ref> *, access_ref * = NULL, int = 1,
-		ssa_name_limit_t * = NULL, pointer_query * = NULL) const;
-
-  /* Return true if OFFRNG is the constant zero.  */
-  bool offset_zero () const
-  {
-    return offrng[0] == 0 && offrng[1] == 0;
-  }
-
-  /* Return true if OFFRNG is bounded to a subrange of offset values
-     valid for the largest possible object.  */
-  bool offset_bounded () const;
-
-  /* Return the maximum amount of space remaining and if non-null, set
-     argument to the minimum.  */
-  offset_int size_remaining (offset_int * = NULL) const;
-
-  /* Return true if the offset and object size are in range for SIZE.  */
-  bool offset_in_range (const offset_int &) const;
-
-  /* Return true if *THIS is an access to a declared object.  */
-  bool ref_declared () const
-  {
-    return DECL_P (ref) && base0 && deref < 1;
-  }
-
-  /* Set the size range to the maximum.  */
-  void set_max_size_range ()
-  {
-    sizrng[0] = 0;
-    sizrng[1] = wi::to_offset (max_object_size ());
-  }
-
-  /* Add OFF to the offset range.  */
-  void add_offset (const offset_int &off)
-  {
-    add_offset (off, off);
-  }
-
-  /* Add the range [MIN, MAX] to the offset range.  */
-  void add_offset (const offset_int &, const offset_int &);
-
-  /* Add the maximum representable offset to the offset range.  */
-  void add_max_offset ()
-  {
-    offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
-    add_offset (-maxoff - 1, maxoff);
-  }
-
-  /* Issue an informational message describing the target of an access
-     with the given mode.  */
-  void inform_access (access_mode) const;
-
-  /* Reference to the accessed object(s).  */
-  tree ref;
-
-  /* Range of byte offsets into and sizes of the object(s).  */
-  offset_int offrng[2];
-  offset_int sizrng[2];
-  /* The minimum and maximum offset computed.  */
-  offset_int offmax[2];
-  /* Range of the bound of the access: denotes that the access
-     is at least BNDRNG[0] bytes but no more than BNDRNG[1].
-     For string functions the size of the actual access is
-     further constrained by the length of the string.  */
-  offset_int bndrng[2];
-
-  /* Used to fold integer expressions when called from front ends.  */
-  tree (*eval)(tree);
-  /* Positive when REF is dereferenced, negative when its address is
-     taken.  */
-  int deref;
-  /* Set if trailing one-element arrays should be treated as flexible
-     array members.  */
-  bool trail1special;
-  /* Set if valid offsets must start at zero (for declared and allocated
-     objects but not for others referenced by pointers).  */
-  bool base0;
-  /* Set if REF refers to a function array parameter not declared
-     static.  */
-  bool parmarray;
-};
-
-class range_query;
-
-/* Queries and caches compute_objsize results.  */
-class pointer_query
-{
-  DISABLE_COPY_AND_ASSIGN (pointer_query);
-
-public:
-  /* Type of the two-level cache object defined by clients of the class
-     to have pointer SSA_NAMEs cached for speedy access.  */
-  struct cache_type
-  {
-    /* 1-based indices into cache.  */
-    vec<unsigned> indices;
-    /* The cache itself.  */
-    vec<access_ref> access_refs;
-  };
-
-  /* Construct an object with the given Ranger instance and cache.  */
-  explicit pointer_query (range_query * = NULL, cache_type * = NULL);
-
-  /* Retrieve the access_ref for a variable from cache if it's there.  */
-  const access_ref* get_ref (tree, int = 1) const;
-
-  /* Retrieve the access_ref for a variable from cache or compute it.  */
-  bool get_ref (tree, access_ref*, int = 1);
-
-  /* Add an access_ref for the SSA_NAME to the cache.  */
-  void put_ref (tree, const access_ref&, int = 1);
-
-  /* Flush the cache.  */
-  void flush_cache ();
-
-  /* A Ranger instance.  May be null to use global ranges.  */
-  range_query *rvals;
-  /* Cache of SSA_NAMEs.  May be null to disable caching.  */
-  cache_type *var_cache;
-
-  /* Cache performance counters.  */
-  mutable unsigned hits;
-  mutable unsigned misses;
-  mutable unsigned failures;
-  mutable unsigned depth;
-  mutable unsigned max_depth;
-};
-
-/* Describes a pair of references used in an access by built-in
-   functions like memcpy.  */
-struct access_data
-{
-  /* Set the access to at most MAXWRITE and MAXREAD bytes, and
-     at least 1 when MINWRITE or MINREAD, respectively, is set.  */
-  access_data (tree expr, access_mode mode,
-	       tree maxwrite = NULL_TREE, bool minwrite = false,
-	       tree maxread = NULL_TREE, bool minread = false)
-    : call (expr),
-      dst (maxwrite, minwrite), src (maxread, minread), mode (mode) { }
-
-  /* Built-in function call.  */
-  tree call;
-  /* Destination and source of the access.  */
-  access_ref dst, src;
-  /* Read-only for functions like memcmp or strlen, write-only
-     for memset, read-write for memcpy or strcat.  */
-  access_mode mode;
-};
-
-extern tree gimple_call_alloc_size (gimple *, wide_int[2] = NULL,
-				    range_query * = NULL);
-extern tree gimple_parm_array_size (tree, wide_int[2], bool * = NULL);
-
-extern tree compute_objsize (tree, int, access_ref *, range_query * = NULL);
-/* Legacy/transitional API.  Should not be used in new code.  */
-extern tree compute_objsize (tree, int, access_ref *, pointer_query *);
-extern tree compute_objsize (tree, int, tree * = NULL, tree * = NULL,
-			     range_query * = NULL);
+class access_data;
 extern bool check_access (tree, tree, tree, tree, tree,
 			  access_mode, const access_data * = NULL);
-extern void maybe_emit_free_warning (tree);
 
 #endif /* GCC_BUILTINS_H */
diff --git a/gcc/calls.c b/gcc/calls.c
index d2413a280cf..86841351c6e 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -60,6 +60,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "gimple-fold.h"
 #include "attr-fnspec.h"
 #include "value-query.h"
+#include "pointer-query.h"
 
 #include "tree-pretty-print.h"
 
@@ -2628,10 +2629,6 @@  initialize_argument_information (int num_actuals ATTRIBUTE_UNUSED,
 
   /* Check attribute access arguments.  */
   maybe_warn_rdwr_sizes (&rdwr_idx, fndecl, fntype, exp);
-
-  /* Check calls to operator new for mismatched forms and attempts
-     to deallocate unallocated objects.  */
-  maybe_emit_free_warning (exp);
 }
 
 /* Update ARGS_SIZE to contain the total size for the argument block.
diff --git a/gcc/cp/init.c b/gcc/cp/init.c
index d47e405e745..229c84e1d74 100644
--- a/gcc/cp/init.c
+++ b/gcc/cp/init.c
@@ -34,7 +34,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "attribs.h"
 #include "asan.h"
 #include "stor-layout.h"
-#include "builtins.h"
+#include "pointer-query.h"
 
 static bool begin_init_stmts (tree *, tree *);
 static tree finish_init_stmts (bool, tree, tree);
diff --git a/gcc/gimple-array-bounds.cc b/gcc/gimple-array-bounds.cc
index 8dfd6f9500a..0f212aba191 100644
--- a/gcc/gimple-array-bounds.cc
+++ b/gcc/gimple-array-bounds.cc
@@ -37,7 +37,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "domwalk.h"
 #include "tree-cfg.h"
 #include "attribs.h"
-#include "builtins.h"
+#include "pointer-query.h"
 
 // This purposely returns a value_range, not a value_range_equiv, to
 // break the dependency on equivalences for this pass.
diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c
index 1401092aa9b..e66d1cfd2cf 100644
--- a/gcc/gimple-fold.c
+++ b/gcc/gimple-fold.c
@@ -30,6 +30,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "ssa.h"
 #include "cgraph.h"
 #include "gimple-pretty-print.h"
+#include "gimple-ssa-warn-access.h"
 #include "gimple-ssa-warn-restrict.h"
 #include "fold-const.h"
 #include "stmt.h"
diff --git a/gcc/gimple-ssa-sprintf.c b/gcc/gimple-ssa-sprintf.c
index f38fb03f068..8e90b7cfc43 100644
--- a/gcc/gimple-ssa-sprintf.c
+++ b/gcc/gimple-ssa-sprintf.c
@@ -71,6 +71,7 @@  along with GCC; see the file COPYING3.  If not see
 
 #include "attribs.h"
 #include "builtins.h"
+#include "pointer-query.h"
 #include "stor-layout.h"
 
 #include "realmpfr.h"
diff --git a/gcc/gimple-ssa-warn-access.cc b/gcc/gimple-ssa-warn-access.cc
new file mode 100644
index 00000000000..1cc80f0984b
--- /dev/null
+++ b/gcc/gimple-ssa-warn-access.cc
@@ -0,0 +1,1769 @@ 
+/* Pass to detect and issue warnings for invalid accesses, including
+   invalid or mismatched allocation/deallocation calls.
+
+   Copyright (C) 2020-2021 Free Software Foundation, Inc.
+   Contributed by Martin Sebor <msebor@redhat.com>.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it under
+   the terms of the GNU General Public License as published by the Free
+   Software Foundation; either version 3, or (at your option) any later
+   version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or
+   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+   for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "tree-pass.h"
+#include "builtins.h"
+#include "ssa.h"
+#include "gimple-pretty-print.h"
+#include "gimple-ssa-warn-access.h"
+#include "gimple-ssa-warn-restrict.h"
+#include "diagnostic-core.h"
+#include "fold-const.h"
+#include "gimple-fold.h"
+#include "gimple-iterator.h"
+#include "tree-dfa.h"
+#include "tree-ssa.h"
+#include "tree-cfg.h"
+#include "tree-object-size.h"
+#include "calls.h"
+#include "cfgloop.h"
+#include "intl.h"
+#include "gimple-range.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "demangle.h"
+#include "pointer-query.h"
+
+/* For a call EXPR at LOC to a function FNAME that expects a string
+   in the argument ARG, issue a diagnostic due to its being called
+   with an argument that is a character array with no terminating
+   NUL.  SIZE is the size of the array (exact when EXACT is set),
+   and BNDRNG the range of characters within which the NUL is
+   expected.  Either EXPR or FNAME may be null but not both.  SIZE
+   may be null when BNDRNG is null.  */
+
+void
+warn_string_no_nul (location_t loc, tree expr, const char *fname,
+		    tree arg, tree decl, tree size /* = NULL_TREE */,
+		    bool exact /* = false */,
+		    const wide_int bndrng[2] /* = NULL */)
+{
+  const opt_code opt = OPT_Wstringop_overread;
+  if ((expr && warning_suppressed_p (expr, opt))
+      || warning_suppressed_p (arg, opt))
+    return;
+
+  loc = expansion_point_location_if_in_system_header (loc);
+  bool warned;
+
+  /* Format the bound range as a string to keep the number of messages
+     from exploding.  */
+  char bndstr[80];
+  *bndstr = 0;
+  if (bndrng)
+    {
+      if (bndrng[0] == bndrng[1])
+	sprintf (bndstr, "%llu", (unsigned long long) bndrng[0].to_uhwi ());
+      else
+	sprintf (bndstr, "[%llu, %llu]",
+		 (unsigned long long) bndrng[0].to_uhwi (),
+		 (unsigned long long) bndrng[1].to_uhwi ());
+    }
+
+  const tree maxobjsize = max_object_size ();
+  const wide_int maxsiz = wi::to_wide (maxobjsize);
+  if (expr)
+    {
+      tree func = get_callee_fndecl (expr);
+      if (bndrng)
+	{
+	  if (wi::ltu_p (maxsiz, bndrng[0]))
+	    warned = warning_at (loc, opt,
+				 "%qD specified bound %s exceeds "
+				 "maximum object size %E",
+				 func, bndstr, maxobjsize);
+	  else
+	    {
+	      bool maybe = wi::to_wide (size) == bndrng[0];
+	      warned = warning_at (loc, opt,
+				   exact
+				   ? G_("%qD specified bound %s exceeds "
+					"the size %E of unterminated array")
+				   : (maybe
+				      ? G_("%qD specified bound %s may "
+					   "exceed the size of at most %E "
+					   "of unterminated array")
+				      : G_("%qD specified bound %s exceeds "
+					   "the size of at most %E "
+					   "of unterminated array")),
+				   func, bndstr, size);
+	    }
+	}
+      else
+	warned = warning_at (loc, opt,
+			     "%qD argument missing terminating nul",
+			     func);
+    }
+  else
+    {
+      if (bndrng)
+	{
+	  if (wi::ltu_p (maxsiz, bndrng[0]))
+	    warned = warning_at (loc, opt,
+				 "%qs specified bound %s exceeds "
+				 "maximum object size %E",
+				 fname, bndstr, maxobjsize);
+	  else
+	    {
+	      bool maybe = wi::to_wide (size) == bndrng[0];
+	      warned = warning_at (loc, opt,
+				   exact
+				   ? G_("%qs specified bound %s exceeds "
+					"the size %E of unterminated array")
+				   : (maybe
+				      ? G_("%qs specified bound %s may "
+					   "exceed the size of at most %E "
+					   "of unterminated array")
+				      : G_("%qs specified bound %s exceeds "
+					   "the size of at most %E "
+					   "of unterminated array")),
+				   fname, bndstr, size);
+	    }
+	}
+      else
+	warned = warning_at (loc, opt,
+			     "%qs argument missing terminating nul",
+			     fname);
+    }
+
+  if (warned)
+    {
+      inform (DECL_SOURCE_LOCATION (decl),
+	      "referenced argument declared here");
+      suppress_warning (arg, opt);
+      if (expr)
+	suppress_warning (expr, opt);
+    }
+}
+
+/* For a call EXPR (which may be null) that expects a string argument
+   SRC, return false if SRC is a character array with no terminating
+   NUL.  When nonnull, BOUND is the number of characters within which
+   the terminating NUL is expected.  When EXPR is nonnull, also issue
+   a warning.  */
+
+bool
+check_nul_terminated_array (tree expr, tree src,
+			    tree bound /* = NULL_TREE */)
+{
+  /* The constant size of the array SRC points to.  The actual size
+     may be less if EXACT is false, but not more.  */
+  tree size;
+  /* True if SIZE is exact, i.e., SRC involves no non-constant offset
+     into the array.  */
+  bool exact;
+  /* The unterminated constant array SRC points to.  */
+  tree nonstr = unterminated_array (src, &size, &exact);
+  if (!nonstr)
+    return true;
+
+  /* NONSTR refers to the non-nul terminated constant array and SIZE
+     is the constant size of the array in bytes.  EXACT is true when
+     SIZE is exact.  */
+
+  wide_int bndrng[2];
+  if (bound)
+    {
+      value_range r;
+
+      get_global_range_query ()->range_of_expr (r, bound);
+
+      if (r.kind () != VR_RANGE)
+	return true;
+
+      bndrng[0] = r.lower_bound ();
+      bndrng[1] = r.upper_bound ();
+
+      if (exact)
+	{
+	  if (wi::leu_p (bndrng[0], wi::to_wide (size)))
+	    return true;
+	}
+      else if (wi::lt_p (bndrng[0], wi::to_wide (size), UNSIGNED))
+	return true;
+    }
+
+  if (expr)
+    warn_string_no_nul (EXPR_LOCATION (expr), expr, NULL, src, nonstr,
+			size, exact, bound ? bndrng : NULL);
+
+  return false;
+}
+
+/* If EXP refers to an unterminated constant character array return
+   the declaration of the object of which the array is a member or
+   element and if SIZE is not null, set *SIZE to the size of
+   the unterminated array and set *EXACT if the size is exact or
+   clear it otherwise.  Otherwise return null.  */
+
+tree
+unterminated_array (tree exp, tree *size /* = NULL */, bool *exact /* = NULL */)
+{
+  /* C_STRLEN will return NULL and set DECL in the info
+     structure if EXP references an unterminated array.  */
+  c_strlen_data lendata = { };
+  tree len = c_strlen (exp, 1, &lendata);
+  if (len == NULL_TREE && lendata.minlen && lendata.decl)
+     {
+       if (size)
+	{
+	  len = lendata.minlen;
+	  if (lendata.off)
+	    {
+	      /* Constant offsets are already accounted for in LENDATA.MINLEN,
+		 but not in a SSA_NAME + CST expression.  */
+	      if (TREE_CODE (lendata.off) == INTEGER_CST)
+		*exact = true;
+	      else if (TREE_CODE (lendata.off) == PLUS_EXPR
+		       && TREE_CODE (TREE_OPERAND (lendata.off, 1)) == INTEGER_CST)
+		{
+		  /* Subtract the offset from the size of the array.  */
+		  *exact = false;
+		  tree temp = TREE_OPERAND (lendata.off, 1);
+		  temp = fold_convert (ssizetype, temp);
+		  len = fold_build2 (MINUS_EXPR, ssizetype, len, temp);
+		}
+	      else
+		*exact = false;
+	    }
+	  else
+	    *exact = true;
+
+	  *size = len;
+	}
+       return lendata.decl;
+     }
+
+  return NULL_TREE;
+}
+
+/* Issue a warning OPT for a bounded call EXP with a bound in BNDRNG
+   accessing an object of SIZE bytes.  Return true if a warning has
+   been issued.  */
+
+bool
+maybe_warn_for_bound (opt_code opt, location_t loc, tree exp, tree func,
+		      tree bndrng[2], tree size,
+		      const access_data *pad /* = NULL */)
+{
+  if (!bndrng[0] || warning_suppressed_p (exp, opt))
+    return false;
+
+  tree maxobjsize = max_object_size ();
+
+  bool warned = false;
+
+  if (opt == OPT_Wstringop_overread)
+    {
+      bool maybe = pad && pad->src.phi ();
+
+      if (tree_int_cst_lt (maxobjsize, bndrng[0]))
+	{
+	  if (bndrng[0] == bndrng[1])
+	    warned = (func
+		      ? warning_at (loc, opt,
+				    (maybe
+				     ? G_("%qD specified bound %E may "
+					  "exceed maximum object size %E")
+				     : G_("%qD specified bound %E "
+					  "exceeds maximum object size %E")),
+				    func, bndrng[0], maxobjsize)
+		      : warning_at (loc, opt,
+				    (maybe
+				     ? G_("specified bound %E may "
+					  "exceed maximum object size %E")
+				     : G_("specified bound %E "
+					  "exceeds maximum object size %E")),
+				    bndrng[0], maxobjsize));
+	  else
+	    warned = (func
+		      ? warning_at (loc, opt,
+				    (maybe
+				     ? G_("%qD specified bound [%E, %E] may "
+					  "exceed maximum object size %E")
+				     : G_("%qD specified bound [%E, %E] "
+					  "exceeds maximum object size %E")),
+				    func,
+				    bndrng[0], bndrng[1], maxobjsize)
+		      : warning_at (loc, opt,
+				    (maybe
+				     ? G_("specified bound [%E, %E] may "
+					  "exceed maximum object size %E")
+				     : G_("specified bound [%E, %E] "
+					  "exceeds maximum object size %E")),
+				    bndrng[0], bndrng[1], maxobjsize));
+	}
+      else if (!size || tree_int_cst_le (bndrng[0], size))
+	return false;
+      else if (tree_int_cst_equal (bndrng[0], bndrng[1]))
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD specified bound %E may exceed "
+				      "source size %E")
+				 : G_("%qD specified bound %E exceeds "
+				      "source size %E")),
+				func, bndrng[0], size)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("specified bound %E may exceed "
+				      "source size %E")
+				 : G_("specified bound %E exceeds "
+				      "source size %E")),
+				bndrng[0], size));
+      else
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD specified bound [%E, %E] may "
+				      "exceed source size %E")
+				 : G_("%qD specified bound [%E, %E] exceeds "
+				      "source size %E")),
+				func, bndrng[0], bndrng[1], size)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("specified bound [%E, %E] may exceed "
+				      "source size %E")
+				 : G_("specified bound [%E, %E] exceeds "
+				      "source size %E")),
+				bndrng[0], bndrng[1], size));
+      if (warned)
+	{
+	  if (pad && pad->src.ref)
+	    {
+	      if (DECL_P (pad->src.ref))
+		inform (DECL_SOURCE_LOCATION (pad->src.ref),
+			"source object declared here");
+	      else if (EXPR_HAS_LOCATION (pad->src.ref))
+		inform (EXPR_LOCATION (pad->src.ref),
+			"source object allocated here");
+	    }
+	  suppress_warning (exp, opt);
+	}
+
+      return warned;
+    }
+
+  bool maybe = pad && pad->dst.phi ();
+  if (tree_int_cst_lt (maxobjsize, bndrng[0]))
+    {
+      if (bndrng[0] == bndrng[1])
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD specified size %E may "
+				      "exceed maximum object size %E")
+				 : G_("%qD specified size %E "
+				      "exceeds maximum object size %E")),
+				func, bndrng[0], maxobjsize)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("specified size %E may exceed "
+				      "maximum object size %E")
+				 : G_("specified size %E exceeds "
+				      "maximum object size %E")),
+				bndrng[0], maxobjsize));
+      else
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD specified size between %E and %E "
+				      "may exceed maximum object size %E")
+				 : G_("%qD specified size between %E and %E "
+				      "exceeds maximum object size %E")),
+				func, bndrng[0], bndrng[1], maxobjsize)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("specified size between %E and %E "
+				      "may exceed maximum object size %E")
+				 : G_("specified size between %E and %E "
+				      "exceeds maximum object size %E")),
+				bndrng[0], bndrng[1], maxobjsize));
+    }
+  else if (!size || tree_int_cst_le (bndrng[0], size))
+    return false;
+  else if (tree_int_cst_equal (bndrng[0], bndrng[1]))
+    warned = (func
+	      ? warning_at (loc, opt,
+			    (maybe
+			     ? G_("%qD specified bound %E may exceed "
+				  "destination size %E")
+			     : G_("%qD specified bound %E exceeds "
+				  "destination size %E")),
+			    func, bndrng[0], size)
+	      : warning_at (loc, opt,
+			    (maybe
+			     ? G_("specified bound %E may exceed "
+				  "destination size %E")
+			     : G_("specified bound %E exceeds "
+				  "destination size %E")),
+			    bndrng[0], size));
+  else
+    warned = (func
+	      ? warning_at (loc, opt,
+			    (maybe
+			     ? G_("%qD specified bound [%E, %E] may exceed "
+				  "destination size %E")
+			     : G_("%qD specified bound [%E, %E] exceeds "
+				  "destination size %E")),
+			    func, bndrng[0], bndrng[1], size)
+	      : warning_at (loc, opt,
+			    (maybe
+			     ? G_("specified bound [%E, %E] may exceed "
+				  "destination size %E")
+			     : G_("specified bound [%E, %E] exceeds "
+				  "destination size %E")),
+			    bndrng[0], bndrng[1], size));
+
+  if (warned)
+    {
+      if (pad && pad->dst.ref)
+	{
+	  if (DECL_P (pad->dst.ref))
+	    inform (DECL_SOURCE_LOCATION (pad->dst.ref),
+		    "destination object declared here");
+	  else if (EXPR_HAS_LOCATION (pad->dst.ref))
+	    inform (EXPR_LOCATION (pad->dst.ref),
+		    "destination object allocated here");
+	}
+      suppress_warning (exp, opt);
+    }
+
+  return warned;
+}
+
+/* For an expression EXP issue an access warning controlled by option
+   OPT for an access in RANGE bytes to a region of SIZE bytes.  WRITE
+   is true for a write access, READ for a read access, and neither for
+   a call that may or may not perform an access but for which the range
+   is expected to be valid.
+   Return true when a warning has been issued.  */
+
+static bool
+warn_for_access (location_t loc, tree func, tree exp, int opt, tree range[2],
+		 tree size, bool write, bool read, bool maybe)
+{
+  bool warned = false;
+
+  if (write && read)
+    {
+      if (tree_int_cst_equal (range[0], range[1]))
+	warned = (func
+		  ? warning_n (loc, opt, tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("%qD may access %E byte in a region "
+				     "of size %E")
+				: G_("%qD accessing %E byte in a region "
+				     "of size %E")),
+				(maybe
+				 ? G_("%qD may access %E bytes in a region "
+				      "of size %E")
+				 : G_("%qD accessing %E bytes in a region "
+				      "of size %E")),
+			       func, range[0], size)
+		  : warning_n (loc, opt, tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("may access %E byte in a region "
+				     "of size %E")
+				: G_("accessing %E byte in a region "
+				     "of size %E")),
+			       (maybe
+				? G_("may access %E bytes in a region "
+				     "of size %E")
+				: G_("accessing %E bytes in a region "
+				     "of size %E")),
+			       range[0], size));
+      else if (tree_int_cst_sign_bit (range[1]))
+	{
+	  /* Avoid printing the upper bound if it's invalid.  */
+	  warned = (func
+		    ? warning_at (loc, opt,
+				  (maybe
+				   ? G_("%qD may access %E or more bytes "
+					"in a region of size %E")
+				   : G_("%qD accessing %E or more bytes "
+					"in a region of size %E")),
+				  func, range[0], size)
+		    : warning_at (loc, opt,
+				  (maybe
+				   ? G_("may access %E or more bytes "
+					"in a region of size %E")
+				   : G_("accessing %E or more bytes "
+					"in a region of size %E")),
+				  range[0], size));
+	}
+      else
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD may access between %E and %E "
+				      "bytes in a region of size %E")
+				 : G_("%qD accessing between %E and %E "
+				      "bytes in a region of size %E")),
+				func, range[0], range[1], size)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("may access between %E and %E bytes "
+				      "in a region of size %E")
+				 : G_("accessing between %E and %E bytes "
+				      "in a region of size %E")),
+				range[0], range[1], size));
+      return warned;
+    }
+
+  if (write)
+    {
+      if (tree_int_cst_equal (range[0], range[1]))
+	warned = (func
+		  ? warning_n (loc, opt, tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("%qD may write %E byte into a region "
+				     "of size %E")
+				: G_("%qD writing %E byte into a region "
+				     "of size %E overflows the destination")),
+			       (maybe
+				? G_("%qD may write %E bytes into a region "
+				     "of size %E")
+				: G_("%qD writing %E bytes into a region "
+				     "of size %E overflows the destination")),
+			       func, range[0], size)
+		  : warning_n (loc, opt, tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("may write %E byte into a region "
+				     "of size %E")
+				: G_("writing %E byte into a region "
+				     "of size %E overflows the destination")),
+			       (maybe
+				? G_("may write %E bytes into a region "
+				     "of size %E")
+				: G_("writing %E bytes into a region "
+				     "of size %E overflows the destination")),
+			       range[0], size));
+      else if (tree_int_cst_sign_bit (range[1]))
+	{
+	  /* Avoid printing the upper bound if it's invalid.  */
+	  warned = (func
+		    ? warning_at (loc, opt,
+				  (maybe
+				   ? G_("%qD may write %E or more bytes "
+					"into a region of size %E")
+				   : G_("%qD writing %E or more bytes "
+					"into a region of size %E overflows "
+					"the destination")),
+				  func, range[0], size)
+		    : warning_at (loc, opt,
+				  (maybe
+				   ? G_("may write %E or more bytes into "
+					"a region of size %E")
+				   : G_("writing %E or more bytes into "
+					"a region of size %E overflows "
+					"the destination")),
+				  range[0], size));
+	}
+      else
+	warned = (func
+		  ? warning_at (loc, opt,
+				(maybe
+				 ? G_("%qD may write between %E and %E bytes "
+				      "into a region of size %E")
+				 : G_("%qD writing between %E and %E bytes "
+				      "into a region of size %E overflows "
+				      "the destination")),
+				func, range[0], range[1], size)
+		  : warning_at (loc, opt,
+				(maybe
+				 ? G_("may write between %E and %E bytes "
+				      "into a region of size %E")
+				 : G_("writing between %E and %E bytes "
+				      "into a region of size %E overflows "
+				      "the destination")),
+				range[0], range[1], size));
+      return warned;
+    }
+
+  if (read)
+    {
+      if (tree_int_cst_equal (range[0], range[1]))
+	warned = (func
+		  ? warning_n (loc, OPT_Wstringop_overread,
+			       tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("%qD may read %E byte from a region "
+				     "of size %E")
+				: G_("%qD reading %E byte from a region "
+				     "of size %E")),
+			       (maybe
+				? G_("%qD may read %E bytes from a region "
+				     "of size %E")
+				: G_("%qD reading %E bytes from a region "
+				     "of size %E")),
+			       func, range[0], size)
+		  : warning_n (loc, OPT_Wstringop_overread,
+			       tree_to_uhwi (range[0]),
+			       (maybe
+				? G_("may read %E byte from a region "
+				     "of size %E")
+				: G_("reading %E byte from a region "
+				     "of size %E")),
+			       (maybe
+				? G_("may read %E bytes from a region "
+				     "of size %E")
+				: G_("reading %E bytes from a region "
+				     "of size %E")),
+			       range[0], size));
+      else if (tree_int_cst_sign_bit (range[1]))
+	{
+	  /* Avoid printing the upper bound if it's invalid.  */
+	  warned = (func
+		    ? warning_at (loc, OPT_Wstringop_overread,
+				  (maybe
+				   ? G_("%qD may read %E or more bytes "
+					"from a region of size %E")
+				   : G_("%qD reading %E or more bytes "
+					"from a region of size %E")),
+				  func, range[0], size)
+		    : warning_at (loc, OPT_Wstringop_overread,
+				  (maybe
+				   ? G_("may read %E or more bytes "
+					"from a region of size %E")
+				   : G_("reading %E or more bytes "
+					"from a region of size %E")),
+				  range[0], size));
+	}
+      else
+	warned = (func
+		  ? warning_at (loc, OPT_Wstringop_overread,
+				(maybe
+				 ? G_("%qD may read between %E and %E bytes "
+				      "from a region of size %E")
+				 : G_("%qD reading between %E and %E bytes "
+				      "from a region of size %E")),
+				func, range[0], range[1], size)
+		  : warning_at (loc, OPT_Wstringop_overread,
+				(maybe
+				 ? G_("may read between %E and %E bytes "
+				      "from a region of size %E")
+				 : G_("reading between %E and %E bytes "
+				      "from a region of size %E")),
+				range[0], range[1], size));
+
+      if (warned)
+	suppress_warning (exp, OPT_Wstringop_overread);
+
+      return warned;
+    }
+
+  if (tree_int_cst_equal (range[0], range[1]))
+    warned = (func
+	      ? warning_n (loc, OPT_Wstringop_overread,
+			   tree_to_uhwi (range[0]),
+			   "%qD expecting %E byte in a region of size %E",
+			   "%qD expecting %E bytes in a region of size %E",
+			   func, range[0], size)
+	      : warning_n (loc, OPT_Wstringop_overread,
+			   tree_to_uhwi (range[0]),
+			   "expecting %E byte in a region of size %E",
+			   "expecting %E bytes in a region of size %E",
+			   range[0], size));
+  else if (tree_int_cst_sign_bit (range[1]))
+    {
+      /* Avoid printing the upper bound if it's invalid.  */
+      warned = (func
+		? warning_at (loc, OPT_Wstringop_overread,
+			      "%qD expecting %E or more bytes in a region "
+			      "of size %E",
+			      func, range[0], size)
+		: warning_at (loc, OPT_Wstringop_overread,
+			      "expecting %E or more bytes in a region "
+			      "of size %E",
+			      range[0], size));
+    }
+  else
+    warned = (func
+	      ? warning_at (loc, OPT_Wstringop_overread,
+			    "%qD expecting between %E and %E bytes in "
+			    "a region of size %E",
+			    func, range[0], range[1], size)
+	      : warning_at (loc, OPT_Wstringop_overread,
+			    "expecting between %E and %E bytes in "
+			    "a region of size %E",
+			    range[0], range[1], size));
+
+  if (warned)
+    suppress_warning (exp, OPT_Wstringop_overread);
+
+  return warned;
+}
+
+/* Helper to set RANGE to the range of BOUND if it's nonnull, bounded
+   by BNDRNG if nonnull and valid.  */
+
+void
+get_size_range (tree bound, tree range[2], const offset_int bndrng[2])
+{
+  if (bound)
+    get_size_range (bound, range);
+
+  if (!bndrng || (bndrng[0] == 0 && bndrng[1] == HOST_WIDE_INT_M1U))
+    return;
+
+  if (range[0] && TREE_CODE (range[0]) == INTEGER_CST)
+    {
+      offset_int r[] =
+	{ wi::to_offset (range[0]), wi::to_offset (range[1]) };
+      if (r[0] < bndrng[0])
+	range[0] = wide_int_to_tree (sizetype, bndrng[0]);
+      if (bndrng[1] < r[1])
+	range[1] = wide_int_to_tree (sizetype, bndrng[1]);
+    }
+  else
+    {
+      range[0] = wide_int_to_tree (sizetype, bndrng[0]);
+      range[1] = wide_int_to_tree (sizetype, bndrng[1]);
+    }
+}
+
+/* Try to verify that the sizes and lengths of the arguments to a string
+   manipulation function given by EXP are within valid bounds and that
+   the operation does not lead to buffer overflow or read past the end.
+   Arguments other than EXP may be null.  When non-null, the arguments
+   have the following meaning:
+   DST is the destination of a copy call or NULL otherwise.
+   SRC is the source of a copy call or NULL otherwise.
+   DSTWRITE is the number of bytes written into the destination obtained
+   from the user-supplied size argument to the function (such as in
+   memcpy (DST, SRC, DSTWRITE) or strncpy (DST, SRC, DSTWRITE)).
+   MAXREAD is the user-supplied bound on the length of the source sequence
+   (such as in strncat (DST, SRC, N)).  It specifies the upper limit
+   on the number of bytes to write.  If NULL, it's taken to be the same
+   as DSTWRITE.
+   SRCSTR is the source string (such as in strcpy(DST, SRC)) when the
+   expression EXP is a string function call (as opposed to a memory call
+   like memcpy).  As an exception, SRCSTR can also be an integer denoting
+   the precomputed size of the source string or object (for functions like
+   memcpy).
+   DSTSIZE is the size of the destination object.
+
+   When DSTWRITE is null MAXREAD is checked to verify that it doesn't
+   exceed SIZE_MAX.
+
+   WRITE is true for write accesses, READ is true for reads.  Both are
+   false for simple size checks in calls to functions that neither read
+   from nor write to the region.
+
+   When nonnull, PAD points to a more detailed description of the access.
+
+   If the call is successfully verified as safe return true, otherwise
+   return false.  */
+
+bool
+check_access (tree exp, tree dstwrite,
+	      tree maxread, tree srcstr, tree dstsize,
+	      access_mode mode, const access_data *pad /* = NULL */)
+{
+  /* The size of the largest object is half the address space, or
+     PTRDIFF_MAX.  (This is way too permissive.)  */
+  tree maxobjsize = max_object_size ();
+
+  /* Either the approximate/minimum length of the source string for
+     string functions or the size of the source object for raw memory
+     functions.  */
+  tree slen = NULL_TREE;
+
+  /* The range of the access in bytes; first set to the write access
+     for functions that write and then read for those that also (or
+     just) read.  */
+  tree range[2] = { NULL_TREE, NULL_TREE };
+
+  /* Set to true when the exact number of bytes written by a string
+     function like strcpy is not known and the only thing that is
+     known is that it must be at least one (for the terminating nul).  */
+  bool at_least_one = false;
+  if (srcstr)
+    {
+      /* SRCSTR is normally a pointer to string but as a special case
+	 it can be an integer denoting the length of a string.  */
+      if (POINTER_TYPE_P (TREE_TYPE (srcstr)))
+	{
+	  if (!check_nul_terminated_array (exp, srcstr, maxread))
+	    return false;
+	  /* Try to determine the range of lengths the source string
+	     refers to.  If it can be determined and is less than
+	     the upper bound given by MAXREAD add one to it for
+	     the terminating nul.  Otherwise, set it to one for
+	     the same reason, or to MAXREAD as appropriate.  */
+	  c_strlen_data lendata = { };
+	  get_range_strlen (srcstr, &lendata, /* eltsize = */ 1);
+	  range[0] = lendata.minlen;
+	  range[1] = lendata.maxbound ? lendata.maxbound : lendata.maxlen;
+	  if (range[0]
+	      && TREE_CODE (range[0]) == INTEGER_CST
+	      && TREE_CODE (range[1]) == INTEGER_CST
+	      && (!maxread || TREE_CODE (maxread) == INTEGER_CST))
+	    {
+	      if (maxread && tree_int_cst_le (maxread, range[0]))
+		range[0] = range[1] = maxread;
+	      else
+		range[0] = fold_build2 (PLUS_EXPR, size_type_node,
+					range[0], size_one_node);
+
+	      if (maxread && tree_int_cst_le (maxread, range[1]))
+		range[1] = maxread;
+	      else if (!integer_all_onesp (range[1]))
+		range[1] = fold_build2 (PLUS_EXPR, size_type_node,
+					range[1], size_one_node);
+
+	      slen = range[0];
+	    }
+	  else
+	    {
+	      at_least_one = true;
+	      slen = size_one_node;
+	    }
+	}
+      else
+	slen = srcstr;
+    }
+
+  if (!dstwrite && !maxread)
+    {
+      /* When the only available piece of data is the object size
+	 there is nothing to do.  */
+      if (!slen)
+	return true;
+
+      /* Otherwise, when the length of the source sequence is known
+	 (as with strlen), set DSTWRITE to it.  */
+      if (!range[0])
+	dstwrite = slen;
+    }
+
+  if (!dstsize)
+    dstsize = maxobjsize;
+
+  /* Set RANGE to that of DSTWRITE if non-null, bounded by PAD->DST.BNDRNG
+     if valid.  */
+  get_size_range (dstwrite, range, pad ? pad->dst.bndrng : NULL);
+
+  tree func = get_callee_fndecl (exp);
+  /* Read vs write access by built-ins can be determined from the const
+     qualifiers on the pointer argument.  In the absence of attribute
+     access, non-const qualified pointer arguments to user-defined
+     functions are assumed to both read and write the objects.  */
+  const bool builtin = func ? fndecl_built_in_p (func) : false;
+
+  /* First check the number of bytes to be written against the maximum
+     object size.  */
+  if (range[0]
+      && TREE_CODE (range[0]) == INTEGER_CST
+      && tree_int_cst_lt (maxobjsize, range[0]))
+    {
+      location_t loc = EXPR_LOCATION (exp);
+      maybe_warn_for_bound (OPT_Wstringop_overflow_, loc, exp, func, range,
+			    NULL_TREE, pad);
+      return false;
+    }
+
+  /* The number of bytes to write is "exact" if DSTWRITE is non-null,
+     constant, and in range of unsigned HOST_WIDE_INT.  */
+  bool exactwrite = dstwrite && tree_fits_uhwi_p (dstwrite);
+
+  /* Next check the number of bytes to be written against the destination
+     object size.  */
+  if (range[0] || !exactwrite || integer_all_onesp (dstwrite))
+    {
+      if (range[0]
+	  && TREE_CODE (range[0]) == INTEGER_CST
+	  && ((tree_fits_uhwi_p (dstsize)
+	       && tree_int_cst_lt (dstsize, range[0]))
+	      || (dstwrite
+		  && tree_fits_uhwi_p (dstwrite)
+		  && tree_int_cst_lt (dstwrite, range[0]))))
+	{
+	  const opt_code opt = OPT_Wstringop_overflow_;
+	  if (warning_suppressed_p (exp, opt)
+	      || (pad && pad->dst.ref
+		  && warning_suppressed_p (pad->dst.ref, opt)))
+	    return false;
+
+	  location_t loc = EXPR_LOCATION (exp);
+	  bool warned = false;
+	  if (dstwrite == slen && at_least_one)
+	    {
+	      /* This is a call to strcpy with a destination of 0 size
+		 and a source of unknown length.  The call will write
+		 at least one byte past the end of the destination.  */
+	      warned = (func
+			? warning_at (loc, opt,
+				      "%qD writing %E or more bytes into "
+				      "a region of size %E overflows "
+				      "the destination",
+				      func, range[0], dstsize)
+			: warning_at (loc, opt,
+				      "writing %E or more bytes into "
+				      "a region of size %E overflows "
+				      "the destination",
+				      range[0], dstsize));
+	    }
+	  else
+	    {
+	      const bool read
+		= mode == access_read_only || mode == access_read_write;
+	      const bool write
+		= mode == access_write_only || mode == access_read_write;
+	      const bool maybe = pad && pad->dst.parmarray;
+	      warned = warn_for_access (loc, func, exp,
+					OPT_Wstringop_overflow_,
+					range, dstsize,
+					write, read && !builtin, maybe);
+	    }
+
+	  if (warned)
+	    {
+	      suppress_warning (exp, OPT_Wstringop_overflow_);
+	      if (pad)
+		pad->dst.inform_access (pad->mode);
+	    }
+
+	  /* Return error when an overflow has been detected.  */
+	  return false;
+	}
+    }
+
+  /* Check the maximum length of the source sequence against the size
+     of the destination object if known, or against the maximum size
+     of an object.  */
+  if (maxread)
+    {
+      /* Set RANGE to that of MAXREAD, bounded by PAD->SRC.BNDRNG if
+	 PAD is nonnull and BNDRNG is valid.  */
+      get_size_range (maxread, range, pad ? pad->src.bndrng : NULL);
+
+      location_t loc = EXPR_LOCATION (exp);
+      tree size = dstsize;
+      if (pad && pad->mode == access_read_only)
+	size = wide_int_to_tree (sizetype, pad->src.sizrng[1]);
+
+      if (range[0] && maxread && tree_fits_uhwi_p (size))
+	{
+	  if (tree_int_cst_lt (maxobjsize, range[0]))
+	    {
+	      maybe_warn_for_bound (OPT_Wstringop_overread, loc, exp, func,
+				    range, size, pad);
+	      return false;
+	    }
+
+	  if (size != maxobjsize && tree_int_cst_lt (size, range[0]))
+	    {
+	      opt_code opt = (dstwrite || mode != access_read_only
+			      ? OPT_Wstringop_overflow_
+			      : OPT_Wstringop_overread);
+	      maybe_warn_for_bound (opt, loc, exp, func, range, size, pad);
+	      return false;
+	    }
+	}
+
+      maybe_warn_nonstring_arg (func, exp);
+    }
+
+  /* Check for reading past the end of SRC.  */
+  bool overread = (slen
+		   && slen == srcstr
+		   && dstwrite
+		   && range[0]
+		   && TREE_CODE (slen) == INTEGER_CST
+		   && tree_int_cst_lt (slen, range[0]));
+  /* If none is determined try to get a better answer based on the details
+     in PAD.  */
+  if (!overread
+      && pad
+      && pad->src.sizrng[1] >= 0
+      && pad->src.offrng[0] >= 0
+      && (pad->src.offrng[1] < 0
+	  || pad->src.offrng[0] <= pad->src.offrng[1]))
+    {
+      /* Set RANGE to that of MAXREAD, bounded by PAD->SRC.BNDRNG if
+	 PAD is nonnull and BNDRNG is valid.  */
+      get_size_range (maxread, range, pad ? pad->src.bndrng : NULL);
+      /* Set OVERREAD for reads starting just past the end of an object.  */
+      overread = pad->src.sizrng[1] - pad->src.offrng[0] < pad->src.bndrng[0];
+      range[0] = wide_int_to_tree (sizetype, pad->src.bndrng[0]);
+      slen = size_zero_node;
+    }
+
+  if (overread)
+    {
+      const opt_code opt = OPT_Wstringop_overread;
+      if (warning_suppressed_p (exp, opt)
+	  || (srcstr && warning_suppressed_p (srcstr, opt))
+	  || (pad && pad->src.ref
+	      && warning_suppressed_p (pad->src.ref, opt)))
+	return false;
+
+      location_t loc = EXPR_LOCATION (exp);
+      const bool read
+	= mode == access_read_only || mode == access_read_write;
+      const bool maybe = pad && pad->dst.parmarray;
+      if (warn_for_access (loc, func, exp, opt, range, slen, false, read,
+			   maybe))
+	{
+	  suppress_warning (exp, opt);
+	  if (pad)
+	    pad->src.inform_access (access_read_only);
+	}
+      return false;
+    }
+
+  return true;
+}
+
+/* Return true if FNDECL declares an allocation function.  Unless
+   ALL_ALLOC is set, consider only functions that return dynamically
+   allocated objects.  Otherwise return true also for all forms of
+   alloca (including VLA).  */
+
+static bool
+fndecl_alloc_p (tree fndecl, bool all_alloc)
+{
+  if (!fndecl)
+    return false;
+
+  /* A call to operator new isn't recognized as one to a built-in.  */
+  if (DECL_IS_OPERATOR_NEW_P (fndecl))
+    return true;
+
+  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL))
+    {
+      switch (DECL_FUNCTION_CODE (fndecl))
+	{
+	case BUILT_IN_ALLOCA:
+	case BUILT_IN_ALLOCA_WITH_ALIGN:
+	  return all_alloc;
+	case BUILT_IN_ALIGNED_ALLOC:
+	case BUILT_IN_CALLOC:
+	case BUILT_IN_GOMP_ALLOC:
+	case BUILT_IN_MALLOC:
+	case BUILT_IN_REALLOC:
+	case BUILT_IN_STRDUP:
+	case BUILT_IN_STRNDUP:
+	  return true;
+	default:
+	  break;
+	}
+    }
+
+  /* A function is considered an allocation function if it's declared
+     with attribute malloc with an argument naming its associated
+     deallocation function.  */
+  tree attrs = DECL_ATTRIBUTES (fndecl);
+  if (!attrs)
+    return false;
+
+  for (tree allocs = attrs;
+       (allocs = lookup_attribute ("malloc", allocs));
+       allocs = TREE_CHAIN (allocs))
+    {
+      tree args = TREE_VALUE (allocs);
+      if (!args)
+	continue;
+
+      if (TREE_VALUE (args))
+	return true;
+    }
+
+  return false;
+}
+
+/* Return true if STMT is a call to an allocation function.  A wrapper
+   around fndecl_alloc_p.  */
+
+static bool
+gimple_call_alloc_p (gimple *stmt, bool all_alloc = false)
+{
+  return fndecl_alloc_p (gimple_call_fndecl (stmt), all_alloc);
+}
+
+/* Return true if DELC doesn't refer to an operator delete that's
+   suitable to call with a pointer returned from the operator new
+   described by NEWC.  */
+
+static bool
+new_delete_mismatch_p (const demangle_component &newc,
+		       const demangle_component &delc)
+{
+  if (newc.type != delc.type)
+    return true;
+
+  switch (newc.type)
+    {
+    case DEMANGLE_COMPONENT_NAME:
+      {
+	int len = newc.u.s_name.len;
+	const char *news = newc.u.s_name.s;
+	const char *dels = delc.u.s_name.s;
+	if (len != delc.u.s_name.len || memcmp (news, dels, len))
+	  return true;
+
+	if (news[len] == 'n')
+	  {
+	    if (news[len + 1] == 'a')
+	      return dels[len] != 'd' || dels[len + 1] != 'a';
+	    if (news[len + 1] == 'w')
+	      return dels[len] != 'd' || dels[len + 1] != 'l';
+	  }
+	return false;
+      }
+
+    case DEMANGLE_COMPONENT_OPERATOR:
+      /* Operator mismatches are handled above.  */
+      return false;
+
+    case DEMANGLE_COMPONENT_EXTENDED_OPERATOR:
+      if (newc.u.s_extended_operator.args != delc.u.s_extended_operator.args)
+	return true;
+      return new_delete_mismatch_p (*newc.u.s_extended_operator.name,
+				    *delc.u.s_extended_operator.name);
+
+    case DEMANGLE_COMPONENT_FIXED_TYPE:
+      if (newc.u.s_fixed.accum != delc.u.s_fixed.accum
+	  || newc.u.s_fixed.sat != delc.u.s_fixed.sat)
+	return true;
+      return new_delete_mismatch_p (*newc.u.s_fixed.length,
+				    *delc.u.s_fixed.length);
+
+    case DEMANGLE_COMPONENT_CTOR:
+      if (newc.u.s_ctor.kind != delc.u.s_ctor.kind)
+	return true;
+      return new_delete_mismatch_p (*newc.u.s_ctor.name,
+				    *delc.u.s_ctor.name);
+
+    case DEMANGLE_COMPONENT_DTOR:
+      if (newc.u.s_dtor.kind != delc.u.s_dtor.kind)
+	return true;
+      return new_delete_mismatch_p (*newc.u.s_dtor.name,
+				    *delc.u.s_dtor.name);
+
+    case DEMANGLE_COMPONENT_BUILTIN_TYPE:
+      {
+	/* The demangler API provides no better way to compare built-in
+	   types other than by comparing their demangled names.  */
+	size_t nsz, dsz;
+	demangle_component *pnc = const_cast<demangle_component *>(&newc);
+	demangle_component *pdc = const_cast<demangle_component *>(&delc);
+	char *nts = cplus_demangle_print (0, pnc, 16, &nsz);
+	char *dts = cplus_demangle_print (0, pdc, 16, &dsz);
+	if (!nts != !dts)
+	  return true;
+	bool mismatch = strcmp (nts, dts);
+	free (nts);
+	free (dts);
+	return mismatch;
+      }
+
+    case DEMANGLE_COMPONENT_SUB_STD:
+      if (newc.u.s_string.len != delc.u.s_string.len)
+	return true;
+      return memcmp (newc.u.s_string.string, delc.u.s_string.string,
+		     newc.u.s_string.len);
+
+    case DEMANGLE_COMPONENT_FUNCTION_PARAM:
+    case DEMANGLE_COMPONENT_TEMPLATE_PARAM:
+      return newc.u.s_number.number != delc.u.s_number.number;
+
+    case DEMANGLE_COMPONENT_CHARACTER:
+      return newc.u.s_character.character != delc.u.s_character.character;
+
+    case DEMANGLE_COMPONENT_DEFAULT_ARG:
+    case DEMANGLE_COMPONENT_LAMBDA:
+      if (newc.u.s_unary_num.num != delc.u.s_unary_num.num)
+	return true;
+      return new_delete_mismatch_p (*newc.u.s_unary_num.sub,
+				    *delc.u.s_unary_num.sub);
+    default:
+      break;
+    }
+
+  if (!newc.u.s_binary.left != !delc.u.s_binary.left)
+    return true;
+
+  if (!newc.u.s_binary.left)
+    return false;
+
+  if (new_delete_mismatch_p (*newc.u.s_binary.left, *delc.u.s_binary.left)
+      || !newc.u.s_binary.right != !delc.u.s_binary.right)
+    return true;
+
+  if (newc.u.s_binary.right)
+    return new_delete_mismatch_p (*newc.u.s_binary.right,
+				  *delc.u.s_binary.right);
+  return false;
+}
+
+/* Return true if DELETE_DECL is an operator delete that's not suitable
+   to call with a pointer returned from NEW_DECL.  */
+
+static bool
+new_delete_mismatch_p (tree new_decl, tree delete_decl)
+{
+  tree new_name = DECL_ASSEMBLER_NAME (new_decl);
+  tree delete_name = DECL_ASSEMBLER_NAME (delete_decl);
+
+  /* valid_new_delete_pair_p() returns a conservative result (currently
+     it only handles global operators).  A true result is reliable but
+     a false result doesn't necessarily mean the operators don't match.  */
+  if (valid_new_delete_pair_p (new_name, delete_name))
+    return false;
+
+  /* For anything not handled by valid_new_delete_pair_p() such as member
+     operators compare the individual demangled components of the mangled
+     name.  */
+  const char *new_str = IDENTIFIER_POINTER (new_name);
+  const char *del_str = IDENTIFIER_POINTER (delete_name);
+
+  void *np = NULL, *dp = NULL;
+  demangle_component *ndc = cplus_demangle_v3_components (new_str, 0, &np);
+  demangle_component *ddc = cplus_demangle_v3_components (del_str, 0, &dp);
+  bool mismatch = new_delete_mismatch_p (*ndc, *ddc);
+  free (np);
+  free (dp);
+  return mismatch;
+}
+
+/* ALLOC_DECL and DEALLOC_DECL are a pair of allocation and deallocation
+   functions.  Return true if the latter is suitable to deallocate objects
+   allocated by calls to the former.  */
+
+static bool
+matching_alloc_calls_p (tree alloc_decl, tree dealloc_decl)
+{
+  /* Set to alloc_kind_t::builtin if ALLOC_DECL is associated with
+     a built-in deallocator.  */
+  enum class alloc_kind_t { none, builtin, user }
+  alloc_dealloc_kind = alloc_kind_t::none;
+
+  if (DECL_IS_OPERATOR_NEW_P (alloc_decl))
+    {
+      if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl))
+	/* Return true iff both operators are of the same array or
+	   singleton form.  */
+	return !new_delete_mismatch_p (alloc_decl, dealloc_decl);
+
+      /* Return false for deallocation functions that are known not
+	 to match.  */
+      if (fndecl_built_in_p (dealloc_decl, BUILT_IN_FREE)
+	  || fndecl_built_in_p (dealloc_decl, BUILT_IN_REALLOC))
+	return false;
+      /* Otherwise proceed below to check the deallocation function's
+	 "*dealloc" attributes to look for one that mentions this operator
+	 new.  */
+    }
+  else if (fndecl_built_in_p (alloc_decl, BUILT_IN_NORMAL))
+    {
+      switch (DECL_FUNCTION_CODE (alloc_decl))
+	{
+	case BUILT_IN_ALLOCA:
+	case BUILT_IN_ALLOCA_WITH_ALIGN:
+	  return false;
+
+	case BUILT_IN_ALIGNED_ALLOC:
+	case BUILT_IN_CALLOC:
+	case BUILT_IN_GOMP_ALLOC:
+	case BUILT_IN_MALLOC:
+	case BUILT_IN_REALLOC:
+	case BUILT_IN_STRDUP:
+	case BUILT_IN_STRNDUP:
+	  if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl))
+	    return false;
+
+	  if (fndecl_built_in_p (dealloc_decl, BUILT_IN_FREE)
+	      || fndecl_built_in_p (dealloc_decl, BUILT_IN_REALLOC))
+	    return true;
+
+	  alloc_dealloc_kind = alloc_kind_t::builtin;
+	  break;
+
+	default:
+	  break;
+	}
+    }
+
+  /* Set if DEALLOC_DECL both allocates and deallocates.  */
+  alloc_kind_t realloc_kind = alloc_kind_t::none;
+
+  if (fndecl_built_in_p (dealloc_decl, BUILT_IN_NORMAL))
+    {
+      built_in_function dealloc_code = DECL_FUNCTION_CODE (dealloc_decl);
+      if (dealloc_code == BUILT_IN_REALLOC)
+	realloc_kind = alloc_kind_t::builtin;
+
+      for (tree amats = DECL_ATTRIBUTES (alloc_decl);
+	   (amats = lookup_attribute ("malloc", amats));
+	   amats = TREE_CHAIN (amats))
+	{
+	  tree args = TREE_VALUE (amats);
+	  if (!args)
+	    continue;
+
+	  tree fndecl = TREE_VALUE (args);
+	  if (!fndecl || !DECL_P (fndecl))
+	    continue;
+
+	  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL)
+	      && dealloc_code == DECL_FUNCTION_CODE (fndecl))
+	    return true;
+	}
+    }
+
+  const bool alloc_builtin = fndecl_built_in_p (alloc_decl, BUILT_IN_NORMAL);
+  alloc_kind_t realloc_dealloc_kind = alloc_kind_t::none;
+
+  /* If DEALLOC_DECL has an internal "*dealloc" attribute scan the list
+     of its associated allocation functions for ALLOC_DECL.
+     If the corresponding ALLOC_DECL is found they're a matching pair,
+     otherwise they're not.
+     With DDATS set to the Deallocator's *Dealloc ATtributes...  */
+  for (tree ddats = DECL_ATTRIBUTES (dealloc_decl);
+       (ddats = lookup_attribute ("*dealloc", ddats));
+       ddats = TREE_CHAIN (ddats))
+    {
+      tree args = TREE_VALUE (ddats);
+      if (!args)
+	continue;
+
+      tree alloc = TREE_VALUE (args);
+      if (!alloc)
+	continue;
+
+      if (alloc == DECL_NAME (dealloc_decl))
+	realloc_kind = alloc_kind_t::user;
+
+      if (DECL_P (alloc))
+	{
+	  gcc_checking_assert (fndecl_built_in_p (alloc, BUILT_IN_NORMAL));
+
+	  switch (DECL_FUNCTION_CODE (alloc))
+	    {
+	    case BUILT_IN_ALIGNED_ALLOC:
+	    case BUILT_IN_CALLOC:
+	    case BUILT_IN_GOMP_ALLOC:
+	    case BUILT_IN_MALLOC:
+	    case BUILT_IN_REALLOC:
+	    case BUILT_IN_STRDUP:
+	    case BUILT_IN_STRNDUP:
+	      realloc_dealloc_kind = alloc_kind_t::builtin;
+	      break;
+	    default:
+	      break;
+	    }
+
+	  if (!alloc_builtin)
+	    continue;
+
+	  if (DECL_FUNCTION_CODE (alloc) != DECL_FUNCTION_CODE (alloc_decl))
+	    continue;
+
+	  return true;
+	}
+
+      if (alloc == DECL_NAME (alloc_decl))
+	return true;
+    }
+
+  if (realloc_kind == alloc_kind_t::none)
+    return false;
+
+  hash_set<tree> common_deallocs;
+  /* Special handling for deallocators.  Iterate over both the allocator's
+     and the reallocator's associated deallocator functions looking for
+     the first one in common.  If one is found, the de/reallocator is
+     a match for the allocator even though the latter isn't directly
+     associated with the former.  This simplifies declarations in system
+     headers.
+     With AMATS set to the Allocator's Malloc ATtributes,
+     and  RMATS set to Reallocator's Malloc ATtributes...  */
+  for (tree amats = DECL_ATTRIBUTES (alloc_decl),
+	 rmats = DECL_ATTRIBUTES (dealloc_decl);
+       (amats = lookup_attribute ("malloc", amats))
+	 || (rmats = lookup_attribute ("malloc", rmats));
+       amats = amats ? TREE_CHAIN (amats) : NULL_TREE,
+	 rmats = rmats ? TREE_CHAIN (rmats) : NULL_TREE)
+    {
+      if (tree args = amats ? TREE_VALUE (amats) : NULL_TREE)
+	if (tree adealloc = TREE_VALUE (args))
+	  {
+	    if (DECL_P (adealloc)
+		&& fndecl_built_in_p (adealloc, BUILT_IN_NORMAL))
+	      {
+		built_in_function fncode = DECL_FUNCTION_CODE (adealloc);
+		if (fncode == BUILT_IN_FREE || fncode == BUILT_IN_REALLOC)
+		  {
+		    if (realloc_kind == alloc_kind_t::builtin)
+		      return true;
+		    alloc_dealloc_kind = alloc_kind_t::builtin;
+		  }
+		continue;
+	      }
+
+	    common_deallocs.add (adealloc);
+	  }
+
+      if (tree args = rmats ? TREE_VALUE (rmats) : NULL_TREE)
+	if (tree ddealloc = TREE_VALUE (args))
+	  {
+	    if (DECL_P (ddealloc)
+		&& fndecl_built_in_p (ddealloc, BUILT_IN_NORMAL))
+	      {
+		built_in_function fncode = DECL_FUNCTION_CODE (ddealloc);
+		if (fncode == BUILT_IN_FREE || fncode == BUILT_IN_REALLOC)
+		  {
+		    if (alloc_dealloc_kind == alloc_kind_t::builtin)
+		      return true;
+		    realloc_dealloc_kind = alloc_kind_t::builtin;
+		  }
+		continue;
+	      }
+
+	    if (common_deallocs.add (ddealloc))
+	      return true;
+	  }
+    }
+
+  /* Succeed only if ALLOC_DECL and the reallocator DEALLOC_DECL share
+     a built-in deallocator.  */
+  return (alloc_dealloc_kind == alloc_kind_t::builtin
+	  && realloc_dealloc_kind == alloc_kind_t::builtin);
+}
+
+/* Return true if DEALLOC_DECL is a function suitable to deallocate
+   objects allocated by the ALLOC call.  */
+
+static bool
+matching_alloc_calls_p (gimple *alloc, tree dealloc_decl)
+{
+  tree alloc_decl = gimple_call_fndecl (alloc);
+  if (!alloc_decl)
+    return true;
+
+  return matching_alloc_calls_p (alloc_decl, dealloc_decl);
+}
+
+/* Diagnose the deallocation call CALL of a pointer referenced by AREF
+   if it includes a nonzero offset.  Such a pointer cannot refer to
+   the beginning of an allocated object.  A negative offset may refer
+   to it only if the target pointer is unknown.  */
+
+static bool
+warn_dealloc_offset (location_t loc, gimple *call, const access_ref &aref)
+{
+  if (aref.deref || aref.offrng[0] <= 0 || aref.offrng[1] <= 0)
+    return false;
+
+  tree dealloc_decl = gimple_call_fndecl (call);
+  if (!dealloc_decl)
+    return false;
+
+  if (DECL_IS_OPERATOR_DELETE_P (dealloc_decl)
+      && !DECL_IS_REPLACEABLE_OPERATOR (dealloc_decl))
+    {
+      /* A call to a user-defined operator delete with a pointer plus offset
+	 may be valid if it's returned from an unknown function (i.e., one
+	 that's not operator new).  */
+      if (TREE_CODE (aref.ref) == SSA_NAME)
+	{
+	  gimple *def_stmt = SSA_NAME_DEF_STMT (aref.ref);
+	  if (is_gimple_call (def_stmt))
+	    {
+	      tree alloc_decl = gimple_call_fndecl (def_stmt);
+	      if (!alloc_decl || !DECL_IS_OPERATOR_NEW_P (alloc_decl))
+		return false;
+	    }
+	}
+    }
+
+  char offstr[80];
+  offstr[0] = '\0';
+  if (wi::fits_shwi_p (aref.offrng[0]))
+    {
+      if (aref.offrng[0] == aref.offrng[1]
+	  || !wi::fits_shwi_p (aref.offrng[1]))
+	sprintf (offstr, " %lli",
+		 (long long)aref.offrng[0].to_shwi ());
+      else
+	sprintf (offstr, " [%lli, %lli]",
+		 (long long)aref.offrng[0].to_shwi (),
+		 (long long)aref.offrng[1].to_shwi ());
+    }
+
+  if (!warning_at (loc, OPT_Wfree_nonheap_object,
+		   "%qD called on pointer %qE with nonzero offset%s",
+		   dealloc_decl, aref.ref, offstr))
+    return false;
+
+  if (DECL_P (aref.ref))
+    inform (DECL_SOURCE_LOCATION (aref.ref), "declared here");
+  else if (TREE_CODE (aref.ref) == SSA_NAME)
+    {
+      gimple *def_stmt = SSA_NAME_DEF_STMT (aref.ref);
+      if (is_gimple_call (def_stmt))
+	{
+	  location_t def_loc = gimple_location (def_stmt);
+	  tree alloc_decl = gimple_call_fndecl (def_stmt);
+	  if (alloc_decl)
+	    inform (def_loc,
+		    "returned from %qD", alloc_decl);
+	  else if (tree alloc_fntype = gimple_call_fntype (def_stmt))
+	    inform (def_loc,
+		    "returned from %qT", alloc_fntype);
+	  else
+	    inform (def_loc,  "obtained here");
+	}
+    }
+
+  return true;
+}
+
+/* Issue a warning if a deallocation function such as free, realloc,
+   or C++ operator delete is called with an argument not returned by
+   a matching allocation function such as malloc or the corresponding
+   form of C++ operator new.  */
+
+void
+maybe_emit_free_warning (gcall *call)
+{
+  tree fndecl = gimple_call_fndecl (call);
+  if (!fndecl)
+    return;
+
+  unsigned argno = fndecl_dealloc_argno (fndecl);
+  if ((unsigned) gimple_call_num_args (call) <= argno)
+    return;
+
+  tree ptr = gimple_call_arg (call, argno);
+  if (integer_zerop (ptr))
+    return;
+
+  access_ref aref;
+  if (!compute_objsize (ptr, 0, &aref))
+    return;
+
+  tree ref = aref.ref;
+  if (integer_zerop (ref))
+    return;
+
+  tree dealloc_decl = fndecl;
+  location_t loc = gimple_location (call);
+
+  if (DECL_P (ref) || EXPR_P (ref))
+    {
+      /* Diagnose freeing a declared object.  */
+      if (aref.ref_declared ()
+	  && warning_at (loc, OPT_Wfree_nonheap_object,
+			 "%qD called on unallocated object %qD",
+			 dealloc_decl, ref))
+	{
+	  loc = (DECL_P (ref)
+		 ? DECL_SOURCE_LOCATION (ref)
+		 : EXPR_LOCATION (ref));
+	  inform (loc, "declared here");
+	  return;
+	}
+
+      /* Diagnose freeing a pointer that includes a positive offset.
+	 Such a pointer cannot refer to the beginning of an allocated
+	 object.  A negative offset may refer to it.  */
+      if (aref.sizrng[0] != aref.sizrng[1]
+	  && warn_dealloc_offset (loc, call, aref))
+	return;
+    }
+  else if (CONSTANT_CLASS_P (ref))
+    {
+      if (warning_at (loc, OPT_Wfree_nonheap_object,
+		      "%qD called on a pointer to an unallocated "
+		      "object %qE", dealloc_decl, ref))
+	{
+	  if (TREE_CODE (ptr) == SSA_NAME)
+	    {
+	      gimple *def_stmt = SSA_NAME_DEF_STMT (ptr);
+	      if (is_gimple_assign (def_stmt))
+		{
+		  location_t loc = gimple_location (def_stmt);
+		  inform (loc, "assigned here");
+		}
+	    }
+	  return;
+	}
+    }
+  else if (TREE_CODE (ref) == SSA_NAME)
+    {
+      /* Also warn if the pointer argument refers to the result
+	 of an allocation call like alloca or VLA.  */
+      gimple *def_stmt = SSA_NAME_DEF_STMT (ref);
+      if (is_gimple_call (def_stmt))
+	{
+	  bool warned = false;
+	  if (gimple_call_alloc_p (def_stmt))
+	    {
+	      if (matching_alloc_calls_p (def_stmt, dealloc_decl))
+		{
+		  if (warn_dealloc_offset (loc, call, aref))
+		    return;
+		}
+	      else
+		{
+		  tree alloc_decl = gimple_call_fndecl (def_stmt);
+		  const opt_code opt =
+		    (DECL_IS_OPERATOR_NEW_P (alloc_decl)
+		     || DECL_IS_OPERATOR_DELETE_P (dealloc_decl)
+		     ? OPT_Wmismatched_new_delete
+		     : OPT_Wmismatched_dealloc);
+		  warned = warning_at (loc, opt,
+				       "%qD called on pointer returned "
+				       "from a mismatched allocation "
+				       "function", dealloc_decl);
+		}
+	    }
+	  else if (gimple_call_builtin_p (def_stmt, BUILT_IN_ALLOCA)
+		   || gimple_call_builtin_p (def_stmt,
+					     BUILT_IN_ALLOCA_WITH_ALIGN))
+	    warned = warning_at (loc, OPT_Wfree_nonheap_object,
+				 "%qD called on pointer to "
+				 "an unallocated object",
+				 dealloc_decl);
+	  else if (warn_dealloc_offset (loc, call, aref))
+	    return;
+
+	  if (warned)
+	    {
+	      tree fndecl = gimple_call_fndecl (def_stmt);
+	      inform (gimple_location (def_stmt),
+		      "returned from %qD", fndecl);
+	      return;
+	    }
+	}
+      else if (gimple_nop_p (def_stmt))
+	{
+	  ref = SSA_NAME_VAR (ref);
+	  /* Diagnose freeing a pointer that includes a positive offset.  */
+	  if (TREE_CODE (ref) == PARM_DECL
+	      && !aref.deref
+	      && aref.sizrng[0] != aref.sizrng[1]
+	      && aref.offrng[0] > 0 && aref.offrng[1] > 0
+	      && warn_dealloc_offset (loc, call, aref))
+	    return;
+	}
+    }
+}
+
+namespace {
+
+const pass_data pass_data_waccess = {
+  GIMPLE_PASS,
+  "waccess",
+  OPTGROUP_NONE,
+  TV_NONE,
+  PROP_cfg, /* properties_required  */
+  0,	    /* properties_provided  */
+  0,	    /* properties_destroyed  */
+  0,	    /* properties_start */
+  0,	    /* properties_finish */
+};
+
+/* Pass to detect invalid accesses.  */
+class pass_waccess : public gimple_opt_pass
+{
+ public:
+  pass_waccess (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_waccess, ctxt), m_ranger ()
+    { }
+
+  opt_pass *clone () { return new pass_waccess (m_ctxt); }
+
+  virtual bool gate (function *);
+  virtual unsigned int execute (function *);
+
+  void check (basic_block);
+  void check (gcall *);
+
+private:
+  gimple_ranger *m_ranger;
+};
+
+/* Return true when any checks performed by the pass are enabled.  */
+
+bool
+pass_waccess::gate (function *)
+{
+  return (warn_free_nonheap_object
+	  || warn_mismatched_alloc
+	  || warn_mismatched_new_delete);
+}
+
+/* Check call STMT for invalid accesses.  */
+
+void
+pass_waccess::check (gcall *stmt)
+{
+  maybe_emit_free_warning (stmt);
+}
+
+/* Check basic block BB for invalid accesses.  */
+
+void
+pass_waccess::check (basic_block bb)
+{
+  /* Iterate over statements, looking for function calls.  */
+  for (gimple_stmt_iterator si = gsi_start_bb (bb); !gsi_end_p (si);
+       gsi_next (&si))
+    {
+      gimple *stmt = gsi_stmt (si);
+      if (!is_gimple_call (stmt))
+	continue;
+
+      check (as_a <gcall *>(stmt));
+    }
+}
+
+/* Check function FUN for invalid accesses.  */
+
+unsigned
+pass_waccess::execute (function *fun)
+{
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, fun)
+    check (bb);
+
+  return 0;
+}
+
+}   // namespace
+
+/* Return a new instance of the pass.  */
+
+gimple_opt_pass *
+make_pass_warn_access (gcc::context *ctxt)
+{
+  return new pass_waccess (ctxt);
+}
diff --git a/gcc/gimple-ssa-warn-access.h b/gcc/gimple-ssa-warn-access.h
new file mode 100644
index 00000000000..6197574cf44
--- /dev/null
+++ b/gcc/gimple-ssa-warn-access.h
@@ -0,0 +1,37 @@ 
+/* Pass to detect and issue warnings for invalid accesses, including
+   invalid or mismatched allocation/deallocation calls.
+
+   Copyright (C) 2020-2021 Free Software Foundation, Inc.
+   Contributed by Martin Sebor <msebor@redhat.com>.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it under
+   the terms of the GNU General Public License as published by the Free
+   Software Foundation; either version 3, or (at your option) any later
+   version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or
+   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+   for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_GIMPLE_SSA_WARN_ACCESS_H
+#define GCC_GIMPLE_SSA_WARN_ACCESS_H
+
+extern bool check_nul_terminated_array (tree, tree, tree = NULL_TREE);
+extern void warn_string_no_nul (location_t, tree, const char *, tree,
+				tree, tree = NULL_TREE, bool = false,
+				const wide_int[2] = NULL);
+extern tree unterminated_array (tree, tree * = NULL, bool * = NULL);
+extern void get_size_range (tree, tree[2], const offset_int[2]);
+
+class access_data;
+extern bool maybe_warn_for_bound (opt_code, location_t, tree, tree,
+				  tree[2], tree, const access_data * = NULL);
+
+#endif   // GCC_GIMPLE_SSA_WARN_ACCESS_H
diff --git a/gcc/gimple-ssa-warn-restrict.c b/gcc/gimple-ssa-warn-restrict.c
index efb8db98393..404acb03195 100644
--- a/gcc/gimple-ssa-warn-restrict.c
+++ b/gcc/gimple-ssa-warn-restrict.c
@@ -26,7 +26,7 @@ 
 #include "tree.h"
 #include "gimple.h"
 #include "tree-pass.h"
-#include "builtins.h"
+#include "pointer-query.h"
 #include "ssa.h"
 #include "gimple-pretty-print.h"
 #include "gimple-ssa-warn-restrict.h"
diff --git a/gcc/passes.def b/gcc/passes.def
index f5d88a61b0e..e2858368b7d 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -419,6 +419,7 @@  along with GCC; see the file COPYING3.  If not see
   NEXT_PASS (pass_gimple_isel);
   NEXT_PASS (pass_cleanup_cfg_post_optimizing);
   NEXT_PASS (pass_warn_function_noreturn);
+  NEXT_PASS (pass_warn_access);
 
   NEXT_PASS (pass_expand);
 
diff --git a/gcc/pointer-query.cc b/gcc/pointer-query.cc
new file mode 100644
index 00000000000..03ccde47897
--- /dev/null
+++ b/gcc/pointer-query.cc
@@ -0,0 +1,1827 @@ 
+/* Definitions of the pointer_query and related classes.
+
+   Copyright (C) 2020-2021 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it under
+   the terms of the GNU General Public License as published by the Free
+   Software Foundation; either version 3, or (at your option) any later
+   version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or
+   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+   for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "target.h"
+#include "rtl.h"
+#include "tree.h"
+#include "memmodel.h"
+#include "gimple.h"
+#include "predict.h"
+#include "tm_p.h"
+#include "stringpool.h"
+#include "tree-vrp.h"
+#include "tree-ssanames.h"
+#include "expmed.h"
+#include "optabs.h"
+#include "emit-rtl.h"
+#include "recog.h"
+#include "diagnostic-core.h"
+#include "alias.h"
+#include "fold-const.h"
+#include "fold-const-call.h"
+#include "gimple-ssa-warn-restrict.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "varasm.h"
+#include "tree-object-size.h"
+#include "tree-ssa-strlen.h"
+#include "realmpfr.h"
+#include "cfgrtl.h"
+#include "except.h"
+#include "dojump.h"
+#include "explow.h"
+#include "stmt.h"
+#include "expr.h"
+#include "libfuncs.h"
+#include "output.h"
+#include "typeclass.h"
+#include "langhooks.h"
+#include "value-prof.h"
+#include "builtins.h"
+#include "stringpool.h"
+#include "attribs.h"
+#include "asan.h"
+#include "internal-fn.h"
+#include "case-cfn-macros.h"
+#include "gimple-fold.h"
+#include "intl.h"
+#include "tree-dfa.h"
+#include "gimple-iterator.h"
+#include "gimple-ssa.h"
+#include "tree-ssa-live.h"
+#include "tree-outof-ssa.h"
+#include "attr-fnspec.h"
+#include "gimple-range.h"
+
+#include "pointer-query.h"
+
+static bool compute_objsize_r (tree, int, access_ref *, ssa_name_limit_t &,
+			       pointer_query *);
+
+/* Wrapper around the wide_int overload of get_range that accepts
+   offset_int instead.  For middle end expressions it returns the same
+   result.  For a subset of nonconstant expressions emitted by the front
+   end it determines a more precise range than would be possible
+   otherwise.  */
+
+static bool
+get_offset_range (tree x, gimple *stmt, offset_int r[2], range_query *rvals)
+{
+  offset_int add = 0;
+  if (TREE_CODE (x) == PLUS_EXPR)
+    {
+      /* Handle constant offsets in pointer addition expressions seen
+	 in the front end IL.  */
+      tree op = TREE_OPERAND (x, 1);
+      if (TREE_CODE (op) == INTEGER_CST)
+	{
+	  op = fold_convert (signed_type_for (TREE_TYPE (op)), op);
+	  add = wi::to_offset (op);
+	  x = TREE_OPERAND (x, 0);
+	}
+    }
+
+  if (TREE_CODE (x) == NOP_EXPR)
+    /* Also handle conversions to sizetype seen in the front end IL.  */
+    x = TREE_OPERAND (x, 0);
+
+  tree type = TREE_TYPE (x);
+  if (!INTEGRAL_TYPE_P (type) && !POINTER_TYPE_P (type))
+    return false;
+
+  if (TREE_CODE (x) != INTEGER_CST
+      && TREE_CODE (x) != SSA_NAME)
+    {
+      if (TYPE_UNSIGNED (type)
+	  && TYPE_PRECISION (type) == TYPE_PRECISION (sizetype))
+	type = signed_type_for (type);
+
+      r[0] = wi::to_offset (TYPE_MIN_VALUE (type)) + add;
+      r[1] = wi::to_offset (TYPE_MAX_VALUE (type)) + add;
+      return true;
+    }
+
+  wide_int wr[2];
+  if (!get_range (x, stmt, wr, rvals))
+    return false;
+
+  signop sgn = SIGNED;
+  /* Only convert signed integers or unsigned sizetype to a signed
+     offset and avoid converting large positive values in narrower
+     types to negative offsets.  */
+  if (TYPE_UNSIGNED (type)
+      && wr[0].get_precision () < TYPE_PRECISION (sizetype))
+    sgn = UNSIGNED;
+
+  r[0] = offset_int::from (wr[0], sgn);
+  r[1] = offset_int::from (wr[1], sgn);
+  return true;
+}
+
+/* Return the argument that the call STMT to a built-in function returns
+   or null if it doesn't.  On success, set OFFRNG[] to the range of offsets
+   from the argument reflected in the value returned by the built-in if it
+   can be determined, otherwise to 0 and HWI_M1U respectively.  */
+
+static tree
+gimple_call_return_array (gimple *stmt, offset_int offrng[2],
+			  range_query *rvals)
+{
+  {
+    /* Check for attribute fn spec to see if the function returns one
+       of its arguments.  */
+    attr_fnspec fnspec = gimple_call_fnspec (as_a <gcall *>(stmt));
+    unsigned int argno;
+    if (fnspec.returns_arg (&argno))
+      {
+	offrng[0] = offrng[1] = 0;
+	return gimple_call_arg (stmt, argno);
+      }
+  }
+
+  if (gimple_call_num_args (stmt) < 1)
+    return NULL_TREE;
+
+  tree fn = gimple_call_fndecl (stmt);
+  if (!gimple_call_builtin_p (stmt, BUILT_IN_NORMAL))
+    {
+      /* See if this is a call to placement new.  */
+      if (!fn
+	  || !DECL_IS_OPERATOR_NEW_P (fn)
+	  || DECL_IS_REPLACEABLE_OPERATOR_NEW_P (fn))
+	return NULL_TREE;
+
+      /* Check the mangling, keeping in mind that operator new takes
+	 a size_t which could be unsigned int or unsigned long.  */
+      tree fname = DECL_ASSEMBLER_NAME (fn);
+      if (!id_equal (fname, "_ZnwjPv")       // ordinary form
+	  && !id_equal (fname, "_ZnwmPv")    // ordinary form
+	  && !id_equal (fname, "_ZnajPv")    // array form
+	  && !id_equal (fname, "_ZnamPv"))   // array form
+	return NULL_TREE;
+
+      if (gimple_call_num_args (stmt) != 2)
+	return NULL_TREE;
+
+      offrng[0] = offrng[1] = 0;
+      return gimple_call_arg (stmt, 1);
+    }
+
+  switch (DECL_FUNCTION_CODE (fn))
+    {
+    case BUILT_IN_MEMCPY:
+    case BUILT_IN_MEMCPY_CHK:
+    case BUILT_IN_MEMMOVE:
+    case BUILT_IN_MEMMOVE_CHK:
+    case BUILT_IN_MEMSET:
+    case BUILT_IN_STPCPY:
+    case BUILT_IN_STPCPY_CHK:
+    case BUILT_IN_STPNCPY:
+    case BUILT_IN_STPNCPY_CHK:
+    case BUILT_IN_STRCAT:
+    case BUILT_IN_STRCAT_CHK:
+    case BUILT_IN_STRCPY:
+    case BUILT_IN_STRCPY_CHK:
+    case BUILT_IN_STRNCAT:
+    case BUILT_IN_STRNCAT_CHK:
+    case BUILT_IN_STRNCPY:
+    case BUILT_IN_STRNCPY_CHK:
+      offrng[0] = offrng[1] = 0;
+      return gimple_call_arg (stmt, 0);
+
+    case BUILT_IN_MEMPCPY:
+    case BUILT_IN_MEMPCPY_CHK:
+      {
+	tree off = gimple_call_arg (stmt, 2);
+	if (!get_offset_range (off, stmt, offrng, rvals))
+	  {
+	    offrng[0] = 0;
+	    offrng[1] = HOST_WIDE_INT_M1U;
+	  }
+	return gimple_call_arg (stmt, 0);
+      }
+
+    case BUILT_IN_MEMCHR:
+      {
+	tree off = gimple_call_arg (stmt, 2);
+	if (get_offset_range (off, stmt, offrng, rvals))
+	  offrng[0] = 0;
+	else
+	  {
+	    offrng[0] = 0;
+	    offrng[1] = HOST_WIDE_INT_M1U;
+	  }
+	return gimple_call_arg (stmt, 0);
+      }
+
+    case BUILT_IN_STRCHR:
+    case BUILT_IN_STRRCHR:
+    case BUILT_IN_STRSTR:
+      {
+	offrng[0] = 0;
+	offrng[1] = HOST_WIDE_INT_M1U;
+      }
+      return gimple_call_arg (stmt, 0);
+
+    default:
+      break;
+    }
+
+  return NULL_TREE;
+}
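The placement-new handling above relies on non-replaceable placement operator new (the `_ZnwjPv`/`_ZnwmPv`/`_ZnajPv`/`_ZnamPv` manglings) returning the pointer it is passed, with a zero offset.  A minimal standalone sketch of that guarantee, independent of the GCC internals (the helper name is mine, not part of this patch):

```cpp
#include <new>

// Hypothetical illustration: placement operator new simply returns the
// storage pointer it is given, so the "returned array" is argument 1
// with a zero offset, which is what gimple_call_return_array models.
static int *
construct_int_at (void *storage, int value)
{
  return new (storage) int (value);
}
```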
+
+/* If STMT is a call to an allocation function, returns the constant
+   maximum size of the object allocated by the call represented as
+   sizetype.  If nonnull, sets RNG1[] to the range of the size.
+   When nonnull, uses RVALS for range information, otherwise gets global
+   range info.
+   Returns null when STMT is not a call to a valid allocation function.  */
+
+tree
+gimple_call_alloc_size (gimple *stmt, wide_int rng1[2] /* = NULL */,
+			range_query * /* = NULL */)
+{
+  if (!stmt || !is_gimple_call (stmt))
+    return NULL_TREE;
+
+  tree allocfntype;
+  if (tree fndecl = gimple_call_fndecl (stmt))
+    allocfntype = TREE_TYPE (fndecl);
+  else
+    allocfntype = gimple_call_fntype (stmt);
+
+  if (!allocfntype)
+    return NULL_TREE;
+
+  unsigned argidx1 = UINT_MAX, argidx2 = UINT_MAX;
+  tree at = lookup_attribute ("alloc_size", TYPE_ATTRIBUTES (allocfntype));
+  if (!at)
+    {
+      if (!gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN))
+	return NULL_TREE;
+
+      argidx1 = 0;
+    }
+
+  unsigned nargs = gimple_call_num_args (stmt);
+
+  if (argidx1 == UINT_MAX)
+    {
+      tree atval = TREE_VALUE (at);
+      if (!atval)
+	return NULL_TREE;
+
+      argidx1 = TREE_INT_CST_LOW (TREE_VALUE (atval)) - 1;
+      if (nargs <= argidx1)
+	return NULL_TREE;
+
+      atval = TREE_CHAIN (atval);
+      if (atval)
+	{
+	  argidx2 = TREE_INT_CST_LOW (TREE_VALUE (atval)) - 1;
+	  if (nargs <= argidx2)
+	    return NULL_TREE;
+	}
+    }
+
+  tree size = gimple_call_arg (stmt, argidx1);
+
+  wide_int rng1_buf[2];
+  /* If RNG1 is not set, use the buffer.  */
+  if (!rng1)
+    rng1 = rng1_buf;
+
+  /* Use maximum precision to avoid overflow below.  */
+  const int prec = ADDR_MAX_PRECISION;
+
+  {
+    tree r[2];
+    /* Determine the largest valid range size, including zero.  */
+    if (!get_size_range (size, r, SR_ALLOW_ZERO | SR_USE_LARGEST))
+      return NULL_TREE;
+    rng1[0] = wi::to_wide (r[0], prec);
+    rng1[1] = wi::to_wide (r[1], prec);
+  }
+
+  if (argidx2 > nargs && TREE_CODE (size) == INTEGER_CST)
+    return fold_convert (sizetype, size);
+
+  /* To handle ranges do the math in wide_int and return the product
+     of the upper bounds as a constant.  Ignore anti-ranges.  */
+  tree n = argidx2 < nargs ? gimple_call_arg (stmt, argidx2) : integer_one_node;
+  wide_int rng2[2];
+  {
+    tree r[2];
+    /* As above, fail if the range cannot be determined.  */
+    if (!get_size_range (n, r, SR_ALLOW_ZERO | SR_USE_LARGEST))
+      return NULL_TREE;
+    rng2[0] = wi::to_wide (r[0], prec);
+    rng2[1] = wi::to_wide (r[1], prec);
+  }
+
+  /* Compute products of both bounds for the caller but return the lesser
+     of SIZE_MAX and the product of the upper bounds as a constant.  */
+  rng1[0] = rng1[0] * rng2[0];
+  rng1[1] = rng1[1] * rng2[1];
+
+  const tree size_max = TYPE_MAX_VALUE (sizetype);
+  if (wi::gtu_p (rng1[1], wi::to_wide (size_max, prec)))
+    {
+      rng1[1] = wi::to_wide (size_max, prec);
+      return size_max;
+    }
+
+  return wide_int_to_tree (sizetype, rng1[1]);
+}
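The tail of gimple_call_alloc_size multiplies the bounds of the two alloc_size arguments in wide arithmetic and clamps the product at SIZE_MAX rather than letting it wrap.  A plain-integer sketch of that saturating product (the function name is mine, for illustration only):

```cpp
#include <cstdint>
#include <limits>

// Sketch of the capped product computed above: multiply two size upper
// bounds and saturate at SIZE_MAX instead of wrapping around.
static std::uint64_t
capped_alloc_size (std::uint64_t nelts, std::uint64_t eltsize)
{
  const std::uint64_t size_max = std::numeric_limits<std::uint64_t>::max ();
  // If the product would overflow, clamp it like the wide_int code does.
  if (eltsize != 0 && nelts > size_max / eltsize)
    return size_max;
  return nelts * eltsize;
}
```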
+
+/* For an access to an object referenced by the function parameter PTR
+   of pointer type, set RNG[] to the range of sizes of the object
+   obtained from the attribute access specification for the current
+   function.  Set STATIC_ARRAY if the array parameter has been declared
+   [static].  Return the function parameter on success and null
+   otherwise.  */
+
+tree
+gimple_parm_array_size (tree ptr, wide_int rng[2],
+			bool *static_array /* = NULL */)
+{
+  /* For a function argument try to determine the byte size of the array
+     from the current function declaration (e.g., attribute access or
+     related).  */
+  tree var = SSA_NAME_VAR (ptr);
+  if (TREE_CODE (var) != PARM_DECL)
+    return NULL_TREE;
+
+  const unsigned prec = TYPE_PRECISION (sizetype);
+
+  rdwr_map rdwr_idx;
+  attr_access *access = get_parm_access (rdwr_idx, var);
+  if (!access)
+    return NULL_TREE;
+
+  if (access->sizarg != UINT_MAX)
+    {
+      /* TODO: Try to extract the range from the argument based on
+	 those of subsequent assertions or based on known calls to
+	 the current function.  */
+      return NULL_TREE;
+    }
+
+  if (!access->minsize)
+    return NULL_TREE;
+
+  /* Only consider ordinary array bound at level 2 (or above if it's
+     ever added).  */
+  if (warn_array_parameter < 2 && !access->static_p)
+    return NULL_TREE;
+
+  if (static_array)
+    *static_array = access->static_p;
+
+  rng[0] = wi::zero (prec);
+  rng[1] = wi::uhwi (access->minsize, prec);
+  /* Multiply the array bound encoded in the attribute by the size
+     of what the pointer argument to which it decays points to.  */
+  tree eltype = TREE_TYPE (TREE_TYPE (ptr));
+  tree size = TYPE_SIZE_UNIT (eltype);
+  if (!size || TREE_CODE (size) != INTEGER_CST)
+    return NULL_TREE;
+
+  rng[1] *= wi::to_wide (size, prec);
+  return var;
+}
+
+access_ref::access_ref (tree bound /* = NULL_TREE */,
+			bool minaccess /* = false */)
+: ref (), eval ([](tree x){ return x; }), deref (), trail1special (true),
+  base0 (true), parmarray ()
+{
+  /* Set to valid.  */
+  offrng[0] = offrng[1] = 0;
+  offmax[0] = offmax[1] = 0;
+  /* Invalidate.   */
+  sizrng[0] = sizrng[1] = -1;
+
+  /* Set the default bounds of the access and adjust below.  */
+  bndrng[0] = minaccess ? 1 : 0;
+  bndrng[1] = HOST_WIDE_INT_M1U;
+
+  /* When BOUND is nonnull and a range can be extracted from it,
+     set the bounds of the access to reflect both it and MINACCESS.
+     BNDRNG[0] is the size of the minimum access.  */
+  tree rng[2];
+  if (bound && get_size_range (bound, rng, SR_ALLOW_ZERO))
+    {
+      bndrng[0] = wi::to_offset (rng[0]);
+      bndrng[1] = wi::to_offset (rng[1]);
+      bndrng[0] = bndrng[0] > 0 && minaccess ? 1 : 0;
+    }
+}
+
+/* Return the PHI node REF refers to or null if it doesn't.  */
+
+gphi *
+access_ref::phi () const
+{
+  if (!ref || TREE_CODE (ref) != SSA_NAME)
+    return NULL;
+
+  gimple *def_stmt = SSA_NAME_DEF_STMT (ref);
+  if (gimple_code (def_stmt) != GIMPLE_PHI)
+    return NULL;
+
+  return as_a <gphi *> (def_stmt);
+}
+
+/* Determine and return the largest object to which *THIS refers.  If *THIS
+   refers to a PHI and PREF is nonnull, fill *PREF with the details
+   of the object determined by compute_objsize(ARG, OSTYPE) for each
+   PHI argument ARG.  */
+
+tree
+access_ref::get_ref (vec<access_ref> *all_refs,
+		     access_ref *pref /* = NULL */,
+		     int ostype /* = 1 */,
+		     ssa_name_limit_t *psnlim /* = NULL */,
+		     pointer_query *qry /* = NULL */) const
+{
+  gphi *phi_stmt = this->phi ();
+  if (!phi_stmt)
+    return ref;
+
+  /* FIXME: Calling get_ref() with a null PSNLIM is dangerous and might
+     cause unbounded recursion.  */
+  ssa_name_limit_t snlim_buf;
+  if (!psnlim)
+    psnlim = &snlim_buf;
+
+  if (!psnlim->visit_phi (ref))
+    return NULL_TREE;
+
+  /* Describes the object and the range of offsets when all the PHI
+     arguments refer to the same object (i.e., have the same REF).  */
+  access_ref same_ref;
+  /* The conservative result of the PHI reflecting the offset and size
+     of the largest PHI argument, regardless of whether or not they all
+     refer to the same object.  */
+  pointer_query empty_qry;
+  if (!qry)
+    qry = &empty_qry;
+
+  access_ref phi_ref;
+  if (pref)
+    {
+      phi_ref = *pref;
+      same_ref = *pref;
+    }
+
+  /* Set if any argument is a function array (or VLA) parameter not
+     declared [static].  */
+  bool parmarray = false;
+  /* The size of the smallest object referenced by the PHI arguments.  */
+  offset_int minsize = 0;
+  const offset_int maxobjsize = wi::to_offset (max_object_size ());
+  /* The offset of the PHI, not reflecting those of its arguments.  */
+  const offset_int orng[2] = { phi_ref.offrng[0], phi_ref.offrng[1] };
+
+  const unsigned nargs = gimple_phi_num_args (phi_stmt);
+  for (unsigned i = 0; i < nargs; ++i)
+    {
+      access_ref phi_arg_ref;
+      tree arg = gimple_phi_arg_def (phi_stmt, i);
+      if (!compute_objsize_r (arg, ostype, &phi_arg_ref, *psnlim, qry)
+	  || phi_arg_ref.sizrng[0] < 0)
+	/* A PHI with all null pointer arguments.  */
+	return NULL_TREE;
+
+      /* Add PREF's offset to that of the argument.  */
+      phi_arg_ref.add_offset (orng[0], orng[1]);
+      if (TREE_CODE (arg) == SSA_NAME)
+	qry->put_ref (arg, phi_arg_ref);
+
+      if (all_refs)
+	all_refs->safe_push (phi_arg_ref);
+
+      const bool arg_known_size = (phi_arg_ref.sizrng[0] != 0
+				   || phi_arg_ref.sizrng[1] != maxobjsize);
+
+      parmarray |= phi_arg_ref.parmarray;
+
+      const bool nullp = integer_zerop (arg) && (i || i + 1 < nargs);
+
+      if (phi_ref.sizrng[0] < 0)
+	{
+	  if (!nullp)
+	    same_ref = phi_arg_ref;
+	  phi_ref = phi_arg_ref;
+	  if (arg_known_size)
+	    minsize = phi_arg_ref.sizrng[0];
+	  continue;
+	}
+
+      const bool phi_known_size = (phi_ref.sizrng[0] != 0
+				   || phi_ref.sizrng[1] != maxobjsize);
+
+      if (phi_known_size && phi_arg_ref.sizrng[0] < minsize)
+	minsize = phi_arg_ref.sizrng[0];
+
+      /* Disregard null pointers in PHIs with two or more arguments.
+	 TODO: Handle this better!  */
+      if (nullp)
+	continue;
+
+      /* Determine the amount of remaining space in the argument.  */
+      offset_int argrem[2];
+      argrem[1] = phi_arg_ref.size_remaining (argrem);
+
+      /* Determine the amount of remaining space computed so far and
+	 if the remaining space in the argument is more use it instead.  */
+      offset_int phirem[2];
+      phirem[1] = phi_ref.size_remaining (phirem);
+
+      if (phi_arg_ref.ref != same_ref.ref)
+	same_ref.ref = NULL_TREE;
+
+      if (phirem[1] < argrem[1]
+	  || (phirem[1] == argrem[1]
+	      && phi_ref.sizrng[1] < phi_arg_ref.sizrng[1]))
+	/* Use the argument with the most space remaining as the result,
+	   or the larger one if the space is equal.  */
+	phi_ref = phi_arg_ref;
+
+      /* Set SAME_REF.OFFRNG to the maximum range of all arguments.  */
+      if (phi_arg_ref.offrng[0] < same_ref.offrng[0])
+	same_ref.offrng[0] = phi_arg_ref.offrng[0];
+      if (same_ref.offrng[1] < phi_arg_ref.offrng[1])
+	same_ref.offrng[1] = phi_arg_ref.offrng[1];
+    }
+
+  if (!same_ref.ref && same_ref.offrng[0] != 0)
+    /* Clear BASE0 if not all the arguments refer to the same object and
+       if not all their offsets are zero-based.  This allows the final
+       PHI offset to be out of bounds for some arguments but not for
+       others (or negative even if all the arguments are BASE0), which
+       is overly permissive.  */
+    phi_ref.base0 = false;
+
+  if (same_ref.ref)
+    phi_ref = same_ref;
+  else
+    {
+      /* Replace the lower bound of the largest argument with the size
+	 of the smallest argument, and set PARMARRAY if any argument
+	 was one.  */
+      phi_ref.sizrng[0] = minsize;
+      phi_ref.parmarray = parmarray;
+    }
+
+  if (phi_ref.sizrng[0] < 0)
+    {
+      /* Fail if none of the PHI's arguments resulted in updating PHI_REF
+	 (perhaps because they have all been already visited by prior
+	 recursive calls).  */
+      psnlim->leave_phi (ref);
+      return NULL_TREE;
+    }
+
+  /* Avoid changing *THIS.  */
+  if (pref && pref != this)
+    *pref = phi_ref;
+
+  psnlim->leave_phi (ref);
+
+  return phi_ref.ref;
+}
+
+/* Return the maximum amount of space remaining and if non-null, set
+   argument to the minimum.  */
+
+offset_int
+access_ref::size_remaining (offset_int *pmin /* = NULL */) const
+{
+  offset_int minbuf;
+  if (!pmin)
+    pmin = &minbuf;
+
+  /* add_offset() ensures the offset range isn't inverted.  */
+  gcc_checking_assert (offrng[0] <= offrng[1]);
+
+  if (base0)
+    {
+      /* The offset into referenced object is zero-based (i.e., it's
+	 not referenced by a pointer into the middle of some unknown object).  */
+      if (offrng[0] < 0 && offrng[1] < 0)
+	{
+	  /* If the offset is negative the remaining size is zero.  */
+	  *pmin = 0;
+	  return 0;
+	}
+
+      if (sizrng[1] <= offrng[0])
+	{
+	  /* If the starting offset is greater than or equal to the upper
+	     bound on the size of the object, the space remaining is zero.
+	     As a special case, if it's equal, set *PMIN to -1 to let
+	     the caller know the offset is valid and just past the end.  */
+	  *pmin = sizrng[1] == offrng[0] ? -1 : 0;
+	  return 0;
+	}
+
+      /* Otherwise return the size minus the lower bound of the offset.  */
+      offset_int or0 = offrng[0] < 0 ? 0 : offrng[0];
+
+      *pmin = sizrng[0] - or0;
+      return sizrng[1] - or0;
+    }
+
+  /* The offset to the referenced object isn't zero-based (i.e., it may
+     refer to a byte other than the first).  The size of such an object
+     is constrained only by the size of the address space (the result
+     of max_object_size()).  */
+  if (sizrng[1] <= offrng[0])
+    {
+      *pmin = 0;
+      return 0;
+    }
+
+  offset_int or0 = offrng[0] < 0 ? 0 : offrng[0];
+
+  *pmin = sizrng[0] - or0;
+  return sizrng[1] - or0;
+}
+
+/* Return true if the offset and object size are in range for SIZE.  */
+
+bool
+access_ref::offset_in_range (const offset_int &size) const
+{
+  if (size_remaining () < size)
+    return false;
+
+  if (base0)
+    return offmax[0] >= 0 && offmax[1] <= sizrng[1];
+
+  offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+  return offmax[0] > -maxoff && offmax[1] < maxoff;
+}
+
+/* Add the range [MIN, MAX] to the offset range.  For known objects (with
+   zero-based offsets) at least one of whose offset's bounds is in range,
+   constrain the other (or both) to the bounds of the object (i.e., zero
+   and the upper bound of its size).  This improves the quality of
+   diagnostics.  */
+
+void access_ref::add_offset (const offset_int &min, const offset_int &max)
+{
+  if (min <= max)
+    {
+      /* To add an ordinary range just add it to the bounds.  */
+      offrng[0] += min;
+      offrng[1] += max;
+    }
+  else if (!base0)
+    {
+      /* To add an inverted range to an offset to an unknown object
+	 expand it to the maximum.  */
+      add_max_offset ();
+      return;
+    }
+  else
+    {
+      /* To add an inverted range to an offset to a known object set
+	 the upper bound to the maximum representable offset value
+	 (which may be greater than MAX_OBJECT_SIZE).
+	 The lower bound is either the sum of the current offset and
+	 MIN when abs(MAX) is greater than the former, or zero otherwise.
+	 Zero because then the inverted range includes the negative of
+	 the lower bound.  */
+      offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+      offrng[1] = maxoff;
+
+      if (max >= 0)
+	{
+	  offrng[0] = 0;
+	  if (offmax[0] > 0)
+	    offmax[0] = 0;
+	  return;
+	}
+
+      offset_int absmax = wi::abs (max);
+      if (offrng[0] < absmax)
+	{
+	  offrng[0] += min;
+	  /* Cap the lower bound at the upper (set to MAXOFF above)
+	     to avoid inadvertently recreating an inverted range.  */
+	  if (offrng[1] < offrng[0])
+	    offrng[0] = offrng[1];
+	}
+      else
+	offrng[0] = 0;
+    }
+
+  /* Set the minimum and maximum computed so far.  */
+  if (offrng[1] < 0 && offrng[1] < offmax[0])
+    offmax[0] = offrng[1];
+  if (offrng[0] > 0 && offrng[0] > offmax[1])
+    offmax[1] = offrng[0];
+
+  if (!base0)
+    return;
+
+  /* When referencing a known object check to see if the offset computed
+     so far is in bounds... */
+  offset_int remrng[2];
+  remrng[1] = size_remaining (remrng);
+  if (remrng[1] > 0 || remrng[0] < 0)
+    {
+      /* ...if so, constrain it so that neither bound exceeds the size of
+	 the object.  Out of bounds offsets are left unchanged, and, for
+	 better or worse, become in bounds later.  They should be detected
+	 and diagnosed at the point they first become invalid by
+	 -Warray-bounds.  */
+      if (offrng[0] < 0)
+	offrng[0] = 0;
+      if (offrng[1] > sizrng[1])
+	offrng[1] = sizrng[1];
+    }
+}
+
+/* Issue one inform message describing each target of an access REF.
+   MODE indicates whether the access is a read, a write, or both.  */
+
+void
+access_ref::inform_access (access_mode mode) const
+{
+  const access_ref &aref = *this;
+  if (!aref.ref)
+    return;
+
+  if (aref.phi ())
+    {
+      /* Set MAXREF to refer to the largest object and fill ALL_REFS
+	 with data for all objects referenced by the PHI arguments.  */
+      access_ref maxref;
+      auto_vec<access_ref> all_refs;
+      if (!get_ref (&all_refs, &maxref))
+	return;
+
+      /* Except for MAXREF, the rest of the arguments' offsets need not
+	 reflect one added to the PHI itself.  Determine the latter from
+	 MAXREF on which the result is based.  */
+      const offset_int orng[] =
+	{
+	  offrng[0] - maxref.offrng[0],
+	  wi::smax (offrng[1] - maxref.offrng[1], offrng[0]),
+	};
+
+      /* Add the final PHI's offset to that of each of the arguments
+	 and recurse to issue an inform message for it.  */
+      for (unsigned i = 0; i != all_refs.length (); ++i)
+	{
+	  /* Skip any PHIs; those could lead to infinite recursion.  */
+	  if (all_refs[i].phi ())
+	    continue;
+
+	  all_refs[i].add_offset (orng[0], orng[1]);
+	  all_refs[i].inform_access (mode);
+	}
+      return;
+    }
+
+  /* Convert offset range and avoid including a zero range since it
+     isn't necessarily meaningful.  */
+  HOST_WIDE_INT diff_min = tree_to_shwi (TYPE_MIN_VALUE (ptrdiff_type_node));
+  HOST_WIDE_INT diff_max = tree_to_shwi (TYPE_MAX_VALUE (ptrdiff_type_node));
+  HOST_WIDE_INT minoff;
+  HOST_WIDE_INT maxoff = diff_max;
+  if (wi::fits_shwi_p (aref.offrng[0]))
+    minoff = aref.offrng[0].to_shwi ();
+  else
+    minoff = aref.offrng[0] < 0 ? diff_min : diff_max;
+
+  if (wi::fits_shwi_p (aref.offrng[1]))
+    maxoff = aref.offrng[1].to_shwi ();
+
+  if (maxoff <= diff_min || maxoff >= diff_max)
+    /* Avoid mentioning an upper bound that's equal to or in excess
+       of the maximum of ptrdiff_t.  */
+    maxoff = minoff;
+
+  /* Convert size range and always include it since all sizes are
+     meaningful. */
+  unsigned long long minsize = 0, maxsize = 0;
+  if (wi::fits_shwi_p (aref.sizrng[0])
+      && wi::fits_shwi_p (aref.sizrng[1]))
+    {
+      minsize = aref.sizrng[0].to_shwi ();
+      maxsize = aref.sizrng[1].to_shwi ();
+    }
+
+  /* SIZRNG doesn't necessarily have the same range as the allocation
+     size determined by gimple_call_alloc_size ().  */
+  char sizestr[80];
+  if (minsize == maxsize)
+    sprintf (sizestr, "%llu", minsize);
+  else
+    sprintf (sizestr, "[%llu, %llu]", minsize, maxsize);
+
+  char offstr[80];
+  if (minoff == 0
+      && (maxoff == 0 || aref.sizrng[1] <= maxoff))
+    offstr[0] = '\0';
+  else if (minoff == maxoff)
+    sprintf (offstr, "%lli", (long long) minoff);
+  else
+    sprintf (offstr, "[%lli, %lli]", (long long) minoff, (long long) maxoff);
+
+  location_t loc = UNKNOWN_LOCATION;
+
+  tree ref = this->ref;
+  tree allocfn = NULL_TREE;
+  if (TREE_CODE (ref) == SSA_NAME)
+    {
+      gimple *stmt = SSA_NAME_DEF_STMT (ref);
+      if (is_gimple_call (stmt))
+	{
+	  loc = gimple_location (stmt);
+	  if (gimple_call_builtin_p (stmt, BUILT_IN_ALLOCA_WITH_ALIGN))
+	    {
+	      /* Strip the SSA_NAME suffix from the variable name and
+		 recreate an identifier with the VLA's original name.  */
+	      ref = gimple_call_lhs (stmt);
+	      if (SSA_NAME_IDENTIFIER (ref))
+		{
+		  ref = SSA_NAME_IDENTIFIER (ref);
+		  const char *id = IDENTIFIER_POINTER (ref);
+		  size_t len = strcspn (id, ".$");
+		  if (!len)
+		    len = strlen (id);
+		  ref = get_identifier_with_length (id, len);
+		}
+	    }
+	  else
+	    {
+	      /* Except for VLAs, retrieve the allocation function.  */
+	      allocfn = gimple_call_fndecl (stmt);
+	      if (!allocfn)
+		allocfn = gimple_call_fn (stmt);
+	      if (TREE_CODE (allocfn) == SSA_NAME)
+		{
+		  /* For an ALLOC_CALL via a function pointer make a small
+		     effort to determine the destination of the pointer.  */
+		  gimple *def = SSA_NAME_DEF_STMT (allocfn);
+		  if (gimple_assign_single_p (def))
+		    {
+		      tree rhs = gimple_assign_rhs1 (def);
+		      if (DECL_P (rhs))
+			allocfn = rhs;
+		      else if (TREE_CODE (rhs) == COMPONENT_REF)
+			allocfn = TREE_OPERAND (rhs, 1);
+		    }
+		}
+	    }
+	}
+      else if (gimple_nop_p (stmt))
+	/* Handle DECL_PARM below.  */
+	ref = SSA_NAME_VAR (ref);
+    }
+
+  if (DECL_P (ref))
+    loc = DECL_SOURCE_LOCATION (ref);
+  else if (EXPR_P (ref) && EXPR_HAS_LOCATION (ref))
+    loc = EXPR_LOCATION (ref);
+  else if (TREE_CODE (ref) != IDENTIFIER_NODE
+	   && TREE_CODE (ref) != SSA_NAME)
+    return;
+
+  if (mode == access_read_write || mode == access_write_only)
+    {
+      if (allocfn == NULL_TREE)
+	{
+	  if (*offstr)
+	    inform (loc, "at offset %s into destination object %qE of size %s",
+		    offstr, ref, sizestr);
+	  else
+	    inform (loc, "destination object %qE of size %s", ref, sizestr);
+	  return;
+	}
+
+      if (*offstr)
+	inform (loc,
+		"at offset %s into destination object of size %s "
+		"allocated by %qE", offstr, sizestr, allocfn);
+      else
+	inform (loc, "destination object of size %s allocated by %qE",
+		sizestr, allocfn);
+      return;
+    }
+
+  if (mode == access_read_only)
+    {
+      if (allocfn == NULL_TREE)
+	{
+	  if (*offstr)
+	    inform (loc, "at offset %s into source object %qE of size %s",
+		    offstr, ref, sizestr);
+	  else
+	    inform (loc, "source object %qE of size %s", ref, sizestr);
+
+	  return;
+	}
+
+      if (*offstr)
+	inform (loc,
+		"at offset %s into source object of size %s allocated by %qE",
+		offstr, sizestr, allocfn);
+      else
+	inform (loc, "source object of size %s allocated by %qE",
+		sizestr, allocfn);
+      return;
+    }
+
+  if (allocfn == NULL_TREE)
+    {
+      if (*offstr)
+	inform (loc, "at offset %s into object %qE of size %s",
+		offstr, ref, sizestr);
+      else
+	inform (loc, "object %qE of size %s", ref, sizestr);
+
+      return;
+    }
+
+  if (*offstr)
+    inform (loc,
+	    "at offset %s into object of size %s allocated by %qE",
+	    offstr, sizestr, allocfn);
+  else
+    inform (loc, "object of size %s allocated by %qE",
+	    sizestr, allocfn);
+}
+
+/* Set a bit for the PHI in VISITED and return true if it wasn't
+   already set.  */
+
+bool
+ssa_name_limit_t::visit_phi (tree ssa_name)
+{
+  if (!visited)
+    visited = BITMAP_ALLOC (NULL);
+
+  /* Return false if SSA_NAME has already been visited.  */
+  return bitmap_set_bit (visited, SSA_NAME_VERSION (ssa_name));
+}
+
+/* Clear a bit for the PHI in VISITED.  */
+
+void
+ssa_name_limit_t::leave_phi (tree ssa_name)
+{
+  /* Clear the bit for SSA_NAME in VISITED.  */
+  bitmap_clear_bit (visited, SSA_NAME_VERSION (ssa_name));
+}
+
+/* Return false if the SSA_NAME chain length counter has reached
+   the limit, otherwise increment the counter and return true.  */
+
+bool
+ssa_name_limit_t::next ()
+{
+  /* Return false to let the caller avoid recursing beyond
+     the specified limit.  */
+  if (ssa_def_max == 0)
+    return false;
+
+  --ssa_def_max;
+  return true;
+}
+
+/* If the SSA_NAME has already been "seen" return a positive value.
+   Otherwise add it to VISITED.  If the SSA_NAME limit has been
+   reached, return a negative value.  Otherwise return zero.  */
+
+int
+ssa_name_limit_t::next_phi (tree ssa_name)
+{
+  {
+    gimple *def_stmt = SSA_NAME_DEF_STMT (ssa_name);
+    /* Return a positive value if the PHI has already been visited.  */
+    if (gimple_code (def_stmt) == GIMPLE_PHI
+	&& !visit_phi (ssa_name))
+      return 1;
+  }
+
+  /* Return a negative value to let caller avoid recursing beyond
+     the specified limit.  */
+  if (ssa_def_max == 0)
+    return -1;
+
+  --ssa_def_max;
+
+  return 0;
+}
+
+ssa_name_limit_t::~ssa_name_limit_t ()
+{
+  if (visited)
+    BITMAP_FREE (visited);
+}
+
+/* Default ctor.  Initialize object with pointers to the range_query
+   and cache_type instances to use or null.  */
+
+pointer_query::pointer_query (range_query *qry /* = NULL */,
+			      cache_type *cache /* = NULL */)
+: rvals (qry), var_cache (cache), hits (), misses (),
+  failures (), depth (), max_depth ()
+{
+  /* No op.  */
+}
+
+/* Return a pointer to the cached access_ref instance for the SSA_NAME
+   PTR if it's there or null otherwise.  */
+
+const access_ref *
+pointer_query::get_ref (tree ptr, int ostype /* = 1 */) const
+{
+  if (!var_cache)
+    {
+      ++misses;
+      return NULL;
+    }
+
+  unsigned version = SSA_NAME_VERSION (ptr);
+  unsigned idx = version << 1 | (ostype & 1);
+  if (var_cache->indices.length () <= idx)
+    {
+      ++misses;
+      return NULL;
+    }
+
+  unsigned cache_idx = var_cache->indices[idx];
+  if (!cache_idx || var_cache->access_refs.length () <= cache_idx - 1)
+    {
+      ++misses;
+      return NULL;
+    }
+
+  access_ref &cache_ref = var_cache->access_refs[cache_idx - 1];
+  if (cache_ref.ref)
+    {
+      ++hits;
+      return &cache_ref;
+    }
+
+  ++misses;
+  return NULL;
+}
+
+/* Retrieve the access_ref instance for a variable from the cache if it's
+   there or compute it and insert it into the cache if it's nonnull.  */
+
+bool
+pointer_query::get_ref (tree ptr, access_ref *pref, int ostype /* = 1 */)
+{
+  const unsigned version
+    = TREE_CODE (ptr) == SSA_NAME ? SSA_NAME_VERSION (ptr) : 0;
+
+  if (var_cache && version)
+    {
+      unsigned idx = version << 1 | (ostype & 1);
+      if (idx < var_cache->indices.length ())
+	{
+	  unsigned cache_idx = var_cache->indices[idx] - 1;
+	  if (cache_idx < var_cache->access_refs.length ()
+	      && var_cache->access_refs[cache_idx].ref)
+	    {
+	      ++hits;
+	      *pref = var_cache->access_refs[cache_idx];
+	      return true;
+	    }
+	}
+
+      ++misses;
+    }
+
+  if (!compute_objsize (ptr, ostype, pref, this))
+    {
+      ++failures;
+      return false;
+    }
+
+  return true;
+}
+
+/* Add a copy of the access_ref REF for the SSA_NAME to the cache if it's
+   nonnull.  */
+
+void
+pointer_query::put_ref (tree ptr, const access_ref &ref, int ostype /* = 1 */)
+{
+  /* Only add populated/valid entries.  */
+  if (!var_cache || !ref.ref || ref.sizrng[0] < 0)
+    return;
+
+  /* Add REF to the two-level cache.  */
+  unsigned version = SSA_NAME_VERSION (ptr);
+  unsigned idx = version << 1 | (ostype & 1);
+
+  /* Grow INDICES if necessary.  An index is valid if it's nonzero.
+     Its value minus one is the index into ACCESS_REFS.  Not all
+     entries are valid.  */
+  if (var_cache->indices.length () <= idx)
+    var_cache->indices.safe_grow_cleared (idx + 1);
+
+  if (!var_cache->indices[idx])
+    var_cache->indices[idx] = var_cache->access_refs.length () + 1;
+
+  /* Grow ACCESS_REF cache if necessary.  An entry is valid if its
+     REF member is nonnull.  All entries except for the last two
+     are valid.  Once nonnull, the REF value must stay unchanged.  */
+  unsigned cache_idx = var_cache->indices[idx];
+  if (var_cache->access_refs.length () <= cache_idx)
+    var_cache->access_refs.safe_grow_cleared (cache_idx + 1);
+
+  access_ref &cache_ref = var_cache->access_refs[cache_idx - 1];
+  if (cache_ref.ref)
+    {
+      gcc_checking_assert (cache_ref.ref == ref.ref);
+      return;
+    }
+
+  cache_ref = ref;
+}
+
+/* Flush the cache if it's nonnull.  */
+
+void
+pointer_query::flush_cache ()
+{
+  if (!var_cache)
+    return;
+  var_cache->indices.release ();
+  var_cache->access_refs.release ();
+}
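The cache above is a two-level scheme: INDICES maps `SSA version << 1 | ostype` to a 1-based slot in ACCESS_REFS, so zero-initialized index entries are recognizably invalid.  A minimal standalone sketch of that indexing convention (types and names are mine, and an int stands in for access_ref):

```cpp
#include <vector>

// Minimal model of the two-level cache: indices_ holds 1-based slots
// into entries_ so that value-initialized (zero) entries are invalid.
struct tiny_cache
{
  std::vector<unsigned> indices_;
  std::vector<int> entries_;

  void put (unsigned version, int ostype, int val)
  {
    unsigned idx = version << 1 | (ostype & 1);
    if (indices_.size () <= idx)
      indices_.resize (idx + 1);          // new index slots are zero, i.e. invalid
    if (!indices_[idx])
      {
	entries_.push_back (val);
	indices_[idx] = entries_.size (); // store the 1-based slot
      }
  }

  // Return the cached value or -1 on a miss.
  int get (unsigned version, int ostype) const
  {
    unsigned idx = version << 1 | (ostype & 1);
    if (indices_.size () <= idx || !indices_[idx])
      return -1;
    return entries_[indices_[idx] - 1];   // convert back to 0-based
  }
};
```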
+
+/* A helper of compute_objsize_r() to determine the size from an assignment
+   statement STMT with the RHS of either MIN_EXPR or MAX_EXPR.  */
+
+static bool
+handle_min_max_size (gimple *stmt, int ostype, access_ref *pref,
+		     ssa_name_limit_t &snlim, pointer_query *qry)
+{
+  tree_code code = gimple_assign_rhs_code (stmt);
+
+  tree ptr = gimple_assign_rhs1 (stmt);
+
+  /* In a valid MAX_/MIN_EXPR both operands must refer to the same array.
+     Determine the size/offset of each and use the one with more or less
+     space remaining, respectively.  If either fails, use the information
+     determined from the other instead, adjusted up or down as appropriate
+     for the expression.  */
+  access_ref aref[2] = { *pref, *pref };
+  if (!compute_objsize_r (ptr, ostype, &aref[0], snlim, qry))
+    {
+      aref[0].base0 = false;
+      aref[0].offrng[0] = aref[0].offrng[1] = 0;
+      aref[0].add_max_offset ();
+      aref[0].set_max_size_range ();
+    }
+
+  ptr = gimple_assign_rhs2 (stmt);
+  if (!compute_objsize_r (ptr, ostype, &aref[1], snlim, qry))
+    {
+      aref[1].base0 = false;
+      aref[1].offrng[0] = aref[1].offrng[1] = 0;
+      aref[1].add_max_offset ();
+      aref[1].set_max_size_range ();
+    }
+
+  if (!aref[0].ref && !aref[1].ref)
+    /* Fail if the identity of neither argument could be determined.  */
+    return false;
+
+  bool i0 = false;
+  if (aref[0].ref && aref[0].base0)
+    {
+      if (aref[1].ref && aref[1].base0)
+	{
+	  /* If the object referenced by both arguments has been determined
+	     set *PREF to the one with more or less space remaining,
+	     whichever is appropriate for CODE.
+	     TODO: Indicate when the objects are distinct so it can be
+	     diagnosed.  */
+	  i0 = code == MAX_EXPR;
+	  const bool i1 = !i0;
+
+	  if (aref[i0].size_remaining () < aref[i1].size_remaining ())
+	    *pref = aref[i1];
+	  else
+	    *pref = aref[i0];
+	  return true;
+	}
+
+      /* If only the object referenced by one of the arguments could be
+	 determined, use it and...  */
+      *pref = aref[0];
+      i0 = true;
+    }
+  else
+    *pref = aref[1];
+
+  const bool i1 = !i0;
+  /* ...see if the offset obtained from the other pointer can be used
+     to tighten up the bound on the offset obtained from the first.  */
+  if ((code == MAX_EXPR && aref[i1].offrng[1] < aref[i0].offrng[0])
+      || (code == MIN_EXPR && aref[i0].offrng[0] < aref[i1].offrng[1]))
+    {
+      pref->offrng[0] = aref[i0].offrng[0];
+      pref->offrng[1] = aref[i0].offrng[1];
+    }
+  return true;
+}
+
+/* A helper of compute_objsize_r() to determine the size from ARRAY_REF
+   AREF.  ADDR is true if AREF is the operand of an ADDR_EXPR.  Return
+   true on success and false on failure.  */
+
+static bool
+handle_array_ref (tree aref, bool addr, int ostype, access_ref *pref,
+		  ssa_name_limit_t &snlim, pointer_query *qry)
+{
+  gcc_assert (TREE_CODE (aref) == ARRAY_REF);
+
+  ++pref->deref;
+
+  tree arefop = TREE_OPERAND (aref, 0);
+  tree reftype = TREE_TYPE (arefop);
+  if (!addr && TREE_CODE (TREE_TYPE (reftype)) == POINTER_TYPE)
+    /* Avoid arrays of pointers.  FIXME: Handle pointers to arrays
+       of known bound.  */
+    return false;
+
+  if (!compute_objsize_r (arefop, ostype, pref, snlim, qry))
+    return false;
+
+  offset_int orng[2];
+  tree off = pref->eval (TREE_OPERAND (aref, 1));
+  range_query *const rvals = qry ? qry->rvals : NULL;
+  if (!get_offset_range (off, NULL, orng, rvals))
+    {
+      /* Set ORNG to the maximum offset representable in ptrdiff_t.  */
+      orng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+      orng[0] = -orng[1] - 1;
+    }
+
+  /* Convert the array index range determined above to a byte
+     offset.  */
+  tree lowbnd = array_ref_low_bound (aref);
+  if (!integer_zerop (lowbnd) && tree_fits_uhwi_p (lowbnd))
+    {
+      /* Adjust the index by the low bound of the array domain
+	 (normally zero but 1 in Fortran).  */
+      unsigned HOST_WIDE_INT lb = tree_to_uhwi (lowbnd);
+      orng[0] -= lb;
+      orng[1] -= lb;
+    }
+
+  tree eltype = TREE_TYPE (aref);
+  tree tpsize = TYPE_SIZE_UNIT (eltype);
+  if (!tpsize || TREE_CODE (tpsize) != INTEGER_CST)
+    {
+      pref->add_max_offset ();
+      return true;
+    }
+
+  offset_int sz = wi::to_offset (tpsize);
+  orng[0] *= sz;
+  orng[1] *= sz;
+
+  if (ostype && TREE_CODE (eltype) == ARRAY_TYPE)
+    {
+      /* Except for the permissive raw memory functions which use
+	 the size of the whole object determined above, use the size
+	 of the referenced array.  Because the overall offset is from
+	 the beginning of the complete array object add this overall
+	 offset to the size of array.  */
+      offset_int sizrng[2] =
+	{
+	 pref->offrng[0] + orng[0] + sz,
+	 pref->offrng[1] + orng[1] + sz
+	};
+      if (sizrng[1] < sizrng[0])
+	std::swap (sizrng[0], sizrng[1]);
+      if (sizrng[0] >= 0 && sizrng[0] <= pref->sizrng[0])
+	pref->sizrng[0] = sizrng[0];
+      if (sizrng[1] >= 0 && sizrng[1] <= pref->sizrng[1])
+	pref->sizrng[1] = sizrng[1];
+    }
+
+  pref->add_offset (orng[0], orng[1]);
+  return true;
+}
+
+/* A helper of compute_objsize_r() to determine the size from MEM_REF
+   MREF.  Return true on success and false on failure.  */
+
+static bool
+handle_mem_ref (tree mref, int ostype, access_ref *pref,
+		ssa_name_limit_t &snlim, pointer_query *qry)
+{
+  gcc_assert (TREE_CODE (mref) == MEM_REF);
+
+  ++pref->deref;
+
+  if (VECTOR_TYPE_P (TREE_TYPE (mref)))
+    {
+      /* Hack: Handle MEM_REFs of vector types as those to complete
+	 objects; those may be synthesized from multiple assignments
+	 to consecutive data members (see PR 93200 and 96963).
+	 FIXME: Vectorized assignments should only be present after
+	 vectorization so this hack is only necessary after it has
+	 run and could be avoided in calls from prior passes (e.g.,
+	 tree-ssa-strlen.c).
+	 FIXME: Deal with this more generally, e.g., by marking up
+	 such MEM_REFs at the time they're created.  */
+      ostype = 0;
+    }
+
+  tree mrefop = TREE_OPERAND (mref, 0);
+  if (!compute_objsize_r (mrefop, ostype, pref, snlim, qry))
+    return false;
+
+  offset_int orng[2];
+  tree off = pref->eval (TREE_OPERAND (mref, 1));
+  range_query *const rvals = qry ? qry->rvals : NULL;
+  if (!get_offset_range (off, NULL, orng, rvals))
+    {
+      /* Set ORNG to the maximum offset representable in ptrdiff_t.  */
+      orng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+      orng[0] = -orng[1] - 1;
+    }
+
+  pref->add_offset (orng[0], orng[1]);
+  return true;
+}
+
+/* Helper to compute the size of the object referenced by the PTR
+   expression which must have pointer type, using Object Size type
+   OSTYPE (only the least significant 2 bits are used).
+   On success, sets PREF->REF to the DECL of the referenced object
+   if it's unique, otherwise to null, PREF->OFFRNG to the range of
+   offsets into it, and PREF->SIZRNG to the range of sizes of
+   the object(s).
+   SNLIM is used to avoid visiting the same PHI operand multiple
+   times, and, when nonnull, RVALS to determine range information.
+   Returns true on success, false when a meaningful size (or range)
+   cannot be determined.
+
+   The function is intended for diagnostics and should not be used
+   to influence code generation or optimization.  */
+
+static bool
+compute_objsize_r (tree ptr, int ostype, access_ref *pref,
+		   ssa_name_limit_t &snlim, pointer_query *qry)
+{
+  STRIP_NOPS (ptr);
+
+  const bool addr = TREE_CODE (ptr) == ADDR_EXPR;
+  if (addr)
+    {
+      --pref->deref;
+      ptr = TREE_OPERAND (ptr, 0);
+    }
+
+  if (DECL_P (ptr))
+    {
+      pref->ref = ptr;
+
+      if (!addr && POINTER_TYPE_P (TREE_TYPE (ptr)))
+	{
+	  /* Set the maximum size if the reference is to the pointer
+	     itself (as opposed to what it points to), and clear
+	     BASE0 since the offset isn't necessarily zero-based.  */
+	  pref->set_max_size_range ();
+	  pref->base0 = false;
+	  return true;
+	}
+
+      if (tree size = decl_init_size (ptr, false))
+	if (TREE_CODE (size) == INTEGER_CST)
+	  {
+	    pref->sizrng[0] = pref->sizrng[1] = wi::to_offset (size);
+	    return true;
+	  }
+
+      pref->set_max_size_range ();
+      return true;
+    }
+
+  const tree_code code = TREE_CODE (ptr);
+  range_query *const rvals = qry ? qry->rvals : NULL;
+
+  if (code == BIT_FIELD_REF)
+    {
+      tree ref = TREE_OPERAND (ptr, 0);
+      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
+	return false;
+
+      offset_int off = wi::to_offset (pref->eval (TREE_OPERAND (ptr, 2)));
+      pref->add_offset (off / BITS_PER_UNIT);
+      return true;
+    }
+
+  if (code == COMPONENT_REF)
+    {
+      tree ref = TREE_OPERAND (ptr, 0);
+      if (TREE_CODE (TREE_TYPE (ref)) == UNION_TYPE)
+	/* In accesses through union types consider the entire unions
+	   rather than just their members.  */
+	ostype = 0;
+      tree field = TREE_OPERAND (ptr, 1);
+
+      if (ostype == 0)
+	{
+	  /* In OSTYPE zero (for raw memory functions like memcpy), use
+	     the maximum size instead if the identity of the enclosing
+	     object cannot be determined.  */
+	  if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
+	    return false;
+
+	  /* Otherwise, use the size of the enclosing object and add
+	     the offset of the member to the offset computed so far.  */
+	  tree offset = byte_position (field);
+	  if (TREE_CODE (offset) == INTEGER_CST)
+	    pref->add_offset (wi::to_offset (offset));
+	  else
+	    pref->add_max_offset ();
+
+	  if (!pref->ref)
+	    /* REF may have been already set to an SSA_NAME earlier
+	       to provide better context for diagnostics.  In that case,
+	       leave it unchanged.  */
+	    pref->ref = ref;
+	  return true;
+	}
+
+      pref->ref = field;
+
+      if (!addr && POINTER_TYPE_P (TREE_TYPE (field)))
+	{
+	  /* Set maximum size if the reference is to the pointer member
+	     itself (as opposed to what it points to).  */
+	  pref->set_max_size_range ();
+	  return true;
+	}
+
+      /* SAM is set for array members that might need special treatment.  */
+      special_array_member sam;
+      tree size = component_ref_size (ptr, &sam);
+      if (sam == special_array_member::int_0)
+	pref->sizrng[0] = pref->sizrng[1] = 0;
+      else if (!pref->trail1special && sam == special_array_member::trail_1)
+	pref->sizrng[0] = pref->sizrng[1] = 1;
+      else if (size && TREE_CODE (size) == INTEGER_CST)
+	pref->sizrng[0] = pref->sizrng[1] = wi::to_offset (size);
+      else
+	{
+	  /* When the size of the member is unknown it's either a flexible
+	     array member or a trailing special array member (either zero
+	     length or one-element).  Set the size to the maximum minus
+	     the constant size of the type.  */
+	  pref->sizrng[0] = 0;
+	  pref->sizrng[1] = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+	  if (tree recsize = TYPE_SIZE_UNIT (TREE_TYPE (ref)))
+	    if (TREE_CODE (recsize) == INTEGER_CST)
+	      pref->sizrng[1] -= wi::to_offset (recsize);
+	}
+      return true;
+    }
+
+  if (code == ARRAY_REF)
+    return handle_array_ref (ptr, addr, ostype, pref, snlim, qry);
+
+  if (code == MEM_REF)
+    return handle_mem_ref (ptr, ostype, pref, snlim, qry);
+
+  if (code == TARGET_MEM_REF)
+    {
+      tree ref = TREE_OPERAND (ptr, 0);
+      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
+	return false;
+
+      /* TODO: Handle remaining operands.  Until then, add maximum offset.  */
+      pref->ref = ptr;
+      pref->add_max_offset ();
+      return true;
+    }
+
+  if (code == INTEGER_CST)
+    {
+      /* Pointer constants other than null are most likely the result
+	 of erroneous null pointer addition/subtraction.  Set size to
+	 zero.  For null pointers, set size to the maximum for now
+	 since those may be the result of jump threading.  */
+      if (integer_zerop (ptr))
+	pref->set_max_size_range ();
+      else
+	pref->sizrng[0] = pref->sizrng[1] = 0;
+      pref->ref = ptr;
+
+      return true;
+    }
+
+  if (code == STRING_CST)
+    {
+      pref->sizrng[0] = pref->sizrng[1] = TREE_STRING_LENGTH (ptr);
+      pref->ref = ptr;
+      return true;
+    }
+
+  if (code == POINTER_PLUS_EXPR)
+    {
+      tree ref = TREE_OPERAND (ptr, 0);
+      if (!compute_objsize_r (ref, ostype, pref, snlim, qry))
+	return false;
+
+      /* Clear DEREF since the offset is being applied to the target
+	 of the dereference.  */
+      pref->deref = 0;
+
+      offset_int orng[2];
+      tree off = pref->eval (TREE_OPERAND (ptr, 1));
+      if (get_offset_range (off, NULL, orng, rvals))
+	pref->add_offset (orng[0], orng[1]);
+      else
+	pref->add_max_offset ();
+      return true;
+    }
+
+  if (code == VIEW_CONVERT_EXPR)
+    {
+      ptr = TREE_OPERAND (ptr, 0);
+      return compute_objsize_r (ptr, ostype, pref, snlim, qry);
+    }
+
+  if (code == SSA_NAME)
+    {
+      if (!snlim.next ())
+	return false;
+
+      /* Only process an SSA_NAME if the recursion limit has not yet
+	 been reached.  */
+      if (qry)
+	{
+	  if (++qry->depth > qry->max_depth)
+	    qry->max_depth = qry->depth;
+	  if (const access_ref *cache_ref = qry->get_ref (ptr))
+	    {
+	      /* If the pointer is in the cache set *PREF to what it refers
+		 to and return success.  */
+	      *pref = *cache_ref;
+	      return true;
+	    }
+	}
+
+      gimple *stmt = SSA_NAME_DEF_STMT (ptr);
+      if (is_gimple_call (stmt))
+	{
+	  /* If STMT is a call to an allocation function get the size
+	     from its argument(s).  If successful, also set *PREF->REF
+	     to PTR for the caller to include in diagnostics.  */
+	  wide_int wr[2];
+	  if (gimple_call_alloc_size (stmt, wr, rvals))
+	    {
+	      pref->ref = ptr;
+	      pref->sizrng[0] = offset_int::from (wr[0], UNSIGNED);
+	      pref->sizrng[1] = offset_int::from (wr[1], UNSIGNED);
+	      /* Constrain both bounds to a valid size.  */
+	      offset_int maxsize = wi::to_offset (max_object_size ());
+	      if (pref->sizrng[0] > maxsize)
+		pref->sizrng[0] = maxsize;
+	      if (pref->sizrng[1] > maxsize)
+		pref->sizrng[1] = maxsize;
+	    }
+	  else
+	    {
+	      /* For functions known to return one of their pointer arguments
+		 try to determine what the returned pointer points to, and on
+		 success add OFFRNG which was set to the offset added by
+		 the function (e.g., memchr) to the overall offset.  */
+	      offset_int offrng[2];
+	      if (tree ret = gimple_call_return_array (stmt, offrng, rvals))
+		{
+		  if (!compute_objsize_r (ret, ostype, pref, snlim, qry))
+		    return false;
+
+		  /* Cap OFFRNG[1] to at most the remaining size of
+		     the object.  */
+		  offset_int remrng[2];
+		  remrng[1] = pref->size_remaining (remrng);
+		  if (remrng[1] < offrng[1])
+		    offrng[1] = remrng[1];
+		  pref->add_offset (offrng[0], offrng[1]);
+		}
+	      else
+		{
+		  /* For other calls that might return arbitrary pointers
+		     including into the middle of objects set the size
+		     range to maximum, clear PREF->BASE0, and also set
+		     PREF->REF to include in diagnostics.  */
+		  pref->set_max_size_range ();
+		  pref->base0 = false;
+		  pref->ref = ptr;
+		}
+	    }
+	  qry->put_ref (ptr, *pref);
+	  return true;
+	}
+
+      if (gimple_nop_p (stmt))
+	{
+	  /* For a function argument try to determine the byte size
+	     of the array from the current function declaration
+	     (e.g., attribute access or related).  */
+	  wide_int wr[2];
+	  bool static_array = false;
+	  if (tree ref = gimple_parm_array_size (ptr, wr, &static_array))
+	    {
+	      pref->parmarray = !static_array;
+	      pref->sizrng[0] = offset_int::from (wr[0], UNSIGNED);
+	      pref->sizrng[1] = offset_int::from (wr[1], UNSIGNED);
+	      pref->ref = ref;
+	      qry->put_ref (ptr, *pref);
+	      return true;
+	    }
+
+	  pref->set_max_size_range ();
+	  pref->base0 = false;
+	  pref->ref = ptr;
+	  qry->put_ref (ptr, *pref);
+	  return true;
+	}
+
+      if (gimple_code (stmt) == GIMPLE_PHI)
+	{
+	  pref->ref = ptr;
+	  access_ref phi_ref = *pref;
+	  if (!pref->get_ref (NULL, &phi_ref, ostype, &snlim, qry))
+	    return false;
+	  *pref = phi_ref;
+	  pref->ref = ptr;
+	  qry->put_ref (ptr, *pref);
+	  return true;
+	}
+
+      if (!is_gimple_assign (stmt))
+	{
+	  /* Clear BASE0 since the assigned pointer might point into
+	     the middle of the object, set the maximum size range and,
+	     if the SSA_NAME refers to a function argument, set
+	     PREF->REF to it.  */
+	  pref->base0 = false;
+	  pref->set_max_size_range ();
+	  pref->ref = ptr;
+	  return true;
+	}
+
+      tree_code code = gimple_assign_rhs_code (stmt);
+
+      if (code == MAX_EXPR || code == MIN_EXPR)
+	{
+	  if (!handle_min_max_size (stmt, ostype, pref, snlim, qry))
+	    return false;
+	  qry->put_ref (ptr, *pref);
+	  return true;
+	}
+
+      tree rhs = gimple_assign_rhs1 (stmt);
+
+      if (code == ASSERT_EXPR)
+	{
+	  rhs = TREE_OPERAND (rhs, 0);
+	  return compute_objsize_r (rhs, ostype, pref, snlim, qry);
+	}
+
+      if (code == POINTER_PLUS_EXPR
+	  && TREE_CODE (TREE_TYPE (rhs)) == POINTER_TYPE)
+	{
+	  /* Compute the size of the object first. */
+	  if (!compute_objsize_r (rhs, ostype, pref, snlim, qry))
+	    return false;
+
+	  offset_int orng[2];
+	  tree off = gimple_assign_rhs2 (stmt);
+	  if (get_offset_range (off, stmt, orng, rvals))
+	    pref->add_offset (orng[0], orng[1]);
+	  else
+	    pref->add_max_offset ();
+	  qry->put_ref (ptr, *pref);
+	  return true;
+	}
+
+      if (code == ADDR_EXPR
+	  || code == SSA_NAME)
+	return compute_objsize_r (rhs, ostype, pref, snlim, qry);
+
+      /* (This could also be an assignment from a nonlocal pointer.)  Save
+	 PTR to mention in diagnostics but otherwise treat it as a pointer
+	 to an unknown object.  */
+      pref->ref = rhs;
+      pref->base0 = false;
+      pref->set_max_size_range ();
+      return true;
+    }
+
+  /* Assume all other expressions point into an unknown object
+     of the maximum valid size.  */
+  pref->ref = ptr;
+  pref->base0 = false;
+  pref->set_max_size_range ();
+  if (TREE_CODE (ptr) == SSA_NAME)
+    qry->put_ref (ptr, *pref);
+  return true;
+}
+
+/* A "public" wrapper around the above.  Clients should use this overload
+   instead.  */
+
+tree
+compute_objsize (tree ptr, int ostype, access_ref *pref,
+		 range_query *rvals /* = NULL */)
+{
+  pointer_query qry;
+  qry.rvals = rvals;
+  ssa_name_limit_t snlim;
+  if (!compute_objsize_r (ptr, ostype, pref, snlim, &qry))
+    return NULL_TREE;
+
+  offset_int maxsize = pref->size_remaining ();
+  if (pref->base0 && pref->offrng[0] < 0 && pref->offrng[1] >= 0)
+    pref->offrng[0] = 0;
+  return wide_int_to_tree (sizetype, maxsize);
+}
+
+/* Transitional wrapper.  The function should be removed once callers
+   transition to the pointer_query API.  */
+
+tree
+compute_objsize (tree ptr, int ostype, access_ref *pref, pointer_query *ptr_qry)
+{
+  pointer_query qry;
+  if (ptr_qry)
+    ptr_qry->depth = 0;
+  else
+    ptr_qry = &qry;
+
+  ssa_name_limit_t snlim;
+  if (!compute_objsize_r (ptr, ostype, pref, snlim, ptr_qry))
+    return NULL_TREE;
+
+  offset_int maxsize = pref->size_remaining ();
+  if (pref->base0 && pref->offrng[0] < 0 && pref->offrng[1] >= 0)
+    pref->offrng[0] = 0;
+  return wide_int_to_tree (sizetype, maxsize);
+}
+
+/* Legacy wrapper around the above.  The function should be removed
+   once callers transition to one of the two above.  */
+
+tree
+compute_objsize (tree ptr, int ostype, tree *pdecl /* = NULL */,
+		 tree *poff /* = NULL */, range_query *rvals /* = NULL */)
+{
+  /* Set the initial offsets to zero and size to negative to indicate
+     none has been computed yet.  */
+  access_ref ref;
+  tree size = compute_objsize (ptr, ostype, &ref, rvals);
+  if (!size || !ref.base0)
+    return NULL_TREE;
+
+  if (pdecl)
+    *pdecl = ref.ref;
+
+  if (poff)
+    *poff = wide_int_to_tree (ptrdiff_type_node, ref.offrng[ref.offrng[0] < 0]);
+
+  return size;
+}
diff --git a/gcc/pointer-query.h b/gcc/pointer-query.h
new file mode 100644
index 00000000000..6168c809ccc
--- /dev/null
+++ b/gcc/pointer-query.h
@@ -0,0 +1,234 @@ 
+/* Definitions of the pointer_query and related classes.
+
+   Copyright (C) 2020-2021 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it under
+   the terms of the GNU General Public License as published by the Free
+   Software Foundation; either version 3, or (at your option) any later
+   version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or
+   FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+   for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_POINTER_QUERY_H
+#define GCC_POINTER_QUERY_H
+
+/* Describes recursion limits used by functions that follow use-def
+   chains of SSA_NAMEs.  */
+
+class ssa_name_limit_t
+{
+  bitmap visited;         /* Bitmap of visited SSA_NAMEs.  */
+  unsigned ssa_def_max;   /* Longest chain of SSA_NAMEs to follow.  */
+
+  /* Not copyable or assignable.  */
+  DISABLE_COPY_AND_ASSIGN (ssa_name_limit_t);
+
+public:
+
+  ssa_name_limit_t ()
+    : visited (),
+      ssa_def_max (param_ssa_name_def_chain_limit) { }
+
+  /* Set a bit for the PHI in VISITED and return true if it wasn't
+     already set.  */
+  bool visit_phi (tree);
+  /* Clear a bit for the PHI in VISITED.  */
+  void leave_phi (tree);
+  /* Return false if the SSA_NAME chain length counter has reached
+     the limit, otherwise increment the counter and return true.  */
+  bool next ();
+
+  /* If the SSA_NAME has already been "seen" return a positive value.
+     Otherwise add it to VISITED.  If the SSA_NAME limit has been
+     reached, return a negative value.  Otherwise return zero.  */
+  int next_phi (tree);
+
+  ~ssa_name_limit_t ();
+};
+
+class pointer_query;
+
+/* Describes a reference to an object used in an access.  */
+struct access_ref
+{
+  /* Set the bounds of the reference to at most as many bytes
+     as the first argument or unknown when null, and at least
+     one when the second argument is true unless the first one
+     is a constant zero.  */
+  access_ref (tree = NULL_TREE, bool = false);
+
+  /* Return the PHI node REF refers to or null if it doesn't.  */
+  gphi *phi () const;
+
+  /* Return the object to which REF refers.  */
+  tree get_ref (vec<access_ref> *, access_ref * = NULL, int = 1,
+		ssa_name_limit_t * = NULL, pointer_query * = NULL) const;
+
+  /* Return true if OFFRNG is the constant zero.  */
+  bool offset_zero () const
+  {
+    return offrng[0] == 0 && offrng[1] == 0;
+  }
+
+  /* Return true if OFFRNG is bounded to a subrange of offset values
+     valid for the largest possible object.  */
+  bool offset_bounded () const;
+
+  /* Return the maximum amount of space remaining and if non-null, set
+     argument to the minimum.  */
+  offset_int size_remaining (offset_int * = NULL) const;
+
+  /* Return true if the offset and object size are in range for SIZE.  */
+  bool offset_in_range (const offset_int &) const;
+
+  /* Return true if *THIS is an access to a declared object.  */
+  bool ref_declared () const
+  {
+    return DECL_P (ref) && base0 && deref < 1;
+  }
+
+  /* Set the size range to the maximum.  */
+  void set_max_size_range ()
+  {
+    sizrng[0] = 0;
+    sizrng[1] = wi::to_offset (max_object_size ());
+  }
+
+  /* Add OFF to the offset range.  */
+  void add_offset (const offset_int &off)
+  {
+    add_offset (off, off);
+  }
+
+  /* Add the range [MIN, MAX] to the offset range.  */
+  void add_offset (const offset_int &, const offset_int &);
+
+  /* Add the maximum representable offset to the offset range.  */
+  void add_max_offset ()
+  {
+    offset_int maxoff = wi::to_offset (TYPE_MAX_VALUE (ptrdiff_type_node));
+    add_offset (-maxoff - 1, maxoff);
+  }
+
+  /* Issue an informational message describing the target of an access
+     with the given mode.  */
+  void inform_access (access_mode) const;
+
+  /* Reference to the accessed object(s).  */
+  tree ref;
+
+  /* Range of byte offsets into and sizes of the object(s).  */
+  offset_int offrng[2];
+  offset_int sizrng[2];
+  /* The minimum and maximum offset computed.  */
+  offset_int offmax[2];
+  /* Range of the bound of the access: denotes that the access
+     is at least BNDRNG[0] bytes but no more than BNDRNG[1].
+     For string functions the size of the actual access is
+     further constrained by the length of the string.  */
+  offset_int bndrng[2];
+
+  /* Used to fold integer expressions when called from front ends.  */
+  tree (*eval)(tree);
+  /* Positive when REF is dereferenced, negative when its address is
+     taken.  */
+  int deref;
+  /* Set if trailing one-element arrays should be treated as flexible
+     array members.  */
+  bool trail1special;
+  /* Set if valid offsets must start at zero (for declared and allocated
+     objects but not for others referenced by pointers).  */
+  bool base0;
+  /* Set if REF refers to a function array parameter not declared
+     static.  */
+  bool parmarray;
+};
+
+class range_query;
+
+/* Queries and caches compute_objsize results.  */
+class pointer_query
+{
+  DISABLE_COPY_AND_ASSIGN (pointer_query);
+
+public:
+  /* Type of the two-level cache object defined by clients of the class
+     to have pointer SSA_NAMEs cached for speedy access.  */
+  struct cache_type
+  {
+    /* 1-based indices into cache.  */
+    vec<unsigned> indices;
+    /* The cache itself.  */
+    vec<access_ref> access_refs;
+  };
+
+  /* Construct an object with the given Ranger instance and cache.  */
+  explicit pointer_query (range_query * = NULL, cache_type * = NULL);
+
+  /* Retrieve the access_ref for a variable from cache if it's there.  */
+  const access_ref* get_ref (tree, int = 1) const;
+
+  /* Retrieve the access_ref for a variable from cache or compute it.  */
+  bool get_ref (tree, access_ref*, int = 1);
+
+  /* Add an access_ref for the SSA_NAME to the cache.  */
+  void put_ref (tree, const access_ref&, int = 1);
+
+  /* Flush the cache.  */
+  void flush_cache ();
+
+  /* A Ranger instance.  May be null to use global ranges.  */
+  range_query *rvals;
+  /* Cache of SSA_NAMEs.  May be null to disable caching.  */
+  cache_type *var_cache;
+
+  /* Cache performance counters.  */
+  mutable unsigned hits;
+  mutable unsigned misses;
+  mutable unsigned failures;
+  mutable unsigned depth;
+  mutable unsigned max_depth;
+};
+
+/* Describes a pair of references used in an access by built-in
+   functions like memcpy.  */
+struct access_data
+{
+  /* Set the access to at most MAXWRITE and MAXREAD bytes, and
+     at least 1 when MINWRITE or MINREAD, respectively, is set.  */
+  access_data (tree expr, access_mode mode,
+	       tree maxwrite = NULL_TREE, bool minwrite = false,
+	       tree maxread = NULL_TREE, bool minread = false)
+    : call (expr),
+      dst (maxwrite, minwrite), src (maxread, minread), mode (mode) { }
+
+  /* Built-in function call.  */
+  tree call;
+  /* Destination and source of the access.  */
+  access_ref dst, src;
+  /* Read-only for functions like memcmp or strlen, write-only
+     for memset, read-write for memcpy or strcat.  */
+  access_mode mode;
+};
+
+class range_query;
+extern tree gimple_call_alloc_size (gimple *, wide_int[2] = NULL,
+				    range_query * = NULL);
+extern tree gimple_parm_array_size (tree, wide_int[2], bool * = NULL);
+
+extern tree compute_objsize (tree, int, access_ref *, range_query * = NULL);
+/* Legacy/transitional API.  Should not be used in new code.  */
+extern tree compute_objsize (tree, int, access_ref *, pointer_query *);
+extern tree compute_objsize (tree, int, tree * = NULL, tree * = NULL,
+			     range_query * = NULL);
+
+#endif   // GCC_POINTER_QUERY_H
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index aa9757a2fe9..1f5b1370a95 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -428,6 +428,7 @@  extern gimple_opt_pass *make_pass_oacc_device_lower (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_omp_device_lower (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_object_sizes (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_early_object_sizes (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_warn_access (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_warn_printf (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_strlen (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_fold_builtins (gcc::context *ctxt);
diff --git a/gcc/tree-ssa-strlen.c b/gcc/tree-ssa-strlen.c
index 94257df1067..803c59d277b 100644
--- a/gcc/tree-ssa-strlen.c
+++ b/gcc/tree-ssa-strlen.c
@@ -30,6 +30,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "ssa.h"
 #include "cgraph.h"
 #include "gimple-pretty-print.h"
+#include "gimple-ssa-warn-access.h"
 #include "gimple-ssa-warn-restrict.h"
 #include "fold-const.h"
 #include "stor-layout.h"
@@ -47,6 +48,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "tree-ssa-strlen.h"
 #include "tree-hash-traits.h"
 #include "builtins.h"
+#include "pointer-query.h"
 #include "target.h"
 #include "diagnostic-core.h"
 #include "diagnostic.h"
diff --git a/gcc/tree.c b/gcc/tree.c
index 1aa6e557a04..9a7d890c045 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -14377,6 +14377,65 @@  valid_new_delete_pair_p (tree new_asm, tree delete_asm)
   return false;
 }
 
+/* Return the zero-based number corresponding to the argument being
+   deallocated if FNDECL is a deallocation function or an out-of-bounds
+   value if it isn't.  */
+
+unsigned
+fndecl_dealloc_argno (tree fndecl)
+{
+  /* A call to operator delete isn't recognized as one to a built-in.  */
+  if (DECL_IS_OPERATOR_DELETE_P (fndecl))
+    {
+      if (DECL_IS_REPLACEABLE_OPERATOR (fndecl))
+	return 0;
+
+      /* Avoid placement delete that's not been inlined.  */
+      tree fname = DECL_ASSEMBLER_NAME (fndecl);
+      if (id_equal (fname, "_ZdlPvS_")       // ordinary form
+	  || id_equal (fname, "_ZdaPvS_"))   // array form
+	return UINT_MAX;
+      return 0;
+    }
+
+  /* TODO: Handle user-defined functions with attribute malloc?  Handle
+     known non-built-ins like fopen?  */
+  if (fndecl_built_in_p (fndecl, BUILT_IN_NORMAL))
+    {
+      switch (DECL_FUNCTION_CODE (fndecl))
+	{
+	case BUILT_IN_FREE:
+	case BUILT_IN_REALLOC:
+	  return 0;
+	default:
+	  break;
+	}
+      return UINT_MAX;
+    }
+
+  tree attrs = DECL_ATTRIBUTES (fndecl);
+  if (!attrs)
+    return UINT_MAX;
+
+  for (tree atfree = attrs;
+       (atfree = lookup_attribute ("*dealloc", atfree));
+       atfree = TREE_CHAIN (atfree))
+    {
+      tree alloc = TREE_VALUE (atfree);
+      if (!alloc)
+	continue;
+
+      tree pos = TREE_CHAIN (alloc);
+      if (!pos)
+	return 0;
+
+      pos = TREE_VALUE (pos);
+      return TREE_INT_CST_LOW (pos) - 1;
+    }
+
+  return UINT_MAX;
+}
+
 #if CHECKING_P
 
 namespace selftest {
diff --git a/gcc/tree.h b/gcc/tree.h
index 8bdf16d8b4a..782a1f79c86 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -6468,4 +6468,9 @@  extern void suppress_warning (tree, opt_code = all_warnings, bool = true)
 /* Copy warning disposition from one expression to another.  */
 extern void copy_warning (tree, const_tree);
 
+/* Return the zero-based number corresponding to the argument being
+   deallocated if FNDECL is a deallocation function or an out-of-bounds
+   value if it isn't.  */
+extern unsigned fndecl_dealloc_argno (tree);
+
 #endif  /* GCC_TREE_H  */