New modref/ipa_modref optimization passes

Message ID 157394261677.27454.2367573047582814412@a285.localdomain

Commit Message

David Čepelík Nov. 16, 2019, 10:16 p.m. UTC
Dear GCC developers,

Below you will find a patch which implements two new optimization
passes, called "modref" and "ipa_modref", which operate on GIMPLE and
at link time (WPA), respectively.

What the passes do is fairly simple: for each analyzed function, they
collect information about loads and stores performed by that function.
The GIMPLE pass collects alias sets. The IPA pass collects sequences of
types that make up each load/store. The IPA pass furthermore constructs
a transitive closure over the data (so that each function's summary
covers the loads/stores performed by the function itself and by every
function it calls, transitively).
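The closure step can be pictured as merging each callee's summary into its caller's until nothing changes. Below is a toy sketch of the idea only (hypothetical names, with a plain string set standing in for the real modref trees; this is not the patch's actual API):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy summary: the set of memory "locations" a function may load.
typedef std::set<std::string> toy_summary;

// Toy call graph: function name -> names of its direct callees.
typedef std::map<std::string, std::vector<std::string> > toy_callgraph;

// Propagate summaries along call edges until a fixed point is reached,
// so that each function's summary also covers everything its
// (transitive) callees load.  Iterating to a fixed point also handles
// recursion and cycles in the call graph.
static void
compute_closure (const toy_callgraph &cg,
                 std::map<std::string, toy_summary> &summaries)
{
  bool changed = true;
  while (changed)
    {
      changed = false;
      for (toy_callgraph::const_iterator it = cg.begin ();
           it != cg.end (); ++it)
        for (size_t i = 0; i < it->second.size (); i++)
          {
            toy_summary &caller = summaries[it->first];
            const toy_summary &callee = summaries[it->second[i]];
            size_t before = caller.size ();
            caller.insert (callee.begin (), callee.end ());
            if (caller.size () != before)
              changed = true;
          }
    }
}
```

With a chain main -> helper -> leaf where only leaf loads a global, the closed summary of main ends up containing that load even though main never performs it directly.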

The data is collected in a data structure called a "modref tree". That's
a really simple tree-like structure which is three levels deep; the
first level is indexed by the base of the access, the second level is
indexed by the ref of the access (using what seems to be standard GCC
terminology for the innermost and outermost type of a reference) and the
last is indexed by the range of the access (currently unused). The data
structure's primary purpose is to organize the data so that, given a
load/store, it can quickly answer whether any other load/store recorded
in the tree may alias it.

The data structure has configurable limits on the size of each level and
attempts to degrade gracefully when the limit is reached (i.e., it tries
to keep as much information as possible without exceeding the limits).

There's a lot of work to be done, but I believe it should mostly be
refactoring and enhancing existing alias analysis by using the
information which these new analyses collect.

On the refactoring part: I need to rewrite the data structure to get rid
of the C++ template. Since the data structure is used for both the
GIMPLE pass and the IPA pass, which are indexed by alias sets and GIMPLE
trees, respectively, a C++ template seemed appropriate. Wrong.
Especially getting GC to work with a custom template was a nightmare,
not to mention it brought lots of funky C++ peculiarities which I prefer
to avoid.

In the end, I couldn't avoid code duplication anyway, so some functions
have a _tree and an _ipa version; I will address that as well.

The integration with the alias oracle will be done incrementally over a
series of patches which I will prepare with Honza Hubička. These
shouldn't require extensive changes, so I hope they're fine as part of
early stage3.

No doubt, there will be some coding style issues and other violations of
common sense which I have not yet learned.

Last, I need to apologize. The patch worked with bootstrapping and
passed all tests (except what was already broken in master). However,
I rebased at the last minute, which was poor judgement. With current
master, I'm getting loads of warnings in stage2 about unused variables
and a few other diagnostics. I won't be able to fix that immediately as
I need to discuss it with Honza first. It's therefore possible that
I broke something with the rebase and clean-ups. Please
bear with me, I will fix this ASAP and send an updated version of the
patch, which will bootstrap and pass all tests again with current
master.


To summarize, this patch implements a new analysis to collect
information about loads/stores in functions. That can be used (and is
used already, for the GIMPLE part) in the alias oracle. In theory, this
should substantially improve alias analysis capabilities of GCC, leading
to many more disambiguations in the alias oracle (because there will be
much more information available about each function's load/store refs). The
patch needs a good deal of cleaning up and some further plumbing is
required to get it to do some real work in the alias oracle. I think
this is doable as part of stage3. I would very much like to see this in
the upcoming release as I've spent quite some time with it.

I'd like to thank Honza Hubička (my advisor) who was instrumental in
putting this together. (But all bugs are mine.) Thanks go also to other
guys from SUSE and others who were helping me to debug some peculiar
issues with the garbage collector.

                                             Regards, David

From 7d63fa301c12455aa6e9ba1b68f72e2d93fc2d2d Mon Sep 17 00:00:00 2001
From: David Cepelik <dcepelik@kam.mff.cuni.cz>
Date: Fri, 11 Oct 2019 21:15:23 +0200
Subject: [PATCH] Modref pass

---
 ChangeLog            |   20 +
 gcc/ChangeLog        |   22 ++
 gcc/Makefile.in      |    4 +
 gcc/gengtype.c       |    2 +-
 gcc/ipa-modref.c     | 1012 ++++++++++++++++++++++++++++++++++++++++++++++++++
 gcc/ipa-modref.h     |   43 +++
 gcc/lto-section-in.c |    3 +-
 gcc/lto-streamer.h   |    1 +
 gcc/modref-tree.c    |  342 +++++++++++++++++
 gcc/modref-tree.h    |  355 ++++++++++++++++++
 gcc/params.opt       |   12 +
 gcc/passes.def       |    5 +
 gcc/timevar.def      |    2 +
 gcc/tree-pass.h      |    2 +
 gcc/tree-ssa-alias.c |  105 +++++-
 15 files changed, 1924 insertions(+), 6 deletions(-)
 create mode 100644 gcc/ipa-modref.c
 create mode 100644 gcc/ipa-modref.h
 create mode 100644 gcc/modref-tree.c
 create mode 100644 gcc/modref-tree.h

Patch

diff --git a/ChangeLog b/ChangeLog
index 664c018b13c..58a7bbbe643 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,23 @@ 
+2019-11-16  David Čepelík  <d@dcepelik.cz>
+
+        * Makefile.in: Add targets for new source modules.
+        * gengtype.c (open_base_files): Register new source files.
+        * ipa-modref.c: New file.
+        * ipa-modref.h: New file.
+        * lto-section-in.c: Add LTO section for the modref pass.
+        * lto-streamer.h (enum lto_section_type): Likewise.
+        * modref-tree.c: New file.
+        * modref-tree.h: New file.
+        * params.opt: Add params (unused so far) related to the modref pass.
+        * passes.def: Register new passes.
+        * timevar.def (TV_IPA_MODREF): Add TVs for new pass.
+        (TV_TREE_MODREF): Likewise.
+        * tree-pass.h (make_pass_modref): New helper (GIMPLE pass).
+        (make_pass_ipa_modref): New helper (IPA pass).
+        * tree-ssa-alias.c (modref_may_conflict): Use collected information.
+        (ref_maybe_used_by_call_p_1): Likewise.
+        (call_may_clobber_ref_p_1): Likewise.
+
 2019-11-15  Kelvin Nilsen  <kelvin@gcc.gnu.org>
 
 	* MAINTAINERS: Change my email address as maintainer.
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index ef62b5dee44..168a8a59fa6 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,25 @@ 
+2019-11-16  David Čepelík  <d@dcepelik.cz>
+
+        * Makefile.in: Add targets for new source files.
+        * gengtype.c (open_base_files): Register new source files.
+        * ipa-modref.c: New file.
+        * ipa-modref.h: New file.
+        * ipa-pure-const.c (check_op):
+        * lto-section-in.c: Add LTO section for the modref pass.
+        * lto-streamer.h (enum lto_section_type): Likewise.
+        * modref-tree.c: New file.
+        * modref-tree.h: New file.
+        * params.opt: Add params (unused so far) related to the modref pass.
+        * passes.def: Register new passes.
+        * timevar.def (TV_IPA_MODREF): Add TVs for new pass.
+        (TV_TREE_MODREF): Likewise.
+        * trans-mem.c (ipa_tm_create_version):
+        * tree-pass.h (make_pass_modref): New helper (GIMPLE pass).
+        (make_pass_ipa_modref): New helper (IPA pass).
+        * tree-ssa-alias.c (modref_may_conflict): Use collected information.
+        (ref_maybe_used_by_call_p_1): Likewise.
+        (call_may_clobber_ref_p_1): Likewise.
+
 2019-11-15  Szabolcs Nagy  <szabolcs.nagy@arm.com>
 
 	* config/m68k/linux.h (MUSL_DYNAMIC_LINKER): Define.
diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 7d3c13230e4..14c97361be5 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1388,6 +1388,7 @@  OBJS = \
 	ipa-icf-gimple.o \
 	ipa-reference.o \
 	ipa-hsa.o \
+	ipa-modref.o \
 	ipa-ref.o \
 	ipa-utils.o \
 	ipa.o \
@@ -1426,6 +1427,7 @@  OBJS = \
 	lto-compress.o \
 	mcf.o \
 	mode-switching.o \
+	modref-tree.o \
 	modulo-sched.o \
 	multiple_target.o \
 	omp-offload.o \
@@ -2547,6 +2549,8 @@  GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
   $(srcdir)/alias.c $(srcdir)/bitmap.c $(srcdir)/cselib.c $(srcdir)/cgraph.c \
   $(srcdir)/ipa-prop.c $(srcdir)/ipa-cp.c $(srcdir)/ipa-utils.h \
   $(srcdir)/ipa-param-manipulation.h $(srcdir)/ipa-sra.c $(srcdir)/dbxout.c \
+  $(srcdir)/ipa-modref.h $(srcdir)/ipa-modref.c \
+  $(srcdir)/modref-tree.h \
   $(srcdir)/signop.h \
   $(srcdir)/dwarf2out.h \
   $(srcdir)/dwarf2asm.c \
diff --git a/gcc/gengtype.c b/gcc/gengtype.c
index fa95776876d..a5532b3cc9f 100644
--- a/gcc/gengtype.c
+++ b/gcc/gengtype.c
@@ -1726,7 +1726,7 @@  open_base_files (void)
       "except.h", "output.h",  "cfgloop.h", "target.h", "lto-streamer.h",
       "target-globals.h", "ipa-ref.h", "cgraph.h", "symbol-summary.h",
       "ipa-prop.h", "ipa-fnsummary.h", "dwarf2out.h", "omp-general.h",
-      "omp-offload.h", NULL
+      "omp-offload.h", "ipa-modref.h", "modref-tree.h", NULL
     };
     const char *const *ifp;
     outf_p gtype_desc_c;
diff --git a/gcc/ipa-modref.c b/gcc/ipa-modref.c
new file mode 100644
index 00000000000..03d91fe15d8
--- /dev/null
+++ b/gcc/ipa-modref.c
@@ -0,0 +1,1012 @@ 
+/* Search for references that a function loads or stores.
+   Copyright (C) 2019 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+/* This file contains a tree pass and an IPA pass.
+
+The tree pass stores, for each load/store in every analyzed function, the
+sequence of types that make up the reference (the source of the load or the
+destination of the store).  For each function, it stores a list of accesses and
+their type sequences in a function summary.
+
+The IPA pass is similar: it also constructs a load/store summary for each
+analyzed function.  However, later in the IPA pass, a transitive closure is
+calculated, so that every function's summary contains not only information
+about the loads/stores that the function does, but also about every load and
+store that any of the functions that the function calls does (recursively).
+
+The information about loads/stores and the structure of their source and
+destination references can later be used to disambiguate accesses, which is the
+point of this optimization pass.
+
+Using the information obtained in this pass, other passes can remove useless
+stores and loads from the program.  This in turn makes other optimizations
+possible.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "alloc-pool.h"
+#include "tree-pass.h"
+#include "gimple-iterator.h"
+#include "tree-dfa.h"
+#include "cgraph.h"
+#include "ipa-utils.h"
+#include "symbol-summary.h"
+#include "ipa-modref.h"
+#include "gimple-pretty-print.h"
+#include "gimple-walk.h"
+#include "print-tree.h"
+#include "modref-tree.h"
+#include "tree-streamer.h"
+
+/* Summary for a single function which this pass produces.  */
+modref_summary::modref_summary ()
+{
+
+  loads_tree = new (ggc_alloc <modref_tree<alias_set_type> > ())
+               modref_tree <alias_set_type> (200, 200, 200);
+  stores_tree = new (ggc_alloc <modref_tree<alias_set_type> > ())
+                modref_tree <alias_set_type> (200, 200, 200);
+  stores_tree_ipa = new (ggc_alloc <modref_tree<tree> > ())
+                    modref_tree <tree> (200, 200, 200);
+  loads_tree_ipa = new (ggc_alloc <modref_tree<tree> > ())
+                   modref_tree <tree> (200, 200, 200);
+}
+
+/* Class (from which there is one global instance) that holds modref summaries
+   for all analyzed functions.  */
+class GTY((user)) modref_summaries
+  : public fast_function_summary <modref_summary *, va_gc>
+{
+public:
+  modref_summaries (symbol_table *symtab)
+      : fast_function_summary <modref_summary *, va_gc> (symtab) {}
+  virtual void insert (cgraph_node *, modref_summary *state);
+  virtual void duplicate (cgraph_node *src_node,
+                          cgraph_node *dst_node,
+                          modref_summary *src_data,
+                          modref_summary *dst_data);
+};
+
+/* Global variable holding all modref summaries.  */
+static GTY(()) fast_function_summary <modref_summary *, va_gc> *summaries;
+
+/* Return the global variable holding summaries; make sure it's initialized.  */
+static fast_function_summary <modref_summary *, va_gc> *
+get_global_summaries ()
+{
+  if (!summaries)
+    summaries = new (ggc_alloc <modref_summaries> ())
+                    modref_summaries (symtab);
+  return summaries;
+}
+
+
+/* Get function summary for FUNC if it exists, return NULL otherwise.  */
+
+modref_summary *
+get_function_summary (cgraph_node *func)
+{
+  /* Avoid creation of the summary too early (e.g. when front-end calls us).  */
+  if (!summaries)
+    return NULL;
+
+  /* A single function body may be represented by multiple symbols with
+     different visibility.  For example, if FUNC is an interposable alias,
+     we don't want to return anything, even if we have summary for the target
+     function.  */
+  enum availability avail;
+  func = func->function_or_virtual_thunk_symbol (&avail);
+  if (avail <= AVAIL_INTERPOSABLE)
+    return NULL;
+
+  /* Attempt to get summary for FUNC.  If analysis of FUNC hasn't finished yet,
+     don't return anything.  */
+  modref_summary *r = get_global_summaries ()->get (func);
+  if (r && r->finished)
+    return r;
+
+  return NULL;
+}
+
+/* Store access into the modref_tree data structure.  */
+
+static void
+store_access_tree (modref_tree <alias_set_type> *tt,
+              alias_set_type base_set,
+              alias_set_type ref_set)
+{
+    if (!base_set && !ref_set)
+      tt->collapse ();
+    else
+      {
+        /* TODO The goal is to also store the range of the access (offset, size
+           and max_size) here.  That will make it possible to do finer access
+           disambiguation.  For the time being, we're only using base/ref alias
+           sets.  Including more information will make the analysis useful in
+           many more cases, but it does not change it fundamentally.  Hence
+           I believe collecting more data and tuning the alias oracle to use it
+           can be done in stage3.  */
+        tt->insert (base_set, ref_set, NULL, 0, 0);
+      }
+}
+
+/* IPA version of store_access_tree.  */
+
+static void
+store_access_ipa (modref_tree <tree> *tt, tree access)
+{
+  tree ref = TREE_TYPE(access);
+  tree base = access;
+  ipa_access *x = new (ggc_alloc <ipa_access> ()) ipa_access ();
+  while (handled_component_p (base))
+    {
+      ipa_access_component c;
+      c.type = TREE_TYPE(base);
+      c.offset = 0;
+      vec_safe_push (x->crumbs, c);
+      base = TREE_OPERAND (base, 0);
+    }
+  /* TODO Same as above in store_access_tree -- the goal here is to collect
+     more information about the access range.  */
+  tt->insert(TREE_TYPE(base), ref, x, 0, 0);
+}
+
+/* Returns true if and only if we should store the access to EXPR. 
+   Some accesses, e.g. loads from automatic variables, are not interesting.  */
+
+static bool
+store_access_p (tree expr)
+{
+  /* Non-escaping memory is fine  */
+  tree t = get_base_address (expr);
+  if (t && (INDIRECT_REF_P (t)
+            || TREE_CODE (t) == MEM_REF
+            || TREE_CODE (t) == TARGET_MEM_REF)
+        && TREE_CODE (TREE_OPERAND (t, 0)) == SSA_NAME
+        && !ptr_deref_may_alias_global_p (TREE_OPERAND (t, 0)))
+    {
+      if (dump_file)
+        fprintf(dump_file, "   - Non-escaping memory, ignoring.\n");
+      return false;
+    }
+
+  /* Automatic variables are fine.  */
+  if (DECL_P (t)
+      /* TODO: It is possible to track what part of parameters are used.  */
+      && TREE_CODE (t) != PARM_DECL
+      && auto_var_in_fn_p (t, current_function_decl))
+    {
+      if (dump_file)
+        fprintf(dump_file, "   - Automatic variable, ignoring.\n");
+      return false;
+    }
+
+  /* Read-only variables are fine.  */
+  if (DECL_P (t) && TREE_READONLY (t) && TREE_CODE (t) != PARM_DECL)
+    {
+      if (dump_file)
+        fprintf(dump_file, "   - Read-only variable, ignoring.\n");
+      return false;
+    }
+
+  return true;
+}
+
+/* Analyze access EXPR.  If it's interesting, store it into TT.  */
+
+static void
+analyze_access_tree (modref_tree <alias_set_type> *tt, tree expr)
+{
+  if (dump_file)
+    {
+      fprintf(dump_file, " - Analyzing expression: ");
+      print_generic_expr (dump_file, expr);
+      fprintf(dump_file, "\n");
+    }
+
+  if (!store_access_p (expr))
+    return;
+
+  ao_ref r;
+  ao_ref_init (&r, expr);
+  if (dump_file)
+    {
+       fprintf(dump_file, "   - Storing base_set=%i ref_set=%i\n",
+               ao_ref_base_alias_set (&r),
+               ao_ref_alias_set (&r));
+    }
+  store_access_tree (tt, ao_ref_base_alias_set (&r), ao_ref_alias_set (&r));
+}
+
+/* IPA version of analyze_access_tree.  */
+
+static void
+analyze_access_ipa (modref_tree <tree> *tt, tree expr)
+{
+  if (dump_file)
+    {
+      fprintf(dump_file, " - Analyzing expression (IPA): ");
+      print_generic_expr (dump_file, expr);
+      fprintf(dump_file, "\n");
+    }
+
+  if (!store_access_p (expr))
+    return;
+
+  ao_ref r;
+  ao_ref_init (&r, expr);
+  if (dump_file)
+    {
+      fprintf(dump_file, "   - Storing base_set=%i ref_set=%i\n",
+              ao_ref_base_alias_set (&r),
+              ao_ref_alias_set (&r));
+    }
+  store_access_ipa (tt, expr);
+}
+
+/* Encode TT to the output block OB using the summary streaming API.  */
+
+static void
+write_modref_tree (modref_tree <tree> *tt, struct output_block *ob)
+{
+  streamer_write_uhwi (ob, tt->max_bases);
+  streamer_write_uhwi (ob, tt->max_refs);
+  streamer_write_uhwi (ob, tt->max_accesses);
+
+  streamer_write_uhwi (ob, tt->every_base);
+  streamer_write_uhwi (ob, vec_safe_length (tt->bases));
+  size_t i;
+  modref_base_node <tree> *base_node;
+  FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
+    {
+      stream_write_tree (ob, base_node->base, true);
+
+      streamer_write_uhwi (ob, base_node->every_ref);
+      streamer_write_uhwi (ob, vec_safe_length (base_node->refs));
+      size_t j;
+      modref_ref_node <tree> *ref_node;
+      FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
+        {
+          stream_write_tree (ob, ref_node->ref, true);
+
+          streamer_write_uhwi (ob, ref_node->every_access);
+          streamer_write_uhwi (ob, vec_safe_length (ref_node->accesses));
+          size_t k;
+          modref_access_node *access_node;
+          FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+            {
+              streamer_write_uhwi (ob, vec_safe_length (access_node->x->crumbs));
+              size_t l;
+              ipa_access_component *c;
+              FOR_EACH_VEC_SAFE_ELT (access_node->x->crumbs, l, c)
+                {
+                  stream_write_tree (ob, c->type, true);
+                }
+              // TODO Stream (and use) access sizes.
+              streamer_write_uhwi (ob, 0);
+              streamer_write_uhwi (ob, 0);
+            }
+        }
+    }
+}
+
+/* Read a modref_tree from the input block IB using the data from DATA_IN.
+   This assumes that the tree was encoded using write_modref_tree.  */
+
+static modref_tree <tree> *
+read_modref_tree (lto_input_block *ib, struct data_in *data_in)
+{
+  size_t max_bases = streamer_read_uhwi (ib);
+  size_t max_refs = streamer_read_uhwi (ib);
+  size_t max_accesses = streamer_read_uhwi (ib);
+  modref_tree <tree> *tt = new (ggc_alloc <modref_tree<tree> > ()) modref_tree<tree>
+                            (max_bases, max_refs, max_accesses);
+
+  size_t every_base = streamer_read_uhwi (ib);
+  size_t nbase = streamer_read_uhwi (ib);
+
+  gcc_assert (!every_base || nbase == 0);
+  if (every_base)
+    tt->collapse ();
+  for (size_t i = 0; i < nbase; i++)
+    {
+      tree base_tree = stream_read_tree (ib, data_in);
+      modref_base_node <tree> *base_node = tt->insert_base (base_tree);
+      size_t every_ref = streamer_read_uhwi (ib);
+      size_t nref = streamer_read_uhwi (ib);
+
+      gcc_assert (!every_ref || nref == 0);
+      if (every_ref)
+        base_node->collapse ();
+      for (size_t j = 0; j < nref; j++)
+        {
+          tree ref_tree = stream_read_tree (ib, data_in);
+          modref_ref_node <tree> *ref_node = base_node->insert_ref (ref_tree, max_refs);
+          size_t every_access = streamer_read_uhwi (ib);
+          size_t naccess = streamer_read_uhwi (ib);
+
+          gcc_assert (!every_access || naccess == 0);
+          if (every_access)
+            ref_node->collapse ();
+          for (size_t k = 0; k < naccess; k++)
+            {
+              size_t ncrumb = streamer_read_uhwi(ib);
+              ipa_access *x = new (ggc_alloc <ipa_access> ()) ipa_access ();
+              ipa_access_component c;
+
+              for (size_t l = 0; l < ncrumb; l++)
+                {
+                  c.type = stream_read_tree (ib, data_in);
+                  c.offset = 0;
+                  vec_safe_push (x->crumbs, c);
+                }
+              size_t a = streamer_read_uhwi (ib);
+              size_t b = streamer_read_uhwi (ib);
+              ref_node->insert_access (x, a, b, max_accesses);
+            }
+        }
+    }
+  return tt;
+}
+
+/* Callback for write_summary.  */
+
+static void
+modref_write ()
+{
+  struct output_block *ob = create_output_block (LTO_section_ipa_modref);
+  lto_symtab_encoder_t encoder = ob->decl_state->symtab_node_encoder;
+  unsigned int count = 0;
+  int i;
+
+  for (i = 0; i < lto_symtab_encoder_size (encoder); i++)
+    {
+      symtab_node *snode = lto_symtab_encoder_deref (encoder, i);
+      cgraph_node *cnode = dyn_cast <cgraph_node *> (snode);
+
+      if (cnode && cnode->definition && !cnode->alias)
+        count++;
+    }
+  streamer_write_uhwi (ob, count);
+
+  for (i = 0; i < lto_symtab_encoder_size (encoder); i++)
+    {
+      symtab_node *snode = lto_symtab_encoder_deref (encoder, i);
+      cgraph_node *cnode = dyn_cast <cgraph_node *> (snode);
+
+      if (cnode && cnode->definition && !cnode->alias)
+        {
+          streamer_write_uhwi (ob, lto_symtab_encoder_encode (encoder, cnode));
+
+          modref_summary *r = get_global_summaries ()->get (cnode);
+          if (!r)
+            {
+              streamer_write_uhwi (ob, 0);
+              streamer_write_uhwi (ob, 0);
+              continue;
+            }
+
+          streamer_write_uhwi (ob, r->loads_tree_ipa ? 1 : 0);
+          streamer_write_uhwi (ob, r->stores_tree_ipa ? 1 : 0);
+          if (r->loads_tree_ipa)
+            write_modref_tree (r->loads_tree_ipa, ob);
+          if (r->stores_tree_ipa)
+            write_modref_tree (r->stores_tree_ipa, ob);
+        }
+    }
+  streamer_write_char_stream (ob->main_stream, 0);
+  produce_asm (ob, NULL);
+  destroy_output_block (ob);
+}
+
+static void
+read_section (struct lto_file_decl_data *file_data, const char *data,
+                size_t len)
+{
+  const struct lto_function_header *header =
+    (const struct lto_function_header *) data;
+  const int cfg_offset = sizeof (struct lto_function_header);
+  const int main_offset = cfg_offset + header->cfg_size;
+  const int string_offset = main_offset + header->main_size;
+  struct data_in *data_in;
+  unsigned int i;
+  unsigned int f_count;
+
+  lto_input_block ib ((const char *) data + main_offset, header->main_size,
+                      file_data->mode_table);
+
+  data_in =
+    lto_data_in_create (file_data, (const char *) data + string_offset,
+                        header->string_size, vNULL);
+  f_count = streamer_read_uhwi (&ib);
+  for (i = 0; i < f_count; i++)
+    {
+      struct cgraph_node *node;
+      lto_symtab_encoder_t encoder;
+
+      unsigned int index = streamer_read_uhwi (&ib);
+      encoder = file_data->symtab_node_encoder;
+      node = dyn_cast <cgraph_node *> (lto_symtab_encoder_deref (encoder,
+                                                                index));
+
+      modref_summary *modref_sum = get_global_summaries ()->get_create (node);
+      modref_sum->finished = false;
+      int have_loads_tree = streamer_read_uhwi (&ib);
+      int have_stores_tree = streamer_read_uhwi (&ib);
+      modref_sum->loads_tree_ipa = modref_sum->stores_tree_ipa = NULL;
+      if (have_loads_tree)
+        modref_sum->loads_tree_ipa = read_modref_tree (&ib, data_in);
+      if (have_stores_tree)
+        modref_sum->stores_tree_ipa = read_modref_tree (&ib, data_in);
+    }
+
+  lto_free_section_data (file_data, LTO_section_ipa_modref, NULL, data,
+                         len);
+  lto_data_in_delete (data_in);
+}
+
+/* Callback for read_summary.  */
+
+static void
+modref_read (void)
+{
+  struct lto_file_decl_data **file_data_vec = lto_get_file_decl_data ();
+  struct lto_file_decl_data *file_data;
+  unsigned int j = 0;
+
+  while ((file_data = file_data_vec[j++]))
+    {
+      size_t len;
+      const char *data = lto_get_summary_section_data (file_data,
+                                                       LTO_section_ipa_modref,
+                                                       &len);
+      if (data)
+        read_section (file_data, data, len);
+      else
+        /* Fatal error here.  We do not want to support compiling ltrans units
+           with different version of compiler or different flags than the WPA
+           unit, so this should never happen.  */
+        fatal_error (input_location,
+                     "IPA modref summary is missing in input file");
+    }
+}
+
+/* TODO This helper can be removed if get (fnode) is used carefully everywhere,
+   and the `finished' can be dropped.  I will do it as part of cleaning up.  */
+modref_summary *
+get_cur (void)
+{
+  cgraph_node *fnode = cgraph_node::get (current_function_decl);
+  bool had = (get_global_summaries ()->get (fnode) != NULL);
+  modref_summary *r = get_global_summaries ()->get_create (fnode);
+
+  if (!had)
+    r->finished = false;
+  return r;
+}
+
+static void
+remove_cur (void)
+{
+  cgraph_node *fnode = cgraph_node::get (current_function_decl);
+  get_global_summaries ()->remove (fnode);
+}
+
+/* Analyze function call STMT in function F.  */
+
+static bool
+analyze_call (function *f, gimple *stmt, bool ipa)
+{
+  (void) f;
+
+  modref_summary *cur_summary = get_cur();
+
+  /* Check flags on the function call. In certain cases, analysis can be
+     simplified.  */
+  int flags = gimple_call_flags (stmt);
+  if (flags & ECF_CONST)
+    {
+      if (dump_file)
+        fprintf(dump_file, " - ECF_CONST, ignoring all stores and all loads except for args.\n");
+      return true;
+    }
+
+  /* TODO Further improvement: noreturn && no except region && no setjmp =>
+     discard all stores.  */
+
+  /* Pure functions do not affect global memory.  Stores by functions which are
+     noreturn and do not throw can safely be ignored.  */
+  bool ignore_stores = flags & ECF_PURE;
+  if ((flags & (ECF_NORETURN | ECF_NOTHROW)) == (ECF_NORETURN | ECF_NOTHROW)
+      || (!flag_exceptions && (flags & ECF_NORETURN)))
+    ignore_stores = true;
+
+  /* Next, we try to get the callee's function declaration. The goal is to
+     merge their summary with ours.  */
+  tree callee = gimple_call_fndecl (stmt);
+
+  /* Check if this is an indirect call.  */
+  if (!callee)
+    {
+      /* If the indirect call does not write memory, our store summary is 
+         unaffected, but we have to discard our loads summary (we don't know
+         anything about the loads that the called function performs).  */
+      if (ignore_stores)
+        {
+          if (dump_file)
+            fprintf(dump_file, " - Indirect call which does not write memory, discarding loads.\n");
+          if (ipa)
+            cur_summary->loads_tree_ipa->collapse ();
+          else
+            cur_summary->loads_tree->collapse ();
+          return true;
+        }
+      if (dump_file)
+        fprintf(dump_file, " - Indirect call.\n");
+      return false;
+    }
+
+  /* If this is a recursive call, the target summary is the same as ours, so
+     there's nothing to do.  */
+  if (recursive_call_p (current_function_decl, callee))
+    {
+      if (dump_file)
+        fprintf(dump_file, " - Skipping recursive call.\n");
+      return true;
+    }
+
+  struct cgraph_node *callee_node = cgraph_node::get (callee);
+  gcc_assert (callee_node != NULL);
+
+  /* Get the function symbol and its availability.  */
+  enum availability avail;
+  callee_node = callee_node->function_symbol (&avail);
+  if (avail <= AVAIL_INTERPOSABLE)
+    {
+      /* Keep stores summary, but discard all loads for interposable function
+         symbols.  */
+      if (ignore_stores)
+        {
+          if (ipa)
+            cur_summary->loads_tree_ipa->collapse ();
+          else
+            cur_summary->loads_tree->collapse ();
+          return true;
+        }
+      if (dump_file)
+        fprintf(dump_file, " - Function availability <= AVAIL_INTERPOSABLE.\n");
+      return false;
+    }
+
+  /* Get callee's modref summary.  As above, if there's no summary, we either
+     have to give up or, if stores are ignored, we can just purge loads.  */
+  modref_summary *callee_summary = get_global_summaries ()->get (callee_node);
+  if (!callee_summary)
+    {
+      if (ignore_stores)
+        {
+          if (ipa)
+            cur_summary->loads_tree_ipa->collapse ();
+          else
+            cur_summary->loads_tree->collapse ();
+          return true;
+        }
+      if (dump_file)
+        fprintf(dump_file, " - No modref summary available for callee.\n");
+      return false;
+    }
+
+  /* Merge with callee's summary.  */
+  cur_summary->loads_tree->merge (callee_summary->loads_tree);
+  if (!ignore_stores)
+    cur_summary->stores_tree->merge (callee_summary->stores_tree);
+
+  return true;
+}
+
+/* Helper for analyze_stmt.  */
+
+static bool
+analyze_load_tree (gimple *, tree, tree op, void *)
+{
+  analyze_access_tree (get_cur ()->loads_tree, op);
+  return false;
+}
+
+/* Helper for analyze_stmt.  */
+
+static bool
+analyze_load_ipa (gimple *, tree, tree op, void *)
+{
+  analyze_access_ipa (get_cur ()->loads_tree_ipa, op);
+  return false;
+}
+
+/* Helper for analyze_stmt.  */
+
+static bool
+analyze_store_tree (gimple *, tree, tree op, void *)
+{
+  analyze_access_tree (get_cur ()->stores_tree, op);
+  return false;
+}
+
+/* Helper for analyze_stmt.  */
+
+static bool
+analyze_store_ipa (gimple *, tree, tree op, void *)
+{
+  analyze_access_ipa (get_cur ()->stores_tree_ipa, op);
+  return false;
+}
+
+/* Analyze statement STMT of function F.  */
+
+static bool
+analyze_stmt (function *f, gimple *stmt, bool ipa)
+{
+  /* Analyze all loads and stores in STMT.  */
+  if (ipa)
+    walk_stmt_load_store_ops (stmt, NULL, analyze_load_ipa, analyze_store_ipa);
+  else
+    walk_stmt_load_store_ops (stmt, NULL, analyze_load_tree, analyze_store_tree);
+  /* or call analyze_load_ipa, analyze_store_ipa */
+
+  switch (gimple_code (stmt))
+   {
+   case GIMPLE_ASM:
+     /* If the ASM statement does not read nor write memory, there's nothing
+        to do.  Otherwise just give up.  */
+     if (!gimple_asm_clobbers_memory_p (as_a <gasm *> (stmt)))
+       return true;
+     if (dump_file)
+       fprintf(dump_file, " - Function contains GIMPLE_ASM statement which clobbers memory.\n");
+     return false;
+   case GIMPLE_CALL:
+     return analyze_call (f, stmt, ipa);
+   default:
+     /* Nothing to do for other types of statements.  */
+     return true;
+   }
+}
+
+/* Analyze function F.  IPA indicates whether we're running in tree mode (false)
+   or the IPA mode (true).  */
+
+static void
+analyze_function (function *f, bool ipa)
+{
+  if (dump_file)
+    fprintf (dump_file, "modref analyzing '%s' (ipa=%i)...\n",
+             function_name (f), ipa);
+
+  /* Don't analyze this function if it's compiled with -fno-strict-aliasing.  */
+  if (!flag_strict_aliasing)
+    return;
+
+  /* Create and initialize summary for F.  */
+  if (ipa)
+    {
+      if (get_cur ()->stores_tree_ipa->bases)
+        get_cur ()->stores_tree_ipa->bases->truncate (0);
+      if (get_cur ()->loads_tree_ipa->bases)
+        get_cur ()->loads_tree_ipa->bases->truncate (0);
+      get_cur ()->stores_tree_ipa->every_base = false;
+      get_cur ()->loads_tree_ipa->every_base = false;
+    }
+  else
+    {
+      if (get_cur ()->stores_tree->bases)
+        get_cur ()->stores_tree->bases->truncate (0);
+      if (get_cur ()->loads_tree->bases)
+        get_cur ()->loads_tree->bases->truncate (0);
+      get_cur ()->stores_tree->every_base = false;
+      get_cur ()->loads_tree->every_base = false;
+    }
+  get_cur ()->finished = false;
+
+  /* Analyze each statement in each basic block of the function.  If the
+     statement cannot be analyzed (for any reason), the entire function cannot
+     be analyzed by modref.  */
+  basic_block bb;
+  FOR_EACH_BB_FN (bb, f)
+    {
+      gimple_stmt_iterator si;
+      for (si = gsi_after_labels (bb); !gsi_end_p (si); gsi_next (&si))
+        {
+          if (!analyze_stmt (f, gsi_stmt (si), ipa))
+            {
+              remove_cur ();
+              if (dump_file)
+                fprintf (dump_file, " - modref done with result: not tracked.\n");
+              return;
+            }
+        }
+    }
+
+  get_cur ()->finished = true;
+
+  if (dump_file)
+    fprintf (dump_file, " - modref done with result: tracked.\n");
+}
+
+/* Callback for generate_summary.  */
+
+static void
+modref_generate (void)
+{
+  struct cgraph_node *node;
+  FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node)
+    {
+      function *f = DECL_STRUCT_FUNCTION (node->decl);
+      if (!f)
+        continue;
+      push_cfun (f);
+      analyze_function (f, true);
+      pop_cfun ();
+    }
+}
+
+/* Called when a new function is inserted to callgraph late.  */
+
+void
+modref_summaries::insert (struct cgraph_node *node, modref_summary *)
+{
+  push_cfun (DECL_STRUCT_FUNCTION (node->decl));
+  analyze_function (DECL_STRUCT_FUNCTION (node->decl), false);
+  pop_cfun ();
+}
+
+/* Called when new clone is inserted to callgraph late.  */
+
+void
+modref_summaries::duplicate (cgraph_node *, cgraph_node *,
+                               modref_summary *src_data,
+                               modref_summary *dst_data)
+{
+  dst_data->finished = src_data->finished;
+  /* TODO Add test: tree merged with empty tree the same as original tree.  */
+  /* TODO Un-hardcode the constants, use run-time params.  */
+  dst_data->stores_tree = new (ggc_alloc <modref_tree<alias_set_type> > ())
+                          modref_tree <alias_set_type> (200, 200, 200);
+  dst_data->stores_tree->merge (src_data->stores_tree);
+  dst_data->loads_tree = new (ggc_alloc <modref_tree<alias_set_type> > ())
+                         modref_tree <alias_set_type> (200, 200, 200);
+  dst_data->loads_tree->merge (src_data->loads_tree);
+
+  dst_data->stores_tree_ipa = new (ggc_alloc <modref_tree<tree> > ())
+                              modref_tree <tree> (200, 200, 200);
+  dst_data->stores_tree_ipa->merge (src_data->stores_tree_ipa);
+  dst_data->loads_tree_ipa = new (ggc_alloc <modref_tree<tree> > ())
+                             modref_tree <tree> (200, 200, 200);
+  dst_data->loads_tree_ipa->merge (src_data->loads_tree_ipa);
+}
+
+namespace
+{
+
+/*********************** GIMPLE pass specific stuff **********************/
+
+/* Definition of the modref pass on GIMPLE.  */
+const pass_data pass_data_modref = {
+  GIMPLE_PASS,            /* type */
+  "modref",               /* name */
+  OPTGROUP_IPA,           /* optinfo_flags */
+  TV_TREE_MODREF,         /* tv_id */
+  (PROP_cfg | PROP_ssa),  /* properties_required */
+  0,                      /* properties_provided */
+  0,                      /* properties_destroyed */
+  0,                      /* todo_flags_start */
+  0,                      /* todo_flags_finish */
+};
+
+class pass_modref : public gimple_opt_pass
+{
+public:
+  pass_modref (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_modref, ctxt) {}
+
+  ~pass_modref ()
+  {
+    /* TODO: Free the summaries.  */
+  }
+
+  /* opt_pass methods: */
+  opt_pass *clone () { return new pass_modref (m_ctxt); }
+  virtual bool gate (function *) { return flag_strict_aliasing; }
+  virtual unsigned int execute (function *);
+}; // class pass_modref
+
+/*********************** IPA pass specific stuff **********************/
+
+/* Definition of the modref IPA pass.  */
+const pass_data pass_data_ipa_modref =
+{
+  IPA_PASS,           /* type */
+  "ipa_modref",       /* name */
+  OPTGROUP_IPA,       /* optinfo_flags */
+  TV_IPA_MODREF,        /* tv_id */
+  0,                    /* properties_required */
+  0,                    /* properties_provided */
+  0,                    /* properties_destroyed */
+  0,                    /* todo_flags_start */
+  ( TODO_dump_symtab ), /* todo_flags_finish */
+};
+
+class pass_ipa_modref : public ipa_opt_pass_d
+{
+public:
+  pass_ipa_modref (gcc::context *ctxt)
+    : ipa_opt_pass_d (pass_data_ipa_modref, ctxt,
+                      modref_generate, /* generate_summary */
+                      modref_write,    /* write_summary */
+                      modref_read,     /* read_summary */
+                      modref_write,    /* write_optimization_summary */
+                      modref_read,     /* read_optimization_summary */
+                      NULL,            /* stmt_fixup */
+                      0,               /* function_transform_todo_flags_start */
+                      NULL,            /* function_transform */
+                      NULL)            /* variable_transform */
+  {}
+
+  /* opt_pass methods: */
+  opt_pass *clone () { return new pass_ipa_modref (m_ctxt); }
+  virtual unsigned int execute (function *);
+
+}; // class pass_ipa_modref
+
+}
+
+unsigned int
+pass_modref::execute (function *f)
+{
+  analyze_function (f, false);
+  return 0;
+}
+
+gimple_opt_pass *
+make_pass_modref (gcc::context *ctxt)
+{
+  return new pass_modref (ctxt);
+}
+
+ipa_opt_pass_d *
+make_pass_ipa_modref (gcc::context *ctxt)
+{
+  return new pass_ipa_modref (ctxt);
+}
+
+/* Run the IPA pass.  This will take each function's summaries and calls and
+   construct new summaries representing a transitive closure, so that the
+   summary of an analyzed function contains information about the loads and
+   stores done by the function and by any function it calls.  */
+
+unsigned int
+pass_ipa_modref::execute (function *)
+{
+  struct cgraph_node **order = XCNEWVEC (struct cgraph_node *,
+                                         symtab->cgraph_count);
+  int order_pos;
+  order_pos = ipa_reduced_postorder (order, true, NULL);
+  struct cgraph_node *component_node;
+  struct cgraph_edge *callee_edge;
+  struct cgraph_node *cur;
+  struct cgraph_node *callee;
+  modref_summary *cur_summary;
+  modref_summary *callee_summary;
+  struct modref_tree <tree> *ipa_loads_tree;
+  struct modref_tree <tree> *ipa_stores_tree;
+  bool its_hopeless;
+  int i;
+
+  /* Iterate over all strongly connected components in post-order.  */
+  for (i = 0; i < order_pos; i++)
+    {
+      its_hopeless = false;
+
+      /* Get the component's representative.  That's just any node in the
+         component from which we can traverse the entire component.  */
+      component_node = order[i];
+
+      /* Create loads and stores trees.  These trees will be the same for the
+         entire component.  */
+      ipa_loads_tree = new (ggc_alloc <modref_tree<tree> > ())
+                       modref_tree <tree> (200, 200, 200);
+      ipa_stores_tree = new (ggc_alloc <modref_tree<tree> > ())
+                        modref_tree <tree> (200, 200, 200);
+
+      /* Walk the component.  CUR is the current node of the component that's
+         being processed.  */
+      for (cur = component_node; cur;
+           cur = ((struct ipa_dfs_info *) cur->aux)->next_cycle)
+        {
+          /* Merge in summaries from CUR. */
+          cur_summary = get_function_summary (cur);
+
+          /* We don't know anything about CUR, hence we cannot tell anything
+             about the entire component.  */
+          if (!cur_summary)
+            {
+              its_hopeless = true;
+              goto propagate;
+            }
+
+          /* Walk every function that CUR calls and merge its summary.  */
+          for (callee_edge = cur->callees; callee_edge;
+               callee_edge = callee_edge->next_callee)
+            {
+              /* Get the callee and its summary.  */
+              callee = callee_edge->callee;
+              callee_summary = get_function_summary (callee);
+
+              /* We don't know anything about CALLEE, hence we cannot tell
+                 anything about the entire component.  */
+              if (!callee_summary)
+                {
+                  its_hopeless = true;
+                  goto propagate;
+                }
+
+              /* Merge in callee's information.  */
+              ipa_loads_tree->merge (callee_summary->loads_tree_ipa);
+              ipa_stores_tree->merge (callee_summary->stores_tree_ipa);
+            }
+        }
+
+propagate:
+      /* If we couldn't find a modref summary for any function in the cycle,
+         or for any function that any function in the cycle calls, it's
+         hopeless and we must assume that everything aliases.  Therefore,
+         collapse both the loads and the stores trees.  */
+      if (its_hopeless)
+        {
+          ipa_loads_tree->collapse ();
+          ipa_stores_tree->collapse ();
+        }
+
+      /* At this time, ipa_loads_tree and ipa_stores_tree contain information
+         about all loads and stores done by any of the component's nodes and
+         all functions that any of the nodes calls.  We will now propagate this
+         information to all nodes in the component.  Therefore, we will walk
+         the component one more time to do it.  */
+      for (cur = component_node; cur;
+           cur = ((struct ipa_dfs_info *) cur->aux)->next_cycle)
+        {
+          cur_summary = get_function_summary (cur);
+          if (!cur_summary)
+            {
+              /* The function doesn't have a summary.  We must have noticed
+                 that during the first pass and the hopeless flag must
+                 therefore be set.  Skip the function.  */
+              gcc_assert (its_hopeless);
+              continue;
+            }
+          cur_summary->loads_tree_ipa = ipa_loads_tree;
+          cur_summary->stores_tree_ipa = ipa_stores_tree;
+        }
+    }
+  ipa_free_postorder_info ();
+  return 0;
+}
+
+#include "gt-ipa-modref.h"
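
The collapse-on-overflow policy that the pass relies on can be illustrated
outside the patch.  The sketch below is a simplified, hypothetical stand-in
(std::vector instead of GCC's va_gc vectors, plain ints instead of alias
sets, two levels instead of three), not the patch's code:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Two-level sketch of the modref tree: each level has a size limit, and
// overflowing a level collapses it to "everything may alias".
struct sketch_base
{
  int base;                  // id of the access base
  std::vector<int> refs;     // ref ids recorded under this base
  bool every_ref;            // true once the ref list overflowed

  explicit sketch_base (int b) : base (b), every_ref (false) {}

  void insert_ref (int ref, std::size_t max_refs)
  {
    if (every_ref)
      return;                // already degraded; nothing to record
    for (int r : refs)
      if (r == ref)
        return;              // already known
    if (refs.size () >= max_refs)
      {
        refs.clear ();       // degrade gracefully: any ref may be accessed
        every_ref = true;
        return;
      }
    refs.push_back (ref);
  }
};

struct sketch_tree
{
  std::vector<sketch_base> bases;
  bool every_base;
  std::size_t max_bases, max_refs;

  sketch_tree (std::size_t mb, std::size_t mr)
    : every_base (false), max_bases (mb), max_refs (mr) {}

  void insert (int base, int ref)
  {
    if (every_base)
      return;                // tree already collapsed
    for (sketch_base &b : bases)
      if (b.base == base)
        {
          b.insert_ref (ref, max_refs);
          return;
        }
    if (bases.size () >= max_bases)
      {
        bases.clear ();      // degrade gracefully: any base may be accessed
        every_base = true;
        return;
      }
    bases.push_back (sketch_base (base));
    bases.back ().insert_ref (ref, max_refs);
  }
};
```

Note that overflowing an inner level only loses precision for that base,
while overflowing the base list loses the whole tree; this is the "keep as
much information as possible without exceeding the limits" behaviour the
cover letter describes.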
diff --git a/gcc/ipa-modref.h b/gcc/ipa-modref.h
new file mode 100644
index 00000000000..d770965fae1
--- /dev/null
+++ b/gcc/ipa-modref.h
@@ -0,0 +1,43 @@ 
+/* Search for references that a function loads or stores.
+   Copyright (C) 2019 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef IPA_MODREF_H
+#define IPA_MODREF_H
+
+#include "modref-tree.h"
+#include "alias.h"
+
+/* Single function summary.  */
+
+struct GTY(()) modref_summary
+{
+  const char *fname;
+  modref_tree <alias_set_type> *loads_tree;
+  modref_tree <alias_set_type> *stores_tree;
+  modref_tree <tree> *loads_tree_ipa;
+  modref_tree <tree> *stores_tree_ipa;
+  bool finished;
+
+  modref_summary();
+};
+
+void analyze_modref (struct cgraph_node *node);
+modref_summary *get_function_summary (cgraph_node *func);
+
+#endif
diff --git a/gcc/lto-section-in.c b/gcc/lto-section-in.c
index 67d998472e0..0c109835fe7 100644
--- a/gcc/lto-section-in.c
+++ b/gcc/lto-section-in.c
@@ -54,7 +54,8 @@  const char *lto_section_name[LTO_N_SECTION_TYPES] =
   "mode_table",
   "hsa",
   "lto",
-  "ipa_sra"
+  "ipa_sra",
+  "ipa_modref",
 };
 
 /* Hooks so that the ipa passes can call into the lto front end to get
diff --git a/gcc/lto-streamer.h b/gcc/lto-streamer.h
index 067a6660d2f..cd0f6da6587 100644
--- a/gcc/lto-streamer.h
+++ b/gcc/lto-streamer.h
@@ -236,6 +236,7 @@  enum lto_section_type
   LTO_section_ipa_hsa,
   LTO_section_lto,
   LTO_section_ipa_sra,
+  LTO_section_ipa_modref,
   LTO_N_SECTION_TYPES		/* Must be last.  */
 };
 
diff --git a/gcc/modref-tree.c b/gcc/modref-tree.c
new file mode 100644
index 00000000000..d77adc8aa05
--- /dev/null
+++ b/gcc/modref-tree.c
@@ -0,0 +1,342 @@ 
+/* Data structure for the modref pass.
+   Copyright (C) 2019 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "modref-tree.h"
+#include "selftest.h"
+
+#if CHECKING_P
+
+/* Return true if and only if the access (T1, A1, B1) is dominated by the
+   access (T2, A2, B2).  An access is dominated by another access if and only
+   if their types are the same and the former access is "included" in the
+   latter, i.e. the memory the former access touches is a subset of the
+   memory the second access touches.  */
+
+bool
+access_covered_by (tree t1, poly_int64 a1, poly_int64 b1,
+                   tree t2, poly_int64 a2, poly_int64 b2)
+{
+  if (t1 != t2)
+    return false;
+  return known_le (a2, a1) && known_le (a1, b2) && known_le (b1, b2);
+}
+
+
+static void
+test_insert_search_collapse ()
+{
+  modref_base_node<alias_set_type> *base_node;
+  modref_ref_node<alias_set_type> *ref_node;
+  modref_access_node *access_node;
+
+  modref_tree<alias_set_type> *t = new modref_tree<alias_set_type> (1, 2, 2);
+  ASSERT_FALSE (t->every_base);
+
+  /* Insert into an empty tree.  */
+  t->insert (1, 2, NULL, 16, 32);
+  ASSERT_NE (t->bases, NULL);
+  ASSERT_EQ (t->bases->length (), 1);
+  ASSERT_FALSE (t->every_base);
+  ASSERT_EQ (t->search (2), NULL);
+
+  base_node = t->search (1);
+  ASSERT_NE (base_node, NULL);
+  ASSERT_EQ (base_node->base, 1);
+  ASSERT_NE (base_node->refs, NULL);
+  ASSERT_EQ (base_node->refs->length (), 1);
+  ASSERT_EQ (base_node->search (1), NULL);
+
+  ref_node = base_node->search (2);
+  ASSERT_NE (ref_node, NULL);
+  ASSERT_EQ (ref_node->ref, 2);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 1);
+
+  access_node = (*ref_node->accesses)[0];
+  ASSERT_TRUE (known_eq (access_node->a, 16));
+  ASSERT_TRUE (known_eq (access_node->b, 32));
+  ASSERT_FALSE (ref_node->every_access);
+
+  /* Insert when base exists but ref does not.  */
+  t->insert (1, 3, NULL, 2, 4);
+  ASSERT_NE (t->bases, NULL);
+  ASSERT_EQ (t->bases->length (), 1);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (t->search (2), NULL);
+  ASSERT_NE (base_node->refs, NULL);
+  ASSERT_EQ (base_node->refs->length (), 2);
+
+  ref_node = base_node->search (3);
+  ASSERT_NE (ref_node, NULL);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 1);
+  ASSERT_FALSE (ref_node->every_access);
+
+  access_node = (*ref_node->accesses)[0];
+  ASSERT_TRUE (known_eq (access_node->a, 2));
+  ASSERT_TRUE (known_eq (access_node->b, 4));
+
+  /* Insert when base and ref exist, but the access neither dominates nor is
+     dominated by other accesses.  */
+  t->insert (1, 2, NULL, 15, 17);
+  ASSERT_EQ (t->bases->length (), 1);
+  ASSERT_EQ (t->search (1), base_node);
+
+  ref_node = base_node->search (2);
+  ASSERT_NE (ref_node, NULL);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 2);
+  ASSERT_FALSE (ref_node->every_access);
+
+  access_node = (*ref_node->accesses)[0];
+  ASSERT_TRUE (known_eq (access_node->a, 16));
+  ASSERT_TRUE (known_eq (access_node->b, 32));
+
+  access_node = (*ref_node->accesses)[1];
+  ASSERT_TRUE (known_eq (access_node->a, 15));
+  ASSERT_TRUE (known_eq (access_node->b, 17));
+
+  /* Insert when base and ref exist and access is dominated.  */
+  t->insert (1, 2, NULL, 18, 20);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->search (2), ref_node);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 2);
+  ASSERT_FALSE (ref_node->every_access);
+
+  access_node = (*ref_node->accesses)[0];
+  ASSERT_TRUE (known_eq (access_node->a, 16));
+  ASSERT_TRUE (known_eq (access_node->b, 32));
+
+  access_node = (*ref_node->accesses)[1];
+  ASSERT_TRUE (known_eq (access_node->a, 15));
+  ASSERT_TRUE (known_eq (access_node->b, 17));
+
+  /* Insert when base and ref exist and the access dominates.  */
+  t->insert (1, 2, NULL, 0, 64);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->search (2), ref_node);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 1);
+  ASSERT_FALSE (ref_node->every_access);
+
+  access_node = (*ref_node->accesses)[0];
+  ASSERT_TRUE (known_eq (access_node->a, 0));
+  ASSERT_TRUE (known_eq (access_node->b, 64));
+
+  /* Insert accesses to trigger access list collapse.  */
+  t->insert (1, 2, NULL, 64, 68);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->search (2), ref_node);
+  ASSERT_NE (ref_node->accesses, NULL);
+  ASSERT_EQ (ref_node->accesses->length (), 2);
+  ASSERT_FALSE (ref_node->every_access);
+
+  t->insert (1, 2, NULL, 68, 72);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->search (2), ref_node);
+  ASSERT_EQ (ref_node->accesses, NULL);
+  ASSERT_TRUE (ref_node->every_access);
+
+  /* Further inserts to collapsed access list are ignored.  */
+  t->insert (1, 2, NULL, 100, 108);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->search (2), ref_node);
+  ASSERT_EQ (ref_node->accesses, NULL);
+  ASSERT_TRUE (ref_node->every_access);
+
+  /* Insert ref to trigger ref list collapse for base 1.  */
+  t->insert (1, 4, NULL, 0, 4);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->refs, NULL);
+  ASSERT_EQ (base_node->search (2), NULL);
+  ASSERT_EQ (base_node->search (3), NULL);
+  ASSERT_TRUE (base_node->every_ref);
+
+  /* Further inserts to collapsed ref list are ignored.  */
+  t->insert (1, 5, NULL, 2, 2);
+  ASSERT_EQ (t->search (1), base_node);
+  ASSERT_EQ (base_node->refs, NULL);
+  ASSERT_EQ (base_node->search (2), NULL);
+  ASSERT_EQ (base_node->search (3), NULL);
+  ASSERT_TRUE (base_node->every_ref);
+
+  /* Insert base to trigger base list collapse.  */
+  t->insert (5, 6, NULL, 0, 4);
+  ASSERT_TRUE (t->every_base);
+  ASSERT_EQ (t->bases, NULL);
+  ASSERT_EQ (t->search (1), NULL);
+
+  /* Further inserts to collapsed base list are ignored.  */
+  t->insert (7, 8, NULL, 0, 1);
+  ASSERT_TRUE (t->every_base);
+  ASSERT_EQ (t->bases, NULL);
+  ASSERT_EQ (t->search (1), NULL);
+}
+
+static void
+test_merge ()
+{
+  modref_tree<alias_set_type> *t1, *t2;
+  modref_base_node<alias_set_type> *base_node;
+  modref_ref_node<alias_set_type> *ref_node;
+
+  t1 = new modref_tree<alias_set_type> (3, 4, 1);
+  t1->insert (1, 1, NULL, 0, 1);
+  t1->insert (1, 2, NULL, 1, 2);
+  t1->insert (1, 3, NULL, 2, 4);
+  t1->insert (2, 1, NULL, 4, 8);
+  t1->insert (3, 1, NULL, 4, 8);
+
+  t2 = new modref_tree<alias_set_type> (10, 10, 10);
+  t2->insert (1, 2, NULL, 8, 16);
+  t2->insert (1, 3, NULL, 2, 4);
+  t2->insert (1, 4, NULL, 4, 8);
+  t2->insert (3, 2, NULL, 0, 1);
+  t2->insert (3, 3, NULL, 0, 1);
+  t2->insert (3, 4, NULL, 0, 1);
+  t2->insert (3, 5, NULL, 0, 1);
+
+  t1->merge (t2);
+
+  ASSERT_FALSE (t1->every_base);
+  ASSERT_NE (t1->bases, NULL);
+  ASSERT_EQ (t1->bases->length (), 3);
+
+  base_node = t1->search (1);
+  ASSERT_NE (base_node->refs, NULL);
+  ASSERT_FALSE (base_node->every_ref);
+  ASSERT_EQ (base_node->refs->length (), 4);
+
+  ref_node = base_node->search (2);
+  ASSERT_EQ (ref_node->accesses, NULL);
+  ASSERT_TRUE (ref_node->every_access);
+
+  base_node = t1->search (2);
+  ASSERT_NE (base_node->refs, NULL);
+  ASSERT_FALSE (base_node->every_ref);
+  ASSERT_EQ (base_node->refs->length (), 1);
+
+  base_node = t1->search (3);
+  ASSERT_EQ (base_node->refs, NULL);
+  ASSERT_TRUE (base_node->every_ref);
+}
+
+
+void
+modref_tree_c_tests ()
+{
+  test_insert_search_collapse ();
+  test_merge ();
+}
+
+#endif
+
+void gt_ggc_mx (modref_tree<int>* const &tt)
+{
+  if (tt->bases)
+    {
+      ggc_test_and_set_mark (tt->bases);
+      gt_ggc_mx (tt->bases);
+    }
+}
+
+void gt_ggc_mx (modref_tree<tree_node*>* const &tt)
+{
+  if (tt->bases)
+    {
+      ggc_test_and_set_mark (tt->bases);
+      gt_ggc_mx (tt->bases);
+    }
+}
+
+void gt_pch_nx (modref_tree<int>* const&) {}
+void gt_pch_nx (modref_tree<tree_node*>* const&) {}
+void gt_pch_nx (modref_tree<int>* const&, gt_pointer_operator, void *) {}
+void gt_pch_nx (modref_tree<tree_node*>* const&, gt_pointer_operator, void *) {}
+
+void gt_ggc_mx (modref_base_node<int>* &b)
+{
+  ggc_test_and_set_mark (b);
+  if (b->refs)
+    {
+      ggc_test_and_set_mark (b->refs);
+      gt_ggc_mx (b->refs);
+    }
+}
+
+void gt_ggc_mx (modref_base_node<tree_node*>* &b)
+{
+  ggc_test_and_set_mark (b);
+  if (b->refs)
+    {
+      ggc_test_and_set_mark (b->refs);
+      gt_ggc_mx (b->refs);
+    }
+  if (b->base)
+    gt_ggc_mx (b->base);
+}
+
+void gt_pch_nx (modref_base_node<int>*) {}
+void gt_pch_nx (modref_base_node<tree_node*>*) {}
+void gt_pch_nx (modref_base_node<int>*, gt_pointer_operator, void *) {}
+void gt_pch_nx (modref_base_node<tree_node*>*, gt_pointer_operator, void *) {}
+
+void gt_ggc_mx (modref_ref_node<int>* &r)
+{
+  ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
+}
+
+void gt_ggc_mx (modref_ref_node<tree_node*>* &r)
+{
+  ggc_test_and_set_mark (r);
+  if (r->accesses)
+    {
+      ggc_test_and_set_mark (r->accesses);
+      gt_ggc_mx (r->accesses);
+    }
+  if (r->ref)
+    gt_ggc_mx (r->ref);
+}
+
+void gt_pch_nx (modref_ref_node<int>* ) {}
+void gt_pch_nx (modref_ref_node<tree_node*>*) {}
+void gt_pch_nx (modref_ref_node<int>*, gt_pointer_operator, void *) {}
+void gt_pch_nx (modref_ref_node<tree_node*>*, gt_pointer_operator, void *) {}
+
+void gt_ggc_mx (modref_access_node* &a)
+{
+  ggc_test_and_set_mark (a);
+  /* TODO: Mark a->x and its crumbs vector as well.  */
+}
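
The domination predicate tested above compares poly_int64 bounds with
known_le.  The same logic on plain integers, with an int tag standing in
for the type, looks like this (hypothetical standalone sketch, not the
patch's code):

```cpp
#include <cassert>
#include <cstdint>

// An access (t1, [a1, b1]) is dominated by (t2, [a2, b2]) when the types
// match and the first interval lies inside the second.
static bool
access_covered_by (int t1, std::int64_t a1, std::int64_t b1,
                   int t2, std::int64_t a2, std::int64_t b2)
{
  if (t1 != t2)
    return false;
  // Mirrors the known_le chain: a2 <= a1 <= b2 and b1 <= b2.
  return a2 <= a1 && a1 <= b2 && b1 <= b2;
}
```

The real implementation must use known_le because poly_int64 bounds are
only partially ordered; when the comparison cannot be decided at compile
time, known_le conservatively answers false and the accesses are kept
separate.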
diff --git a/gcc/modref-tree.h b/gcc/modref-tree.h
new file mode 100644
index 00000000000..a5630c8df62
--- /dev/null
+++ b/gcc/modref-tree.h
@@ -0,0 +1,355 @@ 
+/* Data structure for the modref pass.
+   Copyright (C) 2019 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_MODREF_TREE_H
+#define GCC_MODREF_TREE_H
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "alloc-pool.h"
+#include "tree-pass.h"
+#include "gimple-iterator.h"
+#include "tree-dfa.h"
+#include "cgraph.h"
+#include "ipa-utils.h"
+#include "symbol-summary.h"
+#include "gimple-pretty-print.h"
+#include "gimple-walk.h"
+#include "print-tree.h"
+#include "tree-streamer.h"
+#include "vec.h"
+
+struct ipa_modref_summary;
+
+struct GTY(()) ipa_access_component
+{
+  tree type;
+  poly_int64 offset;
+};
+
+struct GTY(()) ipa_access
+{
+  vec <ipa_access_component, va_gc> *crumbs;
+  ipa_access() : crumbs(NULL) {}
+};
+
+struct GTY(()) modref_access_node
+{
+  ipa_access *x;
+  poly_int64 a;
+  poly_int64 b;
+
+  modref_access_node (ipa_access *x, poly_int64 a, poly_int64 b):
+    x(x),
+    a(a),
+    b(b) {}
+};
+
+bool access_covered_by (tree t1, poly_int64 a1, poly_int64 b1,
+                        tree t2, poly_int64 a2, poly_int64 b2);
+
+
+template <typename T>
+struct GTY((user)) modref_ref_node
+{
+  T ref;
+  vec <modref_access_node *, va_gc> *accesses;
+  bool every_access;
+
+  modref_ref_node (T ref):
+    ref(ref),
+    accesses (NULL),
+    every_access(false)
+  {}
+
+  void insert_access (ipa_access *x, poly_int64 a, poly_int64 b,
+                      size_t max_accesses)
+  {
+    /* If this base->ref pair has no access information, bail out.  */
+    if (every_access)
+      return;
+
+    /* TODO: Search for an already stored access which dominates the access
+       being inserted (in which case there is nothing to do) and, conversely,
+       extend or remove stored accesses which the inserted access dominates.
+       ACCESS_COVERED_BY implements the domination test, but it needs to be
+       adapted to the IPA_ACCESS chains first.  */
+    bool dominated = false;
+
+    /* Since domination pruning is not implemented yet, the inserted access
+       always gets a new node.  */
+    if (!dominated)
+      {
+        /* If this base->ref pair has too many accesses stored, clear all
+           accesses and bail out.  */
+        if (accesses && accesses->length () >= max_accesses)
+          {
+            collapse ();
+            return;
+          }
+        modref_access_node *access_node = new (ggc_alloc <modref_access_node> ())
+                                          modref_access_node (x, a, b);
+        vec_safe_push (accesses, access_node);
+      }
+  }
+
+  void collapse ()
+  {
+    accesses = NULL; /* TODO cleanup */
+    every_access = true;
+  }
+};
+
+
+template <typename T>
+struct GTY((user)) modref_base_node
+{
+  T base;
+  vec <modref_ref_node <T> *, va_gc> *refs;
+  bool every_ref;
+
+  modref_base_node(T base):
+    base (base),
+    refs (NULL),
+    every_ref (false) {}
+
+  modref_ref_node <T> *search (T ref)
+  {
+    size_t i;
+    modref_ref_node <T> *n;
+    FOR_EACH_VEC_SAFE_ELT (refs, i, n)
+      if (n->ref == ref)
+        return n;
+    return NULL;
+  }
+
+  modref_ref_node <T> *insert_ref (T ref, size_t max_refs)
+  {
+    modref_ref_node <T> *ref_node;
+
+    /* If the node is collapsed, don't do anything.  */
+    if (every_ref)
+      return NULL;
+
+    /* Otherwise, insert a node for the ref of the access under the base.  */
+    ref_node = search (ref);
+    if (ref_node)
+      return ref_node;
+
+    /* Collapse the node if too full already.  */
+    if (refs && refs->length () >= max_refs)
+      {
+        collapse ();
+        return NULL;
+      }
+
+    ref_node = new (ggc_alloc <modref_ref_node <T> > ())
+               modref_ref_node <T> (ref);
+    vec_safe_push (refs, ref_node);
+    return ref_node;
+  }
+
+  void collapse ()
+  {
+    refs = NULL; /* TODO cleanup */
+    every_ref = true;
+  }
+};
+
+/* TODO: Collect types and [a, b) ranges of memory access components.
+   TODO: Use flag_modref_max_{bases,refs,accesses}.  */
+template <typename T>
+struct GTY((user)) modref_tree
+{
+  vec <modref_base_node <T> *, va_gc> *bases;
+  size_t max_bases;
+  size_t max_refs;
+  size_t max_accesses;
+  bool every_base;
+
+  modref_tree (size_t max_bases, size_t max_refs, size_t max_accesses):
+    bases (NULL),
+    max_bases (max_bases),
+    max_refs (max_refs),
+    max_accesses (max_accesses),
+    every_base (false) {}
+
+  modref_base_node <T> *insert_base (T base)
+  {
+    modref_base_node <T> *base_node;
+
+    /* If the node is collapsed, don't do anything.  */
+    if (every_base)
+      return NULL;
+
+    /* Otherwise, insert a node for the base of the access into the tree.  */
+    base_node = search (base);
+    if (base_node)
+      return base_node;
+
+    /* Collapse the node if too full already.  */
+    if (bases && bases->length () >= max_bases)
+      {
+        collapse ();
+        return NULL;
+      }
+
+    base_node = new (ggc_alloc <modref_base_node <T> > ())
+                modref_base_node <T> (base);
+    vec_safe_push (bases, base_node);
+    return base_node;
+  }
+
+  void insert (T base, T ref, ipa_access *x, poly_int64 a, poly_int64 b)
+  {
+    modref_base_node <T> *base_node;
+    modref_ref_node <T> *ref_node;
+
+    base_node = insert_base (base);
+    if (!base_node)
+      return;
+    gcc_assert (search (base) != NULL);
+
+    ref_node = base_node->insert_ref (ref, max_refs);
+    if (!ref_node)
+      return;
+
+    ref_node->insert_access (x, a, b, max_accesses);
+  }
+
+  void merge (modref_tree <T> *other)
+  {
+    if (!other)
+      return;
+    if (other->every_base)
+      {
+        collapse ();
+        return;
+      }
+
+    size_t i, j, k;
+    modref_base_node <T> *base_node, *my_base_node;
+    modref_ref_node <T> *ref_node, *my_ref_node;
+    modref_access_node *access_node;
+    FOR_EACH_VEC_SAFE_ELT (other->bases, i, base_node)
+      {
+        my_base_node = insert_base (base_node->base);
+        if (!my_base_node)
+          continue;
+
+        if (base_node->every_ref)
+          {
+            my_base_node->collapse ();
+            continue;
+          }
+
+        FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
+          {
+            my_ref_node = my_base_node->insert_ref (ref_node->ref, max_refs);
+            if (!my_ref_node)
+              continue;
+
+            if (ref_node->every_access)
+              {
+                my_ref_node->collapse ();
+                continue;
+              }
+
+            FOR_EACH_VEC_SAFE_ELT (ref_node->accesses, k, access_node)
+              my_ref_node->insert_access (access_node->x, access_node->a,
+                                          access_node->b, max_accesses);
+          }
+      }
+  }
+
+  modref_base_node <T> *search (T base)
+  {
+    size_t i;
+    modref_base_node <T> *n;
+    FOR_EACH_VEC_SAFE_ELT (bases, i, n)
+      if (n->base == base)
+        return n;
+    return NULL;
+  }
+
+  void collapse ()
+  {
+    bases = NULL; /* TODO cleanup */
+    every_base = true;
+  }
+};
+
+void modref_c_tests ();
+
+void gt_ggc_mx (modref_tree <int>* const&);
+void gt_ggc_mx (modref_tree <tree_node*>* const&);
+void gt_pch_nx (modref_tree <int>* const&);
+void gt_pch_nx (modref_tree <tree_node*>* const&);
+void gt_pch_nx (modref_tree <int>* const&, gt_pointer_operator op,
+		void *cookie);
+void gt_pch_nx (modref_tree <tree_node*>* const&, gt_pointer_operator op,
+		void *cookie);
+
+void gt_ggc_mx (modref_base_node <int>*);
+void gt_ggc_mx (modref_base_node <tree_node*>* &);
+void gt_pch_nx (modref_base_node <int>* const&);
+void gt_pch_nx (modref_base_node <tree_node*>* const&);
+void gt_pch_nx (modref_base_node <int>* const&, gt_pointer_operator op,
+		void *cookie);
+void gt_pch_nx (modref_base_node <tree_node*>* const&, gt_pointer_operator op,
+		void *cookie);
+
+void gt_ggc_mx (modref_ref_node <int>*);
+void gt_ggc_mx (modref_ref_node <tree_node*>* &);
+void gt_pch_nx (modref_ref_node <int>* const&);
+void gt_pch_nx (modref_ref_node <tree_node*>* const&);
+void gt_pch_nx (modref_ref_node <int>* const&, gt_pointer_operator op,
+		void *cookie);
+void gt_pch_nx (modref_ref_node <tree_node*>* const&, gt_pointer_operator op,
+		void *cookie);
+
+#endif
diff --git a/gcc/params.opt b/gcc/params.opt
index d60db5825cc..75070c8f0af 100644
--- a/gcc/params.opt
+++ b/gcc/params.opt
@@ -876,6 +876,18 @@  Maximum size of a single store merging region in bytes.
 Common Joined UInteger Var(param_switch_conversion_branch_ratio) Init(8) IntegerRange(1, 65536) Param
 The maximum ratio between array size and switch branches for a switch conversion to take place.
 
+-param=modref-max-bases=
+Common Joined UInteger Var(param_modref_max_bases) Init(200) Param
+Maximum number of bases stored in each modref tree.
+
+-param=modref-max-refs=
+Common Joined UInteger Var(param_modref_max_refs) Init(200) Param
+Maximum number of refs stored in each modref tree.
+
+-param=modref-max-accesses=
+Common Joined UInteger Var(param_modref_max_accesses) Init(200) Param
+Maximum number of accesses stored in each modref tree.
+
 -param=tm-max-aggregate-size=
 Common Joined UInteger Var(param_tm_max_aggregate_size) Init(9) Param
 Size in bytes after which thread-local aggregates should be instrumented with the logging functions instead of save/restore pairs.
diff --git a/gcc/passes.def b/gcc/passes.def
index 798a391bd35..a71ae62b690 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -51,6 +51,7 @@  along with GCC; see the file COPYING3.  If not see
   INSERT_PASSES_AFTER (all_small_ipa_passes)
   NEXT_PASS (pass_ipa_free_lang_data);
   NEXT_PASS (pass_ipa_function_and_variable_visibility);
+  NEXT_PASS (pass_ipa_modref);
   NEXT_PASS (pass_build_ssa_passes);
   PUSH_INSERT_PASSES_WITHIN (pass_build_ssa_passes)
       NEXT_PASS (pass_fixup_cfg);
@@ -89,6 +90,7 @@  along with GCC; see the file COPYING3.  If not see
           NEXT_PASS (pass_dse);
 	  NEXT_PASS (pass_cd_dce);
 	  NEXT_PASS (pass_phiopt, true /* early_p */);
+	  NEXT_PASS (pass_modref);
 	  NEXT_PASS (pass_tail_recursion);
 	  NEXT_PASS (pass_convert_switch);
 	  NEXT_PASS (pass_cleanup_eh);
@@ -143,6 +145,7 @@  along with GCC; see the file COPYING3.  If not see
 
   INSERT_PASSES_AFTER (all_regular_ipa_passes)
   NEXT_PASS (pass_ipa_whole_program_visibility);
+  NEXT_PASS (pass_ipa_modref);
   NEXT_PASS (pass_ipa_profile);
   NEXT_PASS (pass_ipa_icf);
   NEXT_PASS (pass_ipa_devirt);
@@ -342,6 +345,7 @@  along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_late_warn_uninitialized);
       NEXT_PASS (pass_uncprop);
       NEXT_PASS (pass_local_pure_const);
+      NEXT_PASS (pass_modref);
   POP_INSERT_PASSES ()
   NEXT_PASS (pass_all_optimizations_g);
   PUSH_INSERT_PASSES_WITHIN (pass_all_optimizations_g)
@@ -374,6 +378,7 @@  along with GCC; see the file COPYING3.  If not see
       NEXT_PASS (pass_late_warn_uninitialized);
       NEXT_PASS (pass_uncprop);
       NEXT_PASS (pass_local_pure_const);
+      NEXT_PASS (pass_modref);
   POP_INSERT_PASSES ()
   NEXT_PASS (pass_tm_init);
   PUSH_INSERT_PASSES_WITHIN (pass_tm_init)
diff --git a/gcc/timevar.def b/gcc/timevar.def
index 357fcfd65c5..61de26caeed 100644
--- a/gcc/timevar.def
+++ b/gcc/timevar.def
@@ -107,6 +107,7 @@  DEFTIMEVAR (TV_IPA_PTA               , "ipa points-to")
 DEFTIMEVAR (TV_IPA_SRA               , "ipa SRA")
 DEFTIMEVAR (TV_IPA_FREE_LANG_DATA    , "ipa free lang data")
 DEFTIMEVAR (TV_IPA_FREE_INLINE_SUMMARY, "ipa free inline summary")
+DEFTIMEVAR (TV_IPA_MODREF	     , "ipa modref")
 /* Time spent by constructing CFG.  */
 DEFTIMEVAR (TV_CFG                   , "cfg construction")
 /* Time spent by cleaning up CFG.  */
@@ -218,6 +219,7 @@  DEFTIMEVAR (TV_TREE_SINCOS           , "gimple CSE sin/cos")
 DEFTIMEVAR (TV_TREE_WIDEN_MUL        , "gimple widening/fma detection")
 DEFTIMEVAR (TV_TRANS_MEM             , "transactional memory")
 DEFTIMEVAR (TV_TREE_STRLEN           , "tree strlen optimization")
+DEFTIMEVAR (TV_TREE_MODREF	     , "tree modref")
 DEFTIMEVAR (TV_CGRAPH_VERIFY         , "callgraph verifier")
 DEFTIMEVAR (TV_DOM_FRONTIERS         , "dominance frontiers")
 DEFTIMEVAR (TV_DOMINANCE             , "dominance computation")
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index a987661530e..de16cd7197b 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -478,6 +478,7 @@  extern gimple_opt_pass *make_pass_gen_hsail (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_warn_nonnull_compare (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_sprintf_length (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_walloca (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_modref (gcc::context *ctxt);
 
 /* IPA Passes */
 extern simple_ipa_opt_pass *make_pass_ipa_lower_emutls (gcc::context *ctxt);
@@ -514,6 +515,7 @@  extern ipa_opt_pass_d *make_pass_ipa_profile (gcc::context *ctxt);
 extern ipa_opt_pass_d *make_pass_ipa_cdtor_merge (gcc::context *ctxt);
 extern ipa_opt_pass_d *make_pass_ipa_single_use (gcc::context *ctxt);
 extern ipa_opt_pass_d *make_pass_ipa_comdats (gcc::context *ctxt);
+extern ipa_opt_pass_d *make_pass_ipa_modref (gcc::context *ctxt);
 extern simple_ipa_opt_pass *make_pass_materialize_all_clones (gcc::context *
 							      ctxt);
 
diff --git a/gcc/tree-ssa-alias.c b/gcc/tree-ssa-alias.c
index 8c63e3bf5a9..7070e0e1298 100644
--- a/gcc/tree-ssa-alias.c
+++ b/gcc/tree-ssa-alias.c
@@ -38,6 +38,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "tree-dfa.h"
 #include "ipa-reference.h"
 #include "varasm.h"
+#include "ipa-modref.h"
 
 /* Broad overview of how alias analysis on gimple works:
 
@@ -2273,6 +2274,51 @@  refs_output_dependent_p (tree store1, tree store2)
   return refs_may_alias_p_1 (&r1, &r2, false);
 }
 
+/* Returns true if and only if REF may alias any access stored in TT.  */
+
+static bool
+modref_may_conflict (modref_tree <alias_set_type> *tt, ao_ref *ref)
+{
+  alias_set_type base_set, ref_set;
+  modref_base_node <alias_set_type> *base_node;
+  modref_ref_node <alias_set_type> *ref_node;
+  size_t i, j;
+
+  if (!flag_strict_aliasing)
+    return true;
+
+  if (tt->every_base)
+    return true;
+
+  base_set = ao_ref_base_alias_set (ref);
+  if (!base_set)
+    return true;
+
+  ref_set = ao_ref_alias_set (ref);
+
+  FOR_EACH_VEC_SAFE_ELT (tt->bases, i, base_node)
+    {
+      if (base_node->every_ref)
+	return true;
+
+      /* A zero base alias set means the access could not be analyzed;
+	 be conservative.  */
+      if (!base_node->base)
+	return true;
+
+      if (!alias_sets_conflict_p (base_set, base_node->base))
+	continue;
+
+      FOR_EACH_VEC_SAFE_ELT (base_node->refs, j, ref_node)
+	{
+	  if (!alias_sets_conflict_p (ref_set, ref_node->ref))
+	    continue;
+
+	  /* TODO use more access information.  */
+	  return true;
+	}
+    }
+  return false;
+}
+
 /* If the call CALL may use the memory reference REF return true,
    otherwise return false.  */
 
@@ -2288,10 +2334,35 @@  ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
       && (flags & (ECF_CONST|ECF_NOVOPS)))
     goto process_args;
 
+  callee = gimple_call_fndecl (call);
+
   base = ao_ref_base (ref);
   if (!base)
     return true;
 
+  if (callee != NULL_TREE)
+    {
+      struct cgraph_node *node = cgraph_node::get (callee);
+      if (node)
+	{
+	  enum availability avail;
+	  node = node->function_symbol (&avail);
+	  if (avail > AVAIL_INTERPOSABLE)
+	    {
+	      modref_summary *summary = get_function_summary (node);
+	      if (summary
+		  && !modref_may_conflict (summary->loads_tree, ref))
+		{
+		  if (dump_file)
+		    fprintf (dump_file,
+			     "modref said: in %s, call to %s does not use"
+			     " %i->%i\n",
+			     current_function_name (), node->dump_name (),
+			     ao_ref_base_alias_set (ref),
+			     ao_ref_alias_set (ref));
+		  return false;
+		}
+	    }
+	}
+    }
+
+
   /* A call that is not without side-effects might involve volatile
      accesses and thus conflicts with all other volatile accesses.  */
   if (ref->volatile_p)
@@ -2305,8 +2376,6 @@  ref_maybe_used_by_call_p_1 (gcall *call, ao_ref *ref, bool tbaa_p)
       && !is_global_var (base))
     goto process_args;
 
-  callee = gimple_call_fndecl (call);
-
   /* Handle those builtin functions explicitly that do not act as
      escape points.  See tree-ssa-structalias.c:find_func_aliases
      for the list of builtins we might need to handle here.  */
@@ -2691,6 +2760,36 @@  call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref)
   if (!base)
     return true;
 
+  callee = gimple_call_fndecl (call);
+  if (callee != NULL_TREE)
+    {
+      struct cgraph_node *node = cgraph_node::get (callee);
+      if (node)
+	{
+	  enum availability avail;
+	  node = node->function_symbol (&avail);
+	  if (avail > AVAIL_INTERPOSABLE)
+	    {
+	      modref_summary *summary = get_function_summary (node);
+	      if (summary
+		  && !modref_may_conflict (summary->stores_tree, ref))
+		{
+		  if (dump_file)
+		    {
+		      fprintf (dump_file,
+			       "modref said: in %s, call to %s does not"
+			       " clobber ",
+			       current_function_name (), node->dump_name ());
+		      print_generic_expr (dump_file, ref->ref);
+		      fprintf (dump_file, " %i->%i\n",
+			       ao_ref_base_alias_set (ref),
+			       ao_ref_alias_set (ref));
+		    }
+		  return false;
+		}
+	    }
+	}
+    }
+
+
   if (TREE_CODE (base) == SSA_NAME
       || CONSTANT_CLASS_P (base))
     return false;
@@ -2719,8 +2818,6 @@  call_may_clobber_ref_p_1 (gcall *call, ao_ref *ref)
       && SSA_NAME_POINTS_TO_READONLY_MEMORY (TREE_OPERAND (base, 0)))
     return false;
 
-  callee = gimple_call_fndecl (call);
-
   /* Handle those builtin functions explicitly that do not act as
      escape points.  See tree-ssa-structalias.c:find_func_aliases
      for the list of builtins we might need to handle here.  */
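
For readers unfamiliar with the data structure, here is a minimal,
self-contained sketch of the degradation behavior described above. This is
not the patch's code: it uses std::vector instead of GCC's GGC-allocated
vectors, plain int alias sets, only two levels (base and ref, omitting the
access level), and pointer equality of alias sets as a stand-in for
alias_sets_conflict_p. It only illustrates how each level collapses to
"may access everything" once its size limit is exceeded, so precision is
lost gracefully rather than the structure growing without bound:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

/* One base entry; holds the refs seen under that base.  */
struct ref_level
{
  int base;
  bool every_ref;		/* Collapsed: any ref may be accessed.  */
  std::vector<int> refs;
};

/* Simplified two-level modref-style tree with bounded size.  */
struct modref_sketch
{
  size_t max_bases, max_refs;
  bool every_base;		/* Collapsed: any base may be accessed.  */
  std::vector<ref_level> bases;

  modref_sketch (size_t mb, size_t mr)
    : max_bases (mb), max_refs (mr), every_base (false) {}

  void insert (int base, int ref)
  {
    if (every_base)
      return;
    ref_level *b = nullptr;
    for (ref_level &n : bases)
      if (n.base == base)
	b = &n;
    if (!b)
      {
	if (bases.size () >= max_bases)
	  {
	    /* Too many distinct bases: degrade to "anything".  */
	    every_base = true;
	    bases.clear ();
	    return;
	  }
	bases.push_back (ref_level {base, false, {}});
	b = &bases.back ();
      }
    if (b->every_ref)
      return;
    for (int r : b->refs)
      if (r == ref)
	return;
    if (b->refs.size () >= max_refs)
      {
	/* Too many distinct refs under this base: collapse the base.  */
	b->every_ref = true;
	b->refs.clear ();
	return;
      }
    b->refs.push_back (ref);
  }

  /* Conservative query: true means a conflict cannot be ruled out.  */
  bool may_conflict (int base, int ref) const
  {
    if (every_base)
      return true;
    for (const ref_level &b : bases)
      {
	if (b.base != base)	/* Stand-in for alias_sets_conflict_p.  */
	  continue;
	if (b.every_ref)
	  return true;
	for (int r : b.refs)
	  if (r == ref)
	    return true;
      }
    return false;
  }
};
```

Note how every collapse only makes the tree answer "true" (may conflict)
more often, which is always safe for a disambiguator.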