
A pass which merges constant stores to bitfields

Message ID CA+=Sn1m8FiYtA5NoovVPJQO1Ls6X0C0YrBUz781Nge_W+egCxQ@mail.gmail.com
State New

Commit Message

Andrew Pinski Nov. 13, 2012, 12:50 a.m. UTC
Hi,
  I know we are in stage 3, but I thought I would send this out now
for review, as I finally have it ready for consumption and have at
last removed the limitation of it only working on big-endian.
This pass was originally written by Adam Nemet while he was at Cavium.
I modified it to work on little-endian and also updated the code to
use the aliasing oracle and some of the new VEC interface.

Yes, I know I forgot to add documentation for the new option and for
the new pass.  I will add it soon.

Thanks,
Andrew Pinski

ChangeLog:
* tree-merge-const-bfstores.c: New file.
* tree-pass.h (pass_merge_const_bfstores): Add pass.
* opts.c (default_options_table): Add OPT_fmerge_const_bfstores at -O2
and above.
* timevar.def (TV_MERGE_CONST_BFSTORES): New timevar.
* common.opt (fmerge-const-bfstores): New option.
* Makefile.in (OBJS): Add tree-merge-const-bfstores.o.
(tree-merge-const-bfstores.o): New target.
* passes.c (init_optimization_passes): Add pass_merge_const_bfstores
right after the last pass_phiopt.

Comments

Andrew Pinski Nov. 13, 2012, 12:52 a.m. UTC | #1
On Mon, Nov 12, 2012 at 4:50 PM, Andrew Pinski
<andrew.pinski@caviumnetworks.com> wrote:
> Hi,
>   I know we are in stage3, I thought I would send this out now for
> review as I finally have it ready for consumption as I finally got
> around to removing the limitation of it only working on big-endian.
> This pass was originally written by Adam Nemet while he was at Cavium.
>  I modified it to work on little-endian and also update the code to
> use the aliasing oracle and some of the new VEC interface.
>
> Yes I know I forgot to add documentation for the new option and for
> the new pass.  I will add it soon.

I forgot to say I bootstrapped and tested this on x86_64-linux-gnu
with no regressions.  I have tested it on a big-endian MIPS64 target
too.

>
> Thanks,
> Andrew Pinski
>
> ChangeLog:
> * tree-merge-const-bfstores.c: New file.
> * tree-pass.h (pass_merge_const_bfstores): Add pass.
> * opts.c (default_options_table): Add OPT_fmerge_const_bfstores at -O2
> and above.
> * timevar.def (TV_MERGE_CONST_BFSTORES): New timevar.
> * common.opt (fmerge-const-bfstores): New option.
> * Makefile.in (OBJS): Add tree-merge-const-bfstores.o.
> (tree-merge-const-bfstores.o): New target.
> * passes.c (init_optimization_passes): Add pass_merge_const_bfstores
> right after the last pass_phiopt.
Richard Biener Nov. 27, 2012, 1:27 p.m. UTC | #2
On Tue, Nov 13, 2012 at 1:50 AM, Andrew Pinski
<andrew.pinski@caviumnetworks.com> wrote:
> Hi,
>   I know we are in stage3, I thought I would send this out now for
> review as I finally have it ready for consumption as I finally got
> around to removing the limitation of it only working on big-endian.
> This pass was originally written by Adam Nemet while he was at Cavium.
>  I modified it to work on little-endian and also update the code to
> use the aliasing oracle and some of the new VEC interface.
>
> Yes I know I forgot to add documentation for the new option and for
> the new pass.  I will add it soon.

Note that I think the patch doesn't try to honor the C++ memory model
nor ARM's strict volatile bitfields.

My plan was to get similar results to your patch by lowering bitfield
loads and stores using DECL_BIT_FIELD_REPRESENTATIVE and
then letting DCE/DSE and tree combine merge things.

That is, you can replace a bit-field a.b.c.d with
BIT_FIELD_REF <a.b.c.DECL_BIT_FIELD_REPRESENTATIVE, ...>
and perform stores via read-modify-write.  The DECL_BIT_FIELD_REPRESENTATIVE
loads can be CSEd leaving a combining opportunity, stores will end up
being redundant.

The advantage is that using DECL_BIT_FIELD_REPRESENTATIVE will
make you honor the memory model issues automatically - you basically
perform what expand would do.

Instead of generating the component-ref with the representative you can of
course also lower the whole thing to a MEM_REF (if the ref does not
contain variable indexes).

Now, you have a pass that might be able to figure out when this lowering
would be profitable - that's good, because that is what I was missing with
the very simple approach of performing the above lowering from inside
gimplification.

For reference I attached the BITFIELD_COMPOSE tree expression
patch.

Richard.

> Thanks,
> Andrew Pinski
>
> ChangeLog:
> * tree-merge-const-bfstores.c: New file.
> * tree-pass.h (pass_merge_const_bfstores): Add pass.
> * opts.c (default_options_table): Add OPT_fmerge_const_bfstores at -O2
> and above.
> * timevar.def (TV_MERGE_CONST_BFSTORES): New timevar.
> * common.opt (fmerge-const-bfstores): New option.
> * Makefile.in (OBJS): Add tree-merge-const-bfstores.o.
> (tree-merge-const-bfstores.o): New target.
> * passes.c (init_optimization_passes): Add pass_merge_const_bfstores
> right after the last pass_phiopt.

Patch

Index: tree-merge-const-bfstores.c
===================================================================
--- tree-merge-const-bfstores.c	(revision 0)
+++ tree-merge-const-bfstores.c	(revision 0)
@@ -0,0 +1,455 @@ 
+/* Merge constant bitfield stores.
+   Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+/* This pass merges adjacent or overlapping (in the case of unions)
+   constant bitfield stores.  We try to merge stores into preceding
+   stores; in case the whole structure is zeroed out initially,
+   merging backward lets us fold later stores into that
+   initialization.
+
+   Requirements:
+
+   1. order shouldn't matter:
+
+     x.b = 2;
+     x.a = 3;
+     x.c = 4;
+
+  2. merge into initialization (with a union example):
+
+     union { uint64_t l; struct { uint64_t a:2; uint64_t b:2 ... } s } u;
+     u.l = 0;
+     ...
+     u.s.b = 2;
+
+  3. merge should happen even if use is in a different bb:
+
+     x.a = 2;
+     x.b = 3;
+     if ()
+       use (x);
+
+  4. don't give up on non-overlapping varying stores:
+
+     extern int i;
+     x.a = 2;
+     x.c = i;
+     x.b = 3;
+
+  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree-pass.h"
+#include "rtl.h"
+#include "obstack.h"
+#include "basic-block.h"
+#include "tree.h"
+#include "tree-flow.h"
+#include "langhooks.h"
+#include "timevar.h"
+#include "vec.h"
+#include "diagnostic.h"
+#include "gimple-pretty-print.h"
+
+/* Structure to keep track of constant bit-field stores.  */
+
+typedef struct const_bfstore
+{
+  tree inner;
+  HOST_WIDE_INT bitsize;
+  HOST_WIDE_INT bitpos;
+  tree constval;
+  gimple stmt;
+} const_bfstore;
+
+DEF_VEC_O (const_bfstore);
+DEF_VEC_ALLOC_O (const_bfstore, heap);
+
+static VEC (const_bfstore, heap) *bfstores;
+
+/* Build an unsigned BIT_FIELD_REF expression.  */
+
+static tree
+make_unsigned_bitfield_ref (tree inner, HOST_WIDE_INT bitsize,
+			    HOST_WIDE_INT bitpos)
+{
+  enum machine_mode mode;
+  tree exp, unsigned_type;
+
+  mode = get_best_mode (bitsize, bitpos, 0, 0,
+			TYPE_ALIGN (TREE_TYPE (inner)), word_mode, 0);
+  gcc_assert (mode != VOIDmode);
+
+  unsigned_type = build_nonstandard_integer_type (bitsize, 1);
+  exp = build3 (BIT_FIELD_REF, unsigned_type, inner, size_int (bitsize),
+		bitsize_int (bitpos));
+  return exp;
+}
+
+/* Return true iff the bitfields are overlapping.  */
+
+static bool
+overlapping_bitfields (HOST_WIDE_INT bitsize0, HOST_WIDE_INT bitpos0,
+		       HOST_WIDE_INT bitsize1, HOST_WIDE_INT bitpos1)
+{
+  return (bitpos0 < bitpos1
+	  ? bitpos1 < bitpos0 + bitsize0
+	  : bitpos0 < bitpos1 + bitsize1);
+}
+
+/* A bitfield store of OBITSIZE bits starting at OBITPOS is followed
+   by a bitfield store of NBITSIZE bits at NBITPOS; compute the new
+   *BITPOS and return the new bitsize, or -1 if the stores are
+   neither adjacent nor overlapping.  Also compute *OSHIFT and
+   *NSHIFT, the number of bits the constants need to be shifted to
+   the left before the RHS bits are combined.  */
+
+static HOST_WIDE_INT
+compute_new_bitfield_positions (HOST_WIDE_INT obitsize, HOST_WIDE_INT obitpos,
+				HOST_WIDE_INT nbitsize, HOST_WIDE_INT nbitpos,
+				HOST_WIDE_INT *bitpos,
+				HOST_WIDE_INT *oshift, HOST_WIDE_INT *nshift)
+{
+  HOST_WIDE_INT bitsize;
+
+  *bitpos = MIN (obitpos, nbitpos);
+  bitsize = MAX (obitpos + obitsize, nbitpos + nbitsize) - *bitpos;
+
+  if (BYTES_BIG_ENDIAN)
+    *oshift = *bitpos + bitsize - (obitpos + obitsize);
+  else
+    *oshift = *bitpos + obitpos;
+
+  if (*oshift > nbitsize)
+    return -1;
+
+  if (BYTES_BIG_ENDIAN)
+    *nshift = *bitpos + bitsize - (nbitpos + nbitsize);
+  else
+    *nshift = *bitpos + nbitpos;
+
+  if (*nshift > obitsize)
+    return -1;
+
+  return bitsize;
+}
+
+/* This is used as an early check.  Return true iff given the
+   alignment ALIGN and two bitfield references with BITSIZE0/BITPOS0
+   and BITSIZE1/BITPOS1 we will be able to combine the constant stores
+   to these locations.  The get_best_mode call at the end makes sure
+   that we never combine more than what fits in word_mode.  */
+
+static bool
+combinable_bitfields (unsigned int align,
+		      HOST_WIDE_INT bitsize0, HOST_WIDE_INT bitpos0,
+		      HOST_WIDE_INT bitsize1, HOST_WIDE_INT bitpos1)
+{
+  HOST_WIDE_INT bitsize, bitpos, oshift, nshift;
+
+  bitsize =
+    compute_new_bitfield_positions (bitsize0, bitpos0, bitsize1, bitpos1,
+				    &bitpos, &oshift, &nshift);
+  return (bitsize != -1
+	  && bitpos + bitsize <= GET_MODE_BITSIZE (word_mode)
+	  && get_best_mode (bitsize, bitpos, 0, 0, align, word_mode, 0) != VOIDmode);
+}
+
+/* Wrap get_inner_reference to only return information relevant for
+   bit-field refs on the LHS.  Return NULL_TREE iff this is not a
+   bit-field reference.  */
+
+static tree
+get_lhs_bitfield_reference (tree exp, HOST_WIDE_INT *pbitsize,
+			    HOST_WIDE_INT *pbitpos, int *pvolatilep)
+{
+  tree inner, offset;
+  enum machine_mode mode;
+  int unsignedp;
+
+  *pvolatilep = 0;
+  inner = get_inner_reference (exp, pbitsize, pbitpos, &offset, &mode,
+			       &unsignedp, pvolatilep, false);
+  if (inner == exp || *pbitsize < 0 || offset != 0
+      || TREE_CODE (inner) == PLACEHOLDER_EXPR)
+    return NULL_TREE;
+
+  return inner;
+}
+
+/* Merge the store recorded in entry P of BFSTORES into the earlier
+   store OSTMT; overlapping bits get their value from P.  Updates
+   OSTMT in place as well as *BITSIZE, *BITPOS and *CONSTVAL.  */
+
+static void
+merge_one_bitfield_store (const_bfstore *p, gimple ostmt, HOST_WIDE_INT *bitsize,
+			  HOST_WIDE_INT *bitpos, tree *constval)
+{
+  tree bitfield;
+  HOST_WIDE_INT nshift, oshift, nbitpos = p->bitpos, nbitsize = p->bitsize,
+    obitsize = *bitsize, obitpos = *bitpos;
+  tree nconstval = p->constval, oconstval = *constval;
+  gimple nstmt = p->stmt;
+  tree type, oconst, nconst, mask, minus_one;
+  gimple_stmt_iterator gsi;
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "  ** REPLACING THIS **\n");
+      print_gimple_stmt (dump_file, ostmt, 0, dump_flags);
+      fprintf (dump_file, "\n  ** AND THIS **\n");
+      print_gimple_stmt (dump_file, nstmt, 0, dump_flags);
+    }
+
+  *bitsize = compute_new_bitfield_positions (obitsize, obitpos,
+					     nbitsize, nbitpos,
+					     bitpos, &oshift, &nshift);
+  gcc_assert (*bitsize > 0);
+
+  bitfield = make_unsigned_bitfield_ref (p->inner, *bitsize, *bitpos);
+
+  type = TREE_TYPE (bitfield);
+  /* Always zero-extend to the new type.  */
+  oconst = fold_convert (type, oconstval);
+  oconst = int_const_binop (BIT_AND_EXPR, oconst,
+			    build_low_bits_mask (type, obitsize));
+  nconst = fold_convert (type, nconstval);
+  nconst = int_const_binop (BIT_AND_EXPR, nconst,
+			    build_low_bits_mask (type, nbitsize));
+
+  minus_one = fold_convert (type, integer_minus_one_node);
+
+  if (oshift)
+    oconst = int_const_binop (LSHIFT_EXPR, oconst,
+			    build_int_cst (type, oshift));
+  if (nshift)
+    nconst = int_const_binop (LSHIFT_EXPR, nconst,
+			    build_int_cst (type, nshift));
+
+  /* Mask off the bits from old value that the new value overrides.  The mask
+     is -1 ^ (((1 << NBITSIZE) - 1) << NSHIFT).  */
+  mask = build_low_bits_mask (type, nbitsize);
+  if (nshift)
+    mask = int_const_binop (LSHIFT_EXPR, mask,
+			    build_int_cst (type, nshift));
+  mask = int_const_binop (BIT_XOR_EXPR, minus_one, mask);
+  oconst = int_const_binop (BIT_AND_EXPR, oconst, mask);
+  *constval = int_const_binop (BIT_IOR_EXPR, oconst, nconst);
+
+  gimple_set_lhs (ostmt, bitfield);
+  gimple_assign_set_rhs1 (ostmt, *constval);
+  update_stmt (ostmt);
+
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "\n  ** WITH THIS **\n");
+      print_gimple_stmt (dump_file, ostmt, 0, dump_flags);
+      fprintf (dump_file, "\n\n");
+    }
+
+  gsi = gsi_for_stmt (nstmt);
+  unlink_stmt_vdef (nstmt);
+  gsi_remove (&gsi, true);
+  release_defs (nstmt);
+}
+
+/* Record a new bit-field store described by base expression INNER,
+   BITSIZE, BITPOS and the STMT where the store occurs.  */
+
+static void
+record_new_const_bfstore (tree inner, HOST_WIDE_INT bitsize,
+			  HOST_WIDE_INT bitpos, gimple stmt)
+{
+  const_bfstore p;
+  p.inner = inner;
+  p.bitsize = bitsize;
+  p.bitpos = bitpos;
+  p.constval = gimple_assign_rhs1 (stmt);
+  p.stmt = stmt;
+  VEC_safe_push (const_bfstore, heap, bfstores, p);
+}
+
+/* Decide based on vops whether STMT conflicts with a later STORE.  */
+
+static bool
+vops_conflict_with_store_p (gimple stmt, const_bfstore *store)
+{
+  if (!gimple_vuse (stmt))
+    return false;
+  /* Use the aliasing oracle to decide if the stmt conflicts
+     with the store. */
+  return ref_maybe_used_by_stmt_p (stmt, gimple_get_lhs (store->stmt));
+}
+
+/* If STMT is a bitfield store, look through existing stores and, if
+   the base expression matches, update/combine the stores.  If the
+   base expression is different we still have to invalidate entries
+   based on vops.  */
+
+static bool
+merge_bitfield_stores (gimple stmt)
+{
+  tree lhs, rhs, inner;
+  HOST_WIDE_INT bitsize, bitpos;
+  int volatilep;
+  unsigned i;
+  const_bfstore *store;
+
+  if (!gimple_assign_copy_p (stmt))
+    return false;
+
+  lhs = gimple_get_lhs (stmt);
+  rhs = gimple_assign_rhs1 (stmt);
+  if (TREE_CODE (lhs) != COMPONENT_REF
+      && TREE_CODE (lhs) != BIT_FIELD_REF)
+    return false;
+
+  inner = get_lhs_bitfield_reference (lhs, &bitsize, &bitpos, &volatilep);
+
+  if (!inner || (TREE_CODE (rhs) == INTEGER_CST && volatilep))
+    return false;
+
+  for (i = 0; VEC_iterate (const_bfstore, bfstores, i, store); i++)
+    {
+      /* If we don't know how these are related we don't combine them
+	 and for invalidation we resort to vops.  Do similarly if we
+	 can potentially have a conflicting bitfield use on the RHS.
+	 This is somewhat conservative, but given how gimple is
+	 generated this does not happen *at all* in practice; still,
+	 be paranoid.  */
+      if (!operand_equal_p (inner, store->inner, 0)
+	  || TREE_CODE (rhs) == COMPONENT_REF
+	  || TREE_CODE (rhs) == BIT_FIELD_REF)
+	{
+	  if (vops_conflict_with_store_p (stmt, store))
+	    VEC_unordered_remove (const_bfstore, bfstores, i--);
+
+	  continue;
+	}
+
+      if (TREE_CODE (rhs) != INTEGER_CST)
+	{
+	  /* Instead of invalidating the entry for a varying store we
+	     could adjust the constant store entry but this does not
+	     seem to be happening in our code.  */
+	  if (overlapping_bitfields (bitsize, bitpos,
+				     store->bitsize, store->bitpos))
+	    VEC_unordered_remove (const_bfstore, bfstores, i--);
+
+	  continue;
+	}
+
+      if (!combinable_bitfields (TYPE_ALIGN (TREE_TYPE (inner)),
+				 bitsize, bitpos,
+				 store->bitsize, store->bitpos))
+	  continue;
+
+      merge_one_bitfield_store (store, stmt, &bitsize, &bitpos, &rhs);
+      VEC_unordered_remove (const_bfstore, bfstores, i--);
+    }
+
+  if (TREE_CODE (rhs) == INTEGER_CST)
+    record_new_const_bfstore (inner, bitsize, bitpos, stmt);
+
+  return true;
+}
+
+/* Invalidate stores that can't be moved before STMT.  We do this
+   based on vops, which is more conservative than it should be.
+   Consider this:
+
+     union { uint64_t l; struct { uint64_t a:2; uint64_t b:2 } s } u;
+     u.l = 0; i = u.s.a; u.s.a = 1; u.s.b = 2;
+
+   We will not merge u.s.b = 2 into the initialization because the use
+   of the vops on i = u.s.a will conflict with it.  */
+
+static void
+invalidate_bfstores (gimple stmt)
+{
+  unsigned i;
+  const_bfstore *store;
+
+  for (i = 0; VEC_iterate (const_bfstore, bfstores, i, store);)
+    if (vops_conflict_with_store_p (stmt, store))
+      VEC_unordered_remove (const_bfstore, bfstores, i);
+    else
+      i++;
+}
+
+/* Main entry point of the pass.  */
+
+static unsigned int
+merge_const_bfstores (void)
+{
+  basic_block bb;
+
+  FOR_EACH_BB (bb)
+    {
+      gimple_stmt_iterator gsi;
+
+      VEC_truncate (const_bfstore, bfstores, 0);
+
+      /* Walk statements backward so that if for example the whole
+	 struct is zeroed out initially we merge all the constant
+	 bitfield stores into the initialization.  */
+      for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi))
+	{
+	  gimple stmt = gsi_stmt (gsi);
+	  /* Look for varying stores, note constant stores and try to
+	     merge them.  */
+	  if (!merge_bitfield_stores (stmt))
+	    /* Look for uses that would prevent moving the store.  */
+	    invalidate_bfstores (stmt);
+	}
+    }
+
+  return 0;
+}
+
+/* Gate function for the merge-const-bfstores pass.  */
+
+static bool
+gate_merge_const_bfstores (void)
+{
+  return flag_merge_const_bfstores;
+}
+
+struct gimple_opt_pass pass_merge_const_bfstores  =
+{
+ {
+  GIMPLE_PASS,
+  "constbfstores",		/* name */
+  OPTGROUP_NONE,                /* optinfo_flags */
+  gate_merge_const_bfstores,	/* gate */
+  merge_const_bfstores,		/* execute */
+  NULL,				/* sub */
+  NULL,				/* next */
+  0,				/* static_pass_number */
+  TV_MERGE_CONST_BFSTORES,	/* tv_id */
+  PROP_cfg | PROP_ssa,		/* properties_required */
+  0,				/* properties_provided */
+  0,				/* properties_destroyed */
+  0,				/* todo_flags_start */
+  TODO_update_ssa_only_virtuals | TODO_verify_ssa
+ }
+};
Index: tree-pass.h
===================================================================
--- tree-pass.h	(revision 193446)
+++ tree-pass.h	(working copy)
@@ -323,6 +323,7 @@  extern struct gimple_opt_pass pass_expan
 extern struct gimple_opt_pass pass_object_sizes;
 extern struct gimple_opt_pass pass_strlen;
 extern struct gimple_opt_pass pass_fold_builtins;
+extern struct gimple_opt_pass pass_merge_const_bfstores;
 extern struct gimple_opt_pass pass_stdarg;
 extern struct gimple_opt_pass pass_early_warn_uninitialized;
 extern struct gimple_opt_pass pass_late_warn_uninitialized;
Index: opts.c
===================================================================
--- opts.c	(revision 193446)
+++ opts.c	(working copy)
@@ -484,6 +484,7 @@  static const struct default_options defa
     { OPT_LEVELS_2_PLUS, OPT_falign_jumps, NULL, 1 },
     { OPT_LEVELS_2_PLUS, OPT_falign_labels, NULL, 1 },
     { OPT_LEVELS_2_PLUS, OPT_falign_functions, NULL, 1 },
+    { OPT_LEVELS_2_PLUS, OPT_fmerge_const_bfstores, NULL, 1 },
     { OPT_LEVELS_2_PLUS, OPT_ftree_tail_merge, NULL, 1 },
     { OPT_LEVELS_2_PLUS_SPEED_ONLY, OPT_foptimize_strlen, NULL, 1 },
     { OPT_LEVELS_2_PLUS, OPT_fhoist_adjacent_loads, NULL, 1 },
Index: timevar.def
===================================================================
--- timevar.def	(revision 193446)
+++ timevar.def	(working copy)
@@ -250,6 +250,7 @@  DEFTIMEVAR (TV_FINAL                 , "
 DEFTIMEVAR (TV_VAROUT                , "variable output")
 DEFTIMEVAR (TV_SYMOUT                , "symout")
 DEFTIMEVAR (TV_VAR_TRACKING          , "variable tracking")
+DEFTIMEVAR (TV_MERGE_CONST_BFSTORES  , "merge const bfstores")
 DEFTIMEVAR (TV_VAR_TRACKING_DATAFLOW , "var-tracking dataflow")
 DEFTIMEVAR (TV_VAR_TRACKING_EMIT     , "var-tracking emit")
 DEFTIMEVAR (TV_TREE_IFCOMBINE        , "tree if-combine")
Index: common.opt
===================================================================
--- common.opt	(revision 193446)
+++ common.opt	(working copy)
@@ -1490,6 +1490,10 @@  fmerge-constants
 Common Report Var(flag_merge_constants,1) Optimization
 Attempt to merge identical constants across compilation units
 
+fmerge-const-bfstores
+Common Report Var(flag_merge_const_bfstores)
+Merge adjacent or overlapping constant bitfield stores
+
 fmerge-debug-strings
 Common Report Var(flag_merge_debug_strings) Init(1)
 Attempt to merge identical debug strings across compilation units
Index: Makefile.in
===================================================================
--- Makefile.in	(revision 193446)
+++ Makefile.in	(working copy)
@@ -1366,6 +1366,7 @@  OBJS = \
 	tree-into-ssa.o \
 	tree-iterator.o \
 	tree-loop-distribution.o \
+	tree-merge-const-bfstores.o \
 	tree-nested.o \
 	tree-nomudflap.o \
 	tree-nrv.o \
@@ -2274,6 +2275,9 @@  tree-ssa-ifcombine.o : tree-ssa-ifcombin
    coretypes.h $(TM_H) $(TREE_H) $(BASIC_BLOCK_H) \
    $(TREE_FLOW_H) $(TREE_PASS_H) $(DIAGNOSTIC_H) \
    $(TREE_PRETTY_PRINT_H)
+tree-merge-const-bfstores.o : tree-merge-const-bfstores.c $(CONFIG_H) \
+   $(SYSTEM_H) coretypes.h $(TM_H) tree-pass.h $(RTL_H) $(BASIC_BLOCK_H) \
+   $(TREE_H) $(TREE_FLOW_H) langhooks.h $(TIMEVAR_H) vec.h $(DIAGNOSTIC_H)
 tree-ssa-phiopt.o : tree-ssa-phiopt.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
    $(TM_H) $(GGC_H) $(TREE_H) $(TM_P_H) $(BASIC_BLOCK_H) \
    $(TREE_FLOW_H) $(TREE_PASS_H) langhooks.h $(FLAGS_H) \
Index: passes.c
===================================================================
--- passes.c	(revision 193446)
+++ passes.c	(working copy)
@@ -1522,6 +1522,7 @@  init_optimization_passes (void)
       NEXT_PASS (pass_dse);
       NEXT_PASS (pass_forwprop);
       NEXT_PASS (pass_phiopt);
+      NEXT_PASS (pass_merge_const_bfstores);
       NEXT_PASS (pass_fold_builtins);
       NEXT_PASS (pass_optimize_widening_mul);
       NEXT_PASS (pass_tail_calls);