
Merge the "ISL optimizer" from the graphite branch

Message ID alpine.LNX.2.00.1207031310120.17233@jbgna.fhfr.qr
State New

Commit Message

Richard Biener July 3, 2012, 11:15 a.m. UTC
This merges the last bit from the graphite ISL branch - an
integrated optimizer based on ISL.  To quote Tobias:

"The isl scheduling optimizer implements the scheduling algorithm first 
developed in Pluto [1]. Pluto has shown significant speedups and is 
nowadays even implemented in the IBM XL-C compiler. The implementation of 
this pass is a first draft and was copied largely from Polly. We need 
still to adapt the code to the gcc coding style and we need to tune the 
isl scheduler. At the moment we get reasonable compile times (at most 
2x-3x slowdown) and first speedups.  We now need to tune the compile time 
and start to investigate which optimizations and heuristics need to be 
tuned in our reimplementation.

[1] http://pluto-compiler.sourceforge.net/"

Micha kindly did the code adaptation to the gcc coding style and I renamed
the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
both agree that such an integrated LNO is the way to go, superseding
the individual graphite transforms we have now.  We might even be
able to drop the separate blocking & strip-mining transforms we have
right now in favor of this?

Bootstrapped and tested on x86_64-unknown-linux-gnu, ok for trunk
for the graphite parts?

Thanks,
Richard.

2012-07-03  Tobias Grosser <tobias@grosser.es>
	Michael Matz  <matz@suse.de>

	* Makefile.in (OBJS): Add graphite-optimize-isl.o.
	(graphite-optimize-isl.o): Add dependencies.
	* common.opt (floop-nest-optimize): New flag.
	* doc/invoke.texi (floop-nest-optimize): Document.
	* graphite-dependences.c (compute_deps): Export.
	* graphite-poly.h (compute_deps): Declare.
	* graphite-optimize-isl.c: New file.
	* graphite-poly.c (apply_poly_transforms): Run the loop
	nest optimizer.
	* tree-ssa-loop.c (gate_graphite_transforms): Enable graphite
	if -floop-nest-optimize is enabled.

Comments

Tobias Grosser July 3, 2012, 11:30 a.m. UTC | #1
On 07/03/2012 01:15 PM, Richard Guenther wrote:
>
> This merges the last bit from the graphite ISL branch - an
> integrated optimizer based on ISL.  To quote Tobias:
>
> "The isl scheduling optimizer implements the scheduling algorithm first
> developed in Pluto [1]. Pluto has shown significant speedups and is
> nowadays even implemented in the IBM XL-C compiler. The implementation of
> this pass is a first draft and was copied largely from Polly. We need
> still to adapt the code to the gcc coding style and we need to tune the
> isl scheduler. At the moment we get reasonable compile times (at most
> 2x-3x slowdown) and first speedups.  We now need to tune the compile time
> and start to investigate which optimizations and heuristics need to be
> tuned in our reimplementation.
>
> [1] http://pluto-compiler.sourceforge.net/"
>
> Micha kindly did the code adaptation to the gcc coding style and I renamed
> the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
> both agree that such an integrated LNO is the way to go, superseding
> the individual graphite transforms we have now.  We might even be
> able to drop the separate blocking & strip-mining transforms we have
> right now in favor of this?

Thanks Micha for adapting the style to gcc.

I would like to point out that this pass is still very experimental and
not tuned at all. Specifically, it was only tested on polybench with one
specific set of flags. Even there we did not get only speedups; due to
missing heuristics some benchmarks also showed large slowdowns. When
using it on even slightly different benchmarks or with slightly
different flags, infinite compile time or large performance regressions
may show up! This optimizer may obviously also contain bugs that lead
to miscompiles.

Also, the loop nest optimizer will not be very effective as long as pre
and licm are scheduled before graphite.

That said, I think it would be great to add this pass to gcc to allow
people to experiment with it. However, we really should mark it as
experimental. (This gives the graphite OK, provided the pass is clearly
marked as experimental.)

Some smaller nits:

> +floop-nest-optimize
> +Common Report Var(flag_loop_optimize_isl) Optimization
> +Enable the ISL based loop nest optimizer

What about adding "(experimental)" here?

>   fstrict-volatile-bitfields
>   Common Report Var(flag_strict_volatile_bitfields) Init(-1)
>   Force bitfield accesses to match their type width
> diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
> index 87e0d1c..5263152 100644
> --- a/gcc/doc/invoke.texi
> +++ b/gcc/doc/invoke.texi
> @@ -373,7 +373,7 @@ Objective-C and Objective-C++ Dialects}.
>   -fira-loop-pressure -fno-ira-share-save-slots @gol
>   -fno-ira-share-spill-slots -fira-verbose=@var{n} @gol
>   -fivopts -fkeep-inline-functions -fkeep-static-consts @gol
> --floop-block -floop-interchange -floop-strip-mine @gol
> +-floop-block -floop-interchange -floop-strip-mine -floop-nest-optimize @gol
>   -floop-parallelize-all -flto -flto-compression-level @gol
>   -flto-partition=@var{alg} -flto-report -fmerge-all-constants @gol
>   -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves @gol
> @@ -7367,6 +7367,12 @@ GIMPLE ->  GRAPHITE ->  GIMPLE transformation.  Some minimal optimizations
>   are also performed by the code generator CLooG, like index splitting and
>   dead code elimination in loops.
>
> +@item -floop-nest-optimize
> +@opindex floop-nest-optimize
> +Enable the ISL based loop nest optimizer. This is a generic loop nest
> +optimizer based on the Pluto optimization algorithms. It calculates a loop
> +structure optimized for data-locality and parallelism.
> +

What about adding "(experimental)" here?

> +
> +static isl_union_map *
> +scop_get_dependences (scop_p scop ATTRIBUTE_UNUSED)

The ATTRIBUTE_UNUSED is not needed here.

Cheers
Tobi
Richard Biener July 3, 2012, 11:56 a.m. UTC | #2
On Tue, 3 Jul 2012, Tobias Grosser wrote:

> On 07/03/2012 01:15 PM, Richard Guenther wrote:
> > 
> > This merges the last bit from the graphite ISL branch - an
> > integrated optimizer based on ISL.  To quote Tobias:
> > 
> > "The isl scheduling optimizer implements the scheduling algorithm first
> > developed in Pluto [1]. Pluto has shown significant speedups and is
> > nowadays even implemented in the IBM XL-C compiler. The implementation of
> > this pass is a first draft and was copied largely from Polly. We need
> > still to adapt the code to the gcc coding style and we need to tune the
> > isl scheduler. At the moment we get reasonable compile times (at most
> > 2x-3x slowdown) and first speedups.  We now need to tune the compile time
> > and start to investigate which optimizations and heuristics need to be
> > tuned in our reimplementation.
> > 
> > [1] http://pluto-compiler.sourceforge.net/"
> > 
> > Micha kindly did the code adaptation to the gcc coding style and I renamed
> > the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
> > both agree that such an integrated LNO is the way to go, superseding
> > the individual graphite transforms we have now.  We might even be
> > able to drop the separate blocking & strip-mining transforms we have
> > right now in favor of this?
> 
> Thanks Micha for adapting the style to gcc.
> 
> I would like to point out that this pass is still very experimental and not
> tuned at all. Specifically, it was only tested on polybench with one specific
> set of flags. Even there we did not get only speedups; due to missing
> heuristics some benchmarks also showed large slowdowns. When using it on even
> slightly different benchmarks or with slightly different flags, infinite
> compile time or large performance regressions may show up! This optimizer may
> obviously also contain bugs that lead to miscompiles.
> 
> Also, the loop nest optimizer will not be very effective as long as pre and
> licm are scheduled before graphite.

I have noticed the change to disable those on the graphite-isl branch,
but I fail to see how we can avoid handling PREd/LIMd loops from within
the polyhedral optimizations.  In fact, even the user may have performed
PRE or LIM at the source level, so the place to address this issue
is surely within graphite/ISL.

> That said, I think it would be great to add this pass to gcc to allow
> people to experiment with it. However, we really should mark it as
> experimental. (This gives the graphite OK, provided the pass is clearly
> marked as experimental.)
> 
> Some smaller nits:
> 
> > +floop-nest-optimize
> > +Common Report Var(flag_loop_optimize_isl) Optimization
> > +Enable the ISL based loop nest optimizer
> 
> What about adding "(experimental)" here?

I've added it ...

> >   fstrict-volatile-bitfields
> >   Common Report Var(flag_strict_volatile_bitfields) Init(-1)
> >   Force bitfield accesses to match their type width
> > diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
> > index 87e0d1c..5263152 100644
> > --- a/gcc/doc/invoke.texi
> > +++ b/gcc/doc/invoke.texi
> > @@ -373,7 +373,7 @@ Objective-C and Objective-C++ Dialects}.
> >   -fira-loop-pressure -fno-ira-share-save-slots @gol
> >   -fno-ira-share-spill-slots -fira-verbose=@var{n} @gol
> >   -fivopts -fkeep-inline-functions -fkeep-static-consts @gol
> > --floop-block -floop-interchange -floop-strip-mine @gol
> > +-floop-block -floop-interchange -floop-strip-mine -floop-nest-optimize @gol
> >   -floop-parallelize-all -flto -flto-compression-level @gol
> >   -flto-partition=@var{alg} -flto-report -fmerge-all-constants @gol
> >   -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves @gol
> > @@ -7367,6 +7367,12 @@ GIMPLE ->  GRAPHITE ->  GIMPLE transformation.  Some
> > minimal optimizations
> >   are also performed by the code generator CLooG, like index splitting and
> >   dead code elimination in loops.
> > 
> > +@item -floop-nest-optimize
> > +@opindex floop-nest-optimize
> > +Enable the ISL based loop nest optimizer. This is a generic loop nest
> > +optimizer based on the Pluto optimization algorithms. It calculates a loop
> > +structure optimized for data-locality and parallelism.
> > +
> 
> What about adding "(experimental)" here?

... here, following usual practice.

> > +
> > +static isl_union_map *
> > +scop_get_dependences (scop_p scop ATTRIBUTE_UNUSED)
> 
> > The ATTRIBUTE_UNUSED is not needed here.

Fixed.

I also fixed a C++ style comment in graphite-poly.c.

Re-bootstrap on x86_64-unknown-linux-gnu running, I'll apply this
patch later.

Thanks,
Richard.

2012-07-03  Tobias Grosser <tobias@grosser.es>
	Michael Matz  <matz@suse.de>

	* Makefile.in (OBJS): Add graphite-optimize-isl.o.
	(graphite-optimize-isl.o): Add dependencies.
	* common.opt (floop-nest-optimize): New flag.
	* doc/invoke.texi (floop-nest-optimize): Document.
	* graphite-dependences.c (compute_deps): Export.
	* graphite-poly.h (compute_deps): Declare.
	* graphite-optimize-isl.c: New file.
	* graphite-poly.c (apply_poly_transforms): Run the loop
	nest optimizer.
	* tree-ssa-loop.c (gate_graphite_transforms): Enable graphite
	if -floop-nest-optimize is enabled.

Index: gcc/Makefile.in
===================================================================
*** gcc/Makefile.in	(revision 189212)
--- gcc/Makefile.in	(working copy)
*************** OBJS = \
*** 1248,1253 ****
--- 1248,1254 ----
  	graphite-clast-to-gimple.o \
  	graphite-dependences.o \
  	graphite-interchange.o \
+ 	graphite-optimize-isl.o \
  	graphite-poly.o \
  	graphite-scop-detection.o \
  	graphite-sese-to-poly.o \
*************** graphite-sese-to-poly.o : graphite-sese-
*** 2560,2565 ****
--- 2561,2569 ----
     $(SYSTEM_H) coretypes.h $(TREE_FLOW_H) $(TREE_DUMP_H) $(CFGLOOP_H) \
     $(TREE_DATA_REF_H) domwalk.h sese.h graphite-poly.h \
     graphite-sese-to-poly.h
+ graphite-optimize-isl.o : graphite-optimize-isl.c $(CONFIG_H) $(SYSTEM_H) \
+     coretypes.h $(TREE_FLOW_H) $(CFGLOOP_H) $(TREE_DATA_REF_H) $(SCEV_H) \
+     $(TREE_DUMP_H) sese.h graphite-poly.h
  tree-vect-loop.o: tree-vect-loop.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
     $(TM_H) $(GGC_H) $(TREE_H) $(BASIC_BLOCK_H) $(TREE_FLOW_H) \
     $(TREE_DUMP_H) $(CFGLOOP_H) $(EXPR_H) $(RECOG_H) $(OPTABS_H) \
Index: gcc/common.opt
===================================================================
*** gcc/common.opt	(revision 189212)
--- gcc/common.opt	(working copy)
*************** floop-flatten
*** 1219,1224 ****
--- 1219,1228 ----
  Common Ignore
  Does nothing. Preserved for backward compatibility.
  
+ floop-nest-optimize
+ Common Report Var(flag_loop_optimize_isl) Optimization
+ Enable the ISL based loop nest optimizer
+ 
  fstrict-volatile-bitfields
  Common Report Var(flag_strict_volatile_bitfields) Init(-1)
  Force bitfield accesses to match their type width
Index: gcc/doc/invoke.texi
===================================================================
*** gcc/doc/invoke.texi	(revision 189212)
--- gcc/doc/invoke.texi	(working copy)
*************** Objective-C and Objective-C++ Dialects}.
*** 373,379 ****
  -fira-loop-pressure -fno-ira-share-save-slots @gol
  -fno-ira-share-spill-slots -fira-verbose=@var{n} @gol
  -fivopts -fkeep-inline-functions -fkeep-static-consts @gol
! -floop-block -floop-interchange -floop-strip-mine @gol
  -floop-parallelize-all -flto -flto-compression-level @gol
  -flto-partition=@var{alg} -flto-report -fmerge-all-constants @gol
  -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves @gol
--- 373,379 ----
  -fira-loop-pressure -fno-ira-share-save-slots @gol
  -fno-ira-share-spill-slots -fira-verbose=@var{n} @gol
  -fivopts -fkeep-inline-functions -fkeep-static-consts @gol
! -floop-block -floop-interchange -floop-strip-mine -floop-nest-optimize @gol
  -floop-parallelize-all -flto -flto-compression-level @gol
  -flto-partition=@var{alg} -flto-report -fmerge-all-constants @gol
  -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves @gol
*************** GIMPLE -> GRAPHITE -> GIMPLE transformat
*** 7367,7372 ****
--- 7367,7379 ----
  are also performed by the code generator CLooG, like index splitting and
  dead code elimination in loops.
  
+ @item -floop-nest-optimize
+ @opindex floop-nest-optimize
+ Enable the ISL based loop nest optimizer.  This is a generic loop nest
+ optimizer based on the Pluto optimization algorithms.  It calculates a loop
+ structure optimized for data-locality and parallelism.  This option
+ is experimental.
+ 
  @item -floop-parallelize-all
  @opindex floop-parallelize-all
  Use the Graphite data dependence analysis to identify loops that can
Index: gcc/graphite-dependences.c
===================================================================
*** gcc/graphite-dependences.c	(revision 189212)
--- gcc/graphite-dependences.c	(working copy)
*************** subtract_commutative_associative_deps (s
*** 443,449 ****
  /* Compute the original data dependences in SCOP for all the reads and
     writes in PBBS.  */
  
! static void
  compute_deps (scop_p scop, VEC (poly_bb_p, heap) *pbbs,
  	      isl_union_map **must_raw,
  	      isl_union_map **may_raw,
--- 443,449 ----
  /* Compute the original data dependences in SCOP for all the reads and
     writes in PBBS.  */
  
! void
  compute_deps (scop_p scop, VEC (poly_bb_p, heap) *pbbs,
  	      isl_union_map **must_raw,
  	      isl_union_map **may_raw,
Index: gcc/graphite-poly.h
===================================================================
*** gcc/graphite-poly.h	(revision 189212)
--- gcc/graphite-poly.h	(working copy)
*************** extern int scop_do_interchange (scop_p);
*** 409,414 ****
--- 409,415 ----
  extern int scop_do_strip_mine (scop_p, int);
  extern bool scop_do_block (scop_p);
  extern bool flatten_all_loops (scop_p);
+ extern bool optimize_isl (scop_p);
  extern void pbb_number_of_iterations_at_time (poly_bb_p, graphite_dim_t, mpz_t);
  extern void debug_gmp_value (mpz_t);
  
*************** isl_map *reverse_loop_at_level (poly_bb_
*** 1549,1552 ****
--- 1550,1569 ----
  isl_union_map *reverse_loop_for_pbbs (scop_p, VEC (poly_bb_p, heap) *, int);
  __isl_give isl_union_map *extend_schedule (__isl_take isl_union_map *);
  
+ 
+ void
+ compute_deps (scop_p scop, VEC (poly_bb_p, heap) *pbbs,
+ 	      isl_union_map **must_raw,
+ 	      isl_union_map **may_raw,
+ 	      isl_union_map **must_raw_no_source,
+ 	      isl_union_map **may_raw_no_source,
+ 	      isl_union_map **must_war,
+ 	      isl_union_map **may_war,
+ 	      isl_union_map **must_war_no_source,
+ 	      isl_union_map **may_war_no_source,
+ 	      isl_union_map **must_waw,
+ 	      isl_union_map **may_waw,
+ 	      isl_union_map **must_waw_no_source,
+ 	      isl_union_map **may_waw_no_source);
+ 
  #endif
Index: gcc/graphite-optimize-isl.c
===================================================================
*** gcc/graphite-optimize-isl.c	(revision 0)
--- gcc/graphite-optimize-isl.c	(revision 0)
***************
*** 0 ****
--- 1,471 ----
+ /* A scheduling optimizer for Graphite
+    Copyright (C) 2012 Free Software Foundation, Inc.
+    Contributed by Tobias Grosser <tobias@grosser.es>.
+ 
+ This file is part of GCC.
+ 
+ GCC is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3, or (at your option)
+ any later version.
+ 
+ GCC is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+ 
+ You should have received a copy of the GNU General Public License
+ along with GCC; see the file COPYING3.  If not see
+ <http://www.gnu.org/licenses/>.  */
+ 
+ #include "config.h"
+ 
+ #ifdef HAVE_cloog
+ #include <isl/set.h>
+ #include <isl/map.h>
+ #include <isl/union_map.h>
+ #include <isl/schedule.h>
+ #include <isl/band.h>
+ #include <isl/aff.h>
+ #include <isl/options.h>
+ 
+ #include "system.h"
+ #include "coretypes.h"
+ #include "tree-flow.h"
+ #include "tree-dump.h"
+ #include "cfgloop.h"
+ #include "tree-chrec.h"
+ #include "tree-data-ref.h"
+ #include "tree-scalar-evolution.h"
+ #include "sese.h"
+ 
+ #include "graphite-poly.h"
+ 
+ static isl_union_set *
+ scop_get_domains (scop_p scop ATTRIBUTE_UNUSED)
+ {
+   int i;
+   poly_bb_p pbb;
+   isl_space *space = isl_set_get_space (scop->context);
+   isl_union_set *res = isl_union_set_empty (space);
+ 
+   FOR_EACH_VEC_ELT (poly_bb_p, scop->bbs, i, pbb)
+     res = isl_union_set_add_set (res, isl_set_copy (pbb->domain));
+ 
+   return res;
+ }
+ 
+ static isl_union_map *
+ scop_get_dependences (scop_p scop)
+ {
+   isl_union_map *dependences;
+ 
+   if (!scop->must_raw)
+     compute_deps (scop, SCOP_BBS (scop),
+ 		  &scop->must_raw, &scop->may_raw,
+ 		  &scop->must_raw_no_source, &scop->may_raw_no_source,
+ 		  &scop->must_war, &scop->may_war,
+ 		  &scop->must_war_no_source, &scop->may_war_no_source,
+ 		  &scop->must_waw, &scop->may_waw,
+ 		  &scop->must_waw_no_source, &scop->may_waw_no_source);
+ 
+   dependences = isl_union_map_copy (scop->must_raw);
+   dependences = isl_union_map_union (dependences,
+ 				     isl_union_map_copy (scop->must_war));
+   dependences = isl_union_map_union (dependences,
+ 				     isl_union_map_copy (scop->must_waw));
+   dependences = isl_union_map_union (dependences,
+ 				     isl_union_map_copy (scop->may_raw));
+   dependences = isl_union_map_union (dependences,
+ 				     isl_union_map_copy (scop->may_war));
+   dependences = isl_union_map_union (dependences,
+ 				     isl_union_map_copy (scop->may_waw));
+ 
+   return dependences;
+ }
+ 
+ /* getTileMap - Create a map that describes an n-dimensional tiling.
+   
+    getTileMap creates a map from an n-dimensional scattering space into a
+    2*n-dimensional scattering space. The map describes a rectangular tiling.
+   
+    Example:
+      scheduleDimensions = 2, parameterDimensions = 1, tileSize = 32
+  
+     tileMap := [p0] -> {[s0, s1] -> [t0, t1, s0, s1]:
+                          t0 % 32 = 0 and t0 <= s0 < t0 + 32 and
+                          t1 % 32 = 0 and t1 <= s1 < t1 + 32}
+  
+    Before tiling:
+  
+    for (i = 0; i < N; i++)
+      for (j = 0; j < M; j++)
+  	S(i,j)
+  
+    After tiling:
+  
+    for (t_i = 0; t_i < N; i+=32)
+      for (t_j = 0; t_j < M; j+=32)
+  	for (i = t_i; i < min(t_i + 32, N); i++)  | Unknown that N % 32 = 0
+  	  for (j = t_j; j < t_j + 32; j++)        |   Known that M % 32 = 0
+  	    S(i,j)
+    */
+  
+ static isl_basic_map *
+ getTileMap(isl_ctx *ctx, int scheduleDimensions, int tileSize)
+ {
+   int x;
+   /* We construct
+ 
+      tileMap := [p0] -> {[s0, s1] -> [t0, t1, p0, p1, a0, a1]:
+     	                s0 = a0 * 32 and s0 = p0 and t0 <= p0 < t0 + 32 and
+     	                s1 = a1 * 32 and s1 = p1 and t1 <= p1 < t1 + 32}
+ 
+      and project out the auxiliary dimensions a0 and a1.  */
+   isl_space *Space = isl_space_alloc(ctx, 0, scheduleDimensions,
+ 				     scheduleDimensions * 3);
+   isl_basic_map *tileMap = isl_basic_map_universe(isl_space_copy(Space));
+ 
+   isl_local_space *LocalSpace = isl_local_space_from_space(Space);
+ 
+   for (x = 0; x < scheduleDimensions; x++)
+     {
+       int sX = x;
+       int tX = x;
+       int pX = scheduleDimensions + x;
+       int aX = 2 * scheduleDimensions + x;
+ 
+       isl_constraint *c;
+ 
+       /* sX = aX * tileSize; */
+       c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+       isl_constraint_set_coefficient_si(c, isl_dim_out, sX, 1);
+       isl_constraint_set_coefficient_si(c, isl_dim_out, aX, -tileSize);
+       tileMap = isl_basic_map_add_constraint(tileMap, c);
+ 
+       /* pX = sX; */
+       c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+       isl_constraint_set_coefficient_si(c, isl_dim_out, pX, 1);
+       isl_constraint_set_coefficient_si(c, isl_dim_in, sX, -1);
+       tileMap = isl_basic_map_add_constraint(tileMap, c);
+ 
+       /* tX <= pX */
+       c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+       isl_constraint_set_coefficient_si(c, isl_dim_out, pX, 1);
+       isl_constraint_set_coefficient_si(c, isl_dim_out, tX, -1);
+       tileMap = isl_basic_map_add_constraint(tileMap, c);
+ 
+       /* pX <= tX + (tileSize - 1) */
+       c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+       isl_constraint_set_coefficient_si(c, isl_dim_out, tX, 1);
+       isl_constraint_set_coefficient_si(c, isl_dim_out, pX, -1);
+       isl_constraint_set_constant_si(c, tileSize - 1);
+       tileMap = isl_basic_map_add_constraint(tileMap, c);
+     }
+ 
+   /* Project out the auxiliary dimensions.
+ 
+      The auxiliary dimensions are transformed into existentially quantified
+      ones.  This reduces the number of visible scattering dimensions and
+      allows Cloog to produce better code.  */
+   tileMap = isl_basic_map_project_out(tileMap, isl_dim_out,
+ 				      2 * scheduleDimensions,
+ 				      scheduleDimensions);
+   isl_local_space_free(LocalSpace);
+   return tileMap;
+ }
+ 
+ /* getScheduleForBand - Get the schedule for this band.
+   
+    Polly applies transformations like tiling on top of the isl-calculated
+    schedule.  This can influence the number of scheduling dimensions.  The
+    number of schedule dimensions is returned in the parameter 'Dimensions'.  */
+ static bool DisableTiling = false;
+ 
+ static isl_union_map *
+ getScheduleForBand(isl_band *Band, int *Dimensions)
+ {
+   isl_union_map *PartialSchedule;
+   isl_ctx *ctx;
+   isl_space *Space;
+   isl_basic_map *TileMap;
+   isl_union_map *TileUMap;
+ 
+   PartialSchedule = isl_band_get_partial_schedule(Band);
+   *Dimensions = isl_band_n_member(Band);
+ 
+   if (DisableTiling)
+     return PartialSchedule;
+ 
+   /* It does not make any sense to tile a band with just one dimension.  */
+   if (*Dimensions == 1)
+     return PartialSchedule;
+ 
+   ctx = isl_union_map_get_ctx(PartialSchedule);
+   Space = isl_union_map_get_space(PartialSchedule);
+ 
+   TileMap = getTileMap(ctx, *Dimensions, 32);
+   TileUMap = isl_union_map_from_map(isl_map_from_basic_map(TileMap));
+   TileUMap = isl_union_map_align_params(TileUMap, Space);
+   *Dimensions = 2 * *Dimensions;
+ 
+   return isl_union_map_apply_range(PartialSchedule, TileUMap);
+ }
+ 
+ /* Create a map that pre-vectorizes one scheduling dimension.
+   
+    getPrevectorMap creates a map that maps each input dimension to the same
+    output dimension, except for the dimension DimToVectorize. DimToVectorize is
+    strip mined by 'VectorWidth' and the newly created point loop of
+    DimToVectorize is moved to the innermost level.
+   
+    Example (DimToVectorize=0, ScheduleDimensions=2, VectorWidth=4):
+   
+    | Before transformation
+    |
+    | A[i,j] -> [i,j]
+    |
+    | for (i = 0; i < 128; i++)
+    |    for (j = 0; j < 128; j++)
+    |      A(i,j);
+   
+      Prevector map:
+      [i,j] -> [it,j,ip] : it % 4 = 0 and it <= ip <= it + 3 and i = ip
+   
+    | After transformation:
+    |
+    | A[i,j] -> [it,j,ip] : it % 4 = 0 and it <= ip <= it + 3 and i = ip
+    |
+    | for (it = 0; it < 128; it+=4)
+    |    for (j = 0; j < 128; j++)
+    |      for (ip = max(0,it); ip <= min(127, it + 3); ip++)
+    |        A(ip,j);
+   
+    The goal of this transformation is to create a trivially vectorizable loop.
+    This means a parallel loop at the innermost level that has a constant number
+    of iterations corresponding to the target vector width.
+   
+    This transformation creates a loop at the innermost level. The loop has a
+    constant number of iterations, if the number of loop iterations at
+    DimToVectorize can be divided by VectorWidth. The default VectorWidth is
+    currently constant and not yet target specific. This function does not reason
+    about parallelism.  */
+ static isl_map *
+ getPrevectorMap(isl_ctx *ctx, int DimToVectorize,
+ 		int ScheduleDimensions,
+ 		int VectorWidth)
+ {
+   isl_space *Space;
+   isl_local_space *LocalSpace, *LocalSpaceRange;
+   isl_set *Modulo;
+   isl_map *TilingMap;
+   isl_constraint *c;
+   isl_aff *Aff;
+   int PointDimension; /* ip */
+   int TileDimension;  /* it */
+   isl_int VectorWidthMP;
+   int i;
+ 
+   /* assert (0 <= DimToVectorize && DimToVectorize < ScheduleDimensions);*/
+ 
+   Space = isl_space_alloc(ctx, 0, ScheduleDimensions, ScheduleDimensions + 1);
+   TilingMap = isl_map_universe(isl_space_copy(Space));
+   LocalSpace = isl_local_space_from_space(Space);
+   PointDimension = ScheduleDimensions;
+   TileDimension = DimToVectorize;
+ 
+   /* Create an identity map for everything except DimToVectorize and map
+      DimToVectorize to the point loop at the innermost dimension.  */
+   for (i = 0; i < ScheduleDimensions; i++)
+     {
+       c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+       isl_constraint_set_coefficient_si(c, isl_dim_in, i, -1);
+ 
+       if (i == DimToVectorize)
+ 	isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, 1);
+       else
+ 	isl_constraint_set_coefficient_si(c, isl_dim_out, i, 1);
+ 
+       TilingMap = isl_map_add_constraint(TilingMap, c);
+     }
+ 
+   /* it % 'VectorWidth' = 0  */
+   LocalSpaceRange = isl_local_space_range(isl_local_space_copy(LocalSpace));
+   Aff = isl_aff_zero_on_domain(LocalSpaceRange);
+   Aff = isl_aff_set_constant_si(Aff, VectorWidth);
+   Aff = isl_aff_set_coefficient_si(Aff, isl_dim_in, TileDimension, 1);
+   isl_int_init(VectorWidthMP);
+   isl_int_set_si(VectorWidthMP, VectorWidth);
+   Aff = isl_aff_mod(Aff, VectorWidthMP);
+   isl_int_clear(VectorWidthMP);
+   Modulo = isl_pw_aff_zero_set(isl_pw_aff_from_aff(Aff));
+   TilingMap = isl_map_intersect_range(TilingMap, Modulo);
+ 
+   /* it <= ip */
+   c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+   isl_constraint_set_coefficient_si(c, isl_dim_out, TileDimension, -1);
+   isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, 1);
+   TilingMap = isl_map_add_constraint(TilingMap, c);
+ 
+   /* ip <= it + ('VectorWidth' - 1) */
+   c = isl_inequality_alloc(LocalSpace);
+   isl_constraint_set_coefficient_si(c, isl_dim_out, TileDimension, 1);
+   isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, -1);
+   isl_constraint_set_constant_si(c, VectorWidth - 1);
+   TilingMap = isl_map_add_constraint(TilingMap, c);
+ 
+   isl_map_dump(TilingMap);
+ 
+   return TilingMap;
+ }
+ 
+ static bool EnablePollyVector = false;
+ 
+ /* getScheduleForBandList - Get the scheduling map for a list of bands.
+     
+    We walk recursively the forest of bands to combine the schedules of the
+    individual bands to the overall schedule. In case tiling is requested,
+    the individual bands are tiled.  */
+ static isl_union_map *
+ getScheduleForBandList(isl_band_list *BandList)
+ {
+   int NumBands, i;
+   isl_union_map *Schedule;
+   isl_ctx *ctx;
+ 
+   ctx = isl_band_list_get_ctx(BandList);
+   NumBands = isl_band_list_n_band(BandList);
+   Schedule = isl_union_map_empty(isl_space_params_alloc(ctx, 0));
+ 
+   for (i = 0; i < NumBands; i++)
+     {
+       isl_band *Band;
+       isl_union_map *PartialSchedule;
+       int ScheduleDimensions;
+       isl_space *Space;
+ 
+       Band = isl_band_list_get_band(BandList, i);
+       PartialSchedule = getScheduleForBand(Band, &ScheduleDimensions);
+       Space = isl_union_map_get_space(PartialSchedule);
+ 
+       if (isl_band_has_children(Band))
+ 	{
+ 	  isl_band_list *Children;
+ 	  isl_union_map *SuffixSchedule;
+ 
+ 	  Children = isl_band_get_children(Band);
+ 	  SuffixSchedule = getScheduleForBandList(Children);
+ 	  PartialSchedule = isl_union_map_flat_range_product(PartialSchedule,
+ 							     SuffixSchedule);
+ 	  isl_band_list_free(Children);
+ 	}
+       else if (EnablePollyVector)
+ 	{
+ 	  for (i = ScheduleDimensions - 1 ;  i >= 0 ; i--)
+ 	    {
+ 	      if (isl_band_member_is_zero_distance(Band, i))
+ 		{
+ 		  isl_map *TileMap;
+ 		  isl_union_map *TileUMap;
+ 
+ 		  TileMap = getPrevectorMap(ctx, i, ScheduleDimensions, 4);
+ 		  TileUMap = isl_union_map_from_map(TileMap);
+ 		  TileUMap = isl_union_map_align_params(TileUMap,
+ 							isl_space_copy(Space));
+ 		  PartialSchedule = isl_union_map_apply_range(PartialSchedule,
+ 							      TileUMap);
+ 		  break;
+ 		}
+ 	    }
+ 	}
+ 
+       Schedule = isl_union_map_union(Schedule, PartialSchedule);
+ 
+       isl_band_free(Band);
+       isl_space_free(Space);
+     }
+ 
+   return Schedule;
+ }
+ 
+ static isl_union_map *
+ getScheduleMap(isl_schedule *Schedule)
+ {
+   isl_band_list *BandList = isl_schedule_get_band_forest(Schedule);
+   isl_union_map *ScheduleMap = getScheduleForBandList(BandList);
+   isl_band_list_free(BandList);
+   return ScheduleMap;
+ }
+ 
+ static int
+ getSingleMap(__isl_take isl_map *map, void *user)
+ {
+   isl_map **singleMap = (isl_map **) user;
+   *singleMap = map;
+ 
+   return 0;
+ }
+ 
+ static void
+ apply_schedule_map_to_scop (scop_p scop, isl_union_map *schedule_map)
+ {
+   int i;
+   poly_bb_p pbb;
+ 
+   FOR_EACH_VEC_ELT (poly_bb_p, scop->bbs, i, pbb)
+     {
+       isl_set *domain = isl_set_copy (pbb->domain);
+       isl_union_map *stmtBand;
+       isl_map *stmtSchedule;
+ 
+       stmtBand = isl_union_map_intersect_domain(isl_union_map_copy(schedule_map),
+ 						isl_union_set_from_set(domain));
+       isl_union_map_foreach_map(stmtBand, getSingleMap, &stmtSchedule);
+       isl_map_free(pbb->transformed);
+       pbb->transformed = stmtSchedule;
+       isl_union_map_free(stmtBand);
+     }
+ }
+ 
+ static const int CONSTANT_BOUND = 20;
+ 
+ bool
+ optimize_isl (scop_p scop)
+ {
+ 
+   isl_schedule *schedule;
+   isl_union_set *domain;
+   isl_union_map *validity, *proximity, *dependences;
+   isl_union_map *schedule_map;
+ 
+   domain = scop_get_domains (scop);
+   dependences = scop_get_dependences (scop);
+   dependences = isl_union_map_gist_domain(dependences,
+ 					  isl_union_set_copy(domain));
+   dependences = isl_union_map_gist_range(dependences,
+ 					 isl_union_set_copy(domain));
+   validity = dependences;
+ 
+   proximity = isl_union_map_copy (validity);
+ 
+   isl_options_set_schedule_max_constant_term(scop->ctx, CONSTANT_BOUND);
+   isl_options_set_schedule_maximize_band_depth(scop->ctx, 1);
+   isl_options_set_schedule_fuse(scop->ctx, ISL_SCHEDULE_FUSE_MIN);
+   isl_options_set_on_error(scop->ctx, ISL_ON_ERROR_CONTINUE);
+   schedule = isl_union_set_compute_schedule (domain, validity, proximity);
+   isl_options_set_on_error(scop->ctx, ISL_ON_ERROR_ABORT);
+ 
+   if (!schedule)
+     return false;
+ 
+   schedule_map = getScheduleMap (schedule);
+ 
+   apply_schedule_map_to_scop (scop, schedule_map);
+ 
+   isl_schedule_free (schedule);
+   isl_union_map_free (schedule_map);
+ 
+   return true;
+ }
+ 
+ #endif
Index: gcc/graphite-poly.c
===================================================================
*** gcc/graphite-poly.c	(revision 189212)
--- gcc/graphite-poly.c	(working copy)
*************** apply_poly_transforms (scop_p scop)
*** 250,255 ****
--- 250,260 ----
  	transform_done |= scop_do_interchange (scop);
      }
  
+   /* This pass needs to be run at the final stage, as it does not
+      update the lst.  */
+   if (flag_loop_optimize_isl)
+     transform_done |= optimize_isl (scop);
+ 
    return transform_done;
  }
  
Index: gcc/tree-ssa-loop.c
===================================================================
*** gcc/tree-ssa-loop.c	(revision 189212)
--- gcc/tree-ssa-loop.c	(working copy)
*************** gate_graphite_transforms (void)
*** 265,271 ****
        || flag_loop_interchange
        || flag_loop_strip_mine
        || flag_graphite_identity
!       || flag_loop_parallelize_all)
      flag_graphite = 1;
  
    return flag_graphite != 0;
--- 265,272 ----
        || flag_loop_interchange
        || flag_loop_strip_mine
        || flag_graphite_identity
!       || flag_loop_parallelize_all
!       || flag_loop_optimize_isl)
      flag_graphite = 1;
  
    return flag_graphite != 0;
Tobias Grosser July 3, 2012, 12:06 p.m. UTC | #3
On 07/03/2012 01:56 PM, Richard Guenther wrote:
> On Tue, 3 Jul 2012, Tobias Grosser wrote:
>
>> On 07/03/2012 01:15 PM, Richard Guenther wrote:
>>>
>>> This merges the last bit from the graphite ISL branch - an
>>> integrated optimizer based on ISL.  To quote Tobias:
>>>
>>> "The isl scheduling optimizer implements the scheduling algorithm first
>>> developed in Pluto [1]. Pluto has shown significant speedups and is
>>> nowadays even implemented in the IBM XL-C compiler. The implementation of
>>> this pass is a first draft and was copied largely from Polly. We need
>>> still to adapt the code to the gcc coding style and we need to tune the
>>> isl scheduler. At the moment we get reasonable compile times (at most
>>> 2x-3x slowdown) and first speedups.  We now need to tune the compile time
>>> and start to investigate which optimizations and heuristics need to be
>>> tuned in our reimplementation.
>>>
>>> [1] http://pluto-compiler.sourceforge.net/"
>>>
>>> Micha kindly did the code adaption to gcc coding style and I renamed
>>> the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
>>> both agree that such integrated LNO is the way to go, superseeding
>>> individual graphite transforms we have now.  We might be even able
>>> to drop the separate blocking & strip-mining transforms we have
>>> right now in favor of this?
>>
>> Thanks Micha for adapting the style to gcc.
>>
>> I would like to point out that this pass is still very experimental and not
>> tuned at all. Specifically, it was only tested on polybench with one specific
>> set of flags. Even there we did not only get speedups, but due to missing
>> heuristics some benchmarks also got large slowdowns. When using it on even
>> slightly different benchmarks or with slightly different flags, infinite
>> compile time or large performance regressions may show up! This optimizer may
>> obviously also contain bugs that yield to miscompiles.
>>
>> Also, the loop nest optimizer will be not very effective, as long as pre and
>> licm are scheduled before graphite.
>
> I have noticed the change to disable those on the graphite-isl branch,
> but I fail to see how we can not handle PREd/LIMd loops from within
> polyhedral optimizations.  In fact even the user may have performed
> PRE or LIM at the source level, thus the point to address this issue
> is surely within graphite/ISL.

You can still handle those loops; however, the PREd/LIMed code will introduce a 
lot of additional dependences, which will block transformations. Such 
dependences can be removed, e.g. with array expansion or by undoing some of 
the PRE/LIM transformations, but this is a difficult problem, 
especially if you don't want to increase the memory usage too much.
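
To illustrate, a minimal hypothetical C sketch (not from the patch) of what
store motion does to a reduction: before, the only dependences are on the
array cells, so i and j can be interchanged or tiled freely; after, the
scalar t is live across the whole inner loop and carries a reduction
dependence the scheduler has to respect.

```c
#include <assert.h>

#define N 8

/* Memory form: a[i] is re-read and re-written in every j iteration.
   All dependences are on array cells, so the loop nest can be
   interchanged or tiled freely.  */
static void
sum_memory (double a[N], const double b[N])
{
  for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
      a[i] += b[j];
}

/* After store motion: the scalar t is live across the whole inner
   loop.  The resulting scalar reduction dependence blocks interchange
   unless t is array-expanded or the transformation is undone.  */
static void
sum_scalar (double a[N], const double b[N])
{
  for (int i = 0; i < N; i++)
    {
      double t = a[i];
      for (int j = 0; j < N; j++)
        t += b[j];
      a[i] = t;
    }
}
```

Both forms compute the same values; the difference is only in which
dependences the data-dependence analysis sees.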

I do not see any benefit from running PRE and LIM before graphite, as at the 
very best their transformations will be reverted (if such an 
optimization is ever written). However, scheduling PRE and LIM after 
graphite makes perfect sense, especially as there are probably a lot of 
new opportunities.

Moving such passes behind graphite obviously does not solve the case of 
manual PRE and LIM done by the user. However, it will allow us to 
optimize the non-manually-PREd code a lot better.

Cheers
Tobi
Richard Biener July 3, 2012, 12:24 p.m. UTC | #4
On Tue, 3 Jul 2012, Tobias Grosser wrote:

> On 07/03/2012 01:56 PM, Richard Guenther wrote:
> > On Tue, 3 Jul 2012, Tobias Grosser wrote:
> > 
> > > On 07/03/2012 01:15 PM, Richard Guenther wrote:
> > > > 
> > > > This merges the last bit from the graphite ISL branch - an
> > > > integrated optimizer based on ISL.  To quote Tobias:
> > > > 
> > > > "The isl scheduling optimizer implements the scheduling algorithm first
> > > > developed in Pluto [1]. Pluto has shown significant speedups and is
> > > > nowadays even implemented in the IBM XL-C compiler. The implementation
> > > > of
> > > > this pass is a first draft and was copied largely from Polly. We need
> > > > still to adapt the code to the gcc coding style and we need to tune the
> > > > isl scheduler. At the moment we get reasonable compile times (at most
> > > > 2x-3x slowdown) and first speedups.  We now need to tune the compile
> > > > time
> > > > and start to investigate which optimizations and heuristics need to be
> > > > tuned in our reimplementation.
> > > > 
> > > > [1] http://pluto-compiler.sourceforge.net/"
> > > > 
> > > > Micha kindly did the code adaption to gcc coding style and I renamed
> > > > the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
> > > > both agree that such integrated LNO is the way to go, superseeding
> > > > individual graphite transforms we have now.  We might be even able
> > > > to drop the separate blocking & strip-mining transforms we have
> > > > right now in favor of this?
> > > 
> > > Thanks Micha for adapting the style to gcc.
> > > 
> > > I would like to point out that this pass is still very experimental and
> > > not
> > > tuned at all. Specifically, it was only tested on polybench with one
> > > specific
> > > set of flags. Even there we did not only get speedups, but due to missing
> > > heuristics some benchmarks also got large slowdowns. When using it on even
> > > slightly different benchmarks or with slightly different flags, infinite
> > > compile time or large performance regressions may show up! This optimizer
> > > may
> > > obviously also contain bugs that yield to miscompiles.
> > > 
> > > Also, the loop nest optimizer will be not very effective, as long as pre
> > > and
> > > licm are scheduled before graphite.
> > 
> > I have noticed the change to disable those on the graphite-isl branch,
> > but I fail to see how we can not handle PREd/LIMd loops from within
> > polyhedral optimizations.  In fact even the user may have performed
> > PRE or LIM at the source level, thus the point to address this issue
> > is surely within graphite/ISL.
> 
> You can still handle those loops, however the PREd/LIMD will introduce a lot
> of additional dependences, which will block transformations. Such dependences
> can be removed e.g. with array expansion or undoing some of the PREd/LIMD
> transformations, but this is a difficult problem especially if you don't want
> to increase the used memory too much.
> 
> I do not see any benefits from PREd and LIMD before graphite, as at the very
> best their transformations will be reverted (if such an optimization is ever
> written). However, scheduling PREd and LIMD after graphite makes perfect
> sense. (Especially as there are probably a lot of new opportunities).
> 
> Moving such passes behind graphite, does obviously not solve the case of
> manual PRE and LIMD done by the user. However, it will allow us to optimize
> the non manually PREd code a lot better.

I suppose it might make sense to not perform GRAPHITE stuff from within
our main loop optimization pipeline.  Are there any passes that benefit
GRAPHITE that currently run before it inside the loop pipeline?

      NEXT_PASS (pass_tree_loop);
        {
          struct opt_pass **p = &pass_tree_loop.pass.sub;
          NEXT_PASS (pass_tree_loop_init);
          NEXT_PASS (pass_lim);
          NEXT_PASS (pass_copy_prop);
          NEXT_PASS (pass_dce_loop);
          NEXT_PASS (pass_tree_unswitch);
          NEXT_PASS (pass_scev_cprop);
          NEXT_PASS (pass_record_bounds);
          NEXT_PASS (pass_check_data_deps);
          NEXT_PASS (pass_loop_distribution);
          NEXT_PASS (pass_copy_prop);
          NEXT_PASS (pass_graphite);

I suppose scev_cprop helps because it removes scalar dependences
from outside of the loops, and unswitch might help because graphite
does not handle conditional code (then if-conversion would help, too?).
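
As a minimal sketch of the scev_cprop effect meant here (final value
replacement; hypothetical function names), the scalar that leaves the loop
is replaced by its closed form, so no scalar dependence from the loop body
survives into the code after it:

```c
#include <assert.h>

/* Loop form: s is computed iteratively and used after the loop,
   i.e. a scalar dependence leaves the loop body.  */
static int
sum_loop (int n)
{
  int s = 0;
  for (int i = 0; i < n; i++)
    s += i;
  return s;
}

/* After final value replacement: the closed form removes the scalar
   dependence entirely and makes the loop dead.  */
static int
sum_closed (int n)
{
  return n * (n - 1) / 2;
}
```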

I would say invariant motion from LIM does not hurt either; it is probably
store motion that does (it introduces scalar dependences from outside
of the loop).

We've already moved the PRE-like predictive commoning after loop
optimizations, and PRE has several measures to not introduce
dependences that would confuse vectorization.  The vectorizer only
handles scalar reductions, so it requires store-motion done by LIM.
The vectorizer meanwhile handles invariant loads just fine though
(I'll look if we can handle stores to invariant locations, too).

Re-ordering passes is always something tedious though ;)

Richard.
Tobias Grosser July 3, 2012, 12:45 p.m. UTC | #5
On 07/03/2012 02:24 PM, Richard Guenther wrote:
> On Tue, 3 Jul 2012, Tobias Grosser wrote:
>
>> On 07/03/2012 01:56 PM, Richard Guenther wrote:
>>> On Tue, 3 Jul 2012, Tobias Grosser wrote:
>>>
>>>> On 07/03/2012 01:15 PM, Richard Guenther wrote:
>>>>>
>>>>> This merges the last bit from the graphite ISL branch - an
>>>>> integrated optimizer based on ISL.  To quote Tobias:
>>>>>
>>>>> "The isl scheduling optimizer implements the scheduling algorithm first
>>>>> developed in Pluto [1]. Pluto has shown significant speedups and is
>>>>> nowadays even implemented in the IBM XL-C compiler. The implementation
>>>>> of
>>>>> this pass is a first draft and was copied largely from Polly. We need
>>>>> still to adapt the code to the gcc coding style and we need to tune the
>>>>> isl scheduler. At the moment we get reasonable compile times (at most
>>>>> 2x-3x slowdown) and first speedups.  We now need to tune the compile
>>>>> time
>>>>> and start to investigate which optimizations and heuristics need to be
>>>>> tuned in our reimplementation.
>>>>>
>>>>> [1] http://pluto-compiler.sourceforge.net/"
>>>>>
>>>>> Micha kindly did the code adaption to gcc coding style and I renamed
>>>>> the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
>>>>> both agree that such integrated LNO is the way to go, superseeding
>>>>> individual graphite transforms we have now.  We might be even able
>>>>> to drop the separate blocking & strip-mining transforms we have
>>>>> right now in favor of this?
>>>>
>>>> Thanks Micha for adapting the style to gcc.
>>>>
>>>> I would like to point out that this pass is still very experimental and
>>>> not
>>>> tuned at all. Specifically, it was only tested on polybench with one
>>>> specific
>>>> set of flags. Even there we did not only get speedups, but due to missing
>>>> heuristics some benchmarks also got large slowdowns. When using it on even
>>>> slightly different benchmarks or with slightly different flags, infinite
>>>> compile time or large performance regressions may show up! This optimizer
>>>> may
>>>> obviously also contain bugs that yield to miscompiles.
>>>>
>>>> Also, the loop nest optimizer will be not very effective, as long as pre
>>>> and
>>>> licm are scheduled before graphite.
>>>
>>> I have noticed the change to disable those on the graphite-isl branch,
>>> but I fail to see how we can not handle PREd/LIMd loops from within
>>> polyhedral optimizations.  In fact even the user may have performed
>>> PRE or LIM at the source level, thus the point to address this issue
>>> is surely within graphite/ISL.
>>
>> You can still handle those loops, however the PREd/LIMD will introduce a lot
>> of additional dependences, which will block transformations. Such dependences
>> can be removed e.g. with array expansion or undoing some of the PREd/LIMD
>> transformations, but this is a difficult problem especially if you don't want
>> to increase the used memory too much.
>>
>> I do not see any benefits from PREd and LIMD before graphite, as at the very
>> best their transformations will be reverted (if such an optimization is ever
>> written). However, scheduling PREd and LIMD after graphite makes perfect
>> sense. (Especially as there are probably a lot of new opportunities).
>>
>> Moving such passes behind graphite, does obviously not solve the case of
>> manual PRE and LIMD done by the user. However, it will allow us to optimize
>> the non manually PREd code a lot better.
>
> I suppose it might make sense to not perform GRAPHITE stuff from within
> our main loop optimization pipeline.

I have no idea. In general all passes that help scalar evolution and 
that canonicalize the code are good to execute before graphite.

It may make sense to create a new block of loop optimizations, consisting 
of the preparing transformations plus graphite, before the normal loop 
optimizations. This makes sense in general, as graphite only does high-level 
optimizations and the resulting code should have the normal low-level 
loop optimizations applied afterwards anyway. Also, this would keep 
the pass schedule identical if graphite is not enabled, and would only add 
compile-time overhead when graphite is enabled.

> Are there any passes that benefit
> GRAPHITE that currently run before it from inside loop?
>
>        NEXT_PASS (pass_tree_loop);
>          {
>            struct opt_pass **p = &pass_tree_loop.pass.sub;
>            NEXT_PASS (pass_tree_loop_init);
>            NEXT_PASS (pass_lim);
>            NEXT_PASS (pass_copy_prop);
>            NEXT_PASS (pass_dce_loop);
>            NEXT_PASS (pass_tree_unswitch);
>            NEXT_PASS (pass_scev_cprop);
>            NEXT_PASS (pass_record_bounds);
>            NEXT_PASS (pass_check_data_deps);
>            NEXT_PASS (pass_loop_distribution);
>            NEXT_PASS (pass_copy_prop);
>            NEXT_PASS (pass_graphite);
>
> I suppose scev_cprop helps because it removes scalar dependences
> from outside of the loops, unswitch might help because graphite
> does not handle conditional code (? then if-conversion would help, too).

Sounds reasonable. I did not investigate this yet.

> I would say invariant motion from LIM does not hurt either, it is probably
> store-motion that does (it introduces scalar dependences from outside
> of the loop).
>
> We've already moved the PRE-like predictive commoning after loop
> optimizations, and PRE has several measures to not introduce
> dependences that would confuse vectorization.  The vectorizer only
> handles scalar reductions, so it requires store-motion done by LIM.
> The vectorizer meanwhile handles invariant loads just fine though
> (I'll look if we can handle stores to invariant locations, too).

I think that needs some further investigation. It may make sense to try 
e.g. 2mm from polybench and look for a pass sequence that allows the 
isl optimizer to jump in. Unfortunately I don't have time to try it 
myself, but I am obviously available in case you need help here. (You need 
to take aliasing into account and probably want to use C99 arrays.)

> Re-ordering passes is always something tedious though ;)

Yes. The idea I mentioned above should not change the default pass ordering.

Cheers
Tobi
Jack Howarth July 5, 2012, 3:42 p.m. UTC | #6
On Tue, Jul 03, 2012 at 02:45:29PM +0200, Tobias Grosser wrote:
> On 07/03/2012 02:24 PM, Richard Guenther wrote:
>> On Tue, 3 Jul 2012, Tobias Grosser wrote:
>>
>>> On 07/03/2012 01:56 PM, Richard Guenther wrote:
>>>> On Tue, 3 Jul 2012, Tobias Grosser wrote:
>>>>
>>>>> On 07/03/2012 01:15 PM, Richard Guenther wrote:
>>>>>>
>>>>>> This merges the last bit from the graphite ISL branch - an
>>>>>> integrated optimizer based on ISL.  To quote Tobias:
>>>>>>
>>>>>> "The isl scheduling optimizer implements the scheduling algorithm first
>>>>>> developed in Pluto [1]. Pluto has shown significant speedups and is
>>>>>> nowadays even implemented in the IBM XL-C compiler. The implementation
>>>>>> of
>>>>>> this pass is a first draft and was copied largely from Polly. We need
>>>>>> still to adapt the code to the gcc coding style and we need to tune the
>>>>>> isl scheduler. At the moment we get reasonable compile times (at most
>>>>>> 2x-3x slowdown) and first speedups.  We now need to tune the compile
>>>>>> time
>>>>>> and start to investigate which optimizations and heuristics need to be
>>>>>> tuned in our reimplementation.
>>>>>>
>>>>>> [1] http://pluto-compiler.sourceforge.net/"
>>>>>>
>>>>>> Micha kindly did the code adaption to gcc coding style and I renamed
>>>>>> the flag to -floop-nest-optimize (from -floop-optimize-isl).  We
>>>>>> both agree that such integrated LNO is the way to go, superseeding
>>>>>> individual graphite transforms we have now.  We might be even able
>>>>>> to drop the separate blocking & strip-mining transforms we have
>>>>>> right now in favor of this?
>>>>>
>>>>> Thanks Micha for adapting the style to gcc.
>>>>>
>>>>> I would like to point out that this pass is still very experimental and
>>>>> not
>>>>> tuned at all. Specifically, it was only tested on polybench with one
>>>>> specific
>>>>> set of flags. Even there we did not only get speedups, but due to missing
>>>>> heuristics some benchmarks also got large slowdowns. When using it on even
>>>>> slightly different benchmarks or with slightly different flags, infinite
>>>>> compile time or large performance regressions may show up! This optimizer
>>>>> may
>>>>> obviously also contain bugs that yield to miscompiles.
>>>>>
>>>>> Also, the loop nest optimizer will be not very effective, as long as pre
>>>>> and
>>>>> licm are scheduled before graphite.
>>>>
>>>> I have noticed the change to disable those on the graphite-isl branch,
>>>> but I fail to see how we can not handle PREd/LIMd loops from within
>>>> polyhedral optimizations.  In fact even the user may have performed
>>>> PRE or LIM at the source level, thus the point to address this issue
>>>> is surely within graphite/ISL.
>>>
>>> You can still handle those loops, however the PREd/LIMD will introduce a lot
>>> of additional dependences, which will block transformations. Such dependences
>>> can be removed e.g. with array expansion or undoing some of the PREd/LIMD
>>> transformations, but this is a difficult problem especially if you don't want
>>> to increase the used memory too much.
>>>
>>> I do not see any benefits from PREd and LIMD before graphite, as at the very
>>> best their transformations will be reverted (if such an optimization is ever
>>> written). However, scheduling PREd and LIMD after graphite makes perfect
>>> sense. (Especially as there are probably a lot of new opportunities).
>>>
>>> Moving such passes behind graphite, does obviously not solve the case of
>>> manual PRE and LIMD done by the user. However, it will allow us to optimize
>>> the non manually PREd code a lot better.
>>
>> I suppose it might make sense to not perform GRAPHITE stuff from within
>> our main loop optimization pipeline.
>
> I have no idea. In general all passes that help scalar evolution and  
> that canonicalize the code are good to execute before graphite.
>
> It may make sense to create a new block of loop optimizations with the  
> preparing transformations + graphite, before the normal loop  
> optimizations. This makes sense in general, as graphite only does high  
> level optimizations and the resulting code should anyway have normal  
> low-level loop optimizations applied afterwards. Also, this would keep  
> the schedule identical if graphite is not enabled, and would only add  
> compile time overhead with graphite enabled.
>
>> Are there any passes that benefit
>> GRAPHITE that currently run before it from inside loop?
>>
>>        NEXT_PASS (pass_tree_loop);
>>          {
>>            struct opt_pass **p = &pass_tree_loop.pass.sub;
>>            NEXT_PASS (pass_tree_loop_init);
>>            NEXT_PASS (pass_lim);
>>            NEXT_PASS (pass_copy_prop);
>>            NEXT_PASS (pass_dce_loop);
>>            NEXT_PASS (pass_tree_unswitch);
>>            NEXT_PASS (pass_scev_cprop);
>>            NEXT_PASS (pass_record_bounds);
>>            NEXT_PASS (pass_check_data_deps);
>>            NEXT_PASS (pass_loop_distribution);
>>            NEXT_PASS (pass_copy_prop);
>>            NEXT_PASS (pass_graphite);
>>
>> I suppose scev_cprop helps because it removes scalar dependences
>> from outside of the loops, unswitch might help because graphite
>> does not handle conditional code (? then if-conversion would help, too).
>
> Sounds reasonable. I did not investigate this yet.
>
>> I would say invariant motion from LIM does not hurt either, it is probably
>> store-motion that does (it introduces scalar dependences from outside
>> of the loop).
>>
>> We've already moved the PRE-like predictive commoning after loop
>> optimizations, and PRE has several measures to not introduce
>> dependences that would confuse vectorization.  The vectorizer only
>> handles scalar reductions, so it requires store-motion done by LIM.
>> The vectorizer meanwhile handles invariant loads just fine though
>> (I'll look if we can handle stores to invariant locations, too).
>
> I think that needs some further investigation. It may make sense to try  
> e.g. 2mm from the polybench and look for a pass sequence that allows the  
> isl optimizer to jump in. Unfortunately I don't have time to try it  
> myself, but I am obviously available in case you need here. (You need to  
> take aliasing into account and probably want to use C99 arrays)
>
>> Re-ordering passes is always something tedious though ;)
>
> Yes. The idea I mentioned above should not change the default pass ordering.
>
> Cheers
> Tobi

   On a slightly different topic, I thought there was a plan at one point to make
-ftree-loop-linear the default for -O3 when gcc was built with graphite support.
Has this idea been abandoned?
                      Jack

Patch

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index ec39e2e..33775ac 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1248,6 +1248,7 @@  OBJS = \
 	graphite-clast-to-gimple.o \
 	graphite-dependences.o \
 	graphite-interchange.o \
+	graphite-optimize-isl.o \
 	graphite-poly.o \
 	graphite-scop-detection.o \
 	graphite-sese-to-poly.o \
@@ -2560,6 +2561,9 @@  graphite-sese-to-poly.o : graphite-sese-to-poly.c $(CONFIG_H) \
    $(SYSTEM_H) coretypes.h $(TREE_FLOW_H) $(TREE_DUMP_H) $(CFGLOOP_H) \
    $(TREE_DATA_REF_H) domwalk.h sese.h graphite-poly.h \
    graphite-sese-to-poly.h
+graphite-optimize-isl.o : graphite-optimize-isl.c $(CONFIG_H) $(SYSTEM_H) \
+    coretypes.h $(TREE_FLOW_H) $(CFGLOOP_H) $(TREE_DATA_REF_H) $(SCEV_H) \
+    $(TREE_DUMP_H) sese.h graphite-poly.h
 tree-vect-loop.o: tree-vect-loop.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
    $(TM_H) $(GGC_H) $(TREE_H) $(BASIC_BLOCK_H) $(TREE_FLOW_H) \
    $(TREE_DUMP_H) $(CFGLOOP_H) $(EXPR_H) $(RECOG_H) $(OPTABS_H) \
diff --git a/gcc/common.opt b/gcc/common.opt
index ad6db61..12f557a 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -1219,6 +1219,10 @@  floop-flatten
 Common Ignore
 Does nothing. Preserved for backward compatibility.
 
+floop-nest-optimize
+Common Report Var(flag_loop_optimize_isl) Optimization
+Enable the ISL-based loop nest optimizer
+
 fstrict-volatile-bitfields
 Common Report Var(flag_strict_volatile_bitfields) Init(-1)
 Force bitfield accesses to match their type width
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 87e0d1c..5263152 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -373,7 +373,7 @@  Objective-C and Objective-C++ Dialects}.
 -fira-loop-pressure -fno-ira-share-save-slots @gol
 -fno-ira-share-spill-slots -fira-verbose=@var{n} @gol
 -fivopts -fkeep-inline-functions -fkeep-static-consts @gol
--floop-block -floop-interchange -floop-strip-mine @gol
+-floop-block -floop-interchange -floop-strip-mine -floop-nest-optimize @gol
 -floop-parallelize-all -flto -flto-compression-level @gol
 -flto-partition=@var{alg} -flto-report -fmerge-all-constants @gol
 -fmerge-constants -fmodulo-sched -fmodulo-sched-allow-regmoves @gol
@@ -7367,6 +7367,12 @@  GIMPLE -> GRAPHITE -> GIMPLE transformation.  Some minimal optimizations
 are also performed by the code generator CLooG, like index splitting and
 dead code elimination in loops.
 
+@item -floop-nest-optimize
+@opindex floop-nest-optimize
+Enable the ISL-based loop nest optimizer.  This is a generic loop nest
+optimizer based on the Pluto optimization algorithms.  It calculates a loop
+structure optimized for data locality and parallelism.
+
 @item -floop-parallelize-all
 @opindex floop-parallelize-all
 Use the Graphite data dependence analysis to identify loops that can
diff --git a/gcc/graphite-dependences.c b/gcc/graphite-dependences.c
index 0c10e60..1ef063d 100644
--- a/gcc/graphite-dependences.c
+++ b/gcc/graphite-dependences.c
@@ -443,7 +443,7 @@  subtract_commutative_associative_deps (scop_p scop,
 /* Compute the original data dependences in SCOP for all the reads and
    writes in PBBS.  */
 
-static void
+void
 compute_deps (scop_p scop, VEC (poly_bb_p, heap) *pbbs,
 	      isl_union_map **must_raw,
 	      isl_union_map **may_raw,
diff --git a/gcc/graphite-optimize-isl.c b/gcc/graphite-optimize-isl.c
new file mode 100644
index 0000000..27de86b
--- /dev/null
+++ b/gcc/graphite-optimize-isl.c
@@ -0,0 +1,471 @@ 
+/* A scheduling optimizer for Graphite
+   Copyright (C) 2012 Free Software Foundation, Inc.
+   Contributed by Tobias Grosser <tobias@grosser.es>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+
+#ifdef HAVE_cloog
+#include <isl/set.h>
+#include <isl/map.h>
+#include <isl/union_map.h>
+#include <isl/schedule.h>
+#include <isl/band.h>
+#include <isl/aff.h>
+#include <isl/options.h>
+
+#include "system.h"
+#include "coretypes.h"
+#include "tree-flow.h"
+#include "tree-dump.h"
+#include "cfgloop.h"
+#include "tree-chrec.h"
+#include "tree-data-ref.h"
+#include "tree-scalar-evolution.h"
+#include "sese.h"
+
+#include "graphite-poly.h"
+
+static isl_union_set *
+scop_get_domains (scop_p scop)
+{
+  int i;
+  poly_bb_p pbb;
+  isl_space *space = isl_set_get_space (scop->context);
+  isl_union_set *res = isl_union_set_empty (space);
+
+  FOR_EACH_VEC_ELT (poly_bb_p, scop->bbs, i, pbb)
+    res = isl_union_set_add_set (res, isl_set_copy (pbb->domain));
+
+  return res;
+}
+
+static isl_union_map *
+scop_get_dependences (scop_p scop)
+{
+  isl_union_map *dependences;
+
+  if (!scop->must_raw)
+    compute_deps (scop, SCOP_BBS (scop),
+		  &scop->must_raw, &scop->may_raw,
+		  &scop->must_raw_no_source, &scop->may_raw_no_source,
+		  &scop->must_war, &scop->may_war,
+		  &scop->must_war_no_source, &scop->may_war_no_source,
+		  &scop->must_waw, &scop->may_waw,
+		  &scop->must_waw_no_source, &scop->may_waw_no_source);
+
+  dependences = isl_union_map_copy (scop->must_raw);
+  dependences = isl_union_map_union (dependences,
+				     isl_union_map_copy (scop->must_war));
+  dependences = isl_union_map_union (dependences,
+				     isl_union_map_copy (scop->must_waw));
+  dependences = isl_union_map_union (dependences,
+				     isl_union_map_copy (scop->may_raw));
+  dependences = isl_union_map_union (dependences,
+				     isl_union_map_copy (scop->may_war));
+  dependences = isl_union_map_union (dependences,
+				     isl_union_map_copy (scop->may_waw));
+
+  return dependences;
+}
+
+/* getTileMap - Create a map that describes an n-dimensional tiling.
+  
+   getTileMap creates a map from an n-dimensional scattering space into a
+   2*n-dimensional scattering space. The map describes a rectangular tiling.
+  
+   Example:
+     scheduleDimensions = 2, parameterDimensions = 1, tileSize = 32
+ 
+    tileMap := [p0] -> {[s0, s1] -> [t0, t1, s0, s1]:
+                         t0 % 32 = 0 and t0 <= s0 < t0 + 32 and
+                         t1 % 32 = 0 and t1 <= s1 < t1 + 32}
+ 
+   Before tiling:
+ 
+   for (i = 0; i < N; i++)
+     for (j = 0; j < M; j++)
+ 	S(i,j)
+ 
+   After tiling:
+ 
+   for (t_i = 0; t_i < N; t_i += 32)
+     for (t_j = 0; t_j < M; t_j += 32)
+ 	for (i = t_i; i < min(t_i + 32, N); i++)  | Unknown that N % 32 = 0
+ 	  for (j = t_j; j < t_j + 32; j++)        |   Known that M % 32 = 0
+ 	    S(i,j)
+   */
+ 
+static isl_basic_map *
+getTileMap(isl_ctx *ctx, int scheduleDimensions, int tileSize)
+{
+  int x;
+  /* We construct
+
+     tileMap := [p0] -> {[s0, s1] -> [t0, t1, p0, p1, a0, a1]:
+    	                s0 = a0 * 32 and s0 = p0 and t0 <= p0 < t0 + 32 and
+    	                s1 = a1 * 32 and s1 = p1 and t1 <= p1 < t1 + 32}
+
+     and project out the auxiliary dimensions a0 and a1.  */
+  isl_space *Space = isl_space_alloc(ctx, 0, scheduleDimensions,
+				     scheduleDimensions * 3);
+  isl_basic_map *tileMap = isl_basic_map_universe(isl_space_copy(Space));
+
+  isl_local_space *LocalSpace = isl_local_space_from_space(Space);
+
+  for (x = 0; x < scheduleDimensions; x++)
+    {
+      int sX = x;
+      int tX = x;
+      int pX = scheduleDimensions + x;
+      int aX = 2 * scheduleDimensions + x;
+
+      isl_constraint *c;
+
+      /* sX = aX * tileSize; */
+      c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+      isl_constraint_set_coefficient_si(c, isl_dim_out, sX, 1);
+      isl_constraint_set_coefficient_si(c, isl_dim_out, aX, -tileSize);
+      tileMap = isl_basic_map_add_constraint(tileMap, c);
+
+      /* pX = sX; */
+      c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+      isl_constraint_set_coefficient_si(c, isl_dim_out, pX, 1);
+      isl_constraint_set_coefficient_si(c, isl_dim_in, sX, -1);
+      tileMap = isl_basic_map_add_constraint(tileMap, c);
+
+      /* tX <= pX */
+      c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+      isl_constraint_set_coefficient_si(c, isl_dim_out, pX, 1);
+      isl_constraint_set_coefficient_si(c, isl_dim_out, tX, -1);
+      tileMap = isl_basic_map_add_constraint(tileMap, c);
+
+      /* pX <= tX + (tileSize - 1) */
+      c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+      isl_constraint_set_coefficient_si(c, isl_dim_out, tX, 1);
+      isl_constraint_set_coefficient_si(c, isl_dim_out, pX, -1);
+      isl_constraint_set_constant_si(c, tileSize - 1);
+      tileMap = isl_basic_map_add_constraint(tileMap, c);
+    }
+
+  /* Project out the auxiliary dimensions.
+
+     The auxiliary dimensions are transformed into existentially quantified
+     ones.  This reduces the number of visible scattering dimensions and
+     allows Cloog to produce better code.  */
+  tileMap = isl_basic_map_project_out(tileMap, isl_dim_out,
+				      2 * scheduleDimensions,
+				      scheduleDimensions);
+  isl_local_space_free(LocalSpace);
+  return tileMap;
+}
+
+static bool DisableTiling = false;
+
+/* getScheduleForBand - Get the schedule for this band.
+
+   Polly applies transformations like tiling on top of the isl calculated
+   value.  This can influence the number of schedule dimensions.  The number
+   of schedule dimensions is returned in the parameter 'Dimensions'.  */
+
+static isl_union_map *
+getScheduleForBand(isl_band *Band, int *Dimensions)
+{
+  isl_union_map *PartialSchedule;
+  isl_ctx *ctx;
+  isl_space *Space;
+  isl_basic_map *TileMap;
+  isl_union_map *TileUMap;
+
+  PartialSchedule = isl_band_get_partial_schedule(Band);
+  *Dimensions = isl_band_n_member(Band);
+
+  if (DisableTiling)
+    return PartialSchedule;
+
+  /* It does not make any sense to tile a band with just one dimension.  */
+  if (*Dimensions == 1)
+    return PartialSchedule;
+
+  ctx = isl_union_map_get_ctx(PartialSchedule);
+  Space = isl_union_map_get_space(PartialSchedule);
+
+  TileMap = getTileMap(ctx, *Dimensions, 32);
+  TileUMap = isl_union_map_from_map(isl_map_from_basic_map(TileMap));
+  TileUMap = isl_union_map_align_params(TileUMap, Space);
+  *Dimensions = 2 * *Dimensions;
+
+  return isl_union_map_apply_range(PartialSchedule, TileUMap);
+}
+
+/* Create a map that pre-vectorizes one scheduling dimension.
+  
+   getPrevectorMap creates a map that maps each input dimension to the same
+   output dimension, except for the dimension DimToVectorize. DimToVectorize is
+   strip mined by 'VectorWidth' and the newly created point loop of
+   DimToVectorize is moved to the innermost level.
+  
+   Example (DimToVectorize=0, ScheduleDimensions=2, VectorWidth=4):
+  
+   | Before transformation
+   |
+   | A[i,j] -> [i,j]
+   |
+   | for (i = 0; i < 128; i++)
+   |    for (j = 0; j < 128; j++)
+   |      A(i,j);
+  
+     Prevector map:
+     [i,j] -> [it,j,ip] : it % 4 = 0 and it <= ip <= it + 3 and i = ip
+  
+   | After transformation:
+   |
+   | A[i,j] -> [it,j,ip] : it % 4 = 0 and it <= ip <= it + 3 and i = ip
+   |
+   | for (it = 0; it < 128; it+=4)
+   |    for (j = 0; j < 128; j++)
+   |      for (ip = max(0,it); ip < min(128, it + 4); ip++)
+   |        A(ip,j);
+  
+   The goal of this transformation is to create a trivially vectorizable loop.
+   This means a parallel loop at the innermost level that has a constant number
+   of iterations corresponding to the target vector width.
+  
+   This transformation creates a loop at the innermost level.  The loop has
+   a constant number of iterations if the number of loop iterations at
+   DimToVectorize is divisible by VectorWidth.  The default VectorWidth is
+   currently constant and not yet target specific. This function does not reason
+   about parallelism.  */
+static isl_map *
+getPrevectorMap(isl_ctx *ctx, int DimToVectorize,
+		int ScheduleDimensions,
+		int VectorWidth)
+{
+  isl_space *Space;
+  isl_local_space *LocalSpace, *LocalSpaceRange;
+  isl_set *Modulo;
+  isl_map *TilingMap;
+  isl_constraint *c;
+  isl_aff *Aff;
+  int PointDimension; /* ip */
+  int TileDimension;  /* it */
+  isl_int VectorWidthMP;
+  int i;
+
+  /* assert (0 <= DimToVectorize && DimToVectorize < ScheduleDimensions);*/
+
+  Space = isl_space_alloc(ctx, 0, ScheduleDimensions, ScheduleDimensions + 1);
+  TilingMap = isl_map_universe(isl_space_copy(Space));
+  LocalSpace = isl_local_space_from_space(Space);
+  PointDimension = ScheduleDimensions;
+  TileDimension = DimToVectorize;
+
+  /* Create an identity map for everything except DimToVectorize and map
+     DimToVectorize to the point loop at the innermost dimension.  */
+  for (i = 0; i < ScheduleDimensions; i++)
+    {
+      c = isl_equality_alloc(isl_local_space_copy(LocalSpace));
+      isl_constraint_set_coefficient_si(c, isl_dim_in, i, -1);
+
+      if (i == DimToVectorize)
+	isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, 1);
+      else
+	isl_constraint_set_coefficient_si(c, isl_dim_out, i, 1);
+
+      TilingMap = isl_map_add_constraint(TilingMap, c);
+    }
+
+  /* it % 'VectorWidth' = 0  */
+  LocalSpaceRange = isl_local_space_range(isl_local_space_copy(LocalSpace));
+  Aff = isl_aff_zero_on_domain(LocalSpaceRange);
+  Aff = isl_aff_set_constant_si(Aff, VectorWidth);
+  Aff = isl_aff_set_coefficient_si(Aff, isl_dim_in, TileDimension, 1);
+  isl_int_init(VectorWidthMP);
+  isl_int_set_si(VectorWidthMP, VectorWidth);
+  Aff = isl_aff_mod(Aff, VectorWidthMP);
+  isl_int_clear(VectorWidthMP);
+  Modulo = isl_pw_aff_zero_set(isl_pw_aff_from_aff(Aff));
+  TilingMap = isl_map_intersect_range(TilingMap, Modulo);
+
+  /* it <= ip */
+  c = isl_inequality_alloc(isl_local_space_copy(LocalSpace));
+  isl_constraint_set_coefficient_si(c, isl_dim_out, TileDimension, -1);
+  isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, 1);
+  TilingMap = isl_map_add_constraint(TilingMap, c);
+
+  /* ip <= it + ('VectorWidth' - 1) */
+  c = isl_inequality_alloc(LocalSpace);
+  isl_constraint_set_coefficient_si(c, isl_dim_out, TileDimension, 1);
+  isl_constraint_set_coefficient_si(c, isl_dim_out, PointDimension, -1);
+  isl_constraint_set_constant_si(c, VectorWidth - 1);
+  TilingMap = isl_map_add_constraint(TilingMap, c);
+
+  return TilingMap;
+}
+
+static bool EnablePollyVector = false;
+
+/* getScheduleForBandList - Get the scheduling map for a list of bands.
+
+   We recursively walk the forest of bands, combining the schedules of the
+   individual bands into the overall schedule.  If tiling is requested,
+   the individual bands are tiled.  */
+static isl_union_map *
+getScheduleForBandList(isl_band_list *BandList)
+{
+  int NumBands, i;
+  isl_union_map *Schedule;
+  isl_ctx *ctx;
+
+  ctx = isl_band_list_get_ctx(BandList);
+  NumBands = isl_band_list_n_band(BandList);
+  Schedule = isl_union_map_empty(isl_space_params_alloc(ctx, 0));
+
+  for (i = 0; i < NumBands; i++)
+    {
+      isl_band *Band;
+      isl_union_map *PartialSchedule;
+      int ScheduleDimensions;
+      isl_space *Space;
+
+      Band = isl_band_list_get_band(BandList, i);
+      PartialSchedule = getScheduleForBand(Band, &ScheduleDimensions);
+      Space = isl_union_map_get_space(PartialSchedule);
+
+      if (isl_band_has_children(Band))
+	{
+	  isl_band_list *Children;
+	  isl_union_map *SuffixSchedule;
+
+	  Children = isl_band_get_children(Band);
+	  SuffixSchedule = getScheduleForBandList(Children);
+	  PartialSchedule = isl_union_map_flat_range_product(PartialSchedule,
+							     SuffixSchedule);
+	  isl_band_list_free(Children);
+	}
+      else if (EnablePollyVector)
+	{
+	  /* Use a separate index here: I is the band-list counter of the
+	     enclosing loop and must not be clobbered.  */
+	  int j;
+
+	  for (j = ScheduleDimensions - 1; j >= 0; j--)
+	    {
+	      if (isl_band_member_is_zero_distance(Band, j))
+		{
+		  isl_map *TileMap;
+		  isl_union_map *TileUMap;
+
+		  TileMap = getPrevectorMap(ctx, j, ScheduleDimensions, 4);
+		  TileUMap = isl_union_map_from_map(TileMap);
+		  TileUMap = isl_union_map_align_params(TileUMap,
+							isl_space_copy(Space));
+		  PartialSchedule = isl_union_map_apply_range(PartialSchedule,
+							      TileUMap);
+		  break;
+		}
+	    }
+	}
+
+      Schedule = isl_union_map_union(Schedule, PartialSchedule);
+
+      isl_band_free(Band);
+      isl_space_free(Space);
+    }
+
+  return Schedule;
+}
+
+static isl_union_map *
+getScheduleMap(isl_schedule *Schedule)
+{
+  isl_band_list *BandList = isl_schedule_get_band_forest(Schedule);
+  isl_union_map *ScheduleMap = getScheduleForBandList(BandList);
+  isl_band_list_free(BandList);
+  return ScheduleMap;
+}
+
+static int
+getSingleMap(__isl_take isl_map *map, void *user)
+{
+  isl_map **singleMap = (isl_map **) user;
+  *singleMap = map;
+
+  return 0;
+}
+
+static void
+apply_schedule_map_to_scop (scop_p scop, isl_union_map *schedule_map)
+{
+  int i;
+  poly_bb_p pbb;
+
+  FOR_EACH_VEC_ELT (poly_bb_p, scop->bbs, i, pbb)
+    {
+      isl_set *domain = isl_set_copy (pbb->domain);
+      isl_union_map *stmtBand;
+      isl_map *stmtSchedule;
+
+      stmtBand = isl_union_map_intersect_domain(isl_union_map_copy(schedule_map),
+						isl_union_set_from_set(domain));
+      isl_union_map_foreach_map(stmtBand, getSingleMap, &stmtSchedule);
+      isl_map_free(pbb->transformed);
+      pbb->transformed = stmtSchedule;
+      isl_union_map_free(stmtBand);
+    }
+}
+
+static const int CONSTANT_BOUND = 20;
+
+bool
+optimize_isl (scop_p scop)
+{
+  isl_schedule *schedule;
+  isl_union_set *domain;
+  isl_union_map *validity, *proximity, *dependences;
+  isl_union_map *schedule_map;
+
+  domain = scop_get_domains (scop);
+  dependences = scop_get_dependences (scop);
+  dependences = isl_union_map_gist_domain(dependences,
+					  isl_union_set_copy(domain));
+  dependences = isl_union_map_gist_range(dependences,
+					 isl_union_set_copy(domain));
+  validity = dependences;
+
+  proximity = isl_union_map_copy (validity);
+
+  isl_options_set_schedule_max_constant_term(scop->ctx, CONSTANT_BOUND);
+  isl_options_set_schedule_maximize_band_depth(scop->ctx, 1);
+  isl_options_set_schedule_fuse(scop->ctx, ISL_SCHEDULE_FUSE_MIN);
+  isl_options_set_on_error(scop->ctx, ISL_ON_ERROR_CONTINUE);
+  schedule = isl_union_set_compute_schedule (domain, validity, proximity);
+  isl_options_set_on_error(scop->ctx, ISL_ON_ERROR_ABORT);
+
+  if (!schedule)
+    return false;
+
+  schedule_map = getScheduleMap (schedule);
+
+  apply_schedule_map_to_scop (scop, schedule_map);
+
+  isl_schedule_free (schedule);
+  isl_union_map_free (schedule_map);
+
+  return true;
+}
+
+#endif
diff --git a/gcc/graphite-poly.c b/gcc/graphite-poly.c
index e3563a2..4869de6 100644
--- a/gcc/graphite-poly.c
+++ b/gcc/graphite-poly.c
@@ -250,6 +250,11 @@  apply_poly_transforms (scop_p scop)
 	transform_done |= scop_do_interchange (scop);
     }
 
+  /* This pass needs to be run at the final stage, as it does not
+     update the lst.  */
+  if (flag_loop_optimize_isl)
+    transform_done |= optimize_isl (scop);
+
   return transform_done;
 }
 
diff --git a/gcc/graphite-poly.h b/gcc/graphite-poly.h
index 0b95662..de1f06f 100644
--- a/gcc/graphite-poly.h
+++ b/gcc/graphite-poly.h
@@ -409,6 +409,7 @@  extern int scop_do_interchange (scop_p);
 extern int scop_do_strip_mine (scop_p, int);
 extern bool scop_do_block (scop_p);
 extern bool flatten_all_loops (scop_p);
+extern bool optimize_isl (scop_p);
 extern void pbb_number_of_iterations_at_time (poly_bb_p, graphite_dim_t, mpz_t);
 extern void debug_gmp_value (mpz_t);
 
@@ -1549,4 +1550,20 @@  isl_map *reverse_loop_at_level (poly_bb_p, int);
 isl_union_map *reverse_loop_for_pbbs (scop_p, VEC (poly_bb_p, heap) *, int);
 __isl_give isl_union_map *extend_schedule (__isl_take isl_union_map *);
 
+
+void
+compute_deps (scop_p scop, VEC (poly_bb_p, heap) *pbbs,
+	      isl_union_map **must_raw,
+	      isl_union_map **may_raw,
+	      isl_union_map **must_raw_no_source,
+	      isl_union_map **may_raw_no_source,
+	      isl_union_map **must_war,
+	      isl_union_map **may_war,
+	      isl_union_map **must_war_no_source,
+	      isl_union_map **may_war_no_source,
+	      isl_union_map **must_waw,
+	      isl_union_map **may_waw,
+	      isl_union_map **must_waw_no_source,
+	      isl_union_map **may_waw_no_source);
+
 #endif
diff --git a/gcc/tree-ssa-loop.c b/gcc/tree-ssa-loop.c
index c6304c4..bef68e7 100644
--- a/gcc/tree-ssa-loop.c
+++ b/gcc/tree-ssa-loop.c
@@ -265,7 +265,8 @@  gate_graphite_transforms (void)
       || flag_loop_interchange
       || flag_loop_strip_mine
       || flag_graphite_identity
-      || flag_loop_parallelize_all)
+      || flag_loop_parallelize_all
+      || flag_loop_optimize_isl)
     flag_graphite = 1;
 
   return flag_graphite != 0;