
[1/4] openacc: Middle-end worker-partitioning support

Message ID 2ef7b2ebaf056858d6484a260c3897f844e2df4a.1614685766.git.julian@codesourcery.com
State New
Series: openacc: Worker partitioning in the middle end

Commit Message

Julian Brown March 2, 2021, 12:20 p.m. UTC
A version of this patch was previously posted here:

  https://gcc.gnu.org/pipermail/gcc-patches/2019-November/534553.html

This patch implements worker-partitioning support in the middle end,
by rewriting gimple. The OpenACC execution model requires that code
can run in either "worker single" mode where only a single worker per
gang is active, or "worker partitioned" mode, where multiple workers
per gang are active. This means we need to do something equivalent
to spawning additional workers when transitioning from worker-single
to worker-partitioned mode. However, GPUs typically fix the number of
threads of invoked kernels at launch time, so we need to do something
with the "extra" threads when they are not wanted.

The scheme used is to conditionalise each basic block that executes
in "worker single" mode for worker 0 only. Conditional branches
are handled specially so "idle" (non-0) workers follow along with
worker 0. On transitioning to "worker partitioned" mode, any variables
modified by worker 0 are propagated to the other workers via GPU shared
memory. Special care is taken for routine calls, writes through pointers,
and so forth, as follows:

  - There are two types of function calls to consider in worker-single
    mode: "normal" calls to maths library routines, etc. are called from
    worker 0 only. OpenACC routines may contain worker-partitioned loops
    themselves, so are called from all workers, including "idle" ones.

  - SSA names set in worker-single mode, but used in worker-partitioned
    mode, are copied to shared memory in worker 0. Other workers retrieve
    the value from the appropriate shared-memory location after a barrier,
    and new phi nodes are introduced at the convergence point to resolve
    the worker 0/other worker copies of the value.

  - Local scalar variables (on the stack) also need special handling. We
    broadcast any variables that are written in the current worker-single
    block, and that are read in any worker-partitioned block.  (This is
    believed to be safe, and is flow-insensitive to ease analysis.)

  - Local aggregates (arrays and composites) on the stack are *not*
    broadcast. Instead we force gimple stmts modifying elements/fields of
    local aggregates into fully-partitioned mode. The RHS of the
    assignment is a scalar, and is thus subject to broadcasting as above.

  - Writes through pointers may affect any local variable that has
    its address taken. We use points-to analysis to determine the set
    of potentially-affected variables for a given pointer indirection.
    We broadcast any such variable which is used in worker-partitioned
    mode, on a per-block basis for any block containing a write through
    a pointer.
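The control flow the pass creates can be modelled, sequentially and very roughly, as follows. This is an illustrative sketch only (not GCC code): the explicit worker loop stands in for the hardware threads, `shared_slot` stands in for GPU shared memory, and the barrier between write and reads is implicit here.

```cpp
#include <array>

// Hypothetical model of neutering + broadcast: NUM_WORKERS threads reach a
// worker-single region; only worker 0 computes the value.  The value is
// written to "shared memory", and after a barrier every worker reads it
// back before entering the worker-partitioned loop.
constexpr int NUM_WORKERS = 4;

int run_region ()
{
  int shared_slot = 0;                 // stands in for GPU shared memory
  std::array<int, NUM_WORKERS> acc{};  // per-worker results

  // Worker-single block: conditionalised on worker 0 ("neutering").
  for (int w = 0; w < NUM_WORKERS; w++)
    if (w == 0)
      shared_slot = 42;                // value produced by worker 0 only
  // (a barrier separates the write from the reads on a real GPU)

  // Worker-partitioned block: all workers pick up the broadcast value.
  for (int w = 0; w < NUM_WORKERS; w++)
    acc[w] = shared_slot + w;

  int sum = 0;
  for (int v : acc)
    sum += v;
  return sum;
}
```

On real hardware the two loops are of course the same physical threads; the point of the sketch is only the shape of the transformation: predicate on worker 0, store, barrier, reload in every worker.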

Some slides about the implementation (from 2018) are available at:

  https://jtb20.github.io/gcnworkers.pdf

This version of the patch includes several follow-on bug fixes by myself
and Kwok. This version also avoids moving SESE-region finding code out
of the NVPTX backend, since that code isn't used by the middle-end worker
partitioning neutering/broadcasting implementation yet.

This patch (and the rest of the series) should be applied on top of the
private variable patches posted previously here:

  https://gcc.gnu.org/pipermail/gcc-patches/2021-February/565925.html

Tested with offloading to AMD GCN. OK for stage 1?

Julian

2021-03-02  Julian Brown  <julian@codesourcery.com>
	    Nathan Sidwell  <nathan@acm.org>
	    Kwok Cheung Yeung  <kcy@codesourcery.com>

gcc/
	* Makefile.in (OBJS): Add oacc-neuter-bcast.o.
	* doc/tm.texi.in (TARGET_GOACC_WORKER_PARTITIONING,
	TARGET_GOACC_CREATE_PROPAGATION_RECORD): Document new hooks.
	* doc/tm.texi: Regenerate.
	* oacc-neuter-bcast.c: New file.
	* oacc-neuter-bcast.h: New file.
	* omp-builtins.def (BUILT_IN_GOACC_BARRIER, BUILT_IN_GOACC_SINGLE_START,
	BUILT_IN_GOACC_SINGLE_COPY_START, BUILT_IN_GOACC_SINGLE_COPY_END): New
	builtins.
	* omp-offload.c (oacc-neuter-bcast.h): Include header.
	(oacc_loop_xform_head_tail): Call update_stmt for modified builtin
	calls.
	(oacc_loop_process): Likewise.
	(default_goacc_create_propagation_record): New function.
	(execute_oacc_loop_designation): New.  Split out of oacc_device_lower.
	(execute_oacc_gimple_workers): New.  Likewise.
	(execute_oacc_device_lower): Recreate dims array.
	(pass_data_oacc_loop_designation, pass_data_oacc_gimple_workers): New.
	(pass_oacc_loop_designation, pass_oacc_gimple_workers): New.
	(make_pass_oacc_loop_designation, make_pass_oacc_gimple_workers): New.
	* omp-offload.h (oacc_fn_attrib_level): Add prototype.
	* passes.def (pass_oacc_loop_designation, pass_oacc_gimple_workers):
	Add passes.
	* target.def (worker_partitioning, create_propagation_record): Add
	target hooks.
	* targhooks.h (default_goacc_create_propagation_record): Add prototype.
	* tree-pass.h (make_pass_oacc_loop_designation,
	make_pass_oacc_gimple_workers): Add prototypes.

gcc/testsuite/
	* c-c++-common/goacc/classify-kernels-unparallelized.c: Scan oaccloops
	dump instead of oaccdevlow.
	* c-c++-common/goacc/classify-kernels.c: Likewise.
	* c-c++-common/goacc/classify-parallel.c: Likewise.
	* c-c++-common/goacc/classify-routine.c: Likewise.
	* c-c++-common/goacc/classify-serial.c: Likewise.
	* gcc.dg/goacc/loop-processing-1.c: Likewise.
	* gfortran.dg/goacc/classify-kernels-unparallelized.f95: Likewise.
	* gfortran.dg/goacc/classify-kernels.f95: Likewise.
	* gfortran.dg/goacc/classify-parallel.f95: Likewise.
	* gfortran.dg/goacc/classify-routine.f95: Likewise.
	* gfortran.dg/goacc/classify-serial.f95: Likewise.
---
 gcc/Makefile.in                               |    1 +
 gcc/doc/tm.texi                               |   10 +
 gcc/doc/tm.texi.in                            |    4 +
 gcc/oacc-neuter-bcast.c                       | 1471 +++++++++++++++++
 gcc/oacc-neuter-bcast.h                       |   26 +
 gcc/omp-builtins.def                          |    8 +
 gcc/omp-offload.c                             |  159 +-
 gcc/omp-offload.h                             |    1 +
 gcc/passes.def                                |    2 +
 gcc/target.def                                |   13 +
 gcc/targhooks.h                               |    1 +
 .../goacc/classify-kernels-unparallelized.c   |    8 +-
 .../c-c++-common/goacc/classify-kernels.c     |    8 +-
 .../c-c++-common/goacc/classify-parallel.c    |    8 +-
 .../c-c++-common/goacc/classify-routine.c     |    8 +-
 .../c-c++-common/goacc/classify-serial.c      |    8 +-
 .../gcc.dg/goacc/loop-processing-1.c          |    2 +-
 .../goacc/classify-kernels-unparallelized.f95 |    8 +-
 .../gfortran.dg/goacc/classify-kernels.f95    |    8 +-
 .../gfortran.dg/goacc/classify-parallel.f95   |    8 +-
 .../gfortran.dg/goacc/classify-routine.f95    |    8 +-
 .../gfortran.dg/goacc/classify-serial.f95     |    8 +-
 gcc/tree-pass.h                               |    2 +
 23 files changed, 1720 insertions(+), 60 deletions(-)
 create mode 100644 gcc/oacc-neuter-bcast.c
 create mode 100644 gcc/oacc-neuter-bcast.h

Comments

Thomas Schwinge Aug. 4, 2021, 1:13 p.m. UTC | #1
Hi!

On 2021-03-02T04:20:11-0800, Julian Brown <julian@codesourcery.com> wrote:
> This patch implements worker-partitioning support in the middle end,
> by rewriting gimple.  [...]

Yay!

Given:

> --- /dev/null
> +++ b/gcc/oacc-neuter-bcast.c

> +/* A map from SSA names or var decls to record fields.  */
> +
> +typedef hash_map<tree, tree> field_map_t;
> +
> +/* For each propagation record type, this is a map from SSA names or var decls
> +   to propagate, to the field in the record type that should be used for
> +   transmission and reception.  */
> +
> +typedef hash_map<tree, field_map_t *> record_field_map_t;
> +
> +static GTY(()) record_field_map_t *field_map;

Per 'gcc/doc/gty.texi': "Whenever you [...] create a new source file
containing 'GTY' markers, [...] add the filename to the 'GTFILES'
variable in 'Makefile.in'.  [...] The generated header file should be
included after everything else in the source file."  Thus:

    --- gcc/Makefile.in
    +++ gcc/Makefile.in
    @@ -2720,2 +2720,3 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
       $(srcdir)/omp-general.h \
    +  $(srcdir)/oacc-neuter-bcast.c \
       @all_gtfiles@
    --- gcc/oacc-neuter-bcast.c
    +++ gcc/oacc-neuter-bcast.c
    @@ -1514 +1514,4 @@ make_pass_oacc_gimple_workers (gcc::context *ctxt)
     }
    +
    +
    +#include "gt-oacc-neuter-bcast.h"

That however results in:

    [...]
    build/gengtype  \
                        -r gtype.state
    warning: structure `field_map_t' used but not defined
    gengtype: Internal error: abort in error_at_line, at gengtype.c:111
    make[2]: *** [Makefile:2796: s-gtype] Error 1
    [...]

I shall try to figure out the right GC annotations to make the 'typedef's
known to the GC machinery (unless somebody can tell me offhand) -- but
actually, is it really necessary to allocate this as GC memory?

> +void
> +oacc_do_neutering (void)
> +{
> +  [...]
> +  field_map = record_field_map_t::create_ggc (40);
> +  [...]
> +  FOR_ALL_BB_FN (bb, cfun)
> +    {
> +      propagation_set *ws_prop = prop_set[bb->index];
> +      if (ws_prop)
> +     {
> +       tree record_type = lang_hooks.types.make_type (RECORD_TYPE);
> +       [...]
> +       field_map->put (record_type, field_map_t::create_ggc (17));
> +       [...]
> +    }
> +  [...]
> +}

'oacc_do_neutering' is the 'execute' function of the pass, so that means
every time this executes, a fresh 'field_map' is set up, no state
persists across runs (assuming I'm understanding that correctly).  Why
don't we simply use standard (non-GC) memory management for that?  "For
convenience" shall be fine as an answer ;-) -- but maybe instead of
figuring out the right GC annotations, changing the memory management
will be easier?  (Or, of course, maybe I completely misunderstood that?)
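For illustration, the non-GC alternative would look roughly like this, with `std::unordered_map` standing in for GCC's `hash_map` and the struct/function names invented for the sketch: the map's lifetime is bounded by one execution of the pass, so plain scoped allocation suffices and no GC roots are needed.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical sketch: pass-local state owned by ordinary (non-GC)
// allocation.  A fresh map is built on each execute and destroyed when the
// pass finishes; nothing needs to survive a GC collection point.
struct neuter_pass_state
{
  std::unordered_map<std::string, int> field_map;  // fresh per execution
};

int execute_once ()
{
  neuter_pass_state state;             // allocated on entry to execute ()
  state.field_map["record_a"] = 1;
  state.field_map["record_b"] = 2;
  return (int) state.field_map.size ();
}                                      // freed on exit; no state persists
```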


Grüße
 Thomas
-----------------
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht München, HRB 106955
Thomas Schwinge Aug. 4, 2021, 1:56 p.m. UTC | #2
Hi!

On 2021-03-02T04:20:11-0800, Julian Brown <julian@codesourcery.com> wrote:
> This patch implements worker-partitioning support in the middle end,
> by rewriting gimple.  [...]

Yay!


> This version of the patch [...]
> avoids moving SESE-region finding code out
> of the NVPTX backend

So that's 'struct bb_sese' and following functions.

> since that code isn't used by the middle-end worker
> partitioning neutering/broadcasting implementation yet.

I understand correctly that "isn't used [...] yet" means that (a) "this
isn't implemented yet" (on og11 etc.), and doesn't mean (b) "is missing
from this patch submission"?  ... thus from (a) it follows that we may
later also drop from the og11 branch these changes?


Relatedly, a nontrivial amount of data structures/logic/code did get
duplicated from the nvptx back end, and modified slightly or
not-so-slightly (RTL vs. GIMPLE plus certain implementation "details").

We should at least cross reference the two instances, to make sure that
any changes to one are also propagated to the other.  (I'll take care.)

And then, do you (or anyone else, of course) happen to have any clever
idea about how to avoid the duplication, and somehow combine the RTL
vs. GIMPLE implementations?  Given that we nowadays may use C++ -- do you
foresee it feasible to have an abstract base class capturing basically
the data structures, logic, common code, and then RTL-specialized plus
GIMPLE-specialized classes inheriting from that?
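A minimal sketch of that factoring, with every name invented for illustration: the shared discovery logic lives in the base class, and the only IR-specific part -- "is this statement a fork marker?" -- becomes a virtual hook specialised for RTL and for GIMPLE. (The integer "statement kinds" here are a stand-in for real RTL insns and gimple statements.)

```cpp
// Hypothetical base class capturing the shared parallel-discovery logic;
// RTL- and GIMPLE-specific subclasses supply only the IR queries.
class par_discovery_base
{
public:
  virtual ~par_discovery_base () {}

  // IR-specific hook: does this "statement" open a parallel region?
  virtual bool is_fork_marker (int stmt_kind) const = 0;

  // Shared logic: count fork markers in a (fake) statement stream.
  int count_forks (const int *stmts, int n) const
  {
    int forks = 0;
    for (int i = 0; i < n; i++)
      if (is_fork_marker (stmts[i]))
	forks++;
    return forks;
  }
};

class rtl_discovery : public par_discovery_base
{
  bool is_fork_marker (int stmt_kind) const override
  { return stmt_kind == 1; }    // say, a forked unspec in RTL
};

class gimple_discovery : public par_discovery_base
{
  bool is_fork_marker (int stmt_kind) const override
  { return stmt_kind == 2; }    // say, IFN_UNIQUE (OACC_FORK) in GIMPLE
};
```

Whether the real data structures ('parallel', the block maps) factor this cleanly is exactly the open question, of course.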

For example:

    $ sed -e s%parallel_g%parallel%g < gcc/oacc-neuter-bcast.c > gcc/oacc-neuter-bcast.c_
    $ git diff --no-index --word-diff -b --patience gcc/config/nvptx/nvptx.c gcc/oacc-neuter-bcast.c_
    [...]
    /* Loop structure of the function.  The entire function is described as
       a NULL loop.  */
    @@ -3229,17 +80,21 @@ struct parallel
      basic_block forked_block;
      basic_block join_block;

      [-rtx_insn *forked_insn;-]
    [-  rtx_insn *join_insn;-]{+gimple *forked_stmt;+}
    {+  gimple *join_stmt;+}

      [-rtx_insn *fork_insn;-]
    [-  rtx_insn *joining_insn;-]{+gimple *fork_stmt;+}
    {+  gimple *joining_stmt;+}

      /* Basic blocks in this parallel, but not in child parallels.  The
         FORKED and JOINING blocks are in the partition.  The FORK and JOIN
         blocks are not.  */
      auto_vec<basic_block> blocks;

      {+tree record_type;+}
    {+  tree sender_decl;+}
    {+  tree receiver_decl;+}

    public:
      parallel (parallel *parent, unsigned mode);
      ~parallel ();
    @@ -3252,8 +107,12 @@ parallel::parallel (parallel *parent_, unsigned mask_)
      :parent (parent_), next (0), inner (0), mask (mask_), inner_mask (0)
    {
      forked_block = join_block = 0;
      [-forked_insn-]{+forked_stmt+} = [-join_insn-]{+join_stmt+} = [-0;-]
    [-  fork_insn-]{+NULL;+}
    {+  fork_stmt+} = [-joining_insn-]{+joining_stmt+} = [-0;-]{+NULL;+}

    {+  record_type = NULL_TREE;+}
    {+  sender_decl = NULL_TREE;+}
    {+  receiver_decl = NULL_TREE;+}

      if (parent)
        {
    @@ -3268,12 +127,54 @@ parallel::~parallel ()
      delete next;
    }
    [...]
    /* Split basic blocks such that each forked and join unspecs are at
       the start of their basic blocks.  Thus afterwards each block will
    @@ -3284,111 +185,168 @@ typedef auto_vec<insn_bb_t> insn_bb_vec_t;
       used when finding partitions.  */

    static void
    [-nvptx_split_blocks (bb_insn_map_t-]{+omp_sese_split_blocks (bb_stmt_map_t+} *map)
    {
      [-insn_bb_vec_t-]{+auto_vec<gimple *>+} worklist;
      basic_block block;
    [-  rtx_insn *insn;-]

      /* Locate all the reorg instructions of interest.  */
      FOR_ALL_BB_FN (block, cfun)
        {
    [-      bool seen_insn = false;-]

          /* Clear visited flag, for use by parallel locator  */
          block->flags &= ~BB_VISITED;

          [-FOR_BB_INSNS (block, insn)-]{+for (gimple_stmt_iterator gsi = gsi_start_bb (block);+}
    {+     !gsi_end_p (gsi);+}
    {+     gsi_next (&gsi))+}
        {
    [...]
    /* Dump this parallel and all its inner parallels.  */

    static void
    [-nvptx_dump_pars-]{+omp_sese_dump_pars+} (parallel *par, unsigned depth)
    {
      fprintf (dump_file, "%u: mask %d {+(%s)+} head=%d, tail=%d\n",
           depth, par->mask, {+mask_name (par->mask),+}
           par->forked_block ? par->forked_block->index : -1,
           par->join_block ? par->join_block->index : -1);

    @@ -3399,10 +357,10 @@ nvptx_dump_pars (parallel *par, unsigned depth)
        fprintf (dump_file, " %d", block->index);
      fprintf (dump_file, "\n");
      if (par->inner)
        [-nvptx_dump_pars-]{+omp_sese_dump_pars+} (par->inner, depth + 1);

      if (par->next)
        [-nvptx_dump_pars-]{+omp_sese_dump_pars+} (par->next, depth);
    }

    /* If BLOCK contains a fork/join marker, process it to create or
    @@ -3410,60 +368,84 @@ nvptx_dump_pars (parallel *par, unsigned depth)
       and then walk successor blocks.   */

    static parallel *
    [-nvptx_find_par (bb_insn_map_t-]{+omp_sese_find_par (bb_stmt_map_t+} *map, parallel *par, basic_block block)
    {
      if (block->flags & BB_VISITED)
        return par;
      block->flags |= BB_VISITED;

      if [-(rtx_insn **endp-]{+(gimple **stmtp+} = map->get (block))
        {
    [...]
    static parallel *
    [-nvptx_discover_pars (bb_insn_map_t-]{+omp_sese_discover_pars (bb_stmt_map_t+} *map)
    {
      basic_block block;

    @@ -3502,3468 +485,1033 @@ nvptx_discover_pars (bb_insn_map_t *map)
      block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
      block->flags &= ~BB_VISITED;

      parallel *par = [-nvptx_find_par-]{+omp_sese_find_par+} (map, 0, block);

      if (dump_file)
        {
          fprintf (dump_file, "\nLoops\n");
          [-nvptx_dump_pars-]{+omp_sese_dump_pars+} (par, 0);
          fprintf (dump_file, "\n");
        }

      return par;
    }

(For brevity, I stripped out the parts where implementation "details"
differ considerably.)


Grüße
 Thomas
Julian Brown Aug. 6, 2021, 8:49 a.m. UTC | #3
On Wed, 4 Aug 2021 15:13:30 +0200
Thomas Schwinge <thomas@codesourcery.com> wrote:

> 'oacc_do_neutering' is the 'execute' function of the pass, so that
> means every time this executes, a fresh 'field_map' is set up, no
> state persists across runs (assuming I'm understanding that
> correctly).  Why don't we simply use standard (non-GC) memory
> management for that?  "For convenience" shall be fine as an answer
> ;-) -- but maybe instead of figuring out the right GC annotations,
> changing the memory management will be easier?  (Or, of course, maybe
> I completely misunderstood that?)

I suspect you're right, and there's no need for this to be GC-allocated
memory. If non-standard memory allocation will work out fine, we should
probably use that instead.

Thanks,

Julian
Julian Brown Aug. 6, 2021, 9:25 a.m. UTC | #4
On Wed, 4 Aug 2021 15:56:49 +0200
Thomas Schwinge <thomas@codesourcery.com> wrote:

> > This version of the patch [...]
> > avoids moving SESE-region finding code out
> > of the NVPTX backend  
> 
> So that's 'struct bb_sese' and following functions.
> 
> > since that code isn't used by the middle-end worker
> > partitioning neutering/broadcasting implementation yet.  
> 
> I understand correctly that "isn't used [...] yet" means that (a)
> "this isn't implemented yet" (on og11 etc.), and doesn't mean (b) "is
> missing from this patch submission"?  ... thus from (a) it follows
> that we may later also drop from the og11 branch these changes?

Yes, the former -- the SESE region-finding code isn't used anywhere for
the middle-end worker partitioning support. Thus if we happen to have
two adjacent blocks that both use worker-single mode, we will
conditionalise them to run on one worker only separately, with
redundant tests/branches.

I'm not sure how often that happens in practice. We don't need to
handle vector-single mode for GCN, which possibly means that the
SESE-finding code's ability to skip entire inner loop nests (IIRC!) is
unnecessary.

So yes, that code could probably be dropped for og11 too, though
perhaps we should try to evaluate if it would still be useful first.

> Relatedly, a nontrivial amount of data structures/logic/code did get
> duplicated from the nvptx back end, and modified slightly or
> not-so-slightly (RTL vs. GIMPLE plus certain implementation
> "details").
> 
> We should at least cross reference the two instances, to make sure
> that any changes to one are also propagated to the other.  (I'll take
> care.)

OK, thanks,

> And then, do you (or anyone else, of course) happen to have any clever
> idea about how to avoid the duplication, and somehow combine the RTL
> vs. GIMPLE implementations?  Given that we nowadays may use C++ -- do
> you foresee it feasible to have an abstract base class capturing
> basically the data structures, logic, common code, and then
> RTL-specialized plus GIMPLE-specialized classes inheriting from that?

I suppose one could either use "old-style" inheritance, or maybe do
it with templates. There's probably both costs & benefits when it comes
to maintenance, either way -- having this code shared would mean any
changes need testing for both nvptx & GCN targets, and risks making it
harder to follow. OTOH, like you say, changes would only need to be
made in one place.

TBH, I'd spend effort on trying to integrate the SESE code (if it'd be
beneficial) first, before trying to de-duplicate those other bits.

Thanks,

Julian
Thomas Schwinge Aug. 9, 2021, 1:21 p.m. UTC | #5
Hi!

On 2021-03-02T04:20:11-0800, Julian Brown <julian@codesourcery.com> wrote:
> This patch implements worker-partitioning support in the middle end,
> by rewriting gimple.  [...]

Thanks!

> The OpenACC execution model requires that code
> can run in either "worker single" mode where only a single worker per
> gang is active, or "worker partitioned" mode, where multiple workers
> per gang are active. This means we need to do something equivalent
> to spawning additional workers when transitioning from worker-single
> to worker-partitioned mode. However, GPUs typically fix the number of
> threads of invoked kernels at launch time, so we need to do something
> with the "extra" threads when they are not wanted.

ACK, and we shall (at some later point in time) see whether this new
middle end implementation can't also replace the nvptx back end one,
adapted as necessary.

> The scheme used is [...]

I haven't in detail verified the algorithmic aspects and most of the
implementation of those -- it appears to generally work (see also
additional testsuite changes I've folded in).


> --- /dev/null
> +++ b/gcc/oacc-neuter-bcast.c

For consistency with other recent OpenACC implementation files, I renamed
this to 'gcc/omp-oacc-neuter-broadcast.cc', and renamed the pass from
'oaccworkers' to 'omp_oacc_neuter_broadcast'.

> +void
> +oacc_do_neutering (void)
> +{

> +  for (unsigned i = 0; i < last_basic_block_for_fn (cfun); i++)
> +    prop_set.quick_push (0);

    [...]/gcc/oacc-neuter-bcast.c: In function ‘void oacc_do_neutering()’:
    [...]/gcc/oacc-neuter-bcast.c:1405:26: error: comparison of integer expressions of different signedness: ‘unsigned int’ and ‘int’ [-Werror=sign-compare]
     1405 |   for (unsigned i = 0; i < last_basic_block_for_fn (cfun); i++)

Fixed.

> --- a/gcc/omp-builtins.def
> +++ b/gcc/omp-builtins.def
> @@ -75,6 +75,8 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER, "GOMP_barrier",
>                 BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER_CANCEL, "GOMP_barrier_cancel",
>                 BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
> +DEF_GOACC_BUILTIN (BUILT_IN_GOACC_BARRIER, "GOACC_barrier",
> +                BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT, "GOMP_taskwait",
>                 BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT_DEPEND, "GOMP_taskwait_depend",
> @@ -412,6 +414,12 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SINGLE_COPY_START, "GOMP_single_copy_start",
>                 BT_FN_PTR, ATTR_NOTHROW_LEAF_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SINGLE_COPY_END, "GOMP_single_copy_end",
>                 BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
> +DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_START, "GOACC_single_start",
> +                BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
> +DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_COPY_START, "GOACC_single_copy_start",
> +                BT_FN_PTR, ATTR_NOTHROW_LEAF_LIST)
> +DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_COPY_END, "GOACC_single_copy_end",
> +                BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_OFFLOAD_REGISTER, "GOMP_offload_register_ver",
>                 BT_FN_VOID_UINT_PTR_INT_PTR, ATTR_NOTHROW_LIST)
>  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_OFFLOAD_UNREGISTER,

These should use 'DEF_GOACC_BUILTIN_ONLY' instead of 'DEF_GOACC_BUILTIN':
we're expecting them to be all handled inside the compiler, not resulting
in a libgomp function call.

It looks a bit strange to group the 'GOACC' ones in between the 'GOMP'
one, so I split that up instead.

> --- a/gcc/omp-offload.c
> +++ b/gcc/omp-offload.c

> +int
> +execute_oacc_gimple_workers (void)
> +{
> +  oacc_do_neutering ();
> +  calculate_dominance_info (CDI_DOMINATORS);
> +  return 0;
> +}

Similar to discussion re "[OpenACC] Extract 'pass_oacc_loop_designation'
out of 'pass_oacc_device_lower'": "given 'TODO_cleanup_cfg' as part of
'todo_flags_finish' for new [pass], we no longer need [manual
'calculate_dominance_info (CDI_DOMINATORS)'], as far as I can tell.  So I
removed [that] -- but please do tell if there is a reason to keep it".

The guts of the implementation are in a new, separate file (thanks!), and
I see no reason for 'execute_oacc_gimple_workers' plus the following
bits:

> +const pass_data pass_data_oacc_gimple_workers =
> +{
> +  GIMPLE_PASS, /* type */
> +  "oaccworkers", /* name */
> +  OPTGROUP_OMP, /* optinfo_flags */
> +  TV_NONE, /* tv_id */
> +  PROP_cfg, /* properties_required */
> +  0, /* properties_provided */
> +  0, /* properties_destroyed */
> +  0, /* todo_flags_start */
> +  TODO_update_ssa | TODO_cleanup_cfg, /* todo_flags_finish */
> +};
> +
> +class pass_oacc_gimple_workers : public gimple_opt_pass
> +{
> +public:
> +  pass_oacc_gimple_workers (gcc::context *ctxt)
> +    : gimple_opt_pass (pass_data_oacc_gimple_workers, ctxt)
> +  {}
> +
> +  /* opt_pass methods: */
> +  virtual bool gate (function *)
> +  {
> +    return flag_openacc && targetm.goacc.worker_partitioning;
> +  };
> +
> +  virtual unsigned int execute (function *)
> +    {
> +      return execute_oacc_gimple_workers ();
> +    }
> +
> +}; // class pass_oacc_gimple_workers

> +gimple_opt_pass *
> +make_pass_oacc_gimple_workers (gcc::context *ctxt)
> +{
> +  return new pass_oacc_gimple_workers (ctxt);
> +}

... to reside in 'gcc/omp-offload.c', so I moved all that into the new
file, too.  Relationship with related OpenACC passes is apparent via
'gcc/passes.def'.

> --- a/gcc/omp-offload.h
> +++ b/gcc/omp-offload.h

> +extern int oacc_fn_attrib_level (tree attr);

Removed, already declared elsewhere in that file.

> --- a/gcc/target.def
> +++ b/gcc/target.def
> @@ -1742,6 +1742,19 @@ way.",
>  tree, (tree var, int level),
>  NULL)
>
> +DEFHOOK
> +(create_propagation_record,
> +"Create a record used to propagate local-variable state from an active\n\
> +worker to other workers.  A possible implementation might adjust the type\n\
> +of REC to place the new variable in shared GPU memory.",
> +tree, (tree rec, bool sender, const char *name),
> +default_goacc_create_propagation_record)

To make this easier to follow on a cursory reading wherever calls to
'goacc.create_propagation_record' appear, I renamed that to
'create_worker_broadcast_record'.  It remains to be seen (later) whether
it makes sense for (nvptx) vector state propagation to use this interface
(then to be renamed) or a new additional one.

> +DEFHOOKPOD
> +(worker_partitioning,
> +"Use gimple transformation for worker neutering/broadcasting.",
> +bool, false)
> +
>  HOOK_VECTOR_END (goacc)

As (at least currently) it wouldn't be possible to have
'targetm.goacc.worker_partitioning' without
'targetm.goacc.create_worker_propagation_record' being different from
'default_goacc_create_worker_propagation_record', and thus the latter
effectively being 'gcc_unreachable' dead code, I removed the latter and
the former, and instead now "Presence of
[targetm.goacc.create_worker_propagation_record] indicates that middle
end neutering/broadcasting be used".

For consistency, I moved the relevant GCN back end adjustments
('gcn_goacc_adjust_propagation_record' ->
'gcn_goacc_create_worker_propagation_record') from the forthcoming
"amdgcn: Enable OpenACC worker partitioning for AMD GCN" changes into
this commit here.

Pushed "openacc: Middle-end worker-partitioning support" to master branch
in commit e2a58ed6dc5293602d0d168475109caa81ad0f0d, see attached.


Grüße
 Thomas


Thomas Schwinge Aug. 9, 2021, 1:32 p.m. UTC | #6
Hi!

On 2021-08-06T10:25:22+0100, Julian Brown <julian@codesourcery.com> wrote:
> On Wed, 4 Aug 2021 15:56:49 +0200
> Thomas Schwinge <thomas@codesourcery.com> wrote:
>> a nontrivial amount of data structures/logic/code did get
>> duplicated from the nvptx back end, and modified slightly or
>> not-so-slightly (RTL vs. GIMPLE plus certain implementation
>> "details").
>>
>> We should at least cross reference the two instances, to make sure
>> that any changes to one are also propagated to the other.  (I'll take
>> care.)
>
> OK, thanks,

Pushed "Cross-reference parts adapted in
'gcc/omp-oacc-neuter-broadcast.cc'" to master branch in
commit 62f01243fb27030b8d99c671f27349c2e7465edc, see attached.

>> And then, do you (or anyone else, of course) happen to have any clever
>> idea about how to avoid the duplication, and somehow combine the RTL
>> vs. GIMPLE implementations?  Given that we nowadays may use C++ -- do
>> you foresee it feasible to have an abstract base class capturing
>> basically the data structures, logic, common code, and then
>> RTL-specialized plus GIMPLE-specialized classes inheriting from that?
>
> I suppose one could either use "old-style" inheritance, or maybe do
> it with templates.

Or, as my WIP would show: both of these.  ;-) (To be posted later.)

> There's probably both costs & benefits when it comes
> to maintenance, either way -- having this code shared would mean any
> changes need testing for both nvptx & GCN targets, and risks making it
> harder to follow. OTOH, like you say, changes would only need to be
> made in one place.


> TBH, I'd spend effort on trying to integrate the SESE code (if it'd be
> beneficial) first, before trying to de-duplicate those other bits.

Spending effort on that may make sense, but I'm not able to do that as
part of this task here, because that's new development and related
performance etc. analysis -- which additionally I don't know much about
in the GCN context.


Grüße
 Thomas


Thomas Schwinge Aug. 16, 2021, 10:34 a.m. UTC | #7
Hi!

On 2021-08-06T09:49:58+0100, Julian Brown <julian@codesourcery.com> wrote:
> On Wed, 4 Aug 2021 15:13:30 +0200
> Thomas Schwinge <thomas@codesourcery.com> wrote:
>
>> 'oacc_do_neutering' is the 'execute' function of the pass, so that
>> means every time this executes, a fresh 'field_map' is set up, no
>> state persists across runs (assuming I'm understanding that
>> correctly).  Why don't we simply use standard (non-GC) memory
>> management for that?  "For convenience" shall be fine as an answer
>> ;-) -- but maybe instead of figuring out the right GC annotations,
>> changing the memory management will be easier?  (Or, of course, maybe
>> I completely misunderstood that?)
>
> I suspect you're right, and there's no need for this to be GC-allocated
> memory. If non-standard memory allocation will work out fine, we should

("non-GC", I suppose.)

> probably use that instead.

Pushed "Avoid 'GTY' use for 'gcc/omp-oacc-neuter-broadcast.cc:field_map'"
to master branch in commit 049eda8274b7394523238b17ab12c3e2889f253e, see
attached.


Grüße
 Thomas


Thomas Schwinge Aug. 16, 2021, 10:34 a.m. UTC | #8
Hi!

On 2021-03-02T04:20:11-0800, Julian Brown <julian@codesourcery.com> wrote:
> --- /dev/null
> +++ b/gcc/oacc-neuter-bcast.c

Allocated here:

> +/* Sets of SSA_NAMES or VAR_DECLs to propagate.  */
> +typedef hash_set<tree> propagation_set;
> +
> +static void
> +find_ssa_names_to_propagate ([...],
> +                          vec<propagation_set *> *prop_set)
> +{

> +                   if (!(*prop_set)[def_bb->index])
> +                     (*prop_set)[def_bb->index] = new propagation_set;

> +                   if (!(*prop_set)[def_bb->index])
> +                     (*prop_set)[def_bb->index] = new propagation_set;

..., and here:

> +static void
> +find_local_vars_to_propagate ([...],
> +                           vec<propagation_set *> *prop_set)
> +{

> +                   if (!(*prop_set)[block->index])
> +                     (*prop_set)[block->index] = new propagation_set;

..., and deallocated here:

> +static void
> +neuter_worker_single ([...],
> +                   vec<propagation_set *> *prop_set,
> +                   [...])
> +{

> +       propagation_set *ws_prop = (*prop_set)[block->index];
> +
> +       if (ws_prop)
> +         {
> +           [...]
> +           delete ws_prop;
> +           (*prop_set)[block->index] = 0;
> +         }

..., and defined here:

> +void
> +oacc_do_neutering (void)
> +{

> +  vec<propagation_set *> prop_set;
> +  prop_set.create (last_basic_block_for_fn (cfun));
> +
> +  for (unsigned i = 0; i < last_basic_block_for_fn (cfun); i++)
> +    prop_set.quick_push (0);

I recently learned about 'safe_grow_cleared', which allows for
simplifying this loop.
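For illustration, the GCC call would be something along the lines of
'prop_set.safe_grow_cleared (last_basic_block_for_fn (cfun));' (the exact
argument list here is an assumption, not taken from the patch).  The same
create-then-push-nulls vs. grow-and-zero pattern, sketched self-contained
with std::vector:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct propagation_set;  // stand-in for the per-block hash_set<tree>

// Original shape: reserve capacity, then push one null slot per basic block.
std::vector<propagation_set *> init_with_loop (std::size_t n_blocks)
{
  std::vector<propagation_set *> prop_set;
  prop_set.reserve (n_blocks);
  for (std::size_t i = 0; i < n_blocks; i++)
    prop_set.push_back (nullptr);
  return prop_set;
}

// safe_grow_cleared-style shape: one grow-and-zero call replaces the loop.
std::vector<propagation_set *> init_with_grow_cleared (std::size_t n_blocks)
{
  std::vector<propagation_set *> prop_set;
  prop_set.resize (n_blocks, nullptr);
  return prop_set;
}
```

Both produce an all-null vector of the same length; the second says so in
one statement.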

> +  find_ssa_names_to_propagate ([...], &prop_set);

> +  find_local_vars_to_propagate ([...], &prop_set);

> +  neuter_worker_single ([...], &prop_set, [...]);
> +
> +  prop_set.release ();

It seems appropriate to add a check that 'neuter_worker_single' has
indeed handled/'delete'd all these.
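That check amounts to asserting that every slot is null again before
'release' (in GCC terms, something like 'gcc_checking_assert (!prop_set[i])'
per block; that phrasing is hypothetical).  A self-contained sketch of the
invariant being checked:

```cpp
#include <cassert>
#include <vector>

// Invariant after neuter_worker_single: every per-block set has been
// consumed and deleted, i.e. every slot in the vector is null again.
bool all_slots_released (const std::vector<int *> &prop_set)
{
  for (int *slot : prop_set)
    if (slot)
      return false;
  return true;
}
```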

Pushed "Clarify memory management for 'prop_set' in
'gcc/omp-oacc-neuter-broadcast.cc'" to master branch in
commit 7b9d99e615212c24cecae4202d8def9aa5e71809, see attached.


Regards
 Thomas


Thomas Schwinge Aug. 16, 2021, 10:34 a.m. UTC | #9
Hi!

On 2021-03-02T04:20:11-0800, Julian Brown <julian@codesourcery.com> wrote:
> --- /dev/null
> +++ b/gcc/oacc-neuter-bcast.c

Allocated here:

> +static parallel_g *
> +omp_sese_find_par (bb_stmt_map_t *map, parallel_g *par, basic_block block)
> +{

> +       par = new parallel_g ([...]);

> +               par = new parallel_g (par, mask);

> +    par = new parallel_g ([...]);

> +  return par;
> +}

> +static parallel_g *
> +omp_sese_discover_pars (bb_stmt_map_t *map)
> +{

> +  parallel_g *par = omp_sese_find_par (map, 0, block);

> +  return par;
> +}

..., and used here:

> +void
> +oacc_do_neutering (void)
> +{

> +  parallel_g *par = omp_sese_discover_pars (&bb_stmt_map);
> +  populate_single_mode_bitmaps (par, [...]);

> +  find_ssa_names_to_propagate (par, [...]);

> +  find_partitioned_var_uses (par, [...]);
> +  find_local_vars_to_propagate (par, [...]);

> +  neuter_worker_single (par, [...]);

... but never released; memory leak.
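The fix is a single 'delete par;' before the pass finishes: parallel_g's
destructor already deletes 'inner' and 'next' recursively, so freeing the
root frees the whole tree.  A minimal, self-contained analog of that
ownership scheme (names here are illustrative, not the pass's):

```cpp
#include <cassert>

// Minimal analog of parallel_g's ownership: the constructor links a new
// node into its parent's chain of children, and the destructor deletes
// the first child and next sibling, so deleting the root frees everything.
struct node
{
  node *next = nullptr;   // next sibling
  node *inner = nullptr;  // first child
  static int live;        // instance counter, for the check below
  node (node *parent)
  {
    ++live;
    if (parent)
      {
	next = parent->inner;
	parent->inner = this;
      }
  }
  ~node ()
  {
    --live;
    delete inner;
    delete next;
  }
};
int node::live = 0;

int build_and_free ()
{
  node *root = new node (nullptr);
  new node (root);   // child 1, linked into root->inner
  new node (root);   // child 2, linked ahead of child 1
  delete root;	     // the one statement the pass was missing
  return node::live; // 0 iff the whole tree was freed
}
```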

Pushed "Plug 'par' memory leak in
'gcc/omp-oacc-neuter-broadcast.cc:execute_omp_oacc_neuter_broadcast'" to
master branch in commit df98015fb7db2ed754a7c154669bc7777f8e1612, see
attached.


Regards
 Thomas


diff mbox series

Patch

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index a63c5d9cab6..c3be0633932 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1500,6 +1500,7 @@  OBJS = \
 	omp-offload.o \
 	omp-expand.o \
 	omp-general.o \
+	oacc-neuter-bcast.o \
 	omp-low.o \
 	omp-oacc-kernels-decompose.o \
 	omp-simd-clone.o \
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 94927ea7b2b..9936fa6b6f9 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -6253,6 +6253,16 @@  adjusted variable declaration needs to be expanded to RTL in a non-standard
 way.
 @end deftypefn
 
+@deftypevr {Target Hook} bool TARGET_GOACC_WORKER_PARTITIONING
+Use gimple transformation for worker neutering/broadcasting.
+@end deftypevr
+
+@deftypefn {Target Hook} tree TARGET_GOACC_CREATE_PROPAGATION_RECORD (tree @var{rec}, bool @var{sender}, const char *@var{name})
+Create a record used to propagate local-variable state from an active
+worker to other workers.  A possible implementation might adjust the type
+of REC to place the new variable in shared GPU memory.
+@end deftypefn
+
 @node Anchored Addresses
 @section Anchored Addresses
 @cindex anchored addresses
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index b8c23cf6db5..a10aed1d78d 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -4223,6 +4223,10 @@  address;  but often a machine-dependent strategy can generate better code.
 
 @hook TARGET_GOACC_ADJUST_PRIVATE_DECL
 
+@hook TARGET_GOACC_WORKER_PARTITIONING
+
+@hook TARGET_GOACC_CREATE_PROPAGATION_RECORD
+
 @node Anchored Addresses
 @section Anchored Addresses
 @cindex anchored addresses
diff --git a/gcc/oacc-neuter-bcast.c b/gcc/oacc-neuter-bcast.c
new file mode 100644
index 00000000000..e258b88bf66
--- /dev/null
+++ b/gcc/oacc-neuter-bcast.c
@@ -0,0 +1,1471 @@ 
+/* Implement worker partitioning for OpenACC via neutering/broadcasting scheme.
+
+   Copyright (C) 2015-2021 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "cgraph.h"
+#include "pretty-print.h"
+#include "fold-const.h"
+#include "gimplify.h"
+#include "gimple-iterator.h"
+#include "gimple-walk.h"
+#include "tree-inline.h"
+#include "langhooks.h"
+#include "omp-general.h"
+#include "omp-low.h"
+#include "gimple-pretty-print.h"
+#include "cfghooks.h"
+#include "insn-config.h"
+#include "recog.h"
+#include "internal-fn.h"
+#include "bitmap.h"
+#include "tree-nested.h"
+#include "stor-layout.h"
+#include "tree-ssa-threadupdate.h"
+#include "tree-into-ssa.h"
+#include "splay-tree.h"
+#include "target.h"
+#include "cfgloop.h"
+#include "tree-cfg.h"
+#include "omp-offload.h"
+#include "attribs.h"
+#include "oacc-neuter-bcast.h"
+
+/* Loop structure of the function.  The entire function is described as
+   a NULL loop.  */
+
+struct parallel_g
+{
+  /* Parent parallel.  */
+  parallel_g *parent;
+
+  /* Next sibling parallel.  */
+  parallel_g *next;
+
+  /* First child parallel.  */
+  parallel_g *inner;
+
+  /* Partitioning mask of the parallel.  */
+  unsigned mask;
+
+  /* Partitioning used within inner parallels.  */
+  unsigned inner_mask;
+
+  /* Location of the parallel's forked and join blocks.  The forked block
+     is the first block in the parallel, and the join block is the first
+     block after the partition.  */
+  basic_block forked_block;
+  basic_block join_block;
+
+  gimple *forked_stmt;
+  gimple *join_stmt;
+
+  gimple *fork_stmt;
+  gimple *joining_stmt;
+
+  /* Basic blocks in this parallel, but not in child parallels.  The
+     FORKED and JOINING blocks are in the partition.  The FORK and JOIN
+     blocks are not.  */
+  auto_vec<basic_block> blocks;
+
+  tree record_type;
+  tree sender_decl;
+  tree receiver_decl;
+
+public:
+  parallel_g (parallel_g *parent, unsigned mode);
+  ~parallel_g ();
+};
+
+/* The constructor links the new parallel into its parent's chain of
+   children.  */
+
+parallel_g::parallel_g (parallel_g *parent_, unsigned mask_)
+  :parent (parent_), next (0), inner (0), mask (mask_), inner_mask (0)
+{
+  forked_block = join_block = 0;
+  forked_stmt = join_stmt = NULL;
+  fork_stmt = joining_stmt = NULL;
+
+  record_type = NULL_TREE;
+  sender_decl = NULL_TREE;
+  receiver_decl = NULL_TREE;
+
+  if (parent)
+    {
+      next = parent->inner;
+      parent->inner = this;
+    }
+}
+
+parallel_g::~parallel_g ()
+{
+  delete inner;
+  delete next;
+}
+
+static bool
+local_var_based_p (tree decl)
+{
+  switch (TREE_CODE (decl))
+    {
+    case VAR_DECL:
+      return !is_global_var (decl);
+
+    case COMPONENT_REF:
+    case BIT_FIELD_REF:
+    case ARRAY_REF:
+      return local_var_based_p (TREE_OPERAND (decl, 0));
+
+    default:
+      return false;
+    }
+}
+
+/* Map of basic blocks to gimple stmts.  */
+typedef hash_map<basic_block, gimple *> bb_stmt_map_t;
+
+/* Calls to OpenACC routines are made by all workers/wavefronts/warps, since
+   the routine likely contains partitioned loops (otherwise it will do its
+   own neutering and variable propagation).  Return TRUE if a function call
+   CALL should be made in (worker) single mode rather than redundant
+   mode.  */
+
+static bool
+omp_sese_active_worker_call (gcall *call)
+{
+#define GOMP_DIM_SEQ GOMP_DIM_MAX
+  tree fndecl = gimple_call_fndecl (call);
+
+  if (!fndecl)
+    return true;
+
+  tree attrs = oacc_get_fn_attrib (fndecl);
+
+  if (!attrs)
+    return true;
+
+  int level = oacc_fn_attrib_level (attrs);
+
+  /* Neither regular functions nor "seq" routines should be run by all threads
+     in worker-single mode.  */
+  return level == -1 || level == GOMP_DIM_SEQ;
+#undef GOMP_DIM_SEQ
+}
+
+/* Split basic blocks so that each forked and join unspec is at the
+   start of its basic block.  Thus afterwards each block will have a
+   single partitioning mode.  We also do the same for return
+   statements, as they are executed by every thread.  Populate MAP
+   with head and tail blocks.  We also clear the BB visited flag,
+   which is used when finding partitions.  */
+
+static void
+omp_sese_split_blocks (bb_stmt_map_t *map)
+{
+  auto_vec<gimple *> worklist;
+  basic_block block;
+
+  /* Locate all the reorg instructions of interest.  */
+  FOR_ALL_BB_FN (block, cfun)
+    {
+      /* Clear the visited flag, for use by the parallel locator.  */
+      block->flags &= ~BB_VISITED;
+
+      for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	   !gsi_end_p (gsi);
+	   gsi_next (&gsi))
+	{
+	  gimple *stmt = gsi_stmt (gsi);
+
+	  if (gimple_call_internal_p (stmt, IFN_UNIQUE))
+	    {
+	      enum ifn_unique_kind k = ((enum ifn_unique_kind)
+		TREE_INT_CST_LOW (gimple_call_arg (stmt, 0)));
+
+	      if (k == IFN_UNIQUE_OACC_JOIN)
+		worklist.safe_push (stmt);
+	      else if (k == IFN_UNIQUE_OACC_FORK)
+		{
+		  gcc_assert (gsi_one_before_end_p (gsi));
+		  basic_block forked_block = single_succ (block);
+		  gimple_stmt_iterator gsi2 = gsi_start_bb (forked_block);
+
+		  /* We push a NOP as a placeholder for the "forked" stmt.
+		     This is then recognized in omp_sese_find_par.  */
+		  gimple *nop = gimple_build_nop ();
+		  gsi_insert_before (&gsi2, nop, GSI_SAME_STMT);
+
+		  worklist.safe_push (nop);
+		}
+	    }
+	  else if (gimple_code (stmt) == GIMPLE_RETURN
+		   || gimple_code (stmt) == GIMPLE_COND
+		   || gimple_code (stmt) == GIMPLE_SWITCH
+		   || (gimple_code (stmt) == GIMPLE_CALL
+		       && !gimple_call_internal_p (stmt)
+		       && !omp_sese_active_worker_call (as_a <gcall *> (stmt))))
+	    worklist.safe_push (stmt);
+	  else if (is_gimple_assign (stmt))
+	    {
+	      tree lhs = gimple_assign_lhs (stmt);
+
+	      /* Force assignments to components/fields/elements of local
+		 aggregates into fully-partitioned (redundant) mode.  This
+		 avoids having to broadcast the whole aggregate.  The RHS of
+		 the assignment will be propagated using the normal
+		 mechanism.  */
+
+	      switch (TREE_CODE (lhs))
+		{
+		case COMPONENT_REF:
+		case BIT_FIELD_REF:
+		case ARRAY_REF:
+		  {
+		    tree aggr = TREE_OPERAND (lhs, 0);
+
+		    if (local_var_based_p (aggr))
+		      worklist.safe_push (stmt);
+		  }
+		  break;
+
+		default:
+		  ;
+		}
+	    }
+	}
+    }
+
+  /* Split blocks on the worklist.  */
+  unsigned ix;
+  gimple *stmt;
+
+  for (ix = 0; worklist.iterate (ix, &stmt); ix++)
+    {
+      basic_block block = gimple_bb (stmt);
+
+      if (gimple_code (stmt) == GIMPLE_COND)
+	{
+	  gcond *orig_cond = as_a <gcond *> (stmt);
+	  tree_code code = gimple_expr_code (orig_cond);
+	  tree pred = make_ssa_name (boolean_type_node);
+	  gimple *asgn = gimple_build_assign (pred, code,
+			   gimple_cond_lhs (orig_cond),
+			   gimple_cond_rhs (orig_cond));
+	  gcond *new_cond
+	    = gimple_build_cond (NE_EXPR, pred, boolean_false_node,
+				 gimple_cond_true_label (orig_cond),
+				 gimple_cond_false_label (orig_cond));
+
+	  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+	  gsi_insert_before (&gsi, asgn, GSI_SAME_STMT);
+	  gsi_replace (&gsi, new_cond, true);
+
+	  edge e = split_block (block, asgn);
+	  block = e->dest;
+	  map->get_or_insert (block) = new_cond;
+	}
+      else if ((gimple_code (stmt) == GIMPLE_CALL
+		&& !gimple_call_internal_p (stmt))
+	       || is_gimple_assign (stmt))
+	{
+	  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+	  gsi_prev (&gsi);
+
+	  edge call = split_block (block, gsi_stmt (gsi));
+
+	  gimple *call_stmt = gsi_stmt (gsi_start_bb (call->dest));
+
+	  edge call_to_ret = split_block (call->dest, call_stmt);
+
+	  map->get_or_insert (call_to_ret->src) = call_stmt;
+	}
+      else
+	{
+	  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
+	  gsi_prev (&gsi);
+
+	  if (gsi_end_p (gsi))
+	    map->get_or_insert (block) = stmt;
+	  else
+	    {
+	      /* Split the block before the stmt; the stmt is in the new
+		 block.  */
+	      edge e = split_block (block, gsi_stmt (gsi));
+
+	      block = e->dest;
+	      map->get_or_insert (block) = stmt;
+	    }
+	}
+    }
+}
+
+static const char *
+mask_name (unsigned mask)
+{
+  switch (mask)
+    {
+    case 0: return "gang redundant";
+    case 1: return "gang partitioned";
+    case 2: return "worker partitioned";
+    case 3: return "gang+worker partitioned";
+    case 4: return "vector partitioned";
+    case 5: return "gang+vector partitioned";
+    case 6: return "worker+vector partitioned";
+    case 7: return "fully partitioned";
+    default: return "<illegal>";
+    }
+}
+
+/* Dump this parallel and all its inner parallels.  */
+
+static void
+omp_sese_dump_pars (parallel_g *par, unsigned depth)
+{
+  fprintf (dump_file, "%u: mask %d (%s) head=%d, tail=%d\n",
+	   depth, par->mask, mask_name (par->mask),
+	   par->forked_block ? par->forked_block->index : -1,
+	   par->join_block ? par->join_block->index : -1);
+
+  fprintf (dump_file, "    blocks:");
+
+  basic_block block;
+  for (unsigned ix = 0; par->blocks.iterate (ix, &block); ix++)
+    fprintf (dump_file, " %d", block->index);
+  fprintf (dump_file, "\n");
+  if (par->inner)
+    omp_sese_dump_pars (par->inner, depth + 1);
+
+  if (par->next)
+    omp_sese_dump_pars (par->next, depth);
+}
+
+/* If BLOCK contains a fork/join marker, process it to create or
+   terminate a loop structure.  Add this block to the current loop,
+   and then walk successor blocks.  */
+
+static parallel_g *
+omp_sese_find_par (bb_stmt_map_t *map, parallel_g *par, basic_block block)
+{
+  if (block->flags & BB_VISITED)
+    return par;
+  block->flags |= BB_VISITED;
+
+  if (gimple **stmtp = map->get (block))
+    {
+      gimple *stmt = *stmtp;
+
+      if (gimple_code (stmt) == GIMPLE_COND
+	  || gimple_code (stmt) == GIMPLE_SWITCH
+	  || gimple_code (stmt) == GIMPLE_RETURN
+	  || (gimple_code (stmt) == GIMPLE_CALL
+	      && !gimple_call_internal_p (stmt))
+	  || is_gimple_assign (stmt))
+	{
+	  /* A single block that is forced to be at the maximum partition
+	     level.  Make a singleton par for it.  */
+	  par = new parallel_g (par, GOMP_DIM_MASK (GOMP_DIM_GANG)
+				   | GOMP_DIM_MASK (GOMP_DIM_WORKER)
+				   | GOMP_DIM_MASK (GOMP_DIM_VECTOR));
+	  par->forked_block = block;
+	  par->forked_stmt = stmt;
+	  par->blocks.safe_push (block);
+	  par = par->parent;
+	  goto walk_successors;
+	}
+      else if (gimple_nop_p (stmt))
+	{
+	  basic_block pred = single_pred (block);
+	  gcc_assert (pred);
+	  gimple_stmt_iterator gsi = gsi_last_bb (pred);
+	  gimple *final_stmt = gsi_stmt (gsi);
+
+	  if (gimple_call_internal_p (final_stmt, IFN_UNIQUE))
+	    {
+	      gcall *call = as_a <gcall *> (final_stmt);
+	      enum ifn_unique_kind k = ((enum ifn_unique_kind)
+		TREE_INT_CST_LOW (gimple_call_arg (call, 0)));
+
+	      if (k == IFN_UNIQUE_OACC_FORK)
+		{
+		  HOST_WIDE_INT dim
+		    = TREE_INT_CST_LOW (gimple_call_arg (call, 2));
+		  unsigned mask = (dim >= 0) ? GOMP_DIM_MASK (dim) : 0;
+
+		  par = new parallel_g (par, mask);
+		  par->forked_block = block;
+		  par->forked_stmt = final_stmt;
+		  par->fork_stmt = stmt;
+		}
+	      else
+		gcc_unreachable ();
+	    }
+	  else
+	    gcc_unreachable ();
+	}
+      else if (gimple_call_internal_p (stmt, IFN_UNIQUE))
+	{
+	  gcall *call = as_a <gcall *> (stmt);
+	  enum ifn_unique_kind k = ((enum ifn_unique_kind)
+	    TREE_INT_CST_LOW (gimple_call_arg (call, 0)));
+	  if (k == IFN_UNIQUE_OACC_JOIN)
+	    {
+	      HOST_WIDE_INT dim = TREE_INT_CST_LOW (gimple_call_arg (stmt, 2));
+	      unsigned mask = (dim >= 0) ? GOMP_DIM_MASK (dim) : 0;
+
+	      gcc_assert (par->mask == mask);
+	      par->join_block = block;
+	      par->join_stmt = stmt;
+	      par = par->parent;
+	    }
+	  else
+	    gcc_unreachable ();
+	}
+      else
+	gcc_unreachable ();
+    }
+
+  if (par)
+    /* Add this block onto the current loop's list of blocks.  */
+    par->blocks.safe_push (block);
+  else
+    /* This must be the entry block.  Create a NULL parallel.  */
+    par = new parallel_g (0, 0);
+
+walk_successors:
+  /* Walk successor blocks.  */
+  edge e;
+  edge_iterator ei;
+
+  FOR_EACH_EDGE (e, ei, block->succs)
+    omp_sese_find_par (map, par, e->dest);
+
+  return par;
+}
+
+/* DFS walk the CFG looking for fork & join markers.  Construct
+   loop structures as we go.  MAP is a mapping of basic blocks
+   to head & tail markers, discovered when splitting blocks.  This
+   speeds up the discovery.  We rely on the BB visited flag having
+   been cleared when splitting blocks.  */
+
+static parallel_g *
+omp_sese_discover_pars (bb_stmt_map_t *map)
+{
+  basic_block block;
+
+  /* Mark exit blocks as visited.  */
+  block = EXIT_BLOCK_PTR_FOR_FN (cfun);
+  block->flags |= BB_VISITED;
+
+  /* And entry block as not.  */
+  block = ENTRY_BLOCK_PTR_FOR_FN (cfun);
+  block->flags &= ~BB_VISITED;
+
+  parallel_g *par = omp_sese_find_par (map, 0, block);
+
+  if (dump_file)
+    {
+      fprintf (dump_file, "\nLoops\n");
+      omp_sese_dump_pars (par, 0);
+      fprintf (dump_file, "\n");
+    }
+
+  return par;
+}
+
+static void
+populate_single_mode_bitmaps (parallel_g *par, bitmap worker_single,
+			      bitmap vector_single, unsigned outer_mask,
+			      int depth)
+{
+  unsigned mask = outer_mask | par->mask;
+
+  basic_block block;
+
+  for (unsigned i = 0; par->blocks.iterate (i, &block); i++)
+    {
+      if ((mask & GOMP_DIM_MASK (GOMP_DIM_WORKER)) == 0)
+	bitmap_set_bit (worker_single, block->index);
+
+      if ((mask & GOMP_DIM_MASK (GOMP_DIM_VECTOR)) == 0)
+	bitmap_set_bit (vector_single, block->index);
+    }
+
+  if (par->inner)
+    populate_single_mode_bitmaps (par->inner, worker_single, vector_single,
+				  mask, depth + 1);
+  if (par->next)
+    populate_single_mode_bitmaps (par->next, worker_single, vector_single,
+				  outer_mask, depth);
+}
+
+/* A map from SSA names or var decls to record fields.  */
+
+typedef hash_map<tree, tree> field_map_t;
+
+/* For each propagation record type, this is a map from SSA names or var decls
+   to propagate, to the field in the record type that should be used for
+   transmission and reception.  */
+
+typedef hash_map<tree, field_map_t *> record_field_map_t;
+
+static GTY(()) record_field_map_t *field_map;
+
+static void
+install_var_field (tree var, tree record_type)
+{
+  field_map_t *fields = *field_map->get (record_type);
+  tree name;
+  char tmp[20];
+
+  if (TREE_CODE (var) == SSA_NAME)
+    {
+      name = SSA_NAME_IDENTIFIER (var);
+      if (!name)
+	{
+	  sprintf (tmp, "_%u", (unsigned) SSA_NAME_VERSION (var));
+	  name = get_identifier (tmp);
+	}
+    }
+  else if (TREE_CODE (var) == VAR_DECL)
+    {
+      name = DECL_NAME (var);
+      if (!name)
+	{
+	  sprintf (tmp, "D_%u", (unsigned) DECL_UID (var));
+	  name = get_identifier (tmp);
+	}
+    }
+  else
+    gcc_unreachable ();
+
+  gcc_assert (!fields->get (var));
+
+  tree type = TREE_TYPE (var);
+
+  if (POINTER_TYPE_P (type)
+      && TYPE_RESTRICT (type))
+    type = build_qualified_type (type, TYPE_QUALS (type) & ~TYPE_QUAL_RESTRICT);
+
+  tree field = build_decl (BUILTINS_LOCATION, FIELD_DECL, name, type);
+
+  if (TREE_CODE (var) == VAR_DECL && type == TREE_TYPE (var))
+    {
+      SET_DECL_ALIGN (field, DECL_ALIGN (var));
+      DECL_USER_ALIGN (field) = DECL_USER_ALIGN (var);
+      TREE_THIS_VOLATILE (field) = TREE_THIS_VOLATILE (var);
+    }
+  else
+    SET_DECL_ALIGN (field, TYPE_ALIGN (type));
+
+  fields->put (var, field);
+
+  insert_field_into_struct (record_type, field);
+}
+
+/* Sets of SSA_NAMES or VAR_DECLs to propagate.  */
+typedef hash_set<tree> propagation_set;
+
+static void
+find_ssa_names_to_propagate (parallel_g *par, unsigned outer_mask,
+			     bitmap worker_single, bitmap vector_single,
+			     vec<propagation_set *> *prop_set)
+{
+  unsigned mask = outer_mask | par->mask;
+
+  if (par->inner)
+    find_ssa_names_to_propagate (par->inner, mask, worker_single,
+				 vector_single, prop_set);
+  if (par->next)
+    find_ssa_names_to_propagate (par->next, outer_mask, worker_single,
+				 vector_single, prop_set);
+
+  if (mask & GOMP_DIM_MASK (GOMP_DIM_WORKER))
+    {
+      basic_block block;
+      int ix;
+
+      for (ix = 0; par->blocks.iterate (ix, &block); ix++)
+	{
+	  for (gphi_iterator psi = gsi_start_phis (block);
+	       !gsi_end_p (psi); gsi_next (&psi))
+	    {
+	      gphi *phi = psi.phi ();
+	      use_operand_p use;
+	      ssa_op_iter iter;
+
+	      FOR_EACH_PHI_ARG (use, phi, iter, SSA_OP_USE)
+		{
+		  tree var = USE_FROM_PTR (use);
+
+		  if (TREE_CODE (var) != SSA_NAME)
+		    continue;
+
+		  gimple *def_stmt = SSA_NAME_DEF_STMT (var);
+
+		  if (gimple_nop_p (def_stmt))
+		    continue;
+
+		  basic_block def_bb = gimple_bb (def_stmt);
+
+		  if (bitmap_bit_p (worker_single, def_bb->index))
+		    {
+		      if (!(*prop_set)[def_bb->index])
+			(*prop_set)[def_bb->index] = new propagation_set;
+
+		      propagation_set *ws_prop = (*prop_set)[def_bb->index];
+
+		      ws_prop->add (var);
+		    }
+		}
+	    }
+
+	  for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	       !gsi_end_p (gsi); gsi_next (&gsi))
+	    {
+	      use_operand_p use;
+	      ssa_op_iter iter;
+	      gimple *stmt = gsi_stmt (gsi);
+
+	      FOR_EACH_SSA_USE_OPERAND (use, stmt, iter, SSA_OP_USE)
+		{
+		  tree var = USE_FROM_PTR (use);
+
+		  gimple *def_stmt = SSA_NAME_DEF_STMT (var);
+
+		  if (gimple_nop_p (def_stmt))
+		    continue;
+
+		  basic_block def_bb = gimple_bb (def_stmt);
+
+		  if (bitmap_bit_p (worker_single, def_bb->index))
+		    {
+		      if (!(*prop_set)[def_bb->index])
+			(*prop_set)[def_bb->index] = new propagation_set;
+
+		      propagation_set *ws_prop = (*prop_set)[def_bb->index];
+
+		      ws_prop->add (var);
+		    }
+		}
+	    }
+	}
+    }
+}
+
+/* Callback for walk_gimple_stmt to find RHS VAR_DECLs (uses) in a
+   statement.  */
+
+static tree
+find_partitioned_var_uses_1 (tree *node, int *, void *data)
+{
+  walk_stmt_info *wi = (walk_stmt_info *) data;
+  hash_set<tree> *partitioned_var_uses = (hash_set<tree> *) wi->info;
+
+  if (!wi->is_lhs && VAR_P (*node))
+    partitioned_var_uses->add (*node);
+
+  return NULL_TREE;
+}
+
+static void
+find_partitioned_var_uses (parallel_g *par, unsigned outer_mask,
+			   hash_set<tree> *partitioned_var_uses)
+{
+  unsigned mask = outer_mask | par->mask;
+
+  if (par->inner)
+    find_partitioned_var_uses (par->inner, mask, partitioned_var_uses);
+  if (par->next)
+    find_partitioned_var_uses (par->next, outer_mask, partitioned_var_uses);
+
+  if (mask & GOMP_DIM_MASK (GOMP_DIM_WORKER))
+    {
+      basic_block block;
+      int ix;
+
+      for (ix = 0; par->blocks.iterate (ix, &block); ix++)
+	for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	     !gsi_end_p (gsi); gsi_next (&gsi))
+	  {
+	    walk_stmt_info wi;
+	    memset (&wi, 0, sizeof (wi));
+	    wi.info = (void *) partitioned_var_uses;
+	    walk_gimple_stmt (&gsi, NULL, find_partitioned_var_uses_1, &wi);
+	  }
+    }
+}
+
+/* Gang-private variables (typically placed in a GPU's shared memory) do not
+   need to be processed by the worker-propagation mechanism.  Populate the
+   GANGPRIVATE_VARS set with any such variables found in the current
+   function.  */
+
+static void
+find_gangprivate_vars (hash_set<tree> *gangprivate_vars)
+{
+  basic_block block;
+
+  FOR_EACH_BB_FN (block, cfun)
+    {
+      for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	   !gsi_end_p (gsi);
+	   gsi_next (&gsi))
+	{
+	  gimple *stmt = gsi_stmt (gsi);
+
+	  if (gimple_call_internal_p (stmt, IFN_UNIQUE))
+	    {
+	      enum ifn_unique_kind k = ((enum ifn_unique_kind)
+		TREE_INT_CST_LOW (gimple_call_arg (stmt, 0)));
+	      if (k == IFN_UNIQUE_OACC_PRIVATE)
+		{
+		  HOST_WIDE_INT level
+		    = TREE_INT_CST_LOW (gimple_call_arg (stmt, 2));
+		  if (level != GOMP_DIM_GANG)
+		    continue;
+		  for (unsigned i = 3; i < gimple_call_num_args (stmt); i++)
+		    {
+		      tree arg = gimple_call_arg (stmt, i);
+		      gcc_assert (TREE_CODE (arg) == ADDR_EXPR);
+		      tree decl = TREE_OPERAND (arg, 0);
+		      gangprivate_vars->add (decl);
+		    }
+		}
+	    }
+	}
+    }
+}
+
+static void
+find_local_vars_to_propagate (parallel_g *par, unsigned outer_mask,
+			      hash_set<tree> *partitioned_var_uses,
+			      hash_set<tree> *gangprivate_vars,
+			      vec<propagation_set *> *prop_set)
+{
+  unsigned mask = outer_mask | par->mask;
+
+  if (par->inner)
+    find_local_vars_to_propagate (par->inner, mask, partitioned_var_uses,
+				  gangprivate_vars, prop_set);
+  if (par->next)
+    find_local_vars_to_propagate (par->next, outer_mask, partitioned_var_uses,
+				  gangprivate_vars, prop_set);
+
+  if (!(mask & GOMP_DIM_MASK (GOMP_DIM_WORKER)))
+    {
+      basic_block block;
+      int ix;
+
+      for (ix = 0; par->blocks.iterate (ix, &block); ix++)
+	{
+	  for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	       !gsi_end_p (gsi); gsi_next (&gsi))
+	    {
+	      gimple *stmt = gsi_stmt (gsi);
+	      tree var;
+	      unsigned i;
+
+	      FOR_EACH_LOCAL_DECL (cfun, i, var)
+		{
+		  if (!VAR_P (var)
+		      || is_global_var (var)
+		      || AGGREGATE_TYPE_P (TREE_TYPE (var))
+		      || !partitioned_var_uses->contains (var)
+		      || gangprivate_vars->contains (var))
+		    continue;
+
+		  if (stmt_may_clobber_ref_p (stmt, var))
+		    {
+		      if (dump_file)
+			{
+			  fprintf (dump_file, "bb %u: local variable may be "
+				   "clobbered in %s mode: ", block->index,
+				   mask_name (mask));
+			  print_generic_expr (dump_file, var, TDF_SLIM);
+			  fprintf (dump_file, "\n");
+			}
+
+		      if (!(*prop_set)[block->index])
+			(*prop_set)[block->index] = new propagation_set;
+
+		      propagation_set *ws_prop
+			= (*prop_set)[block->index];
+
+		      ws_prop->add (var);
+		    }
+		}
+	    }
+	}
+    }
+}
+
+/* Transform basic blocks FROM, TO (which may be the same block) into:
+   if (GOACC_single_start ())
+     BLOCK;
+   GOACC_barrier ();
+			      \  |  /
+			      +----+
+			      |    |        (new) predicate block
+			      +----+--
+   \  |  /   \  |  /	        |t    \
+   +----+    +----+	      +----+  |
+   |	|    |    |	===>  |    |  | f   (old) from block
+   +----+    +----+	      +----+  |
+     |       t/  \f	        |    /
+			      +----+/
+  (split  (split before       |    |        skip block
+  at end)   condition)	      +----+
+			      t/  \f
+*/
+
+static void
+worker_single_simple (basic_block from, basic_block to,
+		      hash_set<tree> *def_escapes_block)
+{
+  gimple *call, *cond;
+  tree lhs, decl;
+  basic_block skip_block;
+
+  gimple_stmt_iterator gsi = gsi_last_bb (to);
+  if (EDGE_COUNT (to->succs) > 1)
+    {
+      gcc_assert (gimple_code (gsi_stmt (gsi)) == GIMPLE_COND);
+      gsi_prev (&gsi);
+    }
+  edge e = split_block (to, gsi_stmt (gsi));
+  skip_block = e->dest;
+
+  gimple_stmt_iterator start = gsi_after_labels (from);
+
+  decl = builtin_decl_explicit (BUILT_IN_GOACC_SINGLE_START);
+  lhs = create_tmp_var (TREE_TYPE (TREE_TYPE (decl)));
+  call = gimple_build_call (decl, 0);
+  gimple_call_set_lhs (call, lhs);
+  gsi_insert_before (&start, call, GSI_NEW_STMT);
+  update_stmt (call);
+
+  cond = gimple_build_cond (EQ_EXPR, lhs,
+			    fold_convert_loc (UNKNOWN_LOCATION,
+					      TREE_TYPE (lhs),
+					      boolean_true_node),
+			    NULL_TREE, NULL_TREE);
+  gsi_insert_after (&start, cond, GSI_NEW_STMT);
+  update_stmt (cond);
+
+  edge et = split_block (from, cond);
+  et->flags &= ~EDGE_FALLTHRU;
+  et->flags |= EDGE_TRUE_VALUE;
+  /* Make the active worker the more probable path so we prefer fallthrough
+     (letting the idle workers jump around more).  */
+  et->probability = profile_probability::likely ();
+
+  edge ef = make_edge (from, skip_block, EDGE_FALSE_VALUE);
+  ef->probability = et->probability.invert ();
+
+  basic_block neutered = split_edge (ef);
+  gimple_stmt_iterator neut_gsi = gsi_last_bb (neutered);
+
+  for (gsi = gsi_start_bb (et->dest); !gsi_end_p (gsi); gsi_next (&gsi))
+    {
+      gimple *stmt = gsi_stmt (gsi);
+      ssa_op_iter iter;
+      tree var;
+
+      FOR_EACH_SSA_TREE_OPERAND (var, stmt, iter, SSA_OP_DEF)
+	{
+	  if (def_escapes_block->contains (var))
+	    {
+	      gphi *join_phi = create_phi_node (NULL_TREE, skip_block);
+	      create_new_def_for (var, join_phi,
+				  gimple_phi_result_ptr (join_phi));
+	      add_phi_arg (join_phi, var, e, UNKNOWN_LOCATION);
+
+	      tree neutered_def = copy_ssa_name (var, NULL);
+	      /* We really want "don't care" or some value representing
+		 undefined here, but optimizers will probably get rid of the
+		 zero-assignments anyway.  */
+	      gassign *zero = gimple_build_assign (neutered_def,
+				build_zero_cst (TREE_TYPE (neutered_def)));
+
+	      gsi_insert_after (&neut_gsi, zero, GSI_CONTINUE_LINKING);
+	      update_stmt (zero);
+
+	      add_phi_arg (join_phi, neutered_def, single_succ_edge (neutered),
+			   UNKNOWN_LOCATION);
+	      update_stmt (join_phi);
+	    }
+	}
+    }
+
+  gsi = gsi_start_bb (skip_block);
+
+  decl = builtin_decl_explicit (BUILT_IN_GOACC_BARRIER);
+  gimple *acc_bar = gimple_build_call (decl, 0);
+
+  gsi_insert_before (&gsi, acc_bar, GSI_SAME_STMT);
+  update_stmt (acc_bar);
+}
+
+/* A copy of omp-low.c:omp_build_component_ref, renamed.  */
+
+static tree
+oacc_build_component_ref (tree obj, tree field)
+{
+  tree field_type = TREE_TYPE (field);
+  tree obj_type = TREE_TYPE (obj);
+  if (!ADDR_SPACE_GENERIC_P (TYPE_ADDR_SPACE (obj_type)))
+    field_type = build_qualified_type
+			(field_type,
+			 KEEP_QUAL_ADDR_SPACE (TYPE_QUALS (obj_type)));
+
+  tree ret = build3 (COMPONENT_REF, field_type, obj, field, NULL);
+  if (TREE_THIS_VOLATILE (field))
+    TREE_THIS_VOLATILE (ret) |= 1;
+  if (TREE_READONLY (field))
+    TREE_READONLY (ret) |= 1;
+  return ret;
+}
+
+static tree
+build_receiver_ref (tree record_type, tree var, tree receiver_decl)
+{
+  field_map_t *fields = *field_map->get (record_type);
+  tree x = build_simple_mem_ref (receiver_decl);
+  tree field = *fields->get (var);
+  TREE_THIS_NOTRAP (x) = 1;
+  x = oacc_build_component_ref (x, field);
+  return x;
+}
+
+static tree
+build_sender_ref (tree record_type, tree var, tree sender_decl)
+{
+  field_map_t *fields = *field_map->get (record_type);
+  tree field = *fields->get (var);
+  return oacc_build_component_ref (sender_decl, field);
+}
+
+static int
+sort_by_ssa_version_or_uid (const void *p1, const void *p2)
+{
+  const tree t1 = *(const tree *)p1;
+  const tree t2 = *(const tree *)p2;
+
+  if (TREE_CODE (t1) == SSA_NAME && TREE_CODE (t2) == SSA_NAME)
+    return SSA_NAME_VERSION (t1) - SSA_NAME_VERSION (t2);
+  else if (TREE_CODE (t1) == SSA_NAME && TREE_CODE (t2) != SSA_NAME)
+    return -1;
+  else if (TREE_CODE (t1) != SSA_NAME && TREE_CODE (t2) == SSA_NAME)
+    return 1;
+  else
+    return DECL_UID (t1) - DECL_UID (t2);
+}
+
+static int
+sort_by_size_then_ssa_version_or_uid (const void *p1, const void *p2)
+{
+  const tree t1 = *(const tree *)p1;
+  const tree t2 = *(const tree *)p2;
+  unsigned HOST_WIDE_INT s1 = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (t1)));
+  unsigned HOST_WIDE_INT s2 = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (t2)));
+  if (s1 != s2)
+    return s2 - s1;
+  else
+    return sort_by_ssa_version_or_uid (p1, p2);
+}
+
+static void
+worker_single_copy (basic_block from, basic_block to,
+		    hash_set<tree> *def_escapes_block,
+		    hash_set<tree> *worker_partitioned_uses,
+		    tree record_type)
+{
+  /* If we only have virtual defs, we'll have no record type, but we still want
+     to emit single_copy_start and (particularly) single_copy_end to act as
+     a vdef source on the neutered edge representing memory writes on the
+     non-neutered edge.  */
+  if (!record_type)
+    record_type = char_type_node;
+
+  tree sender_decl
+    = targetm.goacc.create_propagation_record (record_type, true,
+					       ".oacc_worker_o");
+  tree receiver_decl
+    = targetm.goacc.create_propagation_record (record_type, false,
+					       ".oacc_worker_i");
+
+  gimple_stmt_iterator gsi = gsi_last_bb (to);
+  if (EDGE_COUNT (to->succs) > 1)
+    gsi_prev (&gsi);
+  edge e = split_block (to, gsi_stmt (gsi));
+  basic_block barrier_block = e->dest;
+
+  gimple_stmt_iterator start = gsi_after_labels (from);
+
+  tree decl = builtin_decl_explicit (BUILT_IN_GOACC_SINGLE_COPY_START);
+
+  tree lhs = create_tmp_var (TREE_TYPE (TREE_TYPE (decl)));
+
+  gimple *call = gimple_build_call (decl, 1,
+				    build_fold_addr_expr (sender_decl));
+  gimple_call_set_lhs (call, lhs);
+  gsi_insert_before (&start, call, GSI_NEW_STMT);
+  update_stmt (call);
+
+  tree conv_tmp = make_ssa_name (TREE_TYPE (receiver_decl));
+
+  gimple *conv = gimple_build_assign (conv_tmp,
+				      fold_convert (TREE_TYPE (receiver_decl),
+						    lhs));
+  update_stmt (conv);
+  gsi_insert_after (&start, conv, GSI_NEW_STMT);
+  gimple *asgn = gimple_build_assign (receiver_decl, conv_tmp);
+  gsi_insert_after (&start, asgn, GSI_NEW_STMT);
+  update_stmt (asgn);
+
+  tree zero_ptr = build_int_cst (TREE_TYPE (receiver_decl), 0);
+
+  tree recv_tmp = make_ssa_name (TREE_TYPE (receiver_decl));
+  asgn = gimple_build_assign (recv_tmp, receiver_decl);
+  gsi_insert_after (&start, asgn, GSI_NEW_STMT);
+  update_stmt (asgn);
+
+  gimple *cond = gimple_build_cond (EQ_EXPR, recv_tmp, zero_ptr, NULL_TREE,
+				    NULL_TREE);
+  update_stmt (cond);
+
+  gsi_insert_after (&start, cond, GSI_NEW_STMT);
+
+  edge et = split_block (from, cond);
+  et->flags &= ~EDGE_FALLTHRU;
+  et->flags |= EDGE_TRUE_VALUE;
+  /* Make the active worker the more probable path so we prefer fallthrough
+     (letting the idle workers jump around more).  */
+  et->probability = profile_probability::likely ();
+
+  basic_block body = et->dest;
+
+  edge ef = make_edge (from, barrier_block, EDGE_FALSE_VALUE);
+  ef->probability = et->probability.invert ();
+
+  decl = builtin_decl_explicit (BUILT_IN_GOACC_BARRIER);
+  gimple *acc_bar = gimple_build_call (decl, 0);
+
+  gimple_stmt_iterator bar_gsi = gsi_start_bb (barrier_block);
+  gsi_insert_before (&bar_gsi, acc_bar, GSI_NEW_STMT);
+
+  cond = gimple_build_cond (NE_EXPR, recv_tmp, zero_ptr, NULL_TREE, NULL_TREE);
+  gsi_insert_after (&bar_gsi, cond, GSI_NEW_STMT);
+
+  edge et2 = split_block (barrier_block, cond);
+  et2->flags &= ~EDGE_FALLTHRU;
+  et2->flags |= EDGE_TRUE_VALUE;
+  et2->probability = profile_probability::unlikely ();
+
+  basic_block exit_block = et2->dest;
+
+  basic_block copyout_block = split_edge (et2);
+  edge ef2 = make_edge (barrier_block, exit_block, EDGE_FALSE_VALUE);
+  ef2->probability = et2->probability.invert ();
+
+  gimple_stmt_iterator copyout_gsi = gsi_start_bb (copyout_block);
+
+  edge copyout_to_exit = single_succ_edge (copyout_block);
+
+  gimple_seq sender_seq = NULL;
+
+  /* Make sure we iterate over definitions in a stable order.  */
+  auto_vec<tree> escape_vec (def_escapes_block->elements ());
+  for (hash_set<tree>::iterator it = def_escapes_block->begin ();
+       it != def_escapes_block->end (); ++it)
+    escape_vec.quick_push (*it);
+  escape_vec.qsort (sort_by_ssa_version_or_uid);
+
+  for (unsigned i = 0; i < escape_vec.length (); i++)
+    {
+      tree var = escape_vec[i];
+
+      if (TREE_CODE (var) == SSA_NAME && SSA_NAME_IS_VIRTUAL_OPERAND (var))
+	continue;
+
+      tree barrier_def = 0;
+
+      if (TREE_CODE (var) == SSA_NAME)
+	{
+	  gimple *def_stmt = SSA_NAME_DEF_STMT (var);
+
+	  if (gimple_nop_p (def_stmt))
+	    continue;
+
+	  /* The barrier phi takes one result from the actual work of the
+	     block we're neutering, and the other result is constant zero of
+	     the same type.  */
+
+	  gphi *barrier_phi = create_phi_node (NULL_TREE, barrier_block);
+	  barrier_def = create_new_def_for (var, barrier_phi,
+			  gimple_phi_result_ptr (barrier_phi));
+
+	  add_phi_arg (barrier_phi, var, e, UNKNOWN_LOCATION);
+	  add_phi_arg (barrier_phi, build_zero_cst (TREE_TYPE (var)), ef,
+		       UNKNOWN_LOCATION);
+
+	  update_stmt (barrier_phi);
+	}
+      else
+	gcc_assert (TREE_CODE (var) == VAR_DECL);
+
+      /* If we have no record type, we will have no field map.  */
+      field_map_t **fields_p = field_map->get (record_type);
+      field_map_t *fields = fields_p ? *fields_p : NULL;
+
+      if (worker_partitioned_uses->contains (var)
+	  && fields
+	  && fields->get (var))
+	{
+	  tree neutered_def = make_ssa_name (TREE_TYPE (var));
+
+	  /* Receive definition from shared memory block.  */
+
+	  tree receiver_ref = build_receiver_ref (record_type, var,
+						  receiver_decl);
+	  gassign *recv = gimple_build_assign (neutered_def,
+					       receiver_ref);
+	  gsi_insert_after (&copyout_gsi, recv, GSI_CONTINUE_LINKING);
+	  update_stmt (recv);
+
+	  if (TREE_CODE (var) == VAR_DECL)
+	    {
+	      /* If it's a VAR_DECL, we only copied to an SSA temporary.  Copy
+		 to the final location now.  */
+	      gassign *asgn = gimple_build_assign (var, neutered_def);
+	      gsi_insert_after (&copyout_gsi, asgn, GSI_CONTINUE_LINKING);
+	      update_stmt (asgn);
+	    }
+	  else
+	    {
+	      /* If it's an SSA name, create a new phi at the join node to
+		 represent either the output from the active worker (the
+		 barrier) or the inactive workers (the copyout block).  */
+	      gphi *join_phi = create_phi_node (NULL_TREE, exit_block);
+	      create_new_def_for (barrier_def, join_phi,
+				  gimple_phi_result_ptr (join_phi));
+	      add_phi_arg (join_phi, barrier_def, ef2, UNKNOWN_LOCATION);
+	      add_phi_arg (join_phi, neutered_def, copyout_to_exit,
+			   UNKNOWN_LOCATION);
+	      update_stmt (join_phi);
+	    }
+
+	  /* Send definition to shared memory block.  */
+
+	  tree sender_ref = build_sender_ref (record_type, var, sender_decl);
+
+	  if (TREE_CODE (var) == SSA_NAME)
+	    {
+	      gassign *send = gimple_build_assign (sender_ref, var);
+	      gimple_seq_add_stmt (&sender_seq, send);
+	      update_stmt (send);
+	    }
+	  else if (TREE_CODE (var) == VAR_DECL)
+	    {
+	      tree tmp = make_ssa_name (TREE_TYPE (var));
+	      gassign *send = gimple_build_assign (tmp, var);
+	      gimple_seq_add_stmt (&sender_seq, send);
+	      update_stmt (send);
+	      send = gimple_build_assign (sender_ref, tmp);
+	      gimple_seq_add_stmt (&sender_seq, send);
+	      update_stmt (send);
+	    }
+	  else
+	    gcc_unreachable ();
+	}
+    }
+
+  /* It's possible for the ET->DEST block (the work done by the active thread)
+     to finish with a control-flow insn, e.g. a UNIQUE function call.  Split
+     the block and add SENDER_SEQ in the latter part to avoid having control
+     flow in the middle of a BB.  */
+
+  decl = builtin_decl_explicit (BUILT_IN_GOACC_SINGLE_COPY_END);
+  call = gimple_build_call (decl, 1, build_fold_addr_expr (sender_decl));
+  gimple_seq_add_stmt (&sender_seq, call);
+
+  gsi = gsi_last_bb (body);
+  gimple *last = gsi_stmt (gsi);
+  basic_block sender_block = split_block (body, last)->dest;
+  gsi = gsi_last_bb (sender_block);
+  gsi_insert_seq_after (&gsi, sender_seq, GSI_CONTINUE_LINKING);
+}
+
+static void
+neuter_worker_single (parallel_g *par, unsigned outer_mask,
+		      bitmap worker_single, bitmap vector_single,
+		      vec<propagation_set *> *prop_set,
+		      hash_set<tree> *partitioned_var_uses)
+{
+  unsigned mask = outer_mask | par->mask;
+
+  if ((mask & GOMP_DIM_MASK (GOMP_DIM_WORKER)) == 0)
+    {
+      basic_block block;
+
+      for (unsigned i = 0; par->blocks.iterate (i, &block); i++)
+	{
+	  bool has_defs = false;
+	  hash_set<tree> def_escapes_block;
+	  hash_set<tree> worker_partitioned_uses;
+	  unsigned j;
+	  tree var;
+
+	  FOR_EACH_SSA_NAME (j, var, cfun)
+	    {
+	      if (SSA_NAME_IS_VIRTUAL_OPERAND (var))
+		{
+		  has_defs = true;
+		  continue;
+		}
+
+	      gimple *def_stmt = SSA_NAME_DEF_STMT (var);
+
+	      if (gimple_nop_p (def_stmt))
+		continue;
+
+	      if (gimple_bb (def_stmt)->index != block->index)
+		continue;
+
+	      gimple *use_stmt;
+	      imm_use_iterator use_iter;
+	      bool uses_outside_block = false;
+	      bool worker_partitioned_use = false;
+
+	      FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, var)
+		{
+		  int blocknum = gimple_bb (use_stmt)->index;
+
+		  /* Don't propagate SSA names that are only used in the
+		     current block, unless the usage is in a phi node: that
+		     means the name left the block, then came back in at the
+		     top.  */
+		  if (blocknum != block->index
+		      || gimple_code (use_stmt) == GIMPLE_PHI)
+		    uses_outside_block = true;
+		  if (!bitmap_bit_p (worker_single, blocknum))
+		    worker_partitioned_use = true;
+		}
+
+	      if (uses_outside_block)
+		def_escapes_block.add (var);
+
+	      if (worker_partitioned_use)
+		{
+		  worker_partitioned_uses.add (var);
+		  has_defs = true;
+		}
+	    }
+
+	  propagation_set *ws_prop = (*prop_set)[block->index];
+
+	  if (ws_prop)
+	    {
+	      for (propagation_set::iterator it = ws_prop->begin ();
+		   it != ws_prop->end ();
+		   ++it)
+		{
+		  tree var = *it;
+		  if (TREE_CODE (var) == VAR_DECL)
+		    {
+		      def_escapes_block.add (var);
+		      if (partitioned_var_uses->contains (var))
+			{
+			  worker_partitioned_uses.add (var);
+			  has_defs = true;
+			}
+		    }
+		}
+
+	      delete ws_prop;
+	      (*prop_set)[block->index] = 0;
+	    }
+
+	  tree record_type = (tree) block->aux;
+
+	  if (has_defs)
+	    worker_single_copy (block, block, &def_escapes_block,
+				&worker_partitioned_uses, record_type);
+	  else
+	    worker_single_simple (block, block, &def_escapes_block);
+	}
+    }
+
+  if ((outer_mask & GOMP_DIM_MASK (GOMP_DIM_WORKER)) == 0)
+    {
+      basic_block block;
+
+      for (unsigned i = 0; par->blocks.iterate (i, &block); i++)
+	for (gimple_stmt_iterator gsi = gsi_start_bb (block);
+	     !gsi_end_p (gsi);
+	     gsi_next (&gsi))
+	  {
+	    gimple *stmt = gsi_stmt (gsi);
+
+	    if (gimple_code (stmt) == GIMPLE_CALL
+		&& !gimple_call_internal_p (stmt)
+		&& !omp_sese_active_worker_call (as_a <gcall *> (stmt)))
+	      {
+		/* If we have an OpenACC routine call in worker-single mode,
+		   place barriers before and afterwards to prevent
+		   clobbering re-used shared memory regions (as are used
+		   for AMDGCN at present, for example).  */
+		tree decl = builtin_decl_explicit (BUILT_IN_GOACC_BARRIER);
+		gsi_insert_before (&gsi, gimple_build_call (decl, 0),
+				   GSI_SAME_STMT);
+		gsi_insert_after (&gsi, gimple_build_call (decl, 0),
+				  GSI_NEW_STMT);
+	      }
+	  }
+    }
+
+  if (par->inner)
+    neuter_worker_single (par->inner, mask, worker_single, vector_single,
+			  prop_set, partitioned_var_uses);
+  if (par->next)
+    neuter_worker_single (par->next, outer_mask, worker_single, vector_single,
+			  prop_set, partitioned_var_uses);
+}
+
+
+void
+oacc_do_neutering (void)
+{
+  bb_stmt_map_t bb_stmt_map;
+  auto_bitmap worker_single, vector_single;
+
+  omp_sese_split_blocks (&bb_stmt_map);
+
+  if (dump_file)
+    {
+      fprintf (dump_file, "\n\nAfter splitting:\n\n");
+      dump_function_to_file (current_function_decl, dump_file, dump_flags);
+    }
+
+  unsigned mask = 0;
+
+  /* If this is a routine, calculate MASK as if the outer levels are already
+     partitioned.  */
+  tree attr = oacc_get_fn_attrib (current_function_decl);
+  if (attr)
+    {
+      tree dims = TREE_VALUE (attr);
+      unsigned ix;
+      for (ix = 0; ix != GOMP_DIM_MAX; ix++, dims = TREE_CHAIN (dims))
+	{
+	  tree allowed = TREE_PURPOSE (dims);
+	  if (allowed && integer_zerop (allowed))
+	    mask |= GOMP_DIM_MASK (ix);
+	}
+    }
+
+  parallel_g *par = omp_sese_discover_pars (&bb_stmt_map);
+  populate_single_mode_bitmaps (par, worker_single, vector_single, mask, 0);
+
+  basic_block bb;
+  FOR_ALL_BB_FN (bb, cfun)
+    bb->aux = NULL;
+
+  field_map = record_field_map_t::create_ggc (40);
+
+  vec<propagation_set *> prop_set;
+  prop_set.create (last_basic_block_for_fn (cfun));
+
+  for (unsigned i = 0; i < last_basic_block_for_fn (cfun); i++)
+    prop_set.quick_push (0);
+
+  find_ssa_names_to_propagate (par, mask, worker_single, vector_single,
+			       &prop_set);
+
+  hash_set<tree> partitioned_var_uses;
+  hash_set<tree> gangprivate_vars;
+
+  find_gangprivate_vars (&gangprivate_vars);
+  find_partitioned_var_uses (par, mask, &partitioned_var_uses);
+  find_local_vars_to_propagate (par, mask, &partitioned_var_uses,
+				&gangprivate_vars, &prop_set);
+
+  FOR_ALL_BB_FN (bb, cfun)
+    {
+      propagation_set *ws_prop = prop_set[bb->index];
+      if (ws_prop)
+	{
+	  tree record_type = lang_hooks.types.make_type (RECORD_TYPE);
+	  tree name = create_tmp_var_name (".oacc_ws_data_s");
+	  name = build_decl (UNKNOWN_LOCATION, TYPE_DECL, name, record_type);
+	  DECL_ARTIFICIAL (name) = 1;
+	  DECL_NAMELESS (name) = 1;
+	  TYPE_NAME (record_type) = name;
+	  TYPE_ARTIFICIAL (record_type) = 1;
+
+	  auto_vec<tree> field_vec (ws_prop->elements ());
+	  for (hash_set<tree>::iterator it = ws_prop->begin ();
+	       it != ws_prop->end (); ++it)
+	    field_vec.quick_push (*it);
+
+	  field_vec.qsort (sort_by_size_then_ssa_version_or_uid);
+
+	  field_map->put (record_type, field_map_t::create_ggc (17));
+
+	  /* Insert var fields in reverse order, so the last inserted element
+	     is the first in the structure.  */
+	  for (int i = field_vec.length () - 1; i >= 0; i--)
+	    install_var_field (field_vec[i], record_type);
+
+	  layout_type (record_type);
+
+	  bb->aux = (tree) record_type;
+	}
+    }
+
+  neuter_worker_single (par, mask, worker_single, vector_single, &prop_set,
+			&partitioned_var_uses);
+
+  prop_set.release ();
+
+  /* This doesn't seem to make a difference.  */
+  loops_state_clear (LOOP_CLOSED_SSA);
+
+  /* Neutering worker-single neutered blocks will invalidate dominance info.
+     It may be possible to incrementally update just the affected blocks, but
+     obliterate everything for now.  */
+  free_dominance_info (CDI_DOMINATORS);
+  free_dominance_info (CDI_POST_DOMINATORS);
+
+  if (dump_file)
+    {
+      fprintf (dump_file, "\n\nAfter neutering:\n\n");
+      dump_function_to_file (current_function_decl, dump_file, dump_flags);
+    }
+}
diff --git a/gcc/oacc-neuter-bcast.h b/gcc/oacc-neuter-bcast.h
new file mode 100644
index 00000000000..b3f38426e64
--- /dev/null
+++ b/gcc/oacc-neuter-bcast.h
@@ -0,0 +1,26 @@ 
+/* Implement worker partitioning for OpenACC via neutering/broadcasting scheme.
+
+   Copyright (C) 2015-2021 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_OACC_NEUTER_BCAST_H
+#define GCC_OACC_NEUTER_BCAST_H
+
+extern void oacc_do_neutering (void);
+
+#endif
diff --git a/gcc/omp-builtins.def b/gcc/omp-builtins.def
index cfbf1e67b8e..1cc807f77c0 100644
--- a/gcc/omp-builtins.def
+++ b/gcc/omp-builtins.def
@@ -75,6 +75,8 @@  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER, "GOMP_barrier",
 		  BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER_CANCEL, "GOMP_barrier_cancel",
 		  BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOACC_BUILTIN (BUILT_IN_GOACC_BARRIER, "GOACC_barrier",
+		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT, "GOMP_taskwait",
 		  BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT_DEPEND, "GOMP_taskwait_depend",
@@ -412,6 +414,12 @@  DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SINGLE_COPY_START, "GOMP_single_copy_start",
 		  BT_FN_PTR, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SINGLE_COPY_END, "GOMP_single_copy_end",
 		  BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_START, "GOACC_single_start",
+		   BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_COPY_START, "GOACC_single_copy_start",
+		   BT_FN_PTR, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOACC_BUILTIN (BUILT_IN_GOACC_SINGLE_COPY_END, "GOACC_single_copy_end",
+		   BT_FN_VOID_PTR, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_OFFLOAD_REGISTER, "GOMP_offload_register_ver",
 		  BT_FN_VOID_UINT_PTR_INT_PTR, ATTR_NOTHROW_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_OFFLOAD_UNREGISTER,
diff --git a/gcc/omp-offload.c b/gcc/omp-offload.c
index b3f543b597a..27d77532737 100644
--- a/gcc/omp-offload.c
+++ b/gcc/omp-offload.c
@@ -53,6 +53,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "attribs.h"
 #include "cfgloop.h"
 #include "context.h"
+#include "oacc-neuter-bcast.h"
 #include "convert.h"
 
 /* Describe the OpenACC looping structure of a function.  The entire
@@ -1367,6 +1368,8 @@  oacc_loop_xform_head_tail (gcall *from, int level)
       else if (gimple_call_internal_p (stmt, IFN_GOACC_REDUCTION))
 	*gimple_call_arg_ptr (stmt, 3) = replacement;
 
+      update_stmt (stmt);
+
       gsi_next (&gsi);
       while (gsi_end_p (gsi))
 	gsi = gsi_start_bb (single_succ (gsi_bb (gsi)));
@@ -1391,25 +1394,28 @@  oacc_loop_process (oacc_loop *loop)
       gcall *call;
       
       for (ix = 0; loop->ifns.iterate (ix, &call); ix++)
-	switch (gimple_call_internal_fn (call))
-	  {
-	  case IFN_GOACC_LOOP:
+	{
+	  switch (gimple_call_internal_fn (call))
 	    {
-	      bool is_e = gimple_call_arg (call, 5) == integer_minus_one_node;
-	      gimple_call_set_arg (call, 5, is_e ? e_mask_arg : mask_arg);
-	      if (!is_e)
-		gimple_call_set_arg (call, 4, chunk_arg);
-	    }
-	    break;
+	    case IFN_GOACC_LOOP:
+	      {
+		bool is_e = gimple_call_arg (call, 5) == integer_minus_one_node;
+		gimple_call_set_arg (call, 5, is_e ? e_mask_arg : mask_arg);
+		if (!is_e)
+		  gimple_call_set_arg (call, 4, chunk_arg);
+	      }
+	      break;
 
-	  case IFN_GOACC_TILE:
-	    gimple_call_set_arg (call, 3, mask_arg);
-	    gimple_call_set_arg (call, 4, e_mask_arg);
-	    break;
+	    case IFN_GOACC_TILE:
+	      gimple_call_set_arg (call, 3, mask_arg);
+	      gimple_call_set_arg (call, 4, e_mask_arg);
+	      break;
 
-	  default:
-	    gcc_unreachable ();
-	  }
+	    default:
+	      gcc_unreachable ();
+	    }
+	  update_stmt (call);
+	}
 
       unsigned dim = GOMP_DIM_GANG;
       unsigned mask = loop->mask | loop->e_mask;
@@ -1906,12 +1912,27 @@  is_sync_builtin_call (gcall *call)
   return false;
 }
 
+/* Default implementation: create a temporary variable of RECORD_TYPE if
+   SENDER is true, else a pointer to RECORD_TYPE.  */
+
+tree
+default_goacc_create_propagation_record (tree record_type, bool sender,
+					 const char *name)
+{
+  tree type = record_type;
+
+  if (!sender)
+    type = build_pointer_type (type);
+
+  return create_tmp_var (type, name);
+}
+
 /* Main entry point for oacc transformations which run on the device
    compiler after LTO, so we know what the target device is at this
    point (including the host fallback).  */
 
 static unsigned int
-execute_oacc_device_lower ()
+execute_oacc_loop_designation ()
 {
   tree attrs = oacc_get_fn_attrib (current_function_decl);
 
@@ -2051,10 +2072,36 @@  execute_oacc_device_lower ()
 	free_oacc_loop (l);
     }
 
+  free_oacc_loop (loops);
+
   /* Offloaded targets may introduce new basic blocks, which require
      dominance information to update SSA.  */
   calculate_dominance_info (CDI_DOMINATORS);
 
+  return 0;
+}
+
+int
+execute_oacc_gimple_workers (void)
+{
+  oacc_do_neutering ();
+  calculate_dominance_info (CDI_DOMINATORS);
+  return 0;
+}
+
+static unsigned int
+execute_oacc_device_lower ()
+{
+  int dims[GOMP_DIM_MAX];
+  tree attr = oacc_get_fn_attrib (current_function_decl);
+
+  if (!attr)
+    /* Not an offloaded function.  */
+    return 0;
+
+  for (unsigned i = 0; i < GOMP_DIM_MAX; i++)
+    dims[i] = oacc_get_fn_dim_size (current_function_decl, i);
+
   hash_map<tree, tree> adjusted_vars;
 
   /* Now lower internal loop functions to target-specific code
@@ -2252,8 +2299,6 @@  execute_oacc_device_lower ()
 	  }
     }
 
-  free_oacc_loop (loops);
-
   return 0;
 }
 
@@ -2294,6 +2339,70 @@  default_goacc_dim_limit (int ARG_UNUSED (axis))
 
 namespace {
 
+const pass_data pass_data_oacc_loop_designation =
+{
+  GIMPLE_PASS, /* type */
+  "oaccloops", /* name */
+  OPTGROUP_OMP, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_cfg, /* properties_required */
+  0 /* Possibly PROP_gimple_eomp.  */, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_update_ssa | TODO_cleanup_cfg
+  | TODO_rebuild_alias, /* todo_flags_finish */
+};
+
+class pass_oacc_loop_designation : public gimple_opt_pass
+{
+public:
+  pass_oacc_loop_designation (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_oacc_loop_designation, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *) { return flag_openacc; }
+
+  virtual unsigned int execute (function *)
+    {
+      return execute_oacc_loop_designation ();
+    }
+
+}; // class pass_oacc_loop_designation
+
+const pass_data pass_data_oacc_gimple_workers =
+{
+  GIMPLE_PASS, /* type */
+  "oaccworkers", /* name */
+  OPTGROUP_OMP, /* optinfo_flags */
+  TV_NONE, /* tv_id */
+  PROP_cfg, /* properties_required */
+  0, /* properties_provided */
+  0, /* properties_destroyed */
+  0, /* todo_flags_start */
+  TODO_update_ssa | TODO_cleanup_cfg, /* todo_flags_finish */
+};
+
+class pass_oacc_gimple_workers : public gimple_opt_pass
+{
+public:
+  pass_oacc_gimple_workers (gcc::context *ctxt)
+    : gimple_opt_pass (pass_data_oacc_gimple_workers, ctxt)
+  {}
+
+  /* opt_pass methods: */
+  virtual bool gate (function *)
+  {
+    return flag_openacc && targetm.goacc.worker_partitioning;
+  }
+
+  virtual unsigned int execute (function *)
+    {
+      return execute_oacc_gimple_workers ();
+    }
+
+}; // class pass_oacc_gimple_workers
+
 const pass_data pass_data_oacc_device_lower =
 {
   GIMPLE_PASS, /* type */
@@ -2326,6 +2435,18 @@  public:
 
 } // anon namespace
 
+gimple_opt_pass *
+make_pass_oacc_loop_designation (gcc::context *ctxt)
+{
+  return new pass_oacc_loop_designation (ctxt);
+}
+
+gimple_opt_pass *
+make_pass_oacc_gimple_workers (gcc::context *ctxt)
+{
+  return new pass_oacc_gimple_workers (ctxt);
+}
+
 gimple_opt_pass *
 make_pass_oacc_device_lower (gcc::context *ctxt)
 {
diff --git a/gcc/omp-offload.h b/gcc/omp-offload.h
index b91d08cd218..a6f26a7c962 100644
--- a/gcc/omp-offload.h
+++ b/gcc/omp-offload.h
@@ -29,6 +29,7 @@  extern int oacc_fn_attrib_level (tree attr);
 extern GTY(()) vec<tree, va_gc> *offload_funcs;
 extern GTY(()) vec<tree, va_gc> *offload_vars;
 
+extern int oacc_fn_attrib_level (tree attr);
 extern void omp_finish_file (void);
 extern void omp_discover_implicit_declare_target (void);
 
diff --git a/gcc/passes.def b/gcc/passes.def
index e9ed3c7bc57..f6e99ac1f4e 100644
--- a/gcc/passes.def
+++ b/gcc/passes.def
@@ -183,6 +183,8 @@  along with GCC; see the file COPYING3.  If not see
   INSERT_PASSES_AFTER (all_passes)
   NEXT_PASS (pass_fixup_cfg);
   NEXT_PASS (pass_lower_eh_dispatch);
+  NEXT_PASS (pass_oacc_loop_designation);
+  NEXT_PASS (pass_oacc_gimple_workers);
   NEXT_PASS (pass_oacc_device_lower);
   NEXT_PASS (pass_omp_device_lower);
   NEXT_PASS (pass_omp_target_link);
diff --git a/gcc/target.def b/gcc/target.def
index 00b6f8f1bc9..35e4ec92ba1 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -1742,6 +1742,19 @@  way.",
 tree, (tree var, int level),
 NULL)
 
+DEFHOOK
+(create_propagation_record,
+"Create a record used to propagate local-variable state from an active\n\
+worker to other workers.  A possible implementation might adjust the type\n\
+of REC to place the new variable in shared GPU memory.",
+tree, (tree rec, bool sender, const char *name),
+default_goacc_create_propagation_record)
+
+DEFHOOKPOD
+(worker_partitioning,
+"Use gimple transformation for worker neutering/broadcasting.",
+bool, false)
+
 HOOK_VECTOR_END (goacc)
 
 /* Functions relating to vectorization.  */
diff --git a/gcc/targhooks.h b/gcc/targhooks.h
index 39a6f82f143..19cb0e5325d 100644
--- a/gcc/targhooks.h
+++ b/gcc/targhooks.h
@@ -130,6 +130,7 @@  extern bool default_goacc_validate_dims (tree, int [], int, unsigned);
 extern int default_goacc_dim_limit (int);
 extern bool default_goacc_fork_join (gcall *, const int [], bool);
 extern void default_goacc_reduction (gcall *);
+extern tree default_goacc_create_propagation_record (tree, bool, const char *);
 
 /* These are here, and not in hooks.[ch], because not all users of
    hooks.h include tm.h, and thus we don't have CUMULATIVE_ARGS.  */
diff --git a/gcc/testsuite/c-c++-common/goacc/classify-kernels-unparallelized.c b/gcc/testsuite/c-c++-common/goacc/classify-kernels-unparallelized.c
index d4c4b2ca237..79b4cad7916 100644
--- a/gcc/testsuite/c-c++-common/goacc/classify-kernels-unparallelized.c
+++ b/gcc/testsuite/c-c++-common/goacc/classify-kernels-unparallelized.c
@@ -5,7 +5,7 @@ 
    { dg-additional-options "-fopt-info-optimized-omp" }
    { dg-additional-options "-fdump-tree-ompexp" }
    { dg-additional-options "-fdump-tree-parloops1-all" }
-   { dg-additional-options "-fdump-tree-oaccdevlow" } */
+   { dg-additional-options "-fdump-tree-oaccloops" } */
 
 #define N 1024
 
@@ -35,6 +35,6 @@  void KERNELS ()
 
 /* Check the offloaded function's classification and compute dimensions (will
    always be 1 x 1 x 1 for non-offloading compilation).
-   { dg-final { scan-tree-dump-times "(?n)Function is unparallelized OpenACC kernels offload" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccdevlow" } } */
+   { dg-final { scan-tree-dump-times "(?n)Function is unparallelized OpenACC kernels offload" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccloops" } } */
diff --git a/gcc/testsuite/c-c++-common/goacc/classify-kernels.c b/gcc/testsuite/c-c++-common/goacc/classify-kernels.c
index 16e9b9e31d1..8fcfc3f4278 100644
--- a/gcc/testsuite/c-c++-common/goacc/classify-kernels.c
+++ b/gcc/testsuite/c-c++-common/goacc/classify-kernels.c
@@ -5,7 +5,7 @@ 
    { dg-additional-options "-fopt-info-optimized-omp" }
    { dg-additional-options "-fdump-tree-ompexp" }
    { dg-additional-options "-fdump-tree-parloops1-all" }
-   { dg-additional-options "-fdump-tree-oaccdevlow" } */
+   { dg-additional-options "-fdump-tree-oaccloops" } */
 
 #define N 1024
 
@@ -31,6 +31,6 @@  void KERNELS ()
 
 /* Check the offloaded function's classification and compute dimensions (will
    always be 1 x 1 x 1 for non-offloading compilation).
-   { dg-final { scan-tree-dump-times "(?n)Function is parallelized OpenACC kernels offload" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels parallelized, oacc function \\(, , \\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccdevlow" } } */
+   { dg-final { scan-tree-dump-times "(?n)Function is parallelized OpenACC kernels offload" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels parallelized, oacc function \\(, , \\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccloops" } } */
diff --git a/gcc/testsuite/c-c++-common/goacc/classify-parallel.c b/gcc/testsuite/c-c++-common/goacc/classify-parallel.c
index 933d7664386..34259099bcf 100644
--- a/gcc/testsuite/c-c++-common/goacc/classify-parallel.c
+++ b/gcc/testsuite/c-c++-common/goacc/classify-parallel.c
@@ -4,7 +4,7 @@ 
 /* { dg-additional-options "-O2" }
    { dg-additional-options "-fopt-info-optimized-omp" }
    { dg-additional-options "-fdump-tree-ompexp" }
-   { dg-additional-options "-fdump-tree-oaccdevlow" } */
+   { dg-additional-options "-fdump-tree-oaccloops" } */
 
 #define N 1024
 
@@ -24,6 +24,6 @@  void PARALLEL ()
 
 /* Check the offloaded function's classification and compute dimensions (will
    always be 1 x 1 x 1 for non-offloading compilation).
-   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC parallel offload" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc parallel, omp target entrypoint\\)\\)" 1 "oaccdevlow" } } */
+   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC parallel offload" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc parallel, omp target entrypoint\\)\\)" 1 "oaccloops" } } */
diff --git a/gcc/testsuite/c-c++-common/goacc/classify-routine.c b/gcc/testsuite/c-c++-common/goacc/classify-routine.c
index 0b9ba6ea69f..54eddc10b3c 100644
--- a/gcc/testsuite/c-c++-common/goacc/classify-routine.c
+++ b/gcc/testsuite/c-c++-common/goacc/classify-routine.c
@@ -4,7 +4,7 @@ 
 /* { dg-additional-options "-O2" }
    { dg-additional-options "-fopt-info-optimized-omp" }
    { dg-additional-options "-fdump-tree-ompexp" }
-   { dg-additional-options "-fdump-tree-oaccdevlow" } */
+   { dg-additional-options "-fdump-tree-oaccloops" } */
 
 #define N 1024
 
@@ -26,6 +26,6 @@  void ROUTINE ()
 
 /* Check the offloaded function's classification and compute dimensions (will
    always be 1 x 1 x 1 for non-offloading compilation).
-   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC routine level 1" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(0 1, 1 1, 1 1\\), omp declare target \\(worker\\), oacc function \\(0 1, 1 0, 1 0\\)\\)\\)" 1 "oaccdevlow" } } */
+   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC routine level 1" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(0 1, 1 1, 1 1\\), omp declare target \\(worker\\), oacc function \\(0 1, 1 0, 1 0\\)\\)\\)" 1 "oaccloops" } } */
diff --git a/gcc/testsuite/c-c++-common/goacc/classify-serial.c b/gcc/testsuite/c-c++-common/goacc/classify-serial.c
index 89b5f5ed9f9..cbf7a331372 100644
--- a/gcc/testsuite/c-c++-common/goacc/classify-serial.c
+++ b/gcc/testsuite/c-c++-common/goacc/classify-serial.c
@@ -4,7 +4,7 @@ 
 /* { dg-additional-options "-O2" }
    { dg-additional-options "-fopt-info-optimized-omp" }
    { dg-additional-options "-fdump-tree-ompexp" }
-   { dg-additional-options "-fdump-tree-oaccdevlow" } */
+   { dg-additional-options "-fdump-tree-oaccloops" } */
 
 #define N 1024
 
@@ -26,6 +26,6 @@  void SERIAL ()
 
 /* Check the offloaded function's classification and compute dimensions (will
    always be 1 x 1 x 1 for non-offloading compilation).
-   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC serial offload" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc serial, omp target entrypoint\\)\\)" 1 "oaccdevlow" } } */
+   { dg-final { scan-tree-dump-times "(?n)Function is OpenACC serial offload" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+   { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc serial, omp target entrypoint\\)\\)" 1 "oaccloops" } } */
diff --git a/gcc/testsuite/gcc.dg/goacc/loop-processing-1.c b/gcc/testsuite/gcc.dg/goacc/loop-processing-1.c
index bd4c07e7d81..f234449cff8 100644
--- a/gcc/testsuite/gcc.dg/goacc/loop-processing-1.c
+++ b/gcc/testsuite/gcc.dg/goacc/loop-processing-1.c
@@ -1,5 +1,5 @@ 
 /* Make sure that OpenACC loop processing happens.  */
-/* { dg-additional-options "-O2 -fdump-tree-oaccdevlow" } */
+/* { dg-additional-options "-O2 -fdump-tree-oaccloops" } */
 
 extern int place ();
 
diff --git a/gcc/testsuite/gfortran.dg/goacc/classify-kernels-unparallelized.f95 b/gcc/testsuite/gfortran.dg/goacc/classify-kernels-unparallelized.f95
index 6cca3d6eefb..aa62c63f57b 100644
--- a/gcc/testsuite/gfortran.dg/goacc/classify-kernels-unparallelized.f95
+++ b/gcc/testsuite/gfortran.dg/goacc/classify-kernels-unparallelized.f95
@@ -5,7 +5,7 @@ 
 ! { dg-additional-options "-fopt-info-optimized-omp" }
 ! { dg-additional-options "-fdump-tree-ompexp" }
 ! { dg-additional-options "-fdump-tree-parloops1-all" }
-! { dg-additional-options "-fdump-tree-oaccdevlow" }
+! { dg-additional-options "-fdump-tree-oaccloops" }
 
 program main
   implicit none
@@ -37,6 +37,6 @@  end program main
 
 ! Check the offloaded function's classification and compute dimensions (will
 ! always be 1 x 1 x 1 for non-offloading compilation).
-! { dg-final { scan-tree-dump-times "(?n)Function is unparallelized OpenACC kernels offload" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccdevlow" } }
+! { dg-final { scan-tree-dump-times "(?n)Function is unparallelized OpenACC kernels offload" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccloops" } }
diff --git a/gcc/testsuite/gfortran.dg/goacc/classify-kernels.f95 b/gcc/testsuite/gfortran.dg/goacc/classify-kernels.f95
index 715a983bb26..0bc5fb5cf30 100644
--- a/gcc/testsuite/gfortran.dg/goacc/classify-kernels.f95
+++ b/gcc/testsuite/gfortran.dg/goacc/classify-kernels.f95
@@ -5,7 +5,7 @@ 
 ! { dg-additional-options "-fopt-info-optimized-omp" }
 ! { dg-additional-options "-fdump-tree-ompexp" }
 ! { dg-additional-options "-fdump-tree-parloops1-all" }
-! { dg-additional-options "-fdump-tree-oaccdevlow" }
+! { dg-additional-options "-fdump-tree-oaccloops" }
 
 program main
   implicit none
@@ -33,6 +33,6 @@  end program main
 
 ! Check the offloaded function's classification and compute dimensions (will
 ! always be 1 x 1 x 1 for non-offloading compilation).
-! { dg-final { scan-tree-dump-times "(?n)Function is parallelized OpenACC kernels offload" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels parallelized, oacc function \\(, , \\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccdevlow" } }
+! { dg-final { scan-tree-dump-times "(?n)Function is parallelized OpenACC kernels offload" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc kernels parallelized, oacc function \\(, , \\), oacc kernels, omp target entrypoint\\)\\)" 1 "oaccloops" } }
diff --git a/gcc/testsuite/gfortran.dg/goacc/classify-parallel.f95 b/gcc/testsuite/gfortran.dg/goacc/classify-parallel.f95
index 01f06bbcc27..20bbdb0fbd3 100644
--- a/gcc/testsuite/gfortran.dg/goacc/classify-parallel.f95
+++ b/gcc/testsuite/gfortran.dg/goacc/classify-parallel.f95
@@ -4,7 +4,7 @@ 
 ! { dg-additional-options "-O2" }
 ! { dg-additional-options "-fopt-info-optimized-omp" }
 ! { dg-additional-options "-fdump-tree-ompexp" }
-! { dg-additional-options "-fdump-tree-oaccdevlow" }
+! { dg-additional-options "-fdump-tree-oaccloops" }
 
 program main
   implicit none
@@ -26,6 +26,6 @@  end program main
 
 ! Check the offloaded function's classification and compute dimensions (will
 ! always be 1 x 1 x 1 for non-offloading compilation).
-! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC parallel offload" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc parallel, omp target entrypoint\\)\\)" 1 "oaccdevlow" } }
+! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC parallel offload" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc parallel, omp target entrypoint\\)\\)" 1 "oaccloops" } }
diff --git a/gcc/testsuite/gfortran.dg/goacc/classify-routine.f95 b/gcc/testsuite/gfortran.dg/goacc/classify-routine.f95
index 401d5270391..ed24cee10d8 100644
--- a/gcc/testsuite/gfortran.dg/goacc/classify-routine.f95
+++ b/gcc/testsuite/gfortran.dg/goacc/classify-routine.f95
@@ -4,7 +4,7 @@ 
 ! { dg-additional-options "-O2" }
 ! { dg-additional-options "-fopt-info-optimized-omp" }
 ! { dg-additional-options "-fdump-tree-ompexp" }
-! { dg-additional-options "-fdump-tree-oaccdevlow" }
+! { dg-additional-options "-fdump-tree-oaccloops" }
 
 subroutine ROUTINE
   !$acc routine worker
@@ -25,6 +25,6 @@  end subroutine ROUTINE
 
 ! Check the offloaded function's classification and compute dimensions (will
 ! always be 1 x 1 x 1 for non-offloading compilation).
-! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC routine level 1" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(0 1, 1 1, 1 1\\), omp declare target \\(worker\\)\\)\\)" 1 "oaccdevlow" } }
+! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC routine level 1" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(0 1, 1 1, 1 1\\), omp declare target \\(worker\\)\\)\\)" 1 "oaccloops" } }
diff --git a/gcc/testsuite/gfortran.dg/goacc/classify-serial.f95 b/gcc/testsuite/gfortran.dg/goacc/classify-serial.f95
index d7052bacbe8..33fbdbc07d0 100644
--- a/gcc/testsuite/gfortran.dg/goacc/classify-serial.f95
+++ b/gcc/testsuite/gfortran.dg/goacc/classify-serial.f95
@@ -4,7 +4,7 @@ 
 ! { dg-additional-options "-O2" }
 ! { dg-additional-options "-fopt-info-optimized-omp" }
 ! { dg-additional-options "-fdump-tree-ompexp" }
-! { dg-additional-options "-fdump-tree-oaccdevlow" }
+! { dg-additional-options "-fdump-tree-oaccloops" }
 
 program main
   implicit none
@@ -28,6 +28,6 @@  end program main
 
 ! Check the offloaded function's classification and compute dimensions (will
 ! always be 1 x 1 x 1 for non-offloading compilation).
-! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC serial offload" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccdevlow" } }
-! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc serial, omp target entrypoint\\)\\)" 1 "oaccdevlow" } }
+! { dg-final { scan-tree-dump-times "(?n)Function is OpenACC serial offload" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)Compute dimensions \\\[1, 1, 1\\\]" 1 "oaccloops" } }
+! { dg-final { scan-tree-dump-times "(?n)__attribute__\\(\\(oacc function \\(1, 1, 1\\), oacc serial, omp target entrypoint\\)\\)" 1 "oaccloops" } }
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index 15693fee150..4a575a54c04 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -423,6 +423,8 @@  extern gimple_opt_pass *make_pass_diagnose_omp_blocks (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_expand_omp (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_expand_omp_ssa (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_omp_target_link (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_oacc_loop_designation (gcc::context *ctxt);
+extern gimple_opt_pass *make_pass_oacc_gimple_workers (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_oacc_device_lower (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_omp_device_lower (gcc::context *ctxt);
 extern gimple_opt_pass *make_pass_object_sizes (gcc::context *ctxt);