Patchwork [0/2] Proof-of-concept towards removal of the "cfun" global

Submitter David Malcolm
Date May 31, 2013, 2:12 p.m.
Message ID <1370009523.10670.13.camel@surprise>
Permalink /patch/247963/
State New

Comments

David Malcolm - May 31, 2013, 2:12 p.m.
On Tue, 2013-05-28 at 12:30 -0600, Jeff Law wrote:
> On 05/28/2013 11:00 AM, David Malcolm wrote:
> > On Tue, 2013-05-28 at 06:39 -0600, Jeff Law wrote:
> >> On 05/25/2013 07:02 AM, David Malcolm wrote:
> >>> I can think of three approaches to "cfun":
> >>> (a) status quo: a global variable, with macros to prevent direct
> >>>       assignment, and an API for changing cfun.
> >>> (b) have a global "context" or "universe" object, and put cfun in
> >>>       there (perhaps with tricks to be able to make this a singleton in a
> >>>       non-library build, optimizing away the context lookups somehow
> >>>       - see [2] for discussion on this)
> >>> (c) go through all of the places where cfun is used, and somehow ensure
> >>>       that they're passed in the data they need.  Often it's not the
> >>>       function that's used, but its cfg.
> >> I'd think B or C is going to be the way to go here.  B may also be an
> >> intermediate step towards C.
> >>
> >>>
> >>> One part of the puzzle is that various header files in the build define
> >>> macros that reference the "cfun" global, e.g.:
> >>>
> >>>     #define n_basic_blocks         (cfun->cfg->x_n_basic_blocks)
> >>>
> >>> This one isn't in block caps, which might mislead a new contributor into
> >>> thinking it's a variable, rather than a macro, so there may be virtue in
> >>> removing these macros for that reason alone.  (I know that these confused
> >>> me for a while when I first started writing my plugin) [3]
> >> There are a few of these that have crept in over the years.
> >> n_basic_blocks used to be a global variable.  At some point it was
> >> stuffed into cfun, but it was decided not to go back and fix all the
> >> references -- possibly due to not wanting to fix the overly long lines
> >> after the mechanical change.
> >
> > If a mechanical change could fix the overly-long lines as it went along,
> > would such a change be acceptable now?
> Probably.  However, I would advise against trying to pack too much into 
> a single patch.
> 
> So one possibility to move forward would be a single patch which removes 
> the n_basic_blocks macro and fixes all the uses along with their 
> overly-long lines.   It's simple, obvious and very self-contained.

Thanks.  I'm attaching such a patch.

Successful 3-stage build against r199533; "make check" shows the same
results as an unpatched 3-stage build of that same revision (both on
x86_64-unknown-linux-gnu).

OK for trunk?

Should I continue with individual patches to remove each macro, in turn?

Dave
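
To make the pitfall concrete: a minimal sketch of how the pre-patch macro
hides a read of the cfun global (the helper function below is
illustrative, not taken from the patch).  Given

    #define n_basic_blocks  (cfun->cfg->x_n_basic_blocks)

a reader of

    static int
    blocks_to_process (void)
    {
      return n_basic_blocks - NUM_FIXED_BLOCKS;
    }

sees no hint that the function depends on cfun, even though it is
exactly equivalent to the explicit form

    static int
    blocks_to_process (void)
    {
      return cfun->cfg->x_n_basic_blocks - NUM_FIXED_BLOCKS;
    }
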
Richard Guenther - June 13, 2013, 8:41 a.m.
On Fri, May 31, 2013 at 4:12 PM, David Malcolm <dmalcolm@redhat.com> wrote:
> On Tue, 2013-05-28 at 12:30 -0600, Jeff Law wrote:
>> On 05/28/2013 11:00 AM, David Malcolm wrote:
>> > On Tue, 2013-05-28 at 06:39 -0600, Jeff Law wrote:
>> >> On 05/25/2013 07:02 AM, David Malcolm wrote:
>> >>> I can think of three approaches to "cfun":
>> >>> (a) status quo: a global variable, with macros to prevent direct
>> >>>       assignment, and an API for changing cfun.
>> >>> (b) have a global "context" or "universe" object, and put cfun in
>> >>>       there (perhaps with tricks to be able to make this a singleton in a
>> >>>       non-library build, optimizing away the context lookups somehow
>> >>>       - see [2] for discussion on this)
>> >>> (c) go through all of the places where cfun is used, and somehow ensure
>> >>>       that they're passed in the data they need.  Often it's not the
>> >>>       function that's used, but its cfg.
>> >> I'd think B or C is going to be the way to go here.  B may also be an
>> >> intermediate step towards C.
>> >>
>> >>>
>> >>> One part of the puzzle is that various header files in the build define
>> >>> macros that reference the "cfun" global, e.g.:
>> >>>
>> >>>     #define n_basic_blocks         (cfun->cfg->x_n_basic_blocks)
>> >>>
>> >>> This one isn't in block caps, which might mislead a new contributor into
>> >>> thinking it's a variable, rather than a macro, so there may be virtue in
>> >>> removing these macros for that reason alone.  (I know that these confused
>> >>> me for a while when I first started writing my plugin) [3]
>> >> There are a few of these that have crept in over the years.
>> >> n_basic_blocks used to be a global variable.  At some point it was
>> >> stuffed into cfun, but it was decided not to go back and fix all the
>> >> references -- possibly due to not wanting to fix the overly long lines
>> >> after the mechanical change.
>> >
>> > If a mechanical change could fix the overly-long lines as it went along,
>> > would such a change be acceptable now?
>> Probably.  However, I would advise against trying to pack too much into
>> a single patch.
>>
>> So one possibility to move forward would be a single patch which removes
>> the n_basic_blocks macro and fixes all the uses along with their
>> overly-long lines.   It's simple, obvious and very self-contained.
>
> Thanks.  I'm attaching such a patch.
>
> Successful 3-stage build against r199533; "make check" shows the same
> results as an unpatched 3-stage build of that same revision (both on
> x86_64-unknown-linux-gnu).
>
> OK for trunk?
>
> Should I continue with individual patches to remove each macro, in turn?

Sorry for taking so long to look at this.  I'd prefer, instead of hunks like

@@ -2080,7 +2080,7 @@ reorder_basic_blocks (void)

   gcc_assert (current_ir_type () == IR_RTL_CFGLAYOUT);

-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
     return;

   set_edge_can_fallthru_flag ();

to use

   if (n_basic_blocks_for_function (cfun) <= NUM_FIXED_BLOCKS + 1)

so we have a single way across the compiler to access n_basic_blocks.

Ok with that change, and ok with following up with patches for the
other individual macros, replacing them with existing _for_function
variants.

Btw, I'm also ok with shortening these macros to use _for_fn instead
at the same time, adjusting the very few existing invocations of them.

Thanks,
Richard.

> Dave
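
For concreteness, the accessor style requested above, together with the
shorter _for_fn spelling Richard also approves, would look something
like the following sketch; the only assumption is the rename of the
patch's n_basic_blocks_for_function macro:

    /* basic-block.h: the existing accessor under the shorter name;
       the function being inspected is explicit at every call site.  */
    #define n_basic_blocks_for_fn(FN)  ((FN)->cfg->n_basic_blocks)

    /* A call site from the patch would then read:  */
    if (n_basic_blocks_for_fn (cfun) <= NUM_FIXED_BLOCKS + 1)
      return;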

Patch

commit 29e3fae400c10932e6aeed8f4d873c16a2af7d33
Author: David Malcolm <dmalcolm@redhat.com>
Date:   Thu May 30 21:18:43 2013 -0400

    	Eliminate the n_basic_blocks macro
    
    	Patch partially autogenerated by refactor_cfun.py from
    	https://github.com/davidmalcolm/gcc-refactoring-scripts
    	revision aac792516f0b657c492ee158d20158abf4c7bde3 with some
    	linewrap cleanups by hand.
    
    	* basic-block.h (n_basic_blocks): Eliminate this macro.
    	(control_flow_graph): Rename x_n_basic_blocks field to n_basic_blocks.
    	(n_basic_blocks_for_function): Update to reflect above change.
    	* alias.c (init_alias_analysis): Update to reflect above changes,
    	making the "cfun->cfg->" access explicit.
    	* bb-reorder.c (reorder_basic_blocks): Likewise.
    	(duplicate_computed_gotos): Likewise.
    	(partition_hot_cold_basic_blocks): Likewise.
    	* bt-load.c (augment_live_range): Likewise.
    	* cfg.c (compact_blocks): Likewise.
    	(expunge_block): Likewise.
    	* cfganal.c (mark_dfs_back_edges): Likewise.
    	(find_unreachable_blocks): Likewise.
    	(print_edge_list): Likewise.
    	(post_order_compute): Likewise.
    	(inverted_post_order_compute): Likewise.
    	(pre_and_rev_post_order_compute): Likewise.
    	(flow_dfs_compute_reverse_init): Likewise.
    	(compute_idf): Likewise.
    	* cfgcleanup.c (try_forward_edges): Likewise.
    	(try_optimize_cfg): Likewise.
    	* cfghooks.c (dump_flow_info): Likewise.
    	* cfgloop.c (flow_loops_find): Likewise.
    	(get_loop_body): Likewise.
    	(verify_loop_structure): Likewise.
    	* cfgloopmanip.c (find_path): Likewise.
    	(remove_path): Likewise.
    	(add_loop): Likewise.
    	* cfgrtl.c (rtl_create_basic_block): Likewise.
    	(entry_of_function): Likewise.
    	(rtl_verify_bb_layout): Likewise.
    	(rtl_flow_call_edges_add): Likewise.
    	* coverage.c (coverage_compute_cfg_checksum): Likewise.
    	* cprop.c (is_too_expensive): Likewise.
    	(one_cprop_pass): Likewise.
    	* df-core.c (df_worklist_dataflow_doublequeue): Likewise.
    	(df_compact_blocks): Likewise.
    	(df_compute_cfg_image): Likewise.
    	* dominance.c (init_dom_info): Likewise.
    	(calc_dfs_tree_nonrec): Likewise.
    	(calc_dfs_tree): Likewise.
    	(calculate_dominance_info): Likewise.
    	* domwalk.c (walk_dominator_tree): Likewise.
    	* function.c (generate_setjmp_warnings): Likewise.
    	(thread_prologue_and_epilogue_insns): Likewise.
    	* fwprop.c (build_single_def_use_links): Likewise.
    	* gcse.c (one_pre_gcse_pass): Likewise.
    	(one_code_hoisting_pass): Likewise.
    	(is_too_expensive): Likewise.
    	* graphite.c (graphite_initialize): Likewise.
    	* haifa-sched.c (haifa_sched_init): Likewise.
    	* ipa-inline-analysis.c (estimate_function_body_sizes): Likewise.
    	* ira-build.c (ira_build): Likewise.
    	* lcm.c (compute_antinout_edge): Likewise.
    	(compute_laterin): Likewise.
    	(compute_available): Likewise.
    	(compute_nearerout): Likewise.
    	* lra-lives.c (lra_create_live_ranges): Likewise.
    	* lra.c (has_nonexceptional_receiver): Likewise.
    	* mcf.c (create_fixup_graph): Likewise.
    	* profile.c (branch_prob): Likewise.
    	* reg-stack.c (convert_regs_2): Likewise.
    	* regrename.c (regrename_analyze): Likewise.
    	* reload1.c (has_nonexceptional_receiver): Likewise.
    	* reorg.c (dbr_schedule): Likewise.
    	* sched-deps.c (sched_deps_init): Likewise.
    	* sched-ebb.c (schedule_ebbs): Likewise.
    	* sched-rgn.c (haifa_find_rgns): Likewise.
    	(extend_rgns): Likewise.
    	(sched_rgn_init): Likewise.
    	(schedule_insns): Likewise.
    	(extend_regions): Likewise.
    	* sel-sched-ir.c (sel_recompute_toporder): Likewise.
    	(recompute_rev_top_order): Likewise.
    	* sel-sched.c (run_selective_scheduling): Likewise.
    	* store-motion.c (remove_reachable_equiv_notes): Likewise.
    	(one_store_motion_pass): Likewise.
    	* tracer.c (tail_duplicate): Likewise.
    	(tracer): Likewise.
    	* tree-cfg.c (build_gimple_cfg): Likewise.
    	(create_bb): Likewise.
    	(gimple_dump_cfg): Likewise.
    	(dump_cfg_stats): Likewise.
    	(gimple_flow_call_edges_add): Likewise.
    	(move_block_to_fn): Likewise.
    	* tree-cfgcleanup.c (merge_phi_nodes): Likewise.
    	* tree-inline.c (fold_marked_statements): Likewise.
    	(optimize_inline_calls): Likewise.
    	* tree-ssa-ifcombine.c (tree_ssa_ifcombine): Likewise.
    	* tree-ssa-loop-ch.c (copy_loop_headers): Likewise.
    	* tree-ssa-loop-im.c (analyze_memory_references): Likewise.
    	* tree-ssa-loop-manip.c (compute_live_loop_exits): Likewise.
    	* tree-ssa-math-opts.c (execute_cse_reciprocals): Likewise.
    	* tree-ssa-phiopt.c (tree_ssa_phiopt_worker): Likewise.
    	(blocks_in_phiopt_order): Likewise.
    	* tree-ssa-pre.c (compute_avail): Likewise.
    	(init_pre): Likewise.
    	(do_pre): Likewise.
    	* tree-ssa-reassoc.c (init_reassoc): Likewise.
    	* tree-ssa-sccvn.c (init_scc_vn): Likewise.
    	* tree-ssa-tail-merge.c (init_worklist): Likewise.
    	(alloc_cluster_vectors): Likewise.
    	* tree-ssa-uncprop.c (associate_equivalences_with_edges): Likewise.
    	* var-tracking.c (vt_stack_adjustments): Likewise.
    	(vt_find_locations): Likewise.
    	(variable_tracking_main_1): Likewise.
    	* config/spu/spu.c (spu_machine_dependent_reorg): Likewise.

diff --git a/gcc/alias.c b/gcc/alias.c
index ef11c6a..959e7d1 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -2846,7 +2846,7 @@  init_alias_analysis (void)
      The state of the arrays for the set chain in question does not matter
      since the program has undefined behavior.  */
 
-  rpo = XNEWVEC (int, n_basic_blocks);
+  rpo = XNEWVEC (int, cfun->cfg->n_basic_blocks);
   rpo_cnt = pre_and_rev_post_order_compute (NULL, rpo, false);
 
   pass = 0;
diff --git a/gcc/basic-block.h b/gcc/basic-block.h
index eed320c..ff60fe8 100644
--- a/gcc/basic-block.h
+++ b/gcc/basic-block.h
@@ -285,7 +285,7 @@  struct GTY(()) control_flow_graph {
   vec<basic_block, va_gc> *x_basic_block_info;
 
   /* Number of basic blocks in this flow graph.  */
-  int x_n_basic_blocks;
+  int n_basic_blocks;
 
   /* Number of edges in this flow graph.  */
   int x_n_edges;
@@ -317,7 +317,7 @@  struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR_FOR_FUNCTION(FN)     ((FN)->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR_FOR_FUNCTION(FN)	     ((FN)->cfg->x_exit_block_ptr)
 #define basic_block_info_for_function(FN)    ((FN)->cfg->x_basic_block_info)
-#define n_basic_blocks_for_function(FN)	     ((FN)->cfg->x_n_basic_blocks)
+#define n_basic_blocks_for_function(FN)	     ((FN)->cfg->n_basic_blocks)
 #define n_edges_for_function(FN)	     ((FN)->cfg->x_n_edges)
 #define last_basic_block_for_function(FN)    ((FN)->cfg->x_last_basic_block)
 #define label_to_block_map_for_function(FN)  ((FN)->cfg->x_label_to_block_map)
@@ -332,7 +332,6 @@  struct GTY(()) control_flow_graph {
 #define ENTRY_BLOCK_PTR		(cfun->cfg->x_entry_block_ptr)
 #define EXIT_BLOCK_PTR		(cfun->cfg->x_exit_block_ptr)
 #define basic_block_info	(cfun->cfg->x_basic_block_info)
-#define n_basic_blocks		(cfun->cfg->x_n_basic_blocks)
 #define n_edges			(cfun->cfg->x_n_edges)
 #define last_basic_block	(cfun->cfg->x_last_basic_block)
 #define label_to_block_map	(cfun->cfg->x_label_to_block_map)
diff --git a/gcc/bb-reorder.c b/gcc/bb-reorder.c
index 0a1f42a..02ec498 100644
--- a/gcc/bb-reorder.c
+++ b/gcc/bb-reorder.c
@@ -2080,7 +2080,7 @@  reorder_basic_blocks (void)
 
   gcc_assert (current_ir_type () == IR_RTL_CFGLAYOUT);
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
     return;
 
   set_edge_can_fallthru_flag ();
@@ -2104,7 +2104,7 @@  reorder_basic_blocks (void)
       bbd[i].node = NULL;
     }
 
-  traces = XNEWVEC (struct trace, n_basic_blocks);
+  traces = XNEWVEC (struct trace, cfun->cfg->n_basic_blocks);
   n_traces = 0;
   find_traces (&n_traces, traces);
   connect_traces (n_traces, traces);
@@ -2229,7 +2229,7 @@  duplicate_computed_gotos (void)
   bitmap candidates;
   int max_size;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   clear_bb_flags ();
@@ -2458,7 +2458,7 @@  partition_hot_cold_basic_blocks (void)
 {
   vec<edge> crossing_edges;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   df_set_flags (DF_DEFER_INSN_RESCAN);
diff --git a/gcc/bt-load.c b/gcc/bt-load.c
index 9ca1bd9..602b920 100644
--- a/gcc/bt-load.c
+++ b/gcc/bt-load.c
@@ -900,7 +900,7 @@  augment_live_range (bitmap live_range, HARD_REG_SET *btrs_live_in_range,
 {
   basic_block *worklist, *tos;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks + 1);
 
   if (dominated_by_p (CDI_DOMINATORS, new_bb, head_bb))
     {
diff --git a/gcc/cfg.c b/gcc/cfg.c
index 9c6c939..126432e 100644
--- a/gcc/cfg.c
+++ b/gcc/cfg.c
@@ -169,12 +169,12 @@  compact_blocks (void)
 	  bb->index = i;
 	  i++;
 	}
-      gcc_assert (i == n_basic_blocks);
+      gcc_assert (i == cfun->cfg->n_basic_blocks);
 
       for (; i < last_basic_block; i++)
 	SET_BASIC_BLOCK (i, NULL);
     }
-  last_basic_block = n_basic_blocks;
+  last_basic_block = cfun->cfg->n_basic_blocks;
 }
 
 /* Remove block B from the basic block array.  */
@@ -184,7 +184,7 @@  expunge_block (basic_block b)
 {
   unlink_block (b);
   SET_BASIC_BLOCK (b->index, NULL);
-  n_basic_blocks--;
+  cfun->cfg->n_basic_blocks--;
   /* We should be able to ggc_free here, but we are not.
      The dead SSA_NAMES are left pointing to dead statements that are pointing
      to dead basic blocks making garbage collector to die.
diff --git a/gcc/cfganal.c b/gcc/cfganal.c
index 63d17ce..8425590 100644
--- a/gcc/cfganal.c
+++ b/gcc/cfganal.c
@@ -76,7 +76,7 @@  mark_dfs_back_edges (void)
   post = XCNEWVEC (int, last_basic_block);
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -152,7 +152,7 @@  find_unreachable_blocks (void)
   edge_iterator ei;
   basic_block *tos, *worklist, bb;
 
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  tos = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
 
   /* Clear all the reachability flags.  */
 
@@ -256,7 +256,7 @@  print_edge_list (FILE *f, struct edge_list *elist)
   int x;
 
   fprintf (f, "Compressed edge list, %d BBs + entry & exit, and %d edges\n",
-	   n_basic_blocks, elist->num_edges);
+	   cfun->cfg->n_basic_blocks, elist->num_edges);
 
   for (x = 0; x < elist->num_edges; x++)
     {
@@ -495,7 +495,7 @@  post_order_compute (int *post_order, bool include_entry_exit,
     post_order[post_order_num++] = EXIT_BLOCK;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -553,7 +553,7 @@  post_order_compute (int *post_order, bool include_entry_exit,
 
   /* Delete the unreachable blocks if some were found and we are
      supposed to do it.  */
-  if (delete_unreachable && (count != n_basic_blocks))
+  if (delete_unreachable && (count != cfun->cfg->n_basic_blocks))
     {
       basic_block b;
       basic_block next_bb;
@@ -648,7 +648,7 @@  inverted_post_order_compute (int *post_order)
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -782,11 +782,11 @@  pre_and_rev_post_order_compute (int *pre_order, int *rev_post_order,
   edge_iterator *stack;
   int sp;
   int pre_order_num = 0;
-  int rev_post_order_num = n_basic_blocks - 1;
+  int rev_post_order_num = cfun->cfg->n_basic_blocks - 1;
   sbitmap visited;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   if (include_entry_exit)
@@ -866,12 +866,12 @@  pre_and_rev_post_order_compute (int *pre_order, int *rev_post_order,
       if (rev_post_order)
 	rev_post_order[rev_post_order_num--] = EXIT_BLOCK;
       /* The number of nodes visited should be the number of blocks.  */
-      gcc_assert (pre_order_num == n_basic_blocks);
+      gcc_assert (pre_order_num == cfun->cfg->n_basic_blocks);
     }
   else
     /* The number of nodes visited should be the number of blocks minus
        the entry and exit blocks which are not visited here.  */
-    gcc_assert (pre_order_num == n_basic_blocks - NUM_FIXED_BLOCKS);
+    gcc_assert (pre_order_num == cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
 
   return pre_order_num;
 }
@@ -910,7 +910,7 @@  static void
 flow_dfs_compute_reverse_init (depth_first_search_ds data)
 {
   /* Allocate stack for back-tracking up CFG.  */
-  data->stack = XNEWVEC (basic_block, n_basic_blocks);
+  data->stack = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   data->sp = 0;
 
   /* Allocate bitmap to track nodes that have been visited.  */
@@ -1142,7 +1142,7 @@  compute_idf (bitmap def_blocks, bitmap_head *dfs)
   bitmap phi_insertion_points;
 
   /* Each block can appear at most twice on the work-stack.  */
-  work_stack.create (2 * n_basic_blocks);
+  work_stack.create (2 * cfun->cfg->n_basic_blocks);
   phi_insertion_points = BITMAP_ALLOC (NULL);
 
   /* Seed the work list with all the blocks in DEF_BLOCKS.  We use
diff --git a/gcc/cfgcleanup.c b/gcc/cfgcleanup.c
index 1379cf7..1a0dc34 100644
--- a/gcc/cfgcleanup.c
+++ b/gcc/cfgcleanup.c
@@ -458,7 +458,7 @@  try_forward_edges (int mode, basic_block b)
 	  && find_reg_note (BB_END (first), REG_CROSSING_JUMP, NULL_RTX))
 	return false;
 
-      while (counter < n_basic_blocks)
+      while (counter < cfun->cfg->n_basic_blocks)
 	{
 	  basic_block new_target = NULL;
 	  bool new_target_threaded = false;
@@ -471,7 +471,7 @@  try_forward_edges (int mode, basic_block b)
 	      /* Bypass trivial infinite loops.  */
 	      new_target = single_succ (target);
 	      if (target == new_target)
-		counter = n_basic_blocks;
+		counter = cfun->cfg->n_basic_blocks;
 	      else if (!optimize)
 		{
 		  /* When not optimizing, ensure that edges or forwarder
@@ -520,7 +520,7 @@  try_forward_edges (int mode, basic_block b)
 	      if (t)
 		{
 		  if (!threaded_edges)
-		    threaded_edges = XNEWVEC (edge, n_basic_blocks);
+		    threaded_edges = XNEWVEC (edge, cfun->cfg->n_basic_blocks);
 		  else
 		    {
 		      int i;
@@ -532,7 +532,7 @@  try_forward_edges (int mode, basic_block b)
 			  break;
 		      if (i < nthreaded_edges)
 			{
-			  counter = n_basic_blocks;
+			  counter = cfun->cfg->n_basic_blocks;
 			  break;
 			}
 		    }
@@ -541,7 +541,8 @@  try_forward_edges (int mode, basic_block b)
 		  if (t->dest == b)
 		    break;
 
-		  gcc_assert (nthreaded_edges < n_basic_blocks - NUM_FIXED_BLOCKS);
+		  gcc_assert (nthreaded_edges
+			      < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
 		  threaded_edges[nthreaded_edges++] = t;
 
 		  new_target = t->dest;
@@ -557,7 +558,7 @@  try_forward_edges (int mode, basic_block b)
 	  threaded |= new_target_threaded;
 	}
 
-      if (counter >= n_basic_blocks)
+      if (counter >= cfun->cfg->n_basic_blocks)
 	{
 	  if (dump_file)
 	    fprintf (dump_file, "Infinite loop in BB %i.\n",
@@ -2694,7 +2695,7 @@  try_optimize_cfg (int mode)
 		  /* Note that forwarder_block_p true ensures that
 		     there is a successor for this block.  */
 		  && (single_succ_edge (b)->flags & EDGE_FALLTHRU)
-		  && n_basic_blocks > NUM_FIXED_BLOCKS + 1)
+		  && cfun->cfg->n_basic_blocks > NUM_FIXED_BLOCKS + 1)
 		{
 		  if (dump_file)
 		    fprintf (dump_file,
diff --git a/gcc/cfghooks.c b/gcc/cfghooks.c
index 8331fa0..86ab3de 100644
--- a/gcc/cfghooks.c
+++ b/gcc/cfghooks.c
@@ -323,7 +323,8 @@  dump_flow_info (FILE *file, int flags)
 {
   basic_block bb;
 
-  fprintf (file, "\n%d basic blocks, %d edges.\n", n_basic_blocks, n_edges);
+  fprintf (file, "\n%d basic blocks, %d edges.\n", cfun->cfg->n_basic_blocks,
+	   n_edges);
   FOR_ALL_BB (bb)
     dump_bb (file, bb, 0, flags);
 
diff --git a/gcc/cfgloop.c b/gcc/cfgloop.c
index 0128724..8c47556 100644
--- a/gcc/cfgloop.c
+++ b/gcc/cfgloop.c
@@ -420,21 +420,21 @@  flow_loops_find (struct loops *loops)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return loops;
 
   /* The root loop node contains all basic-blocks.  */
-  loops->tree_root->num_nodes = n_basic_blocks;
+  loops->tree_root->num_nodes = cfun->cfg->n_basic_blocks;
 
   /* Compute depth first search order of the CFG so that outer
      natural loops will be found before inner natural loops.  */
-  rc_order = XNEWVEC (int, n_basic_blocks);
+  rc_order = XNEWVEC (int, cfun->cfg->n_basic_blocks);
   pre_and_rev_post_order_compute (NULL, rc_order, false);
 
   /* Gather all loop headers in reverse completion order and allocate
      loop structures for loops that are not already present.  */
   larray.create (loops->larray->length());
-  for (b = 0; b < n_basic_blocks - NUM_FIXED_BLOCKS; b++)
+  for (b = 0; b < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS; b++)
     {
       basic_block header = BASIC_BLOCK (rc_order[b]);
       if (bb_loop_header_p (header))
@@ -830,7 +830,7 @@  get_loop_body (const struct loop *loop)
     {
       /* There may be blocks unreachable from EXIT_BLOCK, hence we need to
 	 special-case the fake loop that contains the whole function.  */
-      gcc_assert (loop->num_nodes == (unsigned) n_basic_blocks);
+      gcc_assert (loop->num_nodes == (unsigned) cfun->cfg->n_basic_blocks);
       body[tv++] = loop->header;
       body[tv++] = EXIT_BLOCK_PTR;
       FOR_EACH_BB (bb)
@@ -1366,7 +1366,7 @@  verify_loop_structure (void)
   /* Check the recorded loop father and sizes of loops.  */
   visited = sbitmap_alloc (last_basic_block);
   bitmap_clear (visited);
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   FOR_EACH_LOOP (li, loop, LI_FROM_INNERMOST)
     {
       unsigned n;
@@ -1378,7 +1378,7 @@  verify_loop_structure (void)
 	  continue;
 	}
 
-      n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+      n = get_loop_body_with_size (loop, bbs, cfun->cfg->n_basic_blocks);
       if (loop->num_nodes != n)
 	{
 	  error ("size of loop %d should be %d, not %d",
diff --git a/gcc/cfgloopmanip.c b/gcc/cfgloopmanip.c
index bc87755..efe8f50 100644
--- a/gcc/cfgloopmanip.c
+++ b/gcc/cfgloopmanip.c
@@ -67,9 +67,9 @@  find_path (edge e, basic_block **bbs)
   gcc_assert (EDGE_COUNT (e->dest->preds) <= 1);
 
   /* Find bbs in the path.  */
-  *bbs = XNEWVEC (basic_block, n_basic_blocks);
+  *bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   return dfs_enumerate_from (e->dest, 0, rpe_enum_p, *bbs,
-			     n_basic_blocks, e->dest);
+			     cfun->cfg->n_basic_blocks, e->dest);
 }
 
 /* Fix placement of basic block BB inside loop hierarchy --
@@ -332,7 +332,7 @@  remove_path (edge e)
   nrem = find_path (e, &rem_bbs);
 
   n_bord_bbs = 0;
-  bord_bbs = XNEWVEC (basic_block, n_basic_blocks);
+  bord_bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   seen = sbitmap_alloc (last_basic_block);
   bitmap_clear (seen);
 
@@ -435,8 +435,8 @@  add_loop (struct loop *loop, struct loop *outer)
   flow_loop_tree_node_add (outer, loop);
 
   /* Find its nodes.  */
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  n = get_loop_body_with_size (loop, bbs, n_basic_blocks);
+  bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
+  n = get_loop_body_with_size (loop, bbs, cfun->cfg->n_basic_blocks);
 
   for (i = 0; i < n; i++)
     {
diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 0ea297e..3c0866d 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -360,7 +360,7 @@  rtl_create_basic_block (void *headp, void *endp, basic_block after)
       vec_safe_grow_cleared (basic_block_info, new_size);
     }
 
-  n_basic_blocks++;
+  cfun->cfg->n_basic_blocks++;
 
   bb = create_basic_block_structure (head, end, NULL, after);
   bb->aux = NULL;
@@ -479,8 +479,8 @@  struct rtl_opt_pass pass_free_cfg =
 rtx
 entry_of_function (void)
 {
-  return (n_basic_blocks > NUM_FIXED_BLOCKS ?
-	  BB_HEAD (ENTRY_BLOCK_PTR->next_bb) : get_insns ());
+  return (cfun->cfg->n_basic_blocks > NUM_FIXED_BLOCKS
+	  ? BB_HEAD (ENTRY_BLOCK_PTR->next_bb) : get_insns ());
 }
 
 /* Emit INSN at the entry point of the function, ensuring that it is only
@@ -2609,10 +2609,10 @@  rtl_verify_bb_layout (void)
 	curr_bb = NULL;
     }
 
-  if (num_bb_notes != n_basic_blocks - NUM_FIXED_BLOCKS)
+  if (num_bb_notes != cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS)
     internal_error
       ("number of bb notes in insn chain (%d) != n_basic_blocks (%d)",
-       num_bb_notes, n_basic_blocks);
+       num_bb_notes, cfun->cfg->n_basic_blocks);
 
    return err;
 }
@@ -4417,7 +4417,7 @@  rtl_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/config/spu/spu.c b/gcc/config/spu/spu.c
index 6cbd3f8..d6e53b8 100644
--- a/gcc/config/spu/spu.c
+++ b/gcc/config/spu/spu.c
@@ -2469,13 +2469,13 @@  spu_machine_dependent_reorg (void)
   compact_blocks ();
 
   spu_bb_info =
-    (struct spu_bb_info *) xcalloc (n_basic_blocks,
+    (struct spu_bb_info *) xcalloc (cfun->cfg->n_basic_blocks,
 				    sizeof (struct spu_bb_info));
 
   /* We need exact insn addresses and lengths.  */
   shorten_branches (get_insns ());
 
-  for (i = n_basic_blocks - 1; i >= 0; i--)
+  for (i = cfun->cfg->n_basic_blocks - 1; i >= 0; i--)
     {
       bb = BASIC_BLOCK (i);
       branch = 0;
diff --git a/gcc/coverage.c b/gcc/coverage.c
index 7c395f4..2e83aac 100644
--- a/gcc/coverage.c
+++ b/gcc/coverage.c
@@ -553,7 +553,7 @@  unsigned
 coverage_compute_cfg_checksum (void)
 {
   basic_block bb;
-  unsigned chksum = n_basic_blocks;
+  unsigned chksum = cfun->cfg->n_basic_blocks;
 
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/cprop.c b/gcc/cprop.c
index 6a6b5f1..36eebe6 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -1728,24 +1728,25 @@  is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + cfun->cfg->n_basic_blocks * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, cfun->cfg->n_basic_blocks,
+	       n_edges / cfun->cfg->n_basic_blocks);
 
       return true;
     }
 
   /* If allocating memory for the cprop bitmap would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((cfun->cfg->n_basic_blocks
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, cfun->cfg->n_basic_blocks, max_reg_num ());
 
       return true;
     }
@@ -1762,7 +1763,7 @@  one_cprop_pass (void)
   int changed = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_ ("const/copy propagation disabled")))
     return 0;
 
@@ -1872,7 +1873,7 @@  one_cprop_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "CPROP of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), cfun->cfg->n_basic_blocks, bytes_used);
       fprintf (dump_file, "%d local const props, %d local copy props, ",
 	       local_const_prop_count, local_copy_prop_count);
       fprintf (dump_file, "%d global const props, %d global copy props\n\n",
diff --git a/gcc/df-core.c b/gcc/df-core.c
index e602290..780f7d2 100644
--- a/gcc/df-core.c
+++ b/gcc/df-core.c
@@ -1044,8 +1044,8 @@  df_worklist_dataflow_doublequeue (struct dataflow *dataflow,
     fprintf (dump_file, "df_worklist_dataflow_doublequeue:"
 	     "n_basic_blocks %d n_edges %d"
 	     " count %d (%5.2g)\n",
-	     n_basic_blocks, n_edges,
-	     dcount, dcount / (float)n_basic_blocks);
+	     cfun->cfg->n_basic_blocks, n_edges,
+	     dcount, dcount / (float)cfun->cfg->n_basic_blocks);
 }
 
 /* Worklist-based dataflow solver. It uses sbitmap as a worklist,
@@ -1553,7 +1553,7 @@  df_compact_blocks (void)
       i++;
     }
 
-  gcc_assert (i == n_basic_blocks);
+  gcc_assert (i == cfun->cfg->n_basic_blocks);
 
   for (; i < last_basic_block; i++)
     SET_BASIC_BLOCK (i, NULL);
@@ -1661,7 +1661,7 @@  static int *
 df_compute_cfg_image (void)
 {
   basic_block bb;
-  int size = 2 + (2 * n_basic_blocks);
+  int size = 2 + (2 * cfun->cfg->n_basic_blocks);
   int i;
   int * map;
 
diff --git a/gcc/dominance.c b/gcc/dominance.c
index 5c96dad..8e88874 100644
--- a/gcc/dominance.c
+++ b/gcc/dominance.c
@@ -146,7 +146,7 @@  static void
 init_dom_info (struct dom_info *di, enum cdi_direction dir)
 {
   /* We need memory for n_basic_blocks nodes.  */
-  unsigned int num = n_basic_blocks;
+  unsigned int num = cfun->cfg->n_basic_blocks;
   init_ar (di->dfs_parent, TBB, num, 0);
   init_ar (di->path_min, TBB, num, i);
   init_ar (di->key, TBB, num, i);
@@ -233,7 +233,7 @@  calc_dfs_tree_nonrec (struct dom_info *di, basic_block bb, bool reverse)
   /* Ending block.  */
   basic_block ex_block;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   /* Initialize our border blocks, and the first edge.  */
@@ -394,7 +394,7 @@  calc_dfs_tree (struct dom_info *di, bool reverse)
   di->nodes = di->dfsnum - 1;
 
   /* This aborts e.g. when there is _no_ path from ENTRY to EXIT at all.  */
-  gcc_assert (di->nodes == (unsigned int) n_basic_blocks - 1);
+  gcc_assert (di->nodes == (unsigned int) cfun->cfg->n_basic_blocks - 1);
 }
 
 /* Compress the path from V to the root of its set and update path_min at the
@@ -652,7 +652,7 @@  calculate_dominance_info (enum cdi_direction dir)
 	{
 	  b->dom[dir_index] = et_new_tree (b);
 	}
-      n_bbs_in_dom_tree[dir_index] = n_basic_blocks;
+      n_bbs_in_dom_tree[dir_index] = cfun->cfg->n_basic_blocks;
 
       init_dom_info (&di, dir);
       calc_dfs_tree (&di, reverse);
diff --git a/gcc/domwalk.c b/gcc/domwalk.c
index 8c1ddc6..023b553 100644
--- a/gcc/domwalk.c
+++ b/gcc/domwalk.c
@@ -156,13 +156,13 @@  walk_dominator_tree (struct dom_walk_data *walk_data, basic_block bb)
 {
   void *bd = NULL;
   basic_block dest;
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks * 2);
+  basic_block *worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks * 2);
   int sp = 0;
   int *postorder, postorder_num;
 
   if (walk_data->dom_direction == CDI_DOMINATORS)
     {
-      postorder = XNEWVEC (int, n_basic_blocks);
+      postorder = XNEWVEC (int, cfun->cfg->n_basic_blocks);
       postorder_num = inverted_post_order_compute (postorder);
       bb_postorder = XNEWVEC (int, last_basic_block);
       for (int i = 0; i < postorder_num; ++i)
diff --git a/gcc/function.c b/gcc/function.c
index 36c874f..7d826f1 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -4019,7 +4019,7 @@  generate_setjmp_warnings (void)
 {
   bitmap setjmp_crosses = regstat_get_setjmp_crosses ();
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS
       || bitmap_empty_p (setjmp_crosses))
     return;
 
@@ -6032,7 +6032,7 @@  thread_prologue_and_epilogue_insns (void)
       /* Find the set of basic blocks that require a stack frame,
 	 and blocks that are too big to be duplicated.  */
 
-      vec.create (n_basic_blocks);
+      vec.create (cfun->cfg->n_basic_blocks);
 
       CLEAR_HARD_REG_SET (set_up_by_prologue.set);
       add_to_hard_reg_set (&set_up_by_prologue.set, Pmode,
diff --git a/gcc/fwprop.c b/gcc/fwprop.c
index 17cc62a..4a41f47 100644
--- a/gcc/fwprop.c
+++ b/gcc/fwprop.c
@@ -285,7 +285,7 @@  build_single_def_use_links (void)
   reg_defs.create (max_reg_num ());
   reg_defs.safe_grow_cleared (max_reg_num ());
 
-  reg_defs_stack.create (n_basic_blocks * 10);
+  reg_defs_stack.create (cfun->cfg->n_basic_blocks * 10);
   local_md = BITMAP_ALLOC (NULL);
   local_lr = BITMAP_ALLOC (NULL);
 
diff --git a/gcc/gcse.c b/gcc/gcse.c
index e485985..6a4c4b5 100644
--- a/gcc/gcse.c
+++ b/gcc/gcse.c
@@ -2662,7 +2662,7 @@  one_pre_gcse_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("PRE disabled")))
     return 0;
 
@@ -2708,7 +2708,7 @@  one_pre_gcse_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "PRE GCSE of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), cfun->cfg->n_basic_blocks, bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -3591,7 +3591,7 @@  one_code_hoisting_pass (void)
   gcse_create_count = 0;
 
   /* Return if there's nothing to do, or it is too expensive.  */
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1
       || is_too_expensive (_("GCSE disabled")))
     return 0;
 
@@ -3642,7 +3642,7 @@  one_code_hoisting_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "HOIST of %s, %d basic blocks, %d bytes needed, ",
-	       current_function_name (), n_basic_blocks, bytes_used);
+	       current_function_name (), cfun->cfg->n_basic_blocks, bytes_used);
       fprintf (dump_file, "%d substs, %d insns created\n",
 	       gcse_subst_count, gcse_create_count);
     }
@@ -4067,24 +4067,25 @@  is_too_expensive (const char *pass)
      which have a couple switch statements.  Rather than simply
      threshold the number of blocks, uses something with a more
      graceful degradation.  */
-  if (n_edges > 20000 + n_basic_blocks * 4)
+  if (n_edges > 20000 + cfun->cfg->n_basic_blocks * 4)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d edges/basic block",
-	       pass, n_basic_blocks, n_edges / n_basic_blocks);
+	       pass, cfun->cfg->n_basic_blocks,
+	       n_edges / cfun->cfg->n_basic_blocks);
 
       return true;
     }
 
   /* If allocating memory for the dataflow bitmaps would take up too much
      storage it's better just to disable the optimization.  */
-  if ((n_basic_blocks
+  if ((cfun->cfg->n_basic_blocks
        * SBITMAP_SET_SIZE (max_reg_num ())
        * sizeof (SBITMAP_ELT_TYPE)) > MAX_GCSE_MEMORY)
     {
       warning (OPT_Wdisabled_optimization,
 	       "%s: %d basic blocks and %d registers",
-	       pass, n_basic_blocks, max_reg_num ());
+	       pass, cfun->cfg->n_basic_blocks, max_reg_num ());
 
       return true;
     }
diff --git a/gcc/graphite.c b/gcc/graphite.c
index f953663..06fe226 100644
--- a/gcc/graphite.c
+++ b/gcc/graphite.c
@@ -201,7 +201,8 @@  graphite_initialize (isl_ctx *ctx)
   if (number_of_loops (cfun) <= 1
       /* FIXME: This limit on the number of basic blocks of a function
 	 should be removed when the SCOP detection is faster.  */
-      || n_basic_blocks > PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION))
+      || (cfun->cfg->n_basic_blocks
+          > PARAM_VALUE (PARAM_GRAPHITE_MAX_BBS_PER_FUNCTION)))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
 	print_global_statistics (dump_file);
diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 61eaaef..8f8ab5a 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -6668,7 +6668,7 @@  haifa_sched_init (void)
      whole function.  */
   {
     bb_vec_t bbs;
-    bbs.create (n_basic_blocks);
+    bbs.create (cfun->cfg->n_basic_blocks);
     basic_block bb;
 
     sched_init_bbs ();
diff --git a/gcc/ipa-inline-analysis.c b/gcc/ipa-inline-analysis.c
index a25f517..1699c3f 100644
--- a/gcc/ipa-inline-analysis.c
+++ b/gcc/ipa-inline-analysis.c
@@ -2311,7 +2311,7 @@  estimate_function_body_sizes (struct cgraph_node *node, bool early)
   if (parms_info)
     compute_bb_predicates (node, parms_info, info);
   gcc_assert (cfun == my_function);
-  order = XNEWVEC (int, n_basic_blocks);
+  order = XNEWVEC (int, cfun->cfg->n_basic_blocks);
   nblocks = pre_and_rev_post_order_compute (NULL, order, false);
   for (n = 0; n < nblocks; n++)
     {
diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index 0e2fd0c..1e798d4 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -3291,7 +3291,7 @@  ira_build (void)
 	}
       fprintf (ira_dump_file, "  regions=%d, blocks=%d, points=%d\n",
 	       current_loops == NULL ? 1 : number_of_loops (cfun),
-	       n_basic_blocks, ira_max_point);
+	       cfun->cfg->n_basic_blocks, ira_max_point);
       fprintf (ira_dump_file,
 	       "    allocnos=%d (big %d), copies=%d, conflicts=%d, ranges=%d\n",
 	       ira_allocnos_num, nr_big, ira_copies_num, n, nr);
diff --git a/gcc/lcm.c b/gcc/lcm.c
index c13d2a6..bebceb9 100644
--- a/gcc/lcm.c
+++ b/gcc/lcm.c
@@ -101,7 +101,7 @@  compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  qin = qout = worklist = XNEWVEC (basic_block, n_basic_blocks);
+  qin = qout = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
 
   /* We want a maximal solution, so make an optimistic initialization of
      ANTIN.  */
@@ -116,8 +116,8 @@  compute_antinout_edge (sbitmap *antloc, sbitmap *transp, sbitmap *antin,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS];
+  qlen = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are predecessors of the exit block so that we
      can easily identify them below.  */
@@ -254,7 +254,7 @@  compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist
-    = XNEWVEC (basic_block, n_basic_blocks);
+    = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
 
   /* Initialize a mapping from each edge to its index.  */
   for (i = 0; i < num_edges; i++)
@@ -290,8 +290,8 @@  compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
   /* Note that we do not use the last allocated element for our queue,
      as EXIT_BLOCK is never inserted into it. */
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS];
+  qlen = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
 
   /* Iterate until the worklist is empty.  */
   while (qlen)
@@ -481,7 +481,7 @@  compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
   qin = qout = worklist =
-    XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+    XNEWVEC (basic_block, cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
 
   /* We want a maximal solution.  */
   bitmap_vector_ones (avout, last_basic_block);
@@ -495,8 +495,8 @@  compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
     }
 
   qin = worklist;
-  qend = &worklist[n_basic_blocks - NUM_FIXED_BLOCKS];
-  qlen = n_basic_blocks - NUM_FIXED_BLOCKS;
+  qend = &worklist[cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS];
+  qlen = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
 
   /* Mark blocks which are successors of the entry block so that we
      can easily identify them below.  */
@@ -610,7 +610,7 @@  compute_nearerout (struct edge_list *edge_list, sbitmap *farthest,
   /* Allocate a worklist array/queue.  Entries are only added to the
      list if they were not already on the list.  So the size is
      bounded by the number of basic blocks.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks + 1);
 
   /* Initialize NEARER for each edge and build a mapping from an edge to
      its index.  */
diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
index 6eaeb2d..ca59d30 100644
--- a/gcc/lra-lives.c
+++ b/gcc/lra-lives.c
@@ -992,7 +992,7 @@  lra_create_live_ranges (bool all_p)
   lra_point_freq = point_freq_vec.address ();
   int *post_order_rev_cfg = XNEWVEC (int, last_basic_block);
   int n_blocks_inverted = inverted_post_order_compute (post_order_rev_cfg);
-  lra_assert (n_blocks_inverted == n_basic_blocks);
+  lra_assert (n_blocks_inverted == cfun->cfg->n_basic_blocks);
   for (i = n_blocks_inverted - 1; i >= 0; --i)
     {
       bb = BASIC_BLOCK (post_order_rev_cfg[i]);
diff --git a/gcc/lra.c b/gcc/lra.c
index 7c6bff1..1c2f076 100644
--- a/gcc/lra.c
+++ b/gcc/lra.c
@@ -2043,7 +2043,7 @@  has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/mcf.c b/gcc/mcf.c
index 7a716f5..45d67f8c 100644
--- a/gcc/mcf.c
+++ b/gcc/mcf.c
@@ -471,12 +471,12 @@  create_fixup_graph (fixup_graph_type *fixup_graph)
   int fnum_edges;
 
   /* Each basic_block will be split into 2 during vertex transformation.  */
-  int fnum_vertices_after_transform =  2 * n_basic_blocks;
-  int fnum_edges_after_transform = n_edges + n_basic_blocks;
+  int fnum_vertices_after_transform =  2 * cfun->cfg->n_basic_blocks;
+  int fnum_edges_after_transform = n_edges + cfun->cfg->n_basic_blocks;
 
   /* Count the new SOURCE and EXIT vertices to be added.  */
   int fmax_num_vertices =
-    fnum_vertices_after_transform + n_edges + n_basic_blocks + 2;
+    fnum_vertices_after_transform + n_edges + cfun->cfg->n_basic_blocks + 2;
 
   /* In create_fixup_graph: Each basic block and edge can be split into 3
      edges. Number of balance edges = n_basic_blocks. So after
@@ -486,10 +486,10 @@  create_fixup_graph (fixup_graph_type *fixup_graph)
      max_edges = 2 * (4 * n_basic_blocks + 3 * n_edges)
      = 8 * n_basic_blocks + 6 * n_edges
      < 8 * n_basic_blocks + 8 * n_edges.  */
-  int fmax_num_edges = 8 * (n_basic_blocks + n_edges);
+  int fmax_num_edges = 8 * (cfun->cfg->n_basic_blocks + n_edges);
 
   /* Initial num of vertices in the fixup graph.  */
-  fixup_graph->num_vertices = n_basic_blocks;
+  fixup_graph->num_vertices = cfun->cfg->n_basic_blocks;
 
   /* Fixup graph vertex list.  */
   fixup_graph->vertex_list =
@@ -508,7 +508,8 @@  create_fixup_graph (fixup_graph_type *fixup_graph)
   FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR, NULL, next_bb)
     total_vertex_weight += bb->count;
 
-  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight / n_basic_blocks);
+  sqrt_avg_vertex_weight = mcf_sqrt (total_vertex_weight
+				     / cfun->cfg->n_basic_blocks);
 
   k_pos = K_POS (sqrt_avg_vertex_weight);
   k_neg = K_NEG (sqrt_avg_vertex_weight);
diff --git a/gcc/profile.c b/gcc/profile.c
index b833398..32f07a2 100644
--- a/gcc/profile.c
+++ b/gcc/profile.c
@@ -1149,9 +1149,9 @@  branch_prob (void)
 	num_instrumented++;
     }
 
-  total_num_blocks += n_basic_blocks;
+  total_num_blocks += cfun->cfg->n_basic_blocks;
   if (dump_file)
-    fprintf (dump_file, "%d basic blocks\n", n_basic_blocks);
+    fprintf (dump_file, "%d basic blocks\n", cfun->cfg->n_basic_blocks);
 
   total_num_edges += num_edges;
   if (dump_file)
@@ -1180,7 +1180,7 @@  branch_prob (void)
 
       /* Basic block flags */
       offset = gcov_write_tag (GCOV_TAG_BLOCKS);
-      for (i = 0; i != (unsigned) (n_basic_blocks); i++)
+      for (i = 0; i != (unsigned) (cfun->cfg->n_basic_blocks); i++)
 	gcov_write_unsigned (0);
       gcov_write_length (offset);
 
diff --git a/gcc/reg-stack.c b/gcc/reg-stack.c
index 2dd9289..9dd933c 100644
--- a/gcc/reg-stack.c
+++ b/gcc/reg-stack.c
@@ -3078,7 +3078,7 @@  convert_regs_2 (basic_block block)
      is only processed after all its predecessors.  The number of predecessors
      of every block has already been computed.  */
 
-  stack = XNEWVEC (basic_block, n_basic_blocks);
+  stack = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   sp = stack;
 
   *sp++ = block;
diff --git a/gcc/regrename.c b/gcc/regrename.c
index 20e2ae9..caf18b4 100644
--- a/gcc/regrename.c
+++ b/gcc/regrename.c
@@ -672,7 +672,7 @@  regrename_analyze (bitmap bb_mask)
   n_bbs = pre_and_rev_post_order_compute (NULL, inverse_postorder, false);
 
   /* Gather some information about the blocks in this function.  */
-  rename_info = XCNEWVEC (struct bb_rename_info, n_basic_blocks);
+  rename_info = XCNEWVEC (struct bb_rename_info, cfun->cfg->n_basic_blocks);
   i = 0;
   FOR_EACH_BB (bb)
     {
diff --git a/gcc/reload1.c b/gcc/reload1.c
index b8c3bfa..e07212c 100644
--- a/gcc/reload1.c
+++ b/gcc/reload1.c
@@ -610,7 +610,7 @@  has_nonexceptional_receiver (void)
     return true;
 
   /* First determine which blocks can reach exit via normal paths.  */
-  tos = worklist = XNEWVEC (basic_block, n_basic_blocks + 1);
+  tos = worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks + 1);
 
   FOR_EACH_BB (bb)
     bb->flags &= ~BB_REACHABLE;
diff --git a/gcc/reorg.c b/gcc/reorg.c
index e601818..a4ee821 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -3632,7 +3632,7 @@  dbr_schedule (rtx first)
 
   /* If the current function has no insns other than the prologue and
      epilogue, then do not try to fill any delay slots.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return;
 
   /* Find the highest INSN_UID and allocate and initialize our map from
diff --git a/gcc/sched-deps.c b/gcc/sched-deps.c
index c7ef1d8..f68ff4f 100644
--- a/gcc/sched-deps.c
+++ b/gcc/sched-deps.c
@@ -3957,7 +3957,7 @@  sched_deps_init (bool global_p)
 {
   /* Average number of insns in the basic block.
      '+ 1' is used to make it nonzero.  */
-  int insns_in_block = sched_max_luid / n_basic_blocks + 1;
+  int insns_in_block = sched_max_luid / cfun->cfg->n_basic_blocks + 1;
 
   init_deps_data_vector ();
 
diff --git a/gcc/sched-ebb.c b/gcc/sched-ebb.c
index b70e071..e02dbe5 100644
--- a/gcc/sched-ebb.c
+++ b/gcc/sched-ebb.c
@@ -625,7 +625,7 @@  schedule_ebbs (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return;
 
   if (profile_info && flag_branch_probabilities)
diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 2c971e2..e607056 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -793,7 +793,7 @@  haifa_find_rgns (void)
       /* Second traversal:find reducible inner loops and topologically sort
 	 block of each region.  */
 
-      queue = XNEWVEC (int, n_basic_blocks);
+      queue = XNEWVEC (int, cfun->cfg->n_basic_blocks);
 
       extend_regions_p = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS) > 0;
       if (extend_regions_p)
@@ -1153,7 +1153,7 @@  void
 extend_rgns (int *degree, int *idxp, sbitmap header, int *loop_hdr)
 {
   int *order, i, rescan = 0, idx = *idxp, iter = 0, max_iter, *max_hdr;
-  int nblocks = n_basic_blocks - NUM_FIXED_BLOCKS;
+  int nblocks = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
 
   max_iter = PARAM_VALUE (PARAM_MAX_SCHED_EXTEND_REGIONS_ITERS);
 
@@ -3112,7 +3112,7 @@  sched_rgn_init (bool single_blocks_p)
 
   /* Compute regions for scheduling.  */
   if (single_blocks_p
-      || n_basic_blocks == NUM_FIXED_BLOCKS + 1
+      || cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS + 1
       || !flag_schedule_interblock
       || is_cfg_nonregular ())
     {
@@ -3136,7 +3136,7 @@  sched_rgn_init (bool single_blocks_p)
 	free_dominance_info (CDI_DOMINATORS);
     }
 
-  gcc_assert (0 < nr_regions && nr_regions <= n_basic_blocks);
+  gcc_assert (0 < nr_regions && nr_regions <= cfun->cfg->n_basic_blocks);
 
   RGN_BLOCKS (nr_regions) = (RGN_BLOCKS (nr_regions - 1) +
 			     RGN_NR_BLOCKS (nr_regions - 1));
@@ -3372,7 +3372,7 @@  schedule_insns (void)
 
   /* Taking care of this degenerate case makes the rest of
      this code simpler.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return;
 
   rgn_setup_common_sched_info ();
@@ -3418,8 +3418,8 @@  rgn_add_remove_insn (rtx insn, int remove_p)
 void
 extend_regions (void)
 {
-  rgn_table = XRESIZEVEC (region, rgn_table, n_basic_blocks);
-  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, n_basic_blocks);
+  rgn_table = XRESIZEVEC (region, rgn_table, cfun->cfg->n_basic_blocks);
+  rgn_bb_table = XRESIZEVEC (int, rgn_bb_table, cfun->cfg->n_basic_blocks);
   block_to_bb = XRESIZEVEC (int, block_to_bb, last_basic_block);
   containing_rgn = XRESIZEVEC (int, containing_rgn, last_basic_block);
 }
diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c
index 47e7695..ccf80f3 100644
--- a/gcc/sel-sched-ir.c
+++ b/gcc/sel-sched-ir.c
@@ -3649,7 +3649,7 @@  sel_recompute_toporder (void)
   int i, n, rgn;
   int *postorder, n_blocks;
 
-  postorder = XALLOCAVEC (int, n_basic_blocks);
+  postorder = XALLOCAVEC (int, cfun->cfg->n_basic_blocks);
   n_blocks = post_order_compute (postorder, false, false);
 
   rgn = CONTAINING_RGN (BB_TO_BLOCK (0));
@@ -4912,10 +4912,10 @@  recompute_rev_top_order (void)
                                         rev_top_order_index_len);
     }
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, cfun->cfg->n_basic_blocks);
 
   n_blocks = post_order_compute (postorder, true, false);
-  gcc_assert (n_basic_blocks == n_blocks);
+  gcc_assert (cfun->cfg->n_basic_blocks == n_blocks);
 
   /* Build reverse function: for each basic block with BB->INDEX == K
      rev_top_order_index[K] is it's reverse topological sort number.  */
diff --git a/gcc/sel-sched.c b/gcc/sel-sched.c
index fb9386f..a4bbf7e 100644
--- a/gcc/sel-sched.c
+++ b/gcc/sel-sched.c
@@ -7751,7 +7751,7 @@  run_selective_scheduling (void)
 {
   int rgn;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return;
 
   sel_global_init ();
diff --git a/gcc/store-motion.c b/gcc/store-motion.c
index df75670..b4bec9c 100644
--- a/gcc/store-motion.c
+++ b/gcc/store-motion.c
@@ -848,7 +848,7 @@  remove_reachable_equiv_notes (basic_block bb, struct st_expr *smexpr)
   rtx last, insn, note;
   rtx mem = smexpr->pattern;
 
-  stack = XNEWVEC (edge_iterator, n_basic_blocks);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks);
   sp = 0;
   ei = ei_start (bb->succs);
 
@@ -1208,7 +1208,7 @@  one_store_motion_pass (void)
   if (dump_file)
     {
       fprintf (dump_file, "STORE_MOTION of %s, %d basic blocks, ",
-	       current_function_name (), n_basic_blocks);
+	       current_function_name (), cfun->cfg->n_basic_blocks);
       fprintf (dump_file, "%d insns deleted, %d insns created\n",
 	       n_stores_deleted, n_stores_created);
     }
diff --git a/gcc/tracer.c b/gcc/tracer.c
index 975cadb..940dd5a 100644
--- a/gcc/tracer.c
+++ b/gcc/tracer.c
@@ -224,7 +224,7 @@  static bool
 tail_duplicate (void)
 {
   fibnode_t *blocks = XCNEWVEC (fibnode_t, last_basic_block);
-  basic_block *trace = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *trace = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   int *counts = XNEWVEC (int, last_basic_block);
   int ninsns = 0, nduplicated = 0;
   gcov_type weighted_insns = 0, traced_insns = 0;
@@ -368,7 +368,7 @@  tracer (void)
 {
   bool changed;
 
-  if (n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
+  if (cfun->cfg->n_basic_blocks <= NUM_FIXED_BLOCKS + 1)
     return 0;
 
   mark_dfs_back_edges ();
diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
index 4b91a35..50f1d99 100644
--- a/gcc/tree-cfg.c
+++ b/gcc/tree-cfg.c
@@ -213,12 +213,12 @@  build_gimple_cfg (gimple_seq seq)
     factor_computed_gotos ();
 
   /* Make sure there is always at least one block, even if it's empty.  */
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     create_empty_bb (ENTRY_BLOCK_PTR);
 
   /* Adjust the size of the array.  */
-  if (basic_block_info->length () < (size_t) n_basic_blocks)
-    vec_safe_grow_cleared (basic_block_info, n_basic_blocks);
+  if (basic_block_info->length () < (size_t) cfun->cfg->n_basic_blocks)
+    vec_safe_grow_cleared (basic_block_info, cfun->cfg->n_basic_blocks);
 
   /* To speed up statement iterator walks, we first purge dead labels.  */
   cleanup_dead_labels ();
@@ -460,7 +460,7 @@  create_bb (void *h, void *e, basic_block after)
   /* Add the newly created block to the array.  */
   SET_BASIC_BLOCK (last_basic_block, bb);
 
-  n_basic_blocks++;
+  cfun->cfg->n_basic_blocks++;
   last_basic_block++;
 
   return bb;
@@ -2079,7 +2079,7 @@  gimple_dump_cfg (FILE *file, int flags)
     {
       dump_function_header (file, current_function_decl, flags);
       fprintf (file, ";; \n%d basic blocks, %d edges, last basic block %d.\n\n",
-	       n_basic_blocks, n_edges, last_basic_block);
+	       cfun->cfg->n_basic_blocks, n_edges, last_basic_block);
 
       brief_dump_cfg (file, flags | TDF_COMMENT);
       fprintf (file, "\n");
@@ -2114,9 +2114,9 @@  dump_cfg_stats (FILE *file)
   fprintf (file, fmt_str, "", "  instances  ", "used ");
   fprintf (file, "---------------------------------------------------------\n");
 
-  size = n_basic_blocks * sizeof (struct basic_block_def);
+  size = cfun->cfg->n_basic_blocks * sizeof (struct basic_block_def);
   total += size;
-  fprintf (file, fmt_str_1, "Basic blocks", n_basic_blocks,
+  fprintf (file, fmt_str_1, "Basic blocks", cfun->cfg->n_basic_blocks,
 	   SCALE (size), LABEL (size));
 
   num_edges = 0;
@@ -6393,11 +6393,11 @@  move_block_to_fn (struct function *dest_cfun, basic_block bb,
 
   /* Remove BB from the original basic block array.  */
   (*cfun->cfg->x_basic_block_info)[bb->index] = NULL;
-  cfun->cfg->x_n_basic_blocks--;
+  cfun->cfg->n_basic_blocks--;
 
   /* Grow DEST_CFUN's basic block array if needed.  */
   cfg = dest_cfun->cfg;
-  cfg->x_n_basic_blocks++;
+  cfg->n_basic_blocks++;
   if (bb->index >= cfg->x_last_basic_block)
     cfg->x_last_basic_block = bb->index + 1;
 
@@ -7352,7 +7352,7 @@  gimple_flow_call_edges_add (sbitmap blocks)
   int last_bb = last_basic_block;
   bool check_last_block = false;
 
-  if (n_basic_blocks == NUM_FIXED_BLOCKS)
+  if (cfun->cfg->n_basic_blocks == NUM_FIXED_BLOCKS)
     return 0;
 
   if (! blocks)
diff --git a/gcc/tree-cfgcleanup.c b/gcc/tree-cfgcleanup.c
index 9b314f7..96395e1 100644
--- a/gcc/tree-cfgcleanup.c
+++ b/gcc/tree-cfgcleanup.c
@@ -895,7 +895,7 @@  remove_forwarder_block_with_phi (basic_block bb)
 static unsigned int
 merge_phi_nodes (void)
 {
-  basic_block *worklist = XNEWVEC (basic_block, n_basic_blocks);
+  basic_block *worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
   basic_block *current = worklist;
   basic_block bb;
 
diff --git a/gcc/tree-inline.c b/gcc/tree-inline.c
index bee7766..f49a7dd 100644
--- a/gcc/tree-inline.c
+++ b/gcc/tree-inline.c
@@ -4233,7 +4233,7 @@  gimple_expand_calls_inline (basic_block bb, copy_body_data *id)
 static void
 fold_marked_statements (int first, struct pointer_set_t *statements)
 {
-  for (; first < n_basic_blocks; first++)
+  for (; first < cfun->cfg->n_basic_blocks; first++)
     if (BASIC_BLOCK (first))
       {
         gimple_stmt_iterator gsi;
@@ -4336,7 +4336,7 @@  optimize_inline_calls (tree fn)
 {
   copy_body_data id;
   basic_block bb;
-  int last = n_basic_blocks;
+  int last = cfun->cfg->n_basic_blocks;
   struct gimplify_ctx gctx;
   bool inlined_p = false;
 
diff --git a/gcc/tree-ssa-ifcombine.c b/gcc/tree-ssa-ifcombine.c
index 9598eb8..834e986 100644
--- a/gcc/tree-ssa-ifcombine.c
+++ b/gcc/tree-ssa-ifcombine.c
@@ -627,7 +627,7 @@  tree_ssa_ifcombine (void)
   bbs = blocks_in_phiopt_order ();
   calculate_dominance_info (CDI_DOMINATORS);
 
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; ++i)
+  for (i = 0; i < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS; ++i)
     {
       basic_block bb = bbs[i];
       gimple stmt = last_stmt (bb);
diff --git a/gcc/tree-ssa-loop-ch.c b/gcc/tree-ssa-loop-ch.c
index ff17c7e..1998af6 100644
--- a/gcc/tree-ssa-loop-ch.c
+++ b/gcc/tree-ssa-loop-ch.c
@@ -142,9 +142,9 @@  copy_loop_headers (void)
       return 0;
     }
 
-  bbs = XNEWVEC (basic_block, n_basic_blocks);
-  copied_bbs = XNEWVEC (basic_block, n_basic_blocks);
-  bbs_size = n_basic_blocks;
+  bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
+  copied_bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
+  bbs_size = cfun->cfg->n_basic_blocks;
 
   FOR_EACH_LOOP (li, loop, 0)
     {
diff --git a/gcc/tree-ssa-loop-im.c b/gcc/tree-ssa-loop-im.c
index e5e502b..2f4839e 100644
--- a/gcc/tree-ssa-loop-im.c
+++ b/gcc/tree-ssa-loop-im.c
@@ -1599,7 +1599,7 @@  analyze_memory_references (void)
   /* Collect all basic blocks in loops and sort them according to
      their loops' postorder.  */
   i = 0;
-  bbs = XNEWVEC (basic_block, n_basic_blocks - NUM_FIXED_BLOCKS);
+  bbs = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
   FOR_EACH_BB (bb)
     if (bb->loop_father != current_loops->tree_root)
       bbs[i++] = bb;
diff --git a/gcc/tree-ssa-loop-manip.c b/gcc/tree-ssa-loop-manip.c
index edc5b7b..78741ff 100644
--- a/gcc/tree-ssa-loop-manip.c
+++ b/gcc/tree-ssa-loop-manip.c
@@ -180,7 +180,7 @@  compute_live_loop_exits (bitmap live_exits, bitmap use_blocks,
   /* Normally the work list size is bounded by the number of basic
      blocks in the largest loop.  We don't know this number, but we
      can be fairly sure that it will be relatively small.  */
-  worklist.create (MAX (8, n_basic_blocks / 128));
+  worklist.create (MAX (8, cfun->cfg->n_basic_blocks / 128));
 
   EXECUTE_IF_SET_IN_BITMAP (use_blocks, 0, i, bi)
     {
diff --git a/gcc/tree-ssa-math-opts.c b/gcc/tree-ssa-math-opts.c
index a94172d..841709d 100644
--- a/gcc/tree-ssa-math-opts.c
+++ b/gcc/tree-ssa-math-opts.c
@@ -503,7 +503,7 @@  execute_cse_reciprocals (void)
 
   occ_pool = create_alloc_pool ("dominators for recip",
 				sizeof (struct occurrence),
-				n_basic_blocks / 3 + 1);
+				cfun->cfg->n_basic_blocks / 3 + 1);
 
   memset (&reciprocal_stats, 0, sizeof (reciprocal_stats));
   calculate_dominance_info (CDI_DOMINATORS);
diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c
index 5e99678..5a240f5 100644
--- a/gcc/tree-ssa-phiopt.c
+++ b/gcc/tree-ssa-phiopt.c
@@ -309,7 +309,7 @@  tree_ssa_phiopt_worker (bool do_store_elim, bool do_hoist_loads)
      outer ones, and also that we do not try to visit a removed
      block.  */
   bb_order = blocks_in_phiopt_order ();
-  n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  n = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
 
   for (i = 0; i < n; i++)
     {
@@ -484,8 +484,8 @@  basic_block *
 blocks_in_phiopt_order (void)
 {
   basic_block x, y;
-  basic_block *order = XNEWVEC (basic_block, n_basic_blocks);
-  unsigned n = n_basic_blocks - NUM_FIXED_BLOCKS;
+  basic_block *order = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
+  unsigned n = cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS;
   unsigned np, i;
   sbitmap visited = sbitmap_alloc (last_basic_block);
 
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 345ebcc..fff1674 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -3716,7 +3716,7 @@  compute_avail (void)
     }
 
   /* Allocate the worklist.  */
-  worklist = XNEWVEC (basic_block, n_basic_blocks);
+  worklist = XNEWVEC (basic_block, cfun->cfg->n_basic_blocks);
 
   /* Seed the algorithm by putting the dominator children of the entry
      block on the worklist.  */
@@ -4648,7 +4648,7 @@  init_pre (void)
   connect_infinite_loops_to_exit ();
   memset (&pre_stats, 0, sizeof (pre_stats));
 
-  postorder = XNEWVEC (int, n_basic_blocks);
+  postorder = XNEWVEC (int, cfun->cfg->n_basic_blocks);
   postorder_num = inverted_post_order_compute (postorder);
 
   alloc_aux_for_blocks (sizeof (struct bb_bitmap_sets));
@@ -4724,7 +4724,7 @@  do_pre (void)
      fixed, don't run it when we have an incredibly large number of
      bb's.  If we aren't going to run insert, there is no point in
      computing ANTIC, either, even though it's plenty fast.  */
-  if (n_basic_blocks < 4000)
+  if (cfun->cfg->n_basic_blocks < 4000)
     {
       compute_antic ();
       insert ();
diff --git a/gcc/tree-ssa-reassoc.c b/gcc/tree-ssa-reassoc.c
index 784477b..4fdd3f5 100644
--- a/gcc/tree-ssa-reassoc.c
+++ b/gcc/tree-ssa-reassoc.c
@@ -4365,7 +4365,7 @@  init_reassoc (void)
 {
   int i;
   long rank = 2;
-  int *bbs = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  int *bbs = XNEWVEC (int, cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
 
   /* Find the loops, so that we can prevent moving calculations in
      them.  */
@@ -4395,7 +4395,7 @@  init_reassoc (void)
     }
 
   /* Set up rank for each BB  */
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS; i++)
     bb_rank[bbs[i]] = ++rank  << 16;
 
   free (bbs);
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 6886efb..f88303d 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3969,13 +3969,14 @@  init_scc_vn (void)
   shared_lookup_phiargs.create (0);
   shared_lookup_references.create (0);
   rpo_numbers = XNEWVEC (int, last_basic_block);
-  rpo_numbers_temp = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rpo_numbers_temp = XNEWVEC (int,
+			      cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
   pre_and_rev_post_order_compute (NULL, rpo_numbers_temp, false);
 
   /* rpo_numbers_temp is an array of blocks in RPO order: rpo[i] = bb
      means that the i'th block in RPO order is bb.  We want to map
      bb's to their RPO numbers, so we need to rearrange this array.  */
-  for (j = 0; j < n_basic_blocks - NUM_FIXED_BLOCKS; j++)
+  for (j = 0; j < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS; j++)
     rpo_numbers[rpo_numbers_temp[j]] = j;
 
   XDELETE (rpo_numbers_temp);
diff --git a/gcc/tree-ssa-tail-merge.c b/gcc/tree-ssa-tail-merge.c
index 317fe4c..e6b2a5f 100644
--- a/gcc/tree-ssa-tail-merge.c
+++ b/gcc/tree-ssa-tail-merge.c
@@ -756,11 +756,11 @@  static void
 init_worklist (void)
 {
   alloc_aux_for_blocks (sizeof (struct aux_bb_info));
-  same_succ_htab.create (n_basic_blocks);
+  same_succ_htab.create (cfun->cfg->n_basic_blocks);
   same_succ_edge_flags = XCNEWVEC (int, last_basic_block);
   deleted_bbs = BITMAP_ALLOC (NULL);
   deleted_bb_preds = BITMAP_ALLOC (NULL);
-  worklist.create (n_basic_blocks);
+  worklist.create (cfun->cfg->n_basic_blocks);
   find_same_succ ();
 
   if (dump_file && (dump_flags & TDF_DETAILS))
@@ -988,7 +988,7 @@  static vec<bb_cluster> all_clusters;
 static void
 alloc_cluster_vectors (void)
 {
-  all_clusters.create (n_basic_blocks);
+  all_clusters.create (cfun->cfg->n_basic_blocks);
 }
 
 /* Reset all cluster vectors.  */
diff --git a/gcc/tree-ssa-uncprop.c b/gcc/tree-ssa-uncprop.c
index 1fbc524..6937eb7 100644
--- a/gcc/tree-ssa-uncprop.c
+++ b/gcc/tree-ssa-uncprop.c
@@ -188,7 +188,7 @@  associate_equivalences_with_edges (void)
 
 	      /* Now walk over the blocks to determine which ones were
 		 marked as being reached by a useful case label.  */
-	      for (i = 0; i < n_basic_blocks; i++)
+	      for (i = 0; i < cfun->cfg->n_basic_blocks; i++)
 		{
 		  tree node = info[i];
 
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 8108413..fee0347 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -836,7 +836,7 @@  vt_stack_adjustments (void)
   VTI (ENTRY_BLOCK_PTR)->out.stack_adjust = INCOMING_FRAME_SP_OFFSET;
 
   /* Allocate stack for back-tracking up CFG.  */
-  stack = XNEWVEC (edge_iterator, n_basic_blocks + 1);
+  stack = XNEWVEC (edge_iterator, cfun->cfg->n_basic_blocks + 1);
   sp = 0;
 
   /* Push the first edge on to the stack.  */
@@ -6883,10 +6883,10 @@  vt_find_locations (void)
   timevar_push (TV_VAR_TRACKING_DATAFLOW);
   /* Compute reverse completion order of depth first search of the CFG
      so that the data-flow runs faster.  */
-  rc_order = XNEWVEC (int, n_basic_blocks - NUM_FIXED_BLOCKS);
+  rc_order = XNEWVEC (int, cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS);
   bb_order = XNEWVEC (int, last_basic_block);
   pre_and_rev_post_order_compute (NULL, rc_order, false);
-  for (i = 0; i < n_basic_blocks - NUM_FIXED_BLOCKS; i++)
+  for (i = 0; i < cfun->cfg->n_basic_blocks - NUM_FIXED_BLOCKS; i++)
     bb_order[rc_order[i]] = i;
   free (rc_order);
 
@@ -10143,7 +10143,8 @@  variable_tracking_main_1 (void)
       return 0;
     }
 
-  if (n_basic_blocks > 500 && n_edges / n_basic_blocks >= 20)
+  if (cfun->cfg->n_basic_blocks > 500
+      && n_edges / cfun->cfg->n_basic_blocks >= 20)
     {
       vt_debug_insns_local (true);
       return 0;