From patchwork Tue Oct 23 06:07:43 2012
X-Patchwork-Submitter: Sharad Singhai
X-Patchwork-Id: 193361
From: Sharad Singhai
Date: Mon, 22 Oct 2012 23:07:43 -0700
Subject: [PATCH] vectorization passes clean up for dump info
To: "gcc-patches@gcc.gnu.org"
Cc: David Li, Martin Jambor, Richard Biener
Hi,

The attached patch is a follow-up to
http://gcc.gnu.org/ml/gcc-patches/2012-10/msg01909.html. It cleans up
the dump_printf guards in the vectorization passes from the following
form

  if (dump_kind_p (flags))
    dump_printf (flags, ...);

to this hopefully cleaner form

  if (dump_enabled_p ())
    dump_printf (flags, ...);

This way the flags don't need to be tested twice: dump_enabled_p () is
a very simple predicate that returns true if any of the dump files is
enabled.

I have bootstrapped and tested this on x86_64 and observed no new
failures. Okay for trunk?

Thanks,
Sharad

2012-10-22  Sharad Singhai

	* dumpfile.c (dump_enabled_p): Remove inline.
	* dumpfile.h: Likewise.
	* tree-vect-loop-manip.c: Replace all uses of dump_kind_p with
	dump_enabled_p.
	* tree-vectorizer.c: Likewise.
	* tree-vect-loop.c: Likewise.
	* tree-vect-data-refs.c: Likewise.
	* tree-vect-patterns.c: Likewise.
	* tree-vect-stmts.c: Likewise.
	* tree-vect-slp.c: Likewise.

Index: tree-vect-slp.c
===================================================================
--- tree-vect-slp.c	(revision 192695)
+++ tree-vect-slp.c	(working copy)
   FOR_EACH_VEC_ELT (slp_instance, slp_instances, i, instance)
@@ -1798,7 +1798,7 @@ vect_make_slp_decision (loop_vec_info loop_vinfo)
   LOOP_VINFO_SLP_UNROLLING_FACTOR (loop_vinfo) = unrolling_factor;
-  if (decided_to_slp && dump_kind_p (MSG_OPTIMIZED_LOCATIONS))
+  if (decided_to_slp && dump_enabled_p ())
    dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "Decided to SLP %d instances. Unrolling factor %d", decided_to_slp, unrolling_factor);
@@ -1863,7 +1863,7 @@ vect_detect_hybrid_slp (loop_vec_info loop_vinfo)
   VEC (slp_instance, heap) *slp_instances = LOOP_VINFO_SLP_INSTANCES (loop_vinfo);
   slp_instance instance;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_detect_hybrid_slp ===");
   FOR_EACH_VEC_ELT (slp_instance, slp_instances, i, instance)
@@ -2060,7 +2060,7 @@ vect_bb_vectorization_profitable_p (bb_vec_info bb
   vec_outside_cost = vec_prologue_cost + vec_epilogue_cost;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      dump_printf_loc (MSG_NOTE, vect_location, "Cost model analysis: \n");
      dump_printf (MSG_NOTE, "  Vector inside of basic block cost: %d\n",
@@ -2097,7 +2097,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_analyze_data_refs (NULL, bb_vinfo, &min_vf))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unhandled data-ref in basic " "block.\n");
@@ -2109,7 +2109,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   ddrs = BB_VINFO_DDRS (bb_vinfo);
   if (!VEC_length (ddr_p, ddrs))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: not enough data-refs in " "basic block.\n");
@@ -2123,7 +2123,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_analyze_data_ref_dependences (NULL, bb_vinfo, &max_vf)
       || min_vf > max_vf)
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unhandled data dependence " "in basic block.\n");
@@ -2134,7 +2134,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_analyze_data_refs_alignment (NULL, bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: bad data alignment in basic " "block.\n");
@@ -2145,7 +2145,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_analyze_data_ref_accesses (NULL, bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unhandled data access in " "basic block.\n");
@@ -2158,7 +2158,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
      trees.  */
   if (!vect_analyze_slp (NULL, bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: failed to find SLP opportunities " "in basic block.\n");
@@ -2179,7 +2179,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_verify_datarefs_alignment (NULL, bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported alignment in basic " "block.\n");
@@ -2189,7 +2189,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (!vect_slp_analyze_operations (bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: bad operation in basic block.\n");
@@ -2201,7 +2201,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
   if (flag_vect_cost_model && !vect_bb_vectorization_profitable_p (bb_vinfo))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: vectorization is not " "profitable.\n");
@@ -2210,7 +2210,7 @@ vect_slp_analyze_bb_1 (basic_block bb)
      return NULL;
    }
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "Basic block will be vectorized using SLP\n");
@@ -2226,7 +2226,7 @@ vect_slp_analyze_bb (basic_block bb)
   gimple_stmt_iterator gsi;
   unsigned int vector_sizes;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "===vect_slp_analyze_bb===\n");
   for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -2240,7 +2240,7 @@ vect_slp_analyze_bb (basic_block bb)
   if (insns > PARAM_VALUE (PARAM_SLP_MAX_INSNS_IN_BB))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: too many instructions in " "basic block.\n");
@@ -2267,7 +2267,7 @@ vect_slp_analyze_bb (basic_block bb)
      /* Try the next biggest vector size.  */
      current_vector_size = 1 << floor_log2 (vector_sizes);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "***** Re-trying analysis with " "vector size %d\n", current_vector_size);
@@ -2292,7 +2292,7 @@ vect_update_slp_costs_according_to_vf (loop_vec_in
   stmt_info_for_cost *si;
   void *data = LOOP_VINFO_TARGET_COST_DATA (loop_vinfo);
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_update_slp_costs_according_to_vf ===");
@@ -2800,7 +2800,7 @@ vect_get_mask_element (gimple stmt, int first_mask
      the next vector as well.  */
   if (only_one_vec && *current_mask_element >= mask_nunits)
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "permutation requires at least two vectors ");
@@ -2818,7 +2818,7 @@ vect_get_mask_element (gimple stmt, int first_mask
      /* We either need the first vector too or have already moved to the
         next vector. In both cases, this permutation needs three
         vectors.  */
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "permutation requires at "
@@ -2884,7 +2884,7 @@ vect_transform_slp_perm_load (gimple stmt, VEC (tr
   if (!can_vec_perm_p (mode, false, NULL))
     {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no vect permute for ");
@@ -2964,7 +2964,7 @@ vect_transform_slp_perm_load (gimple stmt, VEC (tr
      if (!can_vec_perm_p (mode, false, mask))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -3068,7 +3068,7 @@ vect_schedule_slp_instance (slp_tree node, slp_ins
      SLP_TREE_NUMBER_OF_VEC_STMTS (node) = vec_stmts_size;
    }
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      dump_printf_loc (MSG_NOTE,vect_location, "------>vectorizing SLP node starting from: ");
@@ -3177,7 +3177,7 @@ vect_schedule_slp (loop_vec_info loop_vinfo, bb_ve
      /* Schedule the tree of INSTANCE.  */
      is_store = vect_schedule_slp_instance (SLP_INSTANCE_TREE (instance), instance, vf);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "vectorizing stmts using SLP.");
    }
@@ -3222,7 +3222,7 @@ vect_slp_transform_bb (basic_block bb)
   gcc_assert (bb_vinfo);
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "SLPing BB\n");
   for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si))
@@ -3230,7 +3230,7 @@ vect_slp_transform_bb (basic_block bb)
      gimple stmt = gsi_stmt (si);
      stmt_vec_info stmt_info;
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "------>SLPing statement: ");
@@ -3248,7 +3248,7 @@ vect_slp_transform_bb (basic_block bb)
        }
    }
-  if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS))
+  if (dump_enabled_p ())
    dump_printf (MSG_OPTIMIZED_LOCATIONS, "BASIC BLOCK VECTORIZED\n");
   destroy_bb_vec_info (bb_vinfo);

Index: dumpfile.c
===================================================================
--- dumpfile.c	(revision 192695)
+++ dumpfile.c	(working copy)
@@ -516,7 +516,7 @@ dump_phase_enabled_p (int phase)
 /* Return true if any of the dumps are enabled, false otherwise. */
-inline bool
+bool
 dump_enabled_p (void)
 {
   return (dump_file || alt_dump_file);

Index: dumpfile.h
===================================================================
--- dumpfile.h	(revision 192695)
+++ dumpfile.h	(working copy)
@@ -121,7 +121,7 @@ extern int dump_switch_p (const char *);
 extern int opt_info_switch_p (const char *);
 extern const char *dump_flag_name (int);
 extern bool dump_kind_p (int);
-extern inline bool dump_enabled_p (void);
+extern bool dump_enabled_p (void);
 extern void dump_printf (int, const char *, ...) ATTRIBUTE_PRINTF_2;
 extern void dump_printf_loc (int, source_location, const char *, ...)
   ATTRIBUTE_PRINTF_3;

Index: tree-vect-loop-manip.c
===================================================================
--- tree-vect-loop-manip.c	(revision 192695)
+++ tree-vect-loop-manip.c	(working copy)
@@ -792,7 +792,7 @@ slpeel_make_loop_iterate_ntimes (struct loop *loop
   free_stmt_vec_info (orig_cond);
   loop_loc = find_loop_location (loop);
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      if (LOCATION_LOCUS (loop_loc) != UNKNOWN_LOC)
        dump_printf (MSG_NOTE, "\nloop at %s:%d: ", LOC_FILE (loop_loc),
@@ -1683,7 +1683,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
   /* Analyze phi functions of the loop header.  */
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "vect_can_advance_ivs_p:");
   for (gsi = gsi_start_phis (bb); !gsi_end_p (gsi); gsi_next (&gsi))
     {
@@ -1691,7 +1691,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
      tree evolution_part;
      phi = gsi_stmt (gsi);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
@@ -1702,7 +1702,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
      if (virtual_operand_p (PHI_RESULT (phi)))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "virtual phi. skip.");
          continue;
@@ -1712,7 +1712,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
      if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (phi)) == vect_reduction_def)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduc phi. skip.");
          continue;
@@ -1725,13 +1725,13 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
      if (!access_fn)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "No Access function.");
          return false;
        }
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "Access function of PHI: ");
@@ -1742,7 +1742,7 @@ vect_can_advance_ivs_p (loop_vec_info loop_vinfo)
      if (evolution_part == NULL_TREE)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf (MSG_MISSED_OPTIMIZATION, "No evolution.");
          return false;
        }
@@ -1827,7 +1827,7 @@ vect_update_ivs_after_vectorizer (loop_vec_info lo
      phi = gsi_stmt (gsi);
      phi1 = gsi_stmt (gsi1);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "vect_update_ivs_after_vectorizer: phi: ");
@@ -1837,7 +1837,7 @@ vect_update_ivs_after_vectorizer (loop_vec_info lo
      /* Skip virtual phi's.  */
      if (virtual_operand_p (PHI_RESULT (phi)))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "virtual phi. skip.");
          continue;
@@ -1847,7 +1847,7 @@ vect_update_ivs_after_vectorizer (loop_vec_info lo
      stmt_info = vinfo_for_stmt (phi);
      if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_reduction_def)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduc phi. skip.");
skip."); continue; @@ -1910,7 +1910,7 @@ vect_do_peeling_for_loop_bound (loop_vec_info loop tree cond_expr = NULL_TREE; gimple_seq cond_expr_stmt_list = NULL; - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "=== vect_do_peeling_for_loop_bound ==="); @@ -2022,7 +2022,7 @@ vect_gen_niters_for_prolog_loop (loop_vec_info loo { int npeel = LOOP_PEELING_FOR_ALIGNMENT (loop_vinfo); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "known peeling = %d.", npeel); @@ -2076,7 +2076,7 @@ vect_gen_niters_for_prolog_loop (loop_vec_info loo if (TREE_CODE (loop_niters) != INTEGER_CST) iters = fold_build2 (MIN_EXPR, niters_type, iters, loop_niters); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "niters for prolog loop: "); @@ -2134,7 +2134,7 @@ vect_update_inits_of_drs (loop_vec_info loop_vinfo VEC (data_reference_p, heap) *datarefs = LOOP_VINFO_DATAREFS (loop_vinfo); struct data_reference *dr; - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "=== vect_update_inits_of_dr ==="); @@ -2163,7 +2163,7 @@ vect_do_peeling_for_alignment (loop_vec_info loop_ int max_iter; int bound = 0; - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "=== vect_do_peeling_for_alignment ==="); @@ -2475,7 +2475,7 @@ vect_create_cond_for_alias_checks (loop_vec_info l segment_length_a = vect_vfa_segment_size (dr_a, length_factor); segment_length_b = vect_vfa_segment_size (dr_b, length_factor); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "create runtime check for data references "); @@ -2506,7 +2506,7 @@ vect_create_cond_for_alias_checks (loop_vec_info l *cond_expr = part_cond_expr; } - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "created %u versioning for alias checks.\n", VEC_length (ddr_p, may_alias_ddrs)); Index: tree-vectorizer.c =================================================================== --- tree-vectorizer.c (revision 192695) +++ tree-vectorizer.c (working copy) @@ -107,7 +107,7 @@ vectorize_loops (void) loop_vec_info loop_vinfo; vect_location = find_loop_location (loop); if (LOCATION_LOCUS (vect_location) != UNKNOWN_LOC - && dump_kind_p (MSG_ALL)) + && dump_enabled_p ()) dump_printf (MSG_ALL, "\nAnalyzing loop at %s:%d\n", LOC_FILE (vect_location), LOC_LINE (vect_location)); @@ -118,7 +118,7 @@ vectorize_loops (void) continue; if (LOCATION_LOCUS (vect_location) != UNKNOWN_LOC - && dump_kind_p (MSG_ALL)) + && dump_enabled_p ()) dump_printf (MSG_ALL, "\n\nVectorizing loop at %s:%d\n", LOC_FILE (vect_location), LOC_LINE (vect_location)); vect_transform_loop (loop_vinfo); @@ -128,8 +128,8 @@ vectorize_loops (void) vect_location = UNKNOWN_LOC; statistics_counter_event (cfun, "Vectorized loops", num_vectorized_loops); - if (dump_kind_p (MSG_ALL) - || (num_vectorized_loops > 0 && dump_kind_p (MSG_ALL))) + if (dump_enabled_p () + || (num_vectorized_loops > 0 && dump_enabled_p ())) dump_printf_loc (MSG_ALL, vect_location, "vectorized %u loops in function.\n", num_vectorized_loops); @@ -170,7 +170,7 @@ execute_vect_slp (void) if (vect_slp_analyze_bb (bb)) { vect_slp_transform_bb 
-         if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "basic block vectorized using SLP\n");
        }

Index: tree-vect-loop.c
===================================================================
--- tree-vect-loop.c	(revision 192695)
+++ tree-vect-loop.c	(working copy)
@@ -187,7 +187,7 @@ vect_determine_vectorization_factor (loop_vec_info
   gimple_stmt_iterator pattern_def_si = gsi_none ();
   bool analyze_pattern_stmt = false;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_determine_vectorization_factor ===");
@@ -199,7 +199,7 @@ vect_determine_vectorization_factor (loop_vec_info
        {
          phi = gsi_stmt (si);
          stmt_info = vinfo_for_stmt (phi);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "==> examining phi: ");
              dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
@@ -212,7 +212,7 @@ vect_determine_vectorization_factor (loop_vec_info
          gcc_assert (!STMT_VINFO_VECTYPE (stmt_info));
          scalar_type = TREE_TYPE (PHI_RESULT (phi));
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "get vectype for scalar type: ");
@@ -222,7 +222,7 @@ vect_determine_vectorization_factor (loop_vec_info
          vectype = get_vectype_for_scalar_type (scalar_type);
          if (!vectype)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                {
                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported "
@@ -234,14 +234,14 @@ vect_determine_vectorization_factor (loop_vec_info
            }
          STMT_VINFO_VECTYPE (stmt_info) = vectype;
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "vectype: ");
              dump_generic_expr (MSG_NOTE, TDF_SLIM, vectype);
            }
          nunits = TYPE_VECTOR_SUBPARTS (vectype);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_NOTE, vect_location, "nunits = %d", nunits);
          if (!vectorization_factor
@@ -261,7 +261,7 @@ vect_determine_vectorization_factor (loop_vec_info
          stmt_info = vinfo_for_stmt (stmt);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "==> examining statement: ");
@@ -281,7 +281,7 @@ vect_determine_vectorization_factor (loop_vec_info
            {
              stmt = pattern_stmt;
              stmt_info = vinfo_for_stmt (pattern_stmt);
-             if (dump_kind_p (MSG_NOTE))
+             if (dump_enabled_p ())
                {
                  dump_printf_loc (MSG_NOTE, vect_location, "==> examining pattern statement: ");
@@ -290,7 +290,7 @@ vect_determine_vectorization_factor (loop_vec_info
            }
          else
            {
-             if (dump_kind_p (MSG_NOTE))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_NOTE, vect_location, "skip.");
              gsi_next (&si);
              continue;
@@ -330,7 +330,7 @@ vect_determine_vectorization_factor (loop_vec_info
          if (!gsi_end_p (pattern_def_si))
            {
-             if (dump_kind_p (MSG_NOTE))
+             if (dump_enabled_p ())
                {
                  dump_printf_loc (MSG_NOTE, vect_location, "==> examining pattern def stmt: ");
@@ -353,7 +353,7 @@ vect_determine_vectorization_factor (loop_vec_info
      if (gimple_get_lhs (stmt) == NULL_TREE)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: irregular stmt.");
@@ -365,7 +365,7 @@ vect_determine_vectorization_factor (loop_vec_info
      if (VECTOR_MODE_P (TYPE_MODE (gimple_expr_type (stmt))))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: vector stmt in loop:");
@@ -389,7 +389,7 @@ vect_determine_vectorization_factor (loop_vec_info
        {
          gcc_assert (!STMT_VINFO_DATA_REF (stmt_info));
          scalar_type = TREE_TYPE (gimple_get_lhs (stmt));
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "get vectype for scalar type: ");
@@ -398,7 +398,7 @@ vect_determine_vectorization_factor (loop_vec_info
          vectype = get_vectype_for_scalar_type (scalar_type);
          if (!vectype)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                {
                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported "
@@ -417,7 +417,7 @@ vect_determine_vectorization_factor (loop_vec_info
         support one vector size per loop).  */
      scalar_type = vect_get_smallest_scalar_type (stmt, &dummy, &dummy);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "get vectype for scalar type: ");
@@ -426,7 +426,7 @@ vect_determine_vectorization_factor (loop_vec_info
      vf_vectype = get_vectype_for_scalar_type (scalar_type);
      if (!vf_vectype)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported data-type ");
@@ -439,7 +439,7 @@ vect_determine_vectorization_factor (loop_vec_info
      if ((GET_MODE_SIZE (TYPE_MODE (vectype)) != GET_MODE_SIZE (TYPE_MODE (vf_vectype))))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: different sized vector "
@@ -453,14 +453,14 @@ vect_determine_vectorization_factor (loop_vec_info
          return false;
        }
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "vectype: ");
          dump_generic_expr (MSG_NOTE, TDF_SLIM, vf_vectype);
        }
      nunits = TYPE_VECTOR_SUBPARTS (vf_vectype);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "nunits = %d", nunits);
      if (!vectorization_factor || (nunits > vectorization_factor))
@@ -475,12 +475,12 @@ vect_determine_vectorization_factor (loop_vec_info
    }
  /* TODO: Analyze cost. Decide if worth while to vectorize.  */
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "vectorization factor = %d", vectorization_factor);
  if (vectorization_factor <= 1)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported data-type");
      return false;
@@ -517,7 +517,7 @@ vect_is_simple_iv_evolution (unsigned loop_nb, tre
  step_expr = evolution_part;
  init_expr = unshare_expr (initial_condition_in_loop_num (access_fn, loop_nb));
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      dump_printf_loc (MSG_NOTE, vect_location, "step: ");
      dump_generic_expr (MSG_NOTE, TDF_SLIM, step_expr);
@@ -530,7 +530,7 @@ vect_is_simple_iv_evolution (unsigned loop_nb, tre
  if (TREE_CODE (step_expr) != INTEGER_CST)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "step unknown.");
      return false;
@@ -555,7 +555,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
  gimple_stmt_iterator gsi;
  bool double_reduc;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_scalar_cycles ===");
@@ -569,7 +569,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
      tree def = PHI_RESULT (phi);
      stmt_vec_info stmt_vinfo = vinfo_for_stmt (phi);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
@@ -587,7 +587,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
      if (access_fn)
        {
          STRIP_NOPS (access_fn);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "Access function of PHI: ");
@@ -606,7 +606,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
          gcc_assert (STMT_VINFO_LOOP_PHI_EVOLUTION_PART (stmt_vinfo) != NULL_TREE);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_NOTE, vect_location, "Detected induction.");
          STMT_VINFO_DEF_TYPE (stmt_vinfo) = vect_induction_def;
        }
@@ -621,7 +621,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
      gimple reduc_stmt;
      bool nested_cycle;
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "Analyze phi: ");
          dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
@@ -637,7 +637,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
        {
          if (double_reduc)
            {
-             if (dump_kind_p (MSG_NOTE))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_NOTE, vect_location, "Detected double reduction.");
@@ -649,7 +649,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
            {
              if (nested_cycle)
                {
-                 if (dump_kind_p (MSG_NOTE))
+                 if (dump_enabled_p ())
                    dump_printf_loc (MSG_NOTE, vect_location, "Detected vectorizable nested cycle.");
@@ -659,7 +659,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
                }
              else
                {
-                 if (dump_kind_p (MSG_NOTE))
+                 if (dump_enabled_p ())
                    dump_printf_loc (MSG_NOTE, vect_location, "Detected reduction.");
@@ -675,7 +675,7 @@ vect_analyze_scalar_cycles_1 (loop_vec_info loop_v
            }
        }
      else
-       if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+       if (dump_enabled_p ())
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unknown def-use cycle pattern.");
    }
@@ -737,7 +737,7 @@ vect_get_loop_niters (struct loop *loop, tree *num
{
  tree niters;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== get_loop_niters ===");
  niters = number_of_exit_cond_executions (loop);
@@ -747,7 +747,7 @@ vect_get_loop_niters (struct loop *loop, tree *num
    {
      *number_of_iterations = niters;
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "==> get_loop_niters:");
          dump_generic_expr (MSG_NOTE, TDF_SLIM, *number_of_iterations);
@@ -995,7 +995,7 @@ vect_analyze_loop_1 (struct loop *loop)
{
  loop_vec_info loop_vinfo;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "===== analyze_loop_nest_1 =====");
@@ -1004,7 +1004,7 @@ vect_analyze_loop_1 (struct loop *loop)
  loop_vinfo = vect_analyze_loop_form (loop);
  if (!loop_vinfo)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad inner-loop form.");
      return NULL;
@@ -1030,7 +1030,7 @@ vect_analyze_loop_form (struct loop *loop)
  tree number_of_iterations = NULL;
  loop_vec_info inner_loop_vinfo = NULL;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_loop_form ===");
@@ -1054,7 +1054,7 @@ vect_analyze_loop_form (struct loop *loop)
      if (loop->num_nodes != 2)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: control flow in loop.");
          return NULL;
@@ -1062,7 +1062,7 @@ vect_analyze_loop_form (struct loop *loop)
      if (empty_block_p (loop->header))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: empty loop.");
          return NULL;
@@ -1092,7 +1092,7 @@ vect_analyze_loop_form (struct loop *loop)
      if ((loop->inner)->inner || (loop->inner)->next)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: multiple nested loops.");
          return NULL;
@@ -1102,7 +1102,7 @@ vect_analyze_loop_form (struct loop *loop)
      inner_loop_vinfo = vect_analyze_loop_1 (loop->inner);
      if (!inner_loop_vinfo)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: Bad inner loop.");
          return NULL;
@@ -1111,7 +1111,7 @@ vect_analyze_loop_form (struct loop *loop)
      if (!expr_invariant_in_loop_p (loop, LOOP_VINFO_NITERS (inner_loop_vinfo)))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: inner-loop count not invariant.");
          destroy_loop_vec_info (inner_loop_vinfo, true);
@@ -1120,7 +1120,7 @@ vect_analyze_loop_form (struct loop *loop)
      if (loop->num_nodes != 5)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: control flow in loop.");
          destroy_loop_vec_info (inner_loop_vinfo, true);
@@ -1136,14 +1136,14 @@ vect_analyze_loop_form (struct loop *loop)
          || !single_exit (innerloop)
          || single_exit (innerloop)->dest != EDGE_PRED (loop->latch, 0)->src)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported outerloop form.");
          destroy_loop_vec_info (inner_loop_vinfo, true);
          return NULL;
        }
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "Considering outer-loop vectorization.");
    }
@@ -1151,7 +1151,7 @@ vect_analyze_loop_form (struct loop *loop)
  if (!single_exit (loop) || EDGE_COUNT (loop->header->preds) != 2)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          if (!single_exit (loop))
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1172,7 +1172,7 @@ vect_analyze_loop_form (struct loop *loop)
  if (!empty_block_p (loop->latch) || !gimple_seq_empty_p (phi_nodes (loop->latch)))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unexpected loop form.");
      if (inner_loop_vinfo)
@@ -1187,12 +1187,12 @@ vect_analyze_loop_form (struct loop *loop)
      if (!(e->flags & EDGE_ABNORMAL))
        {
          split_loop_exit_edge (e);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            dump_printf (MSG_NOTE, "split exit edge.");
        }
      else
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: abnormal loop exit edge.");
          if (inner_loop_vinfo)
@@ -1204,7 +1204,7 @@ vect_analyze_loop_form (struct loop *loop)
  loop_cond = vect_get_loop_niters (loop, &number_of_iterations);
  if (!loop_cond)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: complicated exit condition.");
      if (inner_loop_vinfo)
@@ -1214,7 +1214,7 @@ vect_analyze_loop_form (struct loop *loop)
  if (!number_of_iterations)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: number of iterations cannot be " "computed.");
@@ -1225,7 +1225,7 @@ vect_analyze_loop_form (struct loop *loop)
  if (chrec_contains_undetermined (number_of_iterations))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Infinite number of iterations.");
      if (inner_loop_vinfo)
@@ -1235,7 +1235,7 @@ vect_analyze_loop_form (struct loop *loop)
  if (!NITERS_KNOWN_P (number_of_iterations))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "Symbolic number of iterations is ");
@@ -1244,7 +1244,7 @@ vect_analyze_loop_form (struct loop *loop)
    }
  else if (TREE_INT_CST_LOW (number_of_iterations) == 0)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: number of iterations = 0.");
      if (inner_loop_vinfo)
@@ -1292,7 +1292,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
  HOST_WIDE_INT estimated_niter;
  int min_profitable_estimate;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_loop_operations ===");
@@ -1328,7 +1328,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
                  LOOP_VINFO_SLP_UNROLLING_FACTOR (loop_vinfo));
      LOOP_VINFO_VECT_FACTOR (loop_vinfo) = vectorization_factor;
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "Updating vectorization factor to %d ", vectorization_factor);
@@ -1344,7 +1344,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
          ok = true;
          stmt_info = vinfo_for_stmt (phi);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "examining phi: ");
              dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0);
@@ -1363,7 +1363,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
              && STMT_VINFO_DEF_TYPE (stmt_info) != vect_double_reduction_def)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unsupported loop-closed phi in " "outer-loop.");
@@ -1405,7 +1405,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
          if (STMT_VINFO_LIVE_P (stmt_info))
            {
              /* FORNOW: not yet supported.  */
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: value used after loop.");
              return false;
@@ -1415,7 +1415,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
              && STMT_VINFO_DEF_TYPE (stmt_info) != vect_induction_def)
            {
              /* A scalar-dependence cycle that we don't support.  */
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: scalar dependence cycle.");
              return false;
@@ -1430,7 +1430,7 @@ vect_analyze_loop_operations (loop_vec_info loop_v
          if (!ok)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                {
                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: relevant phi not "
@@ -1456,18 +1456,17 @@ vect_analyze_loop_operations (loop_vec_info loop_v
     touching this loop.  */
  if (!need_to_vectorize)
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "All the computation can be taken out of the loop.");
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: redundant loop. no profit to " "vectorize.");
      return false;
    }
-  if (LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo)
-      && dump_kind_p (MSG_NOTE))
+  if (LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo) && dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "vectorization_factor = %d, niters = " HOST_WIDE_INT_PRINT_DEC, vectorization_factor,
@@ -1478,10 +1477,10 @@ vect_analyze_loop_operations (loop_vec_info loop_v
      || ((max_niter = max_stmt_executions_int (loop)) != -1
          && (unsigned HOST_WIDE_INT) max_niter < vectorization_factor))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: iteration count too small.");
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: iteration count smaller than " "vectorization factor.");
@@ -1500,10 +1499,10 @@ vect_analyze_loop_operations (loop_vec_info loop_v
  if (min_profitable_iters < 0)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: vectorization not profitable.");
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: vector version will never be " "profitable.");
@@ -1526,10 +1525,10 @@ vect_analyze_loop_operations (loop_vec_info loop_v
  if (LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo)
      && LOOP_VINFO_INT_NITERS (loop_vinfo) <= th)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: vectorization not profitable.");
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "not vectorized: iteration count smaller than user " "specified loop bound parameter or minimum profitable "
@@ -1541,11 +1540,11 @@ vect_analyze_loop_operations (loop_vec_info loop_v
      && ((unsigned HOST_WIDE_INT) estimated_niter <= MAX (th, (unsigned)min_profitable_estimate)))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: estimated iteration count too " "small.");
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "not vectorized: estimated iteration count smaller " "than specified loop bound parameter or minimum "
@@ -1558,18 +1557,18 @@ vect_analyze_loop_operations (loop_vec_info loop_v
      || LOOP_VINFO_INT_NITERS (loop_vinfo) % vectorization_factor != 0
      || LOOP_PEELING_FOR_ALIGNMENT (loop_vinfo))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "epilog loop required.");
      if (!vect_can_advance_ivs_p (loop_vinfo))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: can't create epilog loop 1.");
          return false;
        }
      if (!slpeel_can_duplicate_loop_p (loop, single_exit (loop)))
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: can't create epilog loop 2.");
          return false;
@@ -1602,7 +1601,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_analyze_data_refs (loop_vinfo, NULL, &min_vf);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data references.");
      return false;
@@ -1620,7 +1619,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_mark_stmts_to_be_vectorized (loop_vinfo);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unexpected pattern.");
      return false;
@@ -1635,7 +1634,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  if (!ok || max_vf < min_vf)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data dependence.");
      return false;
@@ -1644,14 +1643,14 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_determine_vectorization_factor (loop_vinfo);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "can't determine vectorization factor.");
      return false;
    }
  if (max_vf < LOOP_VINFO_VECT_FACTOR (loop_vinfo))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data dependence.");
      return false;
@@ -1663,7 +1662,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_analyze_data_refs_alignment (loop_vinfo, NULL);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data alignment.");
      return false;
@@ -1675,7 +1674,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_analyze_data_ref_accesses (loop_vinfo, NULL);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data access.");
      return false;
@@ -1687,7 +1686,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_prune_runtime_alias_test_list (loop_vinfo);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "too long list of versioning for alias " "run-time tests.");
@@ -1700,7 +1699,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_enhance_data_refs_alignment (loop_vinfo);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data alignment.");
      return false;
@@ -1725,7 +1724,7 @@ vect_analyze_loop_2 (loop_vec_info loop_vinfo)
  ok = vect_analyze_loop_operations (loop_vinfo, slp);
  if (!ok)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad operation or unsupported loop bound.");
      return false;
@@ -1749,7 +1748,7 @@ vect_analyze_loop (struct loop *loop)
  current_vector_size = 0;
  vector_sizes = targetm.vectorize.autovectorize_vector_sizes ();
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "===== analyze_loop_nest =====");
@@ -1757,7 +1756,7 @@ vect_analyze_loop (struct loop *loop)
      && loop_vec_info_for_loop (loop_outer (loop))
      && LOOP_VINFO_VECTORIZABLE_P (loop_vec_info_for_loop (loop_outer (loop))))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "outer-loop already vectorized.");
      return NULL;
@@ -1769,7 +1768,7 @@ vect_analyze_loop (struct loop *loop)
      loop_vinfo = vect_analyze_loop_form (loop);
      if (!loop_vinfo)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad loop form.");
          return NULL;
@@ -1791,7 +1790,7 @@ vect_analyze_loop (struct loop *loop)
      /* Try the next biggest vector size.  */
      current_vector_size = 1 << floor_log2 (vector_sizes);
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "***** Re-trying analysis with " "vector size %d\n", current_vector_size);
@@ -2023,7 +2022,7 @@ vect_is_slp_reduction (loop_vec_info loop_info, gi
                  == vect_internal_def
              && !is_loop_header_bb_p (gimple_bb (def_stmt)))))
        {
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "swapping oprnds: ");
              dump_gimple_stmt (MSG_NOTE, TDF_SLIM, next_stmt, 0);
@@ -2125,7 +2124,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
          if (!flow_bb_inside_loop_p (loop, gimple_bb (use_stmt)))
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "intermediate value used outside loop.");
@@ -2137,7 +2136,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
          nloop_uses++;
          if (nloop_uses > 1)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduction used in loop.");
              return NULL;
@@ -2146,7 +2145,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  if (TREE_CODE (loop_arg) != SSA_NAME)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduction: not ssa_name: ");
@@ -2158,7 +2157,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  def_stmt = SSA_NAME_DEF_STMT (loop_arg);
  if (!def_stmt)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduction: no def_stmt.");
      return NULL;
@@ -2166,7 +2165,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  if (!is_gimple_assign (def_stmt) && gimple_code (def_stmt) != GIMPLE_PHI)
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, def_stmt, 0);
      return NULL;
    }
@@ -2194,7 +2193,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
          nloop_uses++;
          if (nloop_uses > 1)
            {
-             if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+             if (dump_enabled_p ())
                dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduction used in loop.");
              return NULL;
@@ -2210,7 +2209,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      if (gimple_phi_num_args (def_stmt) != 1 || TREE_CODE (op1) != SSA_NAME)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported phi node definition.");
@@ -2223,7 +2222,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
          && flow_bb_inside_loop_p (loop->inner, gimple_bb (def1))
          && is_gimple_assign (def1))
        {
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            report_vect_op (MSG_NOTE, def_stmt, "detected double reduction: ");
@@ -2250,7 +2249,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  if (check_reduction
      && (!commutative_tree_code (code) || !associative_tree_code (code)))
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: not commutative/associative: ");
      return NULL;
@@ -2260,7 +2259,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
    {
      if (code != COND_EXPR)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: not binary operation: ");
@@ -2279,7 +2278,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      if (TREE_CODE (op1) != SSA_NAME && TREE_CODE (op2) != SSA_NAME)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: uses not ssa_names: ");
@@ -2293,7 +2292,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      if (TREE_CODE (op1) != SSA_NAME && TREE_CODE (op2) != SSA_NAME)
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: uses not ssa_names: ");
@@ -2311,7 +2310,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      || (op4 && TREE_CODE (op4) == SSA_NAME && !types_compatible_p (type, TREE_TYPE (op4))))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_NOTE, vect_location, "reduction: multiple types: operation type: ");
@@ -2353,7 +2352,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      && check_reduction)
    {
      /* Changing the order of operations changes the semantics.  */
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: unsafe fp math optimization: ");
      return NULL;
@@ -2362,7 +2361,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      && check_reduction)
    {
      /* Changing the order of operations changes the semantics.  */
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: unsafe int math optimization: ");
      return NULL;
@@ -2370,7 +2369,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  else if (SAT_FIXED_POINT_TYPE_P (type) && check_reduction)
    {
      /* Changing the order of operations changes the semantics.  */
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: unsafe fixed-point math optimization: ");
      return NULL;
@@ -2407,7 +2406,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  if (code != COND_EXPR && ((!def1 || gimple_nop_p (def1)) && (!def2 || gimple_nop_p (def2))))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        report_vect_op (MSG_NOTE, def_stmt, "reduction: no defs for operands: ");
      return NULL;
    }
@@ -2429,7 +2428,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
                  == vect_internal_def
              && !is_loop_header_bb_p (gimple_bb (def1)))))))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
      return def_stmt;
    }
@@ -2452,7 +2451,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
      /* Swap operands (just for simplicity - so that the rest of the code
         can assume that the reduction variable is always the last (second)
         argument).  */
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        report_vect_op (MSG_NOTE, def_stmt, "detected reduction: need to swap operands: ");
@@ -2464,7 +2463,7 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
    }
  else
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        report_vect_op (MSG_NOTE, def_stmt, "detected reduction: ");
    }
@@ -2474,14 +2473,14 @@ vect_is_simple_reduction_1 (loop_vec_info loop_inf
  /* Try to find SLP reduction chain.  */
  if (check_reduction && vect_is_slp_reduction (loop_info, phi, def_stmt))
    {
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        report_vect_op (MSG_NOTE, def_stmt, "reduction: detected reduction chain: ");
      return def_stmt;
    }
-  if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+  if (dump_enabled_p ())
    report_vect_op (MSG_MISSED_OPTIMIZATION, def_stmt, "reduction: unknown pattern: ");
@@ -2589,7 +2588,7 @@ vect_get_known_peeling_cost (loop_vec_info loop_vi
  if (!LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo))
    {
      *peel_iters_epilogue = vf/2;
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "cost model: epilogue peel iters set to vf/2 " "because loop iterations are unknown .");
@@ -2882,7 +2881,7 @@ vect_estimate_min_profitable_iters (loop_vec_info
      /* vector version will never be profitable.  */
      else
        {
-         if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "cost model: the vector iteration cost = %d " "divided by the scalar iteration cost = %d "
@@ -2893,7 +2892,7 @@ vect_estimate_min_profitable_iters (loop_vec_info
          return;
        }
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      dump_printf_loc (MSG_NOTE, vect_location, "Cost model analysis: \n");
      dump_printf (MSG_NOTE, "  Vector inside of loop cost: %d\n",
@@ -2925,7 +2924,7 @@ vect_estimate_min_profitable_iters (loop_vec_info
     then skip the vectorized loop.  */
  min_profitable_iters--;
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "  Runtime profitability threshold = %d\n", min_profitable_iters);
@@ -2950,7 +2949,7 @@ vect_estimate_min_profitable_iters (loop_vec_info
    }
  min_profitable_estimate --;
  min_profitable_estimate = MAX (min_profitable_estimate, min_profitable_iters);
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "  Static estimate profitability threshold = %d\n", min_profitable_iters);
@@ -3010,7 +3009,7 @@ vect_model_reduction_cost (stmt_vec_info stmt_info
  vectype = get_vectype_for_scalar_type (TREE_TYPE (reduction_op));
  if (!vectype)
    {
-      if (dump_kind_p (MSG_MISSED_OPTIMIZATION))
+      if (dump_enabled_p ())
        {
          dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported data-type ");
@@ -3081,7 +3080,7 @@ vect_model_reduction_cost (stmt_vec_info stmt_info
        }
    }
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf (MSG_NOTE, "vect_model_reduction_cost: inside_cost = %d, " "prologue_cost = %d, epilogue_cost = %d .", inside_cost,
@@ -3110,7 +3109,7 @@ vect_model_induction_cost (stmt_vec_info stmt_info
  prologue_cost = add_stmt_cost (target_cost_data, 2, scalar_to_vec, stmt_info, 0, vect_prologue);
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    dump_printf_loc (MSG_NOTE, vect_location, "vect_model_induction_cost: inside_cost = %d, " "prologue_cost = %d .", inside_cost, prologue_cost);
@@ -3239,7 +3238,7 @@ get_initial_def_for_induction (gimple iv_phi)
          new_bb = gsi_insert_on_edge_immediate (pe, init_stmt);
          gcc_assert (!new_bb);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "created new init_stmt: ");
@@ -3382,7 +3381,7 @@ get_initial_def_for_induction (gimple iv_phi)
                      && !STMT_VINFO_LIVE_P (stmt_vinfo));
          STMT_VINFO_VEC_STMT (stmt_vinfo) = new_stmt;
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "vector of inductions after inner-loop:");
@@ -3392,7 +3391,7 @@ get_initial_def_for_induction (gimple iv_phi)
    }
-  if (dump_kind_p (MSG_NOTE))
+  if (dump_enabled_p ())
    {
      dump_printf_loc (MSG_NOTE, vect_location, "transform induction: created def-use cycle: ");
@@ -3800,7 +3799,7 @@ vect_create_epilog_for_reduction (VEC (tree, heap)
          add_phi_arg (phi, def, loop_latch_edge (loop), UNKNOWN_LOCATION);
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location, "transform reduction: created def-use cycle: ");
@@ -4001,7 +4000,7 @@ vect_create_epilog_for_reduction (VEC (tree, heap)
      /*** Case 1:  Create:
           v_out2 = reduc_expr  */
-      if (dump_kind_p (MSG_NOTE))
+      if (dump_enabled_p ())
        dump_printf_loc (MSG_NOTE, vect_location, "Reduce using direct vector reduction.");
@@ -4052,7 +4051,7 @@ vect_create_epilog_for_reduction (VEC (tree, heap)
             Create:  va = vop   }  */
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_NOTE, vect_location, "Reduce using vector shifts");
@@ -4093,7 +4092,7 @@ vect_create_epilog_for_reduction (VEC (tree, heap)
             Create:  s = op   // For non SLP cases  }  */
-         if (dump_kind_p (MSG_NOTE))
+         if (dump_enabled_p ())
            dump_printf_loc (MSG_NOTE, vect_location, "Reduce using scalar code. ");
"); @@ -4184,7 +4183,7 @@ vect_create_epilog_for_reduction (VEC (tree, heap) { tree rhs; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "extract scalar result"); @@ -4423,7 +4422,7 @@ vect_finalize_reduction: UNKNOWN_LOCATION); add_phi_arg (vect_phi, PHI_RESULT (inner_phi), loop_latch_edge (outer_loop), UNKNOWN_LOCATION); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "created double reduction phi node: "); @@ -4773,7 +4772,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i { if (!vectorizable_condition (stmt, gsi, NULL, ops[reduc_index], 0, NULL)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported condition in reduction"); @@ -4788,7 +4787,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i optab = optab_for_tree_code (code, vectype_in, optab_default); if (!optab) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no optab."); @@ -4797,7 +4796,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i if (optab_handler (optab, vec_mode) == CODE_FOR_nothing) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf (MSG_NOTE, "op not supported by target."); if (GET_MODE_SIZE (vec_mode) != UNITS_PER_WORD @@ -4805,7 +4804,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i < vect_min_worthwhile_factor (code)) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf (MSG_NOTE, "proceeding using word mode."); } @@ -4814,7 +4813,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i && LOOP_VINFO_VECT_FACTOR (loop_vinfo) < vect_min_worthwhile_factor (code)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not worthwhile without SIMD support."); @@ -4895,7 +4894,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i optab_default); if (!reduc_optab) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no optab for reduction."); @@ -4905,7 +4904,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i if (reduc_optab && optab_handler (reduc_optab, vec_mode) == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "reduc op not supported by target."); @@ -4916,7 +4915,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i { if (!nested_cycle || double_reduc) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no reduc code for scalar code."); @@ -4926,7 +4925,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i if (double_reduc && ncopies > 1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "multiple types in double reduction"); @@ -4945,7 +4944,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i ops[1] = fold_convert (TREE_TYPE (ops[0]), ops[1]); else { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "invalid types in dot-prod"); @@ -4963,7 +4962,7 @@ vectorizable_reduction (gimple stmt, gimple_stmt_i /** Transform. 
**/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform reduction."); /* FORNOW: Multiple types are not supported for condition. */ @@ -5249,7 +5248,7 @@ vectorizable_induction (gimple phi, gimple_stmt_it if (ncopies > 1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "multiple types in nested loop."); return false; @@ -5273,7 +5272,7 @@ vectorizable_induction (gimple phi, gimple_stmt_it if (!(STMT_VINFO_RELEVANT_P (exit_phi_vinfo) && !STMT_VINFO_LIVE_P (exit_phi_vinfo))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "inner-loop induction only used outside " "of the outer vectorized loop."); @@ -5297,7 +5296,7 @@ vectorizable_induction (gimple phi, gimple_stmt_it if (!vec_stmt) /* transformation not required. */ { STMT_VINFO_TYPE (stmt_info) = induc_vec_info_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_induction ==="); vect_model_induction_cost (stmt_info, ncopies); @@ -5306,7 +5305,7 @@ vectorizable_induction (gimple phi, gimple_stmt_it /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform induction phi."); vec_def = get_initial_def_for_induction (phi); @@ -5371,7 +5370,7 @@ vectorizable_live_operation (gimple stmt, && !vect_is_simple_use (op, stmt, loop_vinfo, NULL, &def_stmt, &def, &dt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -5410,7 +5409,7 @@ vect_loop_kill_debug_uses (struct loop *loop, gimp { if (gimple_debug_bind_p (ustmt)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "killing debug use"); @@ -5450,7 +5449,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) bool check_profitability = false; int th; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vec_transform_loop ==="); /* Use the more conservative vectorization threshold. 
If the number @@ -5464,7 +5463,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) if (th >= LOOP_VINFO_VECT_FACTOR (loop_vinfo) - 1 && !LOOP_VINFO_NITERS_KNOWN_P (loop_vinfo)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Profitability threshold is %d loop iterations.", th); check_profitability = true; @@ -5525,7 +5524,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si)) { phi = gsi_stmt (si); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "------>vectorizing phi: "); @@ -5544,12 +5543,12 @@ vect_transform_loop (loop_vec_info loop_vinfo) if ((TYPE_VECTOR_SUBPARTS (STMT_VINFO_VECTYPE (stmt_info)) != (unsigned HOST_WIDE_INT) vectorization_factor) - && dump_kind_p (MSG_NOTE)) + && dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "multiple-types."); if (STMT_VINFO_DEF_TYPE (stmt_info) == vect_induction_def) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform phi."); vect_transform_stmt (phi, NULL, NULL, NULL, NULL); } @@ -5565,7 +5564,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) else stmt = gsi_stmt (si); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "------>vectorizing statement: "); @@ -5637,7 +5636,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) if (!gsi_end_p (pattern_def_si)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "==> vectorizing pattern def " @@ -5664,7 +5663,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) STMT_VINFO_VECTYPE (stmt_info)); if (!STMT_SLP_TYPE (stmt_info) && nunits != (unsigned int) vectorization_factor - && dump_kind_p (MSG_NOTE)) + && dump_enabled_p ()) /* For SLP VF is set according to unrolling factor, and not to vector size, hence for SLP this print is not valid. */ dump_printf_loc (MSG_NOTE, vect_location, @@ -5678,7 +5677,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) { slp_scheduled = true; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== scheduling SLP instances ==="); @@ -5698,7 +5697,7 @@ vect_transform_loop (loop_vec_info loop_vinfo) } /* -------- vectorize statement ------------ */ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform statement."); grouped_store = false; @@ -5741,9 +5740,9 @@ vect_transform_loop (loop_vec_info loop_vinfo) until all the loops have been transformed? 
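
A note on cost, since vect_transform_loop above tests the guard on nearly every statement it walks: dump_enabled_p () reduces to a single pointer test of whether any dump stream is open. The following is only a sketch of the idea -- the real, now non-inline definition lives in dumpfile.c -- and the two stream globals are assumptions based on the current dump infrastructure, not a quote of it:

    #include <stdio.h>
    #include <stdbool.h>

    extern FILE *dump_file;      /* -fdump-* stream; NULL when off.     */
    extern FILE *alt_dump_file;  /* alternate diagnostics stream.       */

    bool
    dump_enabled_p (void)
    {
      /* One cheap test, independent of which MSG_* kind the caller
         will later pass to dump_printf.  */
      return dump_file != NULL || alt_dump_file != NULL;
    }
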
*/ update_ssa (TODO_update_ssa); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "LOOP VECTORIZED."); - if (loop->inner && dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (loop->inner && dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "OUTER LOOP VECTORIZED."); } Index: tree-vect-data-refs.c =================================================================== --- tree-vect-data-refs.c (revision 192695) +++ tree-vect-data-refs.c (working copy) @@ -60,7 +60,7 @@ vect_lanes_optab_supported_p (const char *name, co if (array_mode == BLKmode) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no array mode for %s[" HOST_WIDE_INT_PRINT_DEC "]", GET_MODE_NAME (mode), count); @@ -69,14 +69,14 @@ vect_lanes_optab_supported_p (const char *name, co if (convert_optab_handler (optab, array_mode, mode) == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "cannot use %s<%s><%s>", name, GET_MODE_NAME (array_mode), GET_MODE_NAME (mode)); return false; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "can use %s<%s><%s>", name, GET_MODE_NAME (array_mode), GET_MODE_NAME (mode)); @@ -439,7 +439,7 @@ vect_check_interleaving (struct data_reference *dr if (diff_mod_size == 0) { vect_update_interleaving_chain (drb, dra); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Detected interleaving "); @@ -462,7 +462,7 @@ vect_check_interleaving (struct data_reference *dr if (diff_mod_size == 0) { vect_update_interleaving_chain (dra, drb); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Detected interleaving "); @@ -524,7 +524,7 @@ vect_mark_for_runtime_alias_test (ddr_p ddr, loop_ if ((unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIAS_CHECKS) == 0) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "mark for run-time aliasing test between "); @@ -535,7 +535,7 @@ vect_mark_for_runtime_alias_test (ddr_p ddr, loop_ if (optimize_loop_nest_for_size_p (loop)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "versioning not supported when optimizing for size."); return false; @@ -544,7 +544,7 @@ vect_mark_for_runtime_alias_test (ddr_p ddr, loop_ /* FORNOW: We don't support versioning with outer-loop vectorization. 
*/ if (loop->inner) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "versioning not yet supported for outer-loops."); return false; @@ -555,7 +555,7 @@ vect_mark_for_runtime_alias_test (ddr_p ddr, loop_ if (TREE_CODE (DR_STEP (DDR_A (ddr))) != INTEGER_CST || TREE_CODE (DR_STEP (DDR_B (ddr))) != INTEGER_CST) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "versioning not yet supported for non-constant " "step"); @@ -611,7 +611,7 @@ vect_analyze_data_ref_dependence (struct data_depe if (loop_vinfo) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "versioning for alias required: " @@ -637,7 +637,7 @@ vect_analyze_data_ref_dependence (struct data_depe if (DR_IS_READ (dra) && DR_IS_READ (drb)) return false; - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "can't determine dependence between "); @@ -666,7 +666,7 @@ vect_analyze_data_ref_dependence (struct data_depe if (dra != drb && vect_check_interleaving (dra, drb)) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "determined dependence between "); @@ -686,7 +686,7 @@ vect_analyze_data_ref_dependence (struct data_depe /* Loop-based vectorization and known data dependence. */ if (DDR_NUM_DIST_VECTS (ddr) == 0) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "versioning for alias required: " @@ -704,13 +704,13 @@ vect_analyze_data_ref_dependence (struct data_depe { int dist = dist_v[loop_depth]; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "dependence distance = %d.", dist); if (dist == 0) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "dependence distance == 0 between "); @@ -737,7 +737,7 @@ vect_analyze_data_ref_dependence (struct data_depe /* If DDR_REVERSED_P the order of the data-refs in DDR was reversed (to make distance vector positive), and the actual distance is negative. */ - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "dependence distance negative."); continue; @@ -749,7 +749,7 @@ vect_analyze_data_ref_dependence (struct data_depe /* The dependence distance requires reduction of the maximal vectorization factor. */ *max_vf = abs (dist); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "adjusting maximal vectorization factor to %i", *max_vf); @@ -759,13 +759,13 @@ vect_analyze_data_ref_dependence (struct data_depe { /* Dependence distance does not create dependence, as far as vectorization is concerned, in this case. 
*/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "dependence distance >= VF."); continue; } - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized, possible dependence " @@ -795,7 +795,7 @@ vect_analyze_data_ref_dependences (loop_vec_info l VEC (ddr_p, heap) *ddrs = NULL; struct data_dependence_relation *ddr; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_dependences ==="); if (loop_vinfo) @@ -837,7 +837,7 @@ vect_compute_data_ref_alignment (struct data_refer tree misalign; tree aligned_to, alignment; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_compute_data_ref_alignment:"); @@ -870,7 +870,7 @@ vect_compute_data_ref_alignment (struct data_refer if (dr_step % GET_MODE_SIZE (TYPE_MODE (vectype)) == 0) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "inner step divides the vector-size."); misalign = STMT_VINFO_DR_INIT (stmt_info); @@ -879,7 +879,7 @@ vect_compute_data_ref_alignment (struct data_refer } else { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "inner step doesn't divide the vector-size."); misalign = NULL_TREE; @@ -898,7 +898,7 @@ vect_compute_data_ref_alignment (struct data_refer if (dr_step % GET_MODE_SIZE (TYPE_MODE (vectype)) != 0) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "SLP: step doesn't divide the vector-size."); misalign = NULL_TREE; @@ -911,7 +911,7 @@ vect_compute_data_ref_alignment (struct data_refer if ((aligned_to && tree_int_cst_compare (aligned_to, alignment) < 0) || !misalign) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unknown alignment for access: "); @@ -941,7 +941,7 @@ vect_compute_data_ref_alignment (struct data_refer if (!vect_can_force_dr_alignment_p (base, TYPE_ALIGN (vectype)) || (TREE_STATIC (base) && flag_section_anchors)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "can't force alignment of ref: "); @@ -953,7 +953,7 @@ vect_compute_data_ref_alignment (struct data_refer /* Force the alignment of the decl. NOTE: This is the only change to the code we make during the analysis phase, before deciding to vectorize the loop. */ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "force alignment of "); dump_generic_expr (MSG_NOTE, TDF_SLIM, ref); @@ -987,7 +987,7 @@ vect_compute_data_ref_alignment (struct data_refer if (!host_integerp (misalign, 1)) { /* Negative or overflowed misalignment value. 
*/ - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unexpected misalign value"); return false; @@ -995,7 +995,7 @@ vect_compute_data_ref_alignment (struct data_refer SET_DR_MISALIGNMENT (dr, TREE_INT_CST_LOW (misalign)); - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr)); @@ -1095,7 +1095,7 @@ vect_update_misalignment_for_peel (struct data_ref return; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment to -1."); SET_DR_MISALIGNMENT (dr, -1); } @@ -1142,7 +1142,7 @@ vect_verify_datarefs_alignment (loop_vec_info loop supportable_dr_alignment = vect_supportable_dr_alignment (dr, false); if (!supportable_dr_alignment) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { if (DR_IS_READ (dr)) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -1157,8 +1157,7 @@ vect_verify_datarefs_alignment (loop_vec_info loop } return false; } - if (supportable_dr_alignment != dr_aligned - && dump_kind_p (MSG_NOTE)) + if (supportable_dr_alignment != dr_aligned && dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Vectorizing an unaligned access."); } @@ -1215,7 +1214,7 @@ vector_alignment_reachable_p (struct data_referenc { HOST_WIDE_INT elmsize = int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype))); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize); @@ -1224,7 +1223,7 @@ vector_alignment_reachable_p (struct data_referenc } if (DR_MISALIGNMENT (dr) % elmsize) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "data size does not divide the misalignment.\n"); return false; @@ -1235,7 +1234,7 @@ vector_alignment_reachable_p (struct data_referenc { tree type = TREE_TYPE (DR_REF (dr)); bool is_packed = not_size_aligned (DR_REF (dr)); - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unknown misalignment, is_packed = %d",is_packed); if (targetm.vectorize.vector_alignment_reachable (type, is_packed)) @@ -1269,7 +1268,7 @@ vect_get_data_access_cost (struct data_reference * else vect_get_store_cost (dr, ncopies, inside_cost, body_cost_vec); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_get_data_access_cost: inside_cost = %d, " "outside_cost = %d.", *inside_cost, *outside_cost); @@ -1567,7 +1566,7 @@ vect_enhance_data_refs_alignment (loop_vec_info lo unsigned int nelements, mis, same_align_drs_max = 0; stmt_vector_for_cost body_cost_vec = NULL; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_enhance_data_refs_alignment ==="); @@ -1622,7 +1621,7 @@ vect_enhance_data_refs_alignment (loop_vec_info lo and so we can't generate the new base for the pointer. 
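
One wrinkle worth spelling out from the vect_verify_datarefs_alignment hunk above: where the old kind test was combined with a semantic condition, the new predicate simply joins the existing expression. Because && short-circuits, the dump machinery is not queried at all when the semantic test already fails on the common path:

    if (supportable_dr_alignment != dr_aligned && dump_enabled_p ())
      dump_printf_loc (MSG_NOTE, vect_location,
                       "Vectorizing an unaligned access.");
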
*/ if (STMT_VINFO_STRIDE_LOAD_P (stmt_info)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "strided load prevents peeling"); do_peeling = false; @@ -1738,7 +1737,7 @@ vect_enhance_data_refs_alignment (loop_vec_info lo { if (!aligned_access_p (dr)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "vector alignment may not be reachable"); break; @@ -1879,7 +1878,7 @@ vect_enhance_data_refs_alignment (loop_vec_info lo if (STMT_VINFO_GROUPED_ACCESS (stmt_info)) npeel /= GROUP_SIZE (stmt_info); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Try peeling by %d", npeel); } @@ -1951,7 +1950,7 @@ vect_enhance_data_refs_alignment (loop_vec_info lo else LOOP_PEELING_FOR_ALIGNMENT (loop_vinfo) = DR_MISALIGNMENT (dr0); SET_DR_MISALIGNMENT (dr0, 0); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Alignment of access forced using peeling."); @@ -2077,12 +2076,12 @@ vect_enhance_data_refs_alignment (loop_vec_info lo stmt_vec_info stmt_info = vinfo_for_stmt (stmt); dr = STMT_VINFO_DATA_REF (stmt_info); SET_DR_MISALIGNMENT (dr, 0); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Alignment of access forced using versioning."); } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Versioning for alignment will be applied."); @@ -2148,7 +2147,7 @@ vect_find_same_alignment_drs (struct data_dependen { int dist = dist_v[loop_depth]; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "dependence distance = %d.", dist); @@ -2159,7 +2158,7 @@ vect_find_same_alignment_drs (struct data_dependen /* Two references with distance zero have the same alignment. 
*/ VEC_safe_push (dr_p, heap, STMT_VINFO_SAME_ALIGN_REFS (stmtinfo_a), drb); VEC_safe_push (dr_p, heap, STMT_VINFO_SAME_ALIGN_REFS (stmtinfo_b), dra); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "accesses have the same alignment."); @@ -2183,7 +2182,7 @@ bool vect_analyze_data_refs_alignment (loop_vec_info loop_vinfo, bb_vec_info bb_vinfo) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_data_refs_alignment ==="); @@ -2201,7 +2200,7 @@ vect_analyze_data_refs_alignment (loop_vec_info lo if (!vect_compute_data_refs_alignment (loop_vinfo, bb_vinfo)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: can't calculate alignment " "for data ref."); @@ -2254,7 +2253,7 @@ vect_analyze_group_access (struct data_reference * { GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) = stmt; GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Detected single element interleaving "); @@ -2265,13 +2264,13 @@ vect_analyze_group_access (struct data_reference * if (loop_vinfo) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Data access with gaps requires scalar " "epilogue loop"); if (loop->inner) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Peeling for outer loop is not" " supported"); @@ -2284,7 +2283,7 @@ vect_analyze_group_access (struct data_reference * return true; } - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not consecutive access "); @@ -2324,7 +2323,7 @@ vect_analyze_group_access (struct data_reference * { if (DR_IS_WRITE (data_ref)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Two store stmts share the same dr."); return false; @@ -2335,7 +2334,7 @@ vect_analyze_group_access (struct data_reference * if (GROUP_READ_WRITE_DEPENDENCE (vinfo_for_stmt (next)) || GROUP_READ_WRITE_DEPENDENCE (vinfo_for_stmt (prev))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "READ_WRITE dependence in interleaving."); return false; @@ -2355,7 +2354,7 @@ vect_analyze_group_access (struct data_reference * next_step = DR_STEP (STMT_VINFO_DATA_REF (vinfo_for_stmt (next))); if (tree_int_cst_compare (step, next_step)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not consecutive access in interleaving"); return false; @@ -2372,7 +2371,7 @@ vect_analyze_group_access (struct data_reference * slp_impossible = true; if (DR_IS_WRITE (data_ref)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "interleaved store with gaps"); return false; @@ -2401,7 +2400,7 @@ vect_analyze_group_access (struct data_reference * greater than STEP. 
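
Multi-part messages -- a printf for the text followed by dump_generic_expr or dump_gimple_stmt for the IL -- keep their single braced guard; only the guard itself becomes cheaper and kind-agnostic. Schematically, using the interleaving report from this file (the dump_generic_expr line is filled in from the analogous blocks elsewhere in the patch):

    if (dump_enabled_p ())
      {
        dump_printf_loc (MSG_NOTE, vect_location,
                         "Detected single element interleaving ");
        /* Print the access itself on the same stream/kind.  */
        dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr));
      }
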
*/ if (dr_step && dr_step < count_in_bytes + gaps * type_size) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "interleaving size is greater than step for "); @@ -2424,7 +2423,7 @@ vect_analyze_group_access (struct data_reference * } else { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "interleaved store with gaps"); return false; @@ -2434,7 +2433,7 @@ vect_analyze_group_access (struct data_reference * /* Check that STEP is a multiple of type size. */ if (dr_step && (dr_step % type_size) != 0) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "step is not a multiple of type size: step "); @@ -2450,7 +2449,7 @@ vect_analyze_group_access (struct data_reference * groupsize = count; GROUP_SIZE (vinfo_for_stmt (stmt)) = groupsize; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Detected interleaving of size %d", (int)groupsize); @@ -2469,13 +2468,13 @@ vect_analyze_group_access (struct data_reference * /* There is a gap in the end of the group. */ if (groupsize - last_accessed_element > 0 && loop_vinfo) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Data access with gaps requires scalar " "epilogue loop"); if (loop->inner) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Peeling for outer loop is not supported"); return false; @@ -2508,7 +2507,7 @@ vect_analyze_data_ref_access (struct data_referenc if (loop_vinfo && !step) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bad data-ref access in loop"); return false; @@ -2531,7 +2530,7 @@ vect_analyze_data_ref_access (struct data_referenc step = STMT_VINFO_DR_STEP (stmt_info); if (integer_zerop (step)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "zero step in outer loop."); if (DR_IS_READ (dr)) @@ -2557,7 +2556,7 @@ vect_analyze_data_ref_access (struct data_referenc if (loop && nested_in_vect_loop_p (loop, stmt)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "grouped access in outer loop."); return false; @@ -2588,7 +2587,7 @@ vect_analyze_data_ref_accesses (loop_vec_info loop VEC (data_reference_p, heap) *datarefs; struct data_reference *dr; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_data_ref_accesses ==="); @@ -2601,7 +2600,7 @@ vect_analyze_data_ref_accesses (loop_vec_info loop if (STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (DR_STMT (dr))) && !vect_analyze_data_ref_access (dr)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: complicated access pattern."); @@ -2631,7 +2630,7 @@ vect_prune_runtime_alias_test_list (loop_vec_info LOOP_VINFO_MAY_ALIAS_DDRS (loop_vinfo); unsigned i, j; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_prune_runtime_alias_test_list ==="); @@ -2649,7 +2648,7 @@ vect_prune_runtime_alias_test_list (loop_vec_info if (vect_vfa_range_equal (ddr_i, ddr_j)) { - if 
(dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "found equal ranges "); @@ -2677,7 +2676,7 @@ vect_prune_runtime_alias_test_list (loop_vec_info if (VEC_length (ddr_p, ddrs) > (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIAS_CHECKS)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "disable versioning for alias - max number of " @@ -2964,7 +2963,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, tree scalar_type; bool res, stop_bb_analysis = false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_data_refs ===\n"); @@ -2979,7 +2978,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (!res) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: loop contains function calls" " or data references that cannot be analyzed"); @@ -3011,7 +3010,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (!compute_all_dependences (BB_VINFO_DATAREFS (bb_vinfo), &BB_VINFO_DDRS (bb_vinfo), NULL, true)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: basic block contains function" " calls or data references that cannot be" @@ -3035,7 +3034,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (!dr || !DR_REF (dr)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unhandled data-ref "); return false; @@ -3081,7 +3080,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (!gather) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: data ref analysis " @@ -3102,7 +3101,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (TREE_CODE (DR_BASE_ADDRESS (dr)) == INTEGER_CST) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: base addr of dr is a " "constant"); @@ -3121,7 +3120,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (TREE_THIS_VOLATILE (DR_REF (dr))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: volatile type "); @@ -3140,7 +3139,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (stmt_can_throw_internal (stmt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: statement can throw an " @@ -3163,7 +3162,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (TREE_CODE (DR_REF (dr)) == COMPONENT_REF && DECL_BIT_FIELD (TREE_OPERAND (DR_REF (dr), 1))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: statement is bitfield " @@ -3189,7 +3188,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (is_gimple_call (stmt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: dr in a call "); @@ -3232,7 +3231,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, tree inner_base = build_fold_indirect_ref 
(fold_build_pointer_plus (base, init)); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "analyze in outer-loop: "); @@ -3245,7 +3244,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (pbitpos % BITS_PER_UNIT != 0) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "failed: bit offset alignment.\n"); return false; @@ -3255,7 +3254,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (!simple_iv (loop, loop_containing_stmt (stmt), outer_base, &base_iv, false)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "failed: evolution of base is not affine.\n"); return false; @@ -3278,7 +3277,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, else if (!simple_iv (loop, loop_containing_stmt (stmt), poffset, &offset_iv, false)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "evolution of offset is not affine.\n"); return false; @@ -3303,7 +3302,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, STMT_VINFO_DR_ALIGNED_TO (stmt_info) = size_int (highest_pow2_factor (offset_iv.base)); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "\touter base_address: "); @@ -3327,7 +3326,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (STMT_VINFO_DATA_REF (stmt_info)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: more than one data ref " @@ -3355,7 +3354,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, get_vectype_for_scalar_type (scalar_type); if (!STMT_VINFO_VECTYPE (stmt_info)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: no vectype for stmt: "); @@ -3406,7 +3405,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, { STMT_VINFO_DATA_REF (stmt_info) = NULL; free_data_ref (dr); - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: not suitable for gather " @@ -3459,7 +3458,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, if (bad) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: data dependence conflict" @@ -3480,7 +3479,7 @@ vect_analyze_data_refs (loop_vec_info loop_vinfo, = vect_check_strided_load (stmt, loop_vinfo, NULL, NULL); if (!strided_load) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: not suitable for strided " @@ -3668,7 +3667,7 @@ vect_create_addr_base_for_vector_ref (gimple stmt, mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (vec_stmt)); } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "created "); dump_generic_expr (MSG_NOTE, TDF_SLIM, vec_stmt); @@ -3790,7 +3789,7 @@ vect_create_data_ref_ptr (gimple stmt, tree aggr_t in LOOP. 
*/ base_name = build_fold_indirect_ref (unshare_expr (DR_BASE_ADDRESS (dr))); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { tree data_ref_base = base_name; dump_printf_loc (MSG_NOTE, vect_location, @@ -4120,7 +4119,7 @@ vect_grouped_store_supported (tree vectype, unsign /* vect_permute_store_chain requires the group size to be a power of two. */ if (exact_log2 (count) == -1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "the size of the group of accesses" " is not a power of 2"); @@ -4146,7 +4145,7 @@ vect_grouped_store_supported (tree vectype, unsign } } - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf (MSG_MISSED_OPTIMIZATION, "interleave op not supported by target."); return false; @@ -4564,7 +4563,7 @@ vect_grouped_load_supported (tree vectype, unsigne /* vect_permute_load_chain requires the group size to be a power of two. */ if (exact_log2 (count) == -1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "the size of the group of accesses" " is not a power of 2"); @@ -4588,7 +4587,7 @@ vect_grouped_load_supported (tree vectype, unsigne } } - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "extract even/odd not supported by target"); return false; Index: tree-vect-patterns.c =================================================================== --- tree-vect-patterns.c (revision 192695) +++ tree-vect-patterns.c (working copy) @@ -416,7 +416,7 @@ vect_recog_dot_prod_pattern (VEC (gimple, heap) ** pattern_stmt = gimple_build_assign_with_ops (DOT_PROD_EXPR, var, oprnd00, oprnd01, oprnd1); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_dot_prod_pattern: detected: "); @@ -676,7 +676,7 @@ vect_recog_widen_mult_pattern (VEC (gimple, heap) return NULL; /* Pattern detected. */ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_widen_mult_pattern: detected: "); @@ -699,7 +699,7 @@ vect_recog_widen_mult_pattern (VEC (gimple, heap) pattern_stmt = gimple_build_assign_with_ops (WIDEN_MULT_EXPR, var, oprnd0, oprnd1); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_gimple_stmt_loc (MSG_NOTE, vect_location, TDF_SLIM, pattern_stmt, 0); VEC_safe_push (gimple, heap, *stmts, last_stmt); @@ -912,7 +912,7 @@ vect_recog_widen_sum_pattern (VEC (gimple, heap) * pattern_stmt = gimple_build_assign_with_ops (WIDEN_SUM_EXPR, var, oprnd0, oprnd1); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_widen_sum_pattern: detected: "); @@ -1217,7 +1217,7 @@ vect_recog_over_widening_pattern (VEC (gimple, hea STMT_VINFO_RELATED_STMT (vinfo_for_stmt (stmt)) = pattern_stmt; new_pattern_def_seq (vinfo_for_stmt (stmt), new_def_stmt); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "created pattern stmt: "); @@ -1285,7 +1285,7 @@ vect_recog_over_widening_pattern (VEC (gimple, hea return NULL; /* Pattern detected. 
*/ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_over_widening_pattern: detected: "); @@ -1421,7 +1421,7 @@ vect_recog_widen_shift_pattern (VEC (gimple, heap) return NULL; /* Pattern detected. */ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_widen_shift_pattern: detected: "); @@ -1445,7 +1445,7 @@ vect_recog_widen_shift_pattern (VEC (gimple, heap) pattern_stmt = gimple_build_assign_with_ops (WIDEN_LSHIFT_EXPR, var, oprnd0, oprnd1); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_gimple_stmt_loc (MSG_NOTE, vect_location, TDF_SLIM, pattern_stmt, 0); VEC_safe_push (gimple, heap, *stmts, last_stmt); @@ -1567,7 +1567,7 @@ vect_recog_vector_vector_shift_pattern (VEC (gimpl } /* Pattern detected. */ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_vector_vector_shift_pattern: detected: "); @@ -1575,7 +1575,7 @@ vect_recog_vector_vector_shift_pattern (VEC (gimpl var = vect_recog_temp_ssa_var (TREE_TYPE (oprnd0), NULL); pattern_stmt = gimple_build_assign_with_ops (rhs_code, var, oprnd0, def); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_gimple_stmt_loc (MSG_NOTE, vect_location, TDF_SLIM, pattern_stmt, 0); VEC_safe_push (gimple, heap, *stmts, last_stmt); @@ -1685,7 +1685,7 @@ vect_recog_divmod_pattern (VEC (gimple, heap) **st return NULL; /* Pattern detected. */ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_divmod_pattern: detected: "); @@ -1789,7 +1789,7 @@ vect_recog_divmod_pattern (VEC (gimple, heap) **st signmask); } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_gimple_stmt_loc (MSG_NOTE, vect_location, TDF_SLIM, pattern_stmt, 0); @@ -2031,7 +2031,7 @@ vect_recog_divmod_pattern (VEC (gimple, heap) **st } /* Pattern detected. */ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_divmod_pattern: detected: "); @@ -2199,7 +2199,7 @@ vect_recog_mixed_size_cond_pattern (VEC (gimple, h *type_in = vecitype; *type_out = vectype; - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_mixed_size_cond_pattern: detected: "); @@ -2592,7 +2592,7 @@ vect_recog_bool_pattern (VEC (gimple, heap) **stmt *type_out = vectype; *type_in = vectype; VEC_safe_push (gimple, heap, *stmts, last_stmt); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_bool_pattern: detected: "); @@ -2638,7 +2638,7 @@ vect_recog_bool_pattern (VEC (gimple, heap) **stmt *type_out = vectype; *type_in = vectype; VEC_safe_push (gimple, heap, *stmts, last_stmt); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "vect_recog_bool_pattern: detected: "); return pattern_stmt; @@ -2788,7 +2788,7 @@ vect_pattern_recog_1 (vect_recog_func_ptr vect_rec } /* Found a vectorizable pattern. 
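
Each recognizer in this file ends with the same reporting tail once a pattern is found: announce the detection on MSG_OPTIMIZED_LOCATIONS and, under the same guard, print the synthesized statement. In outline (condensed; pattern_stmt as in the surrounding code):

    if (dump_enabled_p ())
      {
        dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location,
                         "vect_recog_dot_prod_pattern: detected: ");
        dump_gimple_stmt (MSG_NOTE, TDF_SLIM, pattern_stmt, 0);
      }
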
*/ - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "pattern recognized: "); @@ -2814,7 +2814,7 @@ vect_pattern_recog_1 (vect_recog_func_ptr vect_rec { stmt_info = vinfo_for_stmt (stmt); pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info); - if (dump_kind_p (MSG_OPTIMIZED_LOCATIONS)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_OPTIMIZED_LOCATIONS, vect_location, "additional pattern stmt: "); @@ -2915,7 +2915,7 @@ vect_pattern_recog (loop_vec_info loop_vinfo, bb_v VEC (gimple, heap) *stmts_to_replace = VEC_alloc (gimple, heap, 1); gimple stmt; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_pattern_recog ==="); Index: tree-vect-stmts.c =================================================================== --- tree-vect-stmts.c (revision 192695) +++ tree-vect-stmts.c (working copy) @@ -190,7 +190,7 @@ vect_mark_relevant (VEC(gimple,heap) **worklist, g bool save_live_p = STMT_VINFO_LIVE_P (stmt_info); gimple pattern_stmt; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "mark relevant %d, live %d.", relevant, live_p); @@ -246,7 +246,7 @@ vect_mark_relevant (VEC(gimple,heap) **worklist, g pattern_stmt = STMT_VINFO_RELATED_STMT (stmt_info); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "last stmt in pattern. don't mark" " relevant/live."); @@ -265,7 +265,7 @@ vect_mark_relevant (VEC(gimple,heap) **worklist, g if (STMT_VINFO_RELEVANT (stmt_info) == save_relevant && STMT_VINFO_LIVE_P (stmt_info) == save_live_p) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "already marked relevant/live."); return; @@ -310,7 +310,7 @@ vect_stmt_relevant_p (gimple stmt, loop_vec_info l if (gimple_code (stmt) != GIMPLE_PHI) if (gimple_vdef (stmt)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vec_stmt_relevant_p: stmt has vdefs."); *relevant = vect_used_in_scope; @@ -324,7 +324,7 @@ vect_stmt_relevant_p (gimple stmt, loop_vec_info l basic_block bb = gimple_bb (USE_STMT (use_p)); if (!flow_bb_inside_loop_p (loop, bb)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vec_stmt_relevant_p: used out of loop."); @@ -437,7 +437,7 @@ process_use (gimple stmt, tree use, loop_vec_info if (!vect_is_simple_use (use, stmt, loop_vinfo, NULL, &def_stmt, &def, &dt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: unsupported use in stmt."); return false; @@ -449,7 +449,7 @@ process_use (gimple stmt, tree use, loop_vec_info def_bb = gimple_bb (def_stmt); if (!flow_bb_inside_loop_p (loop, def_bb)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "def_stmt is out of loop."); return true; } @@ -467,7 +467,7 @@ process_use (gimple stmt, tree use, loop_vec_info && STMT_VINFO_DEF_TYPE (dstmt_vinfo) == vect_reduction_def && bb->loop_father == def_bb->loop_father) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "reduc-stmt defining reduc-phi in the same nest."); if (STMT_VINFO_IN_PATTERN_P (dstmt_vinfo)) @@ -487,7 +487,7 @@ process_use (gimple stmt, tree use, loop_vec_info ... 
*/ if (flow_loop_nested_p (def_bb->loop_father, bb->loop_father)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "outer-loop def-stmt defining inner-loop stmt."); @@ -525,7 +525,7 @@ process_use (gimple stmt, tree use, loop_vec_info stmt # use (d) */ else if (flow_loop_nested_p (bb->loop_father, def_bb->loop_father)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "inner-loop def-stmt defining outer-loop stmt."); @@ -589,7 +589,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo enum vect_relevant relevant, tmp_relevant; enum vect_def_type def_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_mark_stmts_to_be_vectorized ==="); @@ -602,7 +602,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo for (si = gsi_start_phis (bb); !gsi_end_p (si); gsi_next (&si)) { phi = gsi_stmt (si); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "init: phi relevant? "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, phi, 0); @@ -614,7 +614,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo for (si = gsi_start_bb (bb); !gsi_end_p (si); gsi_next (&si)) { stmt = gsi_stmt (si); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "init: stmt relevant? "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); @@ -632,7 +632,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo ssa_op_iter iter; stmt = VEC_pop (gimple, worklist); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "worklist: examine stmt: "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); @@ -677,7 +677,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo /* fall through */ default: - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported use of reduction."); VEC_free (gimple, heap, worklist); @@ -692,7 +692,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo && tmp_relevant != vect_used_in_outer_by_reduction && tmp_relevant != vect_used_in_outer) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported use of nested cycle."); @@ -707,7 +707,7 @@ vect_mark_stmts_to_be_vectorized (loop_vec_info lo if (tmp_relevant != vect_unused_in_scope && tmp_relevant != vect_used_by_reduction) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported use of double reduction."); @@ -830,7 +830,7 @@ vect_model_simple_cost (stmt_vec_info stmt_info, i inside_cost = record_stmt_cost (body_cost_vec, ncopies, vector_stmt, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_simple_cost: inside_cost = %d, " "prologue_cost = %d .", inside_cost, prologue_cost); @@ -876,7 +876,7 @@ vect_model_promotion_demotion_cost (stmt_vec_info prologue_cost += add_stmt_cost (target_cost_data, 1, vector_stmt, stmt_info, 0, vect_prologue); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_promotion_demotion_cost: inside_cost = %d, " "prologue_cost = %d .", inside_cost, prologue_cost); @@ -960,7 +960,7 @@ vect_model_store_cost (stmt_vec_info stmt_info, in inside_cost = record_stmt_cost 
(body_cost_vec, nstmts, vec_perm, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_store_cost: strided group_size = %d .", group_size); @@ -969,7 +969,7 @@ vect_model_store_cost (stmt_vec_info stmt_info, in /* Costs of the stores. */ vect_get_store_cost (first_dr, ncopies, &inside_cost, body_cost_vec); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_store_cost: inside_cost = %d, " "prologue_cost = %d .", inside_cost, prologue_cost); @@ -994,7 +994,7 @@ vect_get_store_cost (struct data_reference *dr, in vector_store, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_store_cost: aligned."); break; @@ -1006,7 +1006,7 @@ vect_get_store_cost (struct data_reference *dr, in *inside_cost += record_stmt_cost (body_cost_vec, ncopies, unaligned_store, stmt_info, DR_MISALIGNMENT (dr), vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_store_cost: unaligned supported by " "hardware."); @@ -1017,7 +1017,7 @@ vect_get_store_cost (struct data_reference *dr, in { *inside_cost = VECT_MAX_COST; - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "vect_model_store_cost: unsupported access."); break; @@ -1076,7 +1076,7 @@ vect_model_load_cost (stmt_vec_info stmt_info, int inside_cost += record_stmt_cost (body_cost_vec, nstmts, vec_perm, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: strided group_size = %d .", group_size); @@ -1100,7 +1100,7 @@ vect_model_load_cost (stmt_vec_info stmt_info, int &inside_cost, &prologue_cost, prologue_cost_vec, body_cost_vec, true); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: inside_cost = %d, " "prologue_cost = %d .", inside_cost, prologue_cost); @@ -1127,7 +1127,7 @@ vect_get_load_cost (struct data_reference *dr, int *inside_cost += record_stmt_cost (body_cost_vec, ncopies, vector_load, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: aligned."); @@ -1140,7 +1140,7 @@ vect_get_load_cost (struct data_reference *dr, int unaligned_load, stmt_info, DR_MISALIGNMENT (dr), vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: unaligned supported by " "hardware."); @@ -1161,7 +1161,7 @@ vect_get_load_cost (struct data_reference *dr, int *inside_cost += record_stmt_cost (body_cost_vec, 1, vector_stmt, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: explicit realign"); @@ -1169,7 +1169,7 @@ vect_get_load_cost (struct data_reference *dr, int } case dr_explicit_realign_optimized: { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vect_model_load_cost: unaligned software " "pipelined."); @@ -1197,7 +1197,7 @@ vect_get_load_cost (struct data_reference *dr, int *inside_cost += record_stmt_cost (body_cost_vec, ncopies, vec_perm, stmt_info, 0, vect_body); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, 
vect_location, "vect_model_load_cost: explicit realign optimized"); @@ -1208,7 +1208,7 @@ vect_get_load_cost (struct data_reference *dr, int { *inside_cost = VECT_MAX_COST; - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "vect_model_load_cost: unsupported access."); break; @@ -1258,7 +1258,7 @@ vect_init_vector_1 (gimple stmt, gimple new_stmt, } } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "created new init_stmt: "); @@ -1340,7 +1340,7 @@ vect_get_vec_def_for_operand (tree op, gimple stmt bool is_simple_use; tree vector_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "vect_get_vec_def_for_operand: "); @@ -1350,7 +1350,7 @@ vect_get_vec_def_for_operand (tree op, gimple stmt is_simple_use = vect_is_simple_use (op, stmt, loop_vinfo, NULL, &def_stmt, &def, &dt); gcc_assert (is_simple_use); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { int loc_printed = 0; if (def) @@ -1382,7 +1382,7 @@ vect_get_vec_def_for_operand (tree op, gimple stmt *scalar_def = op; /* Create 'vect_cst_ = {cst,cst,...,cst}' */ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Create vector_cst. nunits = %d", nunits); @@ -1399,7 +1399,7 @@ vect_get_vec_def_for_operand (tree op, gimple stmt *scalar_def = def; /* Create 'vec_inv = {inv,inv,..,inv}' */ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Create vector_inv."); return vect_init_vector (stmt, def, vector_type, NULL); @@ -1661,7 +1661,7 @@ vect_finish_stmt_generation (gimple stmt, gimple v set_vinfo_for_stmt (vec_stmt, new_stmt_vec_info (vec_stmt, loop_vinfo, bb_vinfo)); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "add new stmt: "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, vec_stmt, 0); @@ -1764,7 +1764,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat if (rhs_type && !types_compatible_p (rhs_type, TREE_TYPE (op))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "argument types differ."); return false; @@ -1775,7 +1775,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat if (!vect_is_simple_use_1 (op, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[i], &opvectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -1786,7 +1786,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat else if (opvectype && opvectype != vectype_in) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "argument vector types differ."); return false; @@ -1800,7 +1800,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat gcc_assert (vectype_in); if (!vectype_in) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no vectype for scalar type "); @@ -1829,7 +1829,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat fndecl = vectorizable_function (stmt, vectype_out, vectype_in); if (fndecl == NULL_TREE) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "function is not vectorizable."); @@ -1852,7 +1852,7 @@ 
vectorizable_call (gimple stmt, gimple_stmt_iterat if (!vec_stmt) /* transformation not required. */ { STMT_VINFO_TYPE (stmt_info) = call_vec_info_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_call ==="); vect_model_simple_cost (stmt_info, ncopies, dt, NULL, NULL); return true; @@ -1860,7 +1860,7 @@ vectorizable_call (gimple stmt, gimple_stmt_iterat /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform call."); /* Handle def. */ @@ -2375,7 +2375,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ && (TYPE_PRECISION (rhs_type) != GET_MODE_PRECISION (TYPE_MODE (rhs_type))))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "type conversion to/from bit-precision unsupported."); return false; @@ -2385,7 +2385,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ if (!vect_is_simple_use_1 (op0, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[0], &vectype_in)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -2407,7 +2407,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ if (!ok) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -2422,7 +2422,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ gcc_assert (vectype_in); if (!vectype_in) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no vectype for scalar type "); @@ -2466,7 +2466,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ break; /* FALLTHRU */ unsupported: - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "conversion not supported by target."); return false; @@ -2565,7 +2565,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ if (!vec_stmt) /* transformation not required. */ { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_conversion ==="); if (code == FIX_TRUNC_EXPR || code == FLOAT_EXPR) @@ -2588,7 +2588,7 @@ vectorizable_conversion (gimple stmt, gimple_stmt_ } /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform conversion. ncopies = %d.", ncopies); @@ -2941,7 +2941,7 @@ vectorizable_assignment (gimple stmt, gimple_stmt_ if (!vect_is_simple_use_1 (op, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[0], &vectype_in)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -2970,7 +2970,7 @@ vectorizable_assignment (gimple stmt, gimple_stmt_ > TYPE_PRECISION (TREE_TYPE (op))) && TYPE_UNSIGNED (TREE_TYPE (op)))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "type conversion to/from bit-precision " "unsupported."); @@ -2980,7 +2980,7 @@ vectorizable_assignment (gimple stmt, gimple_stmt_ if (!vec_stmt) /* transformation not required. 
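
Since the guard appears exactly once per phase, it helps to recall the shape all these vectorizable_* routines share: called with vec_stmt == NULL they only analyze and cost the statement; called again during transformation they emit code. Condensed from vectorizable_call above:

    if (!vec_stmt)   /* Analysis phase: record, report, cost -- no codegen.  */
      {
        STMT_VINFO_TYPE (stmt_info) = call_vec_info_type;
        if (dump_enabled_p ())
          dump_printf_loc (MSG_NOTE, vect_location,
                           "=== vectorizable_call ===");
        vect_model_simple_cost (stmt_info, ncopies, dt, NULL, NULL);
        return true;
      }

    /* Transformation phase.  */
    if (dump_enabled_p ())
      dump_printf_loc (MSG_NOTE, vect_location, "transform call.");
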
*/ { STMT_VINFO_TYPE (stmt_info) = assignment_vec_info_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_assignment ==="); vect_model_simple_cost (stmt_info, ncopies, dt, NULL, NULL); @@ -2988,7 +2988,7 @@ vectorizable_assignment (gimple stmt, gimple_stmt_ } /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform assignment."); /* Handle def. */ @@ -3135,7 +3135,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (TYPE_PRECISION (TREE_TYPE (scalar_dest)) != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (scalar_dest)))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bit-precision shifts not supported."); return false; @@ -3145,7 +3145,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (!vect_is_simple_use_1 (op0, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[0], &vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3158,7 +3158,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera gcc_assert (vectype); if (!vectype) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no vectype for scalar type "); return false; @@ -3173,7 +3173,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (!vect_is_simple_use_1 (op1, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[1], &op1_vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3218,7 +3218,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera } else { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "operand mode requires invariant argument."); return false; @@ -3228,7 +3228,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (!scalar_shift_arg) { optab = optab_for_tree_code (code, vectype, optab_vector); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vector/vector shift/rotate found."); @@ -3237,7 +3237,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (op1_vectype == NULL_TREE || TYPE_MODE (op1_vectype) != TYPE_MODE (vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unusable type for last operand in" " vector/vector shift/rotate."); @@ -3252,7 +3252,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (optab && optab_handler (optab, TYPE_MODE (vectype)) != CODE_FOR_nothing) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vector/scalar shift/rotate found."); } @@ -3265,7 +3265,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera { scalar_shift_arg = false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "vector/vector shift/rotate found."); @@ -3282,7 +3282,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera && TYPE_MODE (TREE_TYPE (vectype)) != TYPE_MODE (TREE_TYPE (op1))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unusable type for last operand in" " 
vector/vector shift/rotate."); @@ -3302,7 +3302,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera /* Supportable by target? */ if (!optab) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no optab."); return false; @@ -3311,7 +3311,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera icode = (int) optab_handler (optab, vec_mode); if (icode == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "op not supported by target."); /* Check only during analysis. */ @@ -3319,7 +3319,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera || (vf < vect_min_worthwhile_factor (code) && !vec_stmt)) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "proceeding using word mode."); } @@ -3328,7 +3328,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera && vf < vect_min_worthwhile_factor (code) && !vec_stmt) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not worthwhile without SIMD support."); return false; @@ -3337,7 +3337,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera if (!vec_stmt) /* transformation not required. */ { STMT_VINFO_TYPE (stmt_info) = shift_vec_info_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_shift ==="); vect_model_simple_cost (stmt_info, ncopies, dt, NULL, NULL); return true; @@ -3345,7 +3345,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform binary/unary operation."); @@ -3382,7 +3382,7 @@ vectorizable_shift (gimple stmt, gimple_stmt_itera optab_op2_mode = insn_data[icode].operand[2].mode; if (!VECTOR_MODE_P (optab_op2_mode)) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "operand 1 using scalar mode."); vec_oprnd1 = op1; @@ -3510,7 +3510,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i op_type = TREE_CODE_LENGTH (code); if (op_type != unary_op && op_type != binary_op && op_type != ternary_op) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "num. 
args = %d (not unary/binary/ternary op).", op_type); @@ -3529,7 +3529,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i && code != BIT_XOR_EXPR && code != BIT_AND_EXPR) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "bit-precision arithmetic not supported."); return false; @@ -3539,7 +3539,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i if (!vect_is_simple_use_1 (op0, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[0], &vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3552,7 +3552,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i gcc_assert (vectype); if (!vectype) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no vectype for scalar type "); @@ -3574,7 +3574,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i if (!vect_is_simple_use (op1, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[1])) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3586,7 +3586,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i if (!vect_is_simple_use (op2, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt[2])) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3628,7 +3628,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i optab = optab_for_tree_code (code, vectype, optab_default); if (!optab) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no optab."); return false; @@ -3638,14 +3638,14 @@ vectorizable_operation (gimple stmt, gimple_stmt_i if (icode == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "op not supported by target."); /* Check only during analysis. */ if (GET_MODE_SIZE (vec_mode) != UNITS_PER_WORD || (!vec_stmt && vf < vect_min_worthwhile_factor (code))) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "proceeding using word mode."); } @@ -3654,7 +3654,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i && !vec_stmt && vf < vect_min_worthwhile_factor (code)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not worthwhile without SIMD support."); return false; @@ -3663,7 +3663,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i if (!vec_stmt) /* transformation not required. */ { STMT_VINFO_TYPE (stmt_info) = op_vec_info_type; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vectorizable_operation ==="); vect_model_simple_cost (stmt_info, ncopies, dt, NULL, NULL); @@ -3672,7 +3672,7 @@ vectorizable_operation (gimple stmt, gimple_stmt_i /** Transform. **/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform binary/unary operation."); @@ -3860,7 +3860,7 @@ vectorizable_store (gimple stmt, gimple_stmt_itera /* FORNOW. This restriction should be relaxed. 
*/ if (loop && nested_in_vect_loop_p (loop, stmt) && ncopies > 1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "multiple types in nested loop."); return false; @@ -3894,7 +3894,7 @@ vectorizable_store (gimple stmt, gimple_stmt_itera if (!vect_is_simple_use (op, stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -3915,7 +3915,7 @@ vectorizable_store (gimple stmt, gimple_stmt_itera ? STMT_VINFO_DR_STEP (stmt_info) : DR_STEP (dr), size_zero_node) < 0) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "negative step for store."); return false; @@ -3946,7 +3946,7 @@ vectorizable_store (gimple stmt, gimple_stmt_itera if (!vect_is_simple_use (op, next_stmt, loop_vinfo, bb_vinfo, &def_stmt, &def, &dt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "use not simple."); return false; @@ -4008,7 +4008,7 @@ vectorizable_store (gimple stmt, gimple_stmt_itera group_size = vec_num = 1; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform store. ncopies = %d", ncopies); @@ -4396,7 +4396,7 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat /* FORNOW. This restriction should be relaxed. */ if (nested_in_vect_loop && ncopies > 1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "multiple types in nested loop."); return false; @@ -4436,7 +4436,7 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat (e.g. - data copies). */ if (optab_handler (mov_optab, mode) == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Aligned load, but unsupported type."); return false; @@ -4472,7 +4472,7 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat &def_stmt, &def, &gather_dt, &gather_off_vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "gather index use not simple."); return false; @@ -4492,7 +4492,7 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat size_zero_node) < 0; if (negative && ncopies > 1) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "multiple types with negative step."); return false; @@ -4505,14 +4505,14 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat if (alignment_support_scheme != dr_aligned && alignment_support_scheme != dr_unaligned_supported) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "negative step but alignment required."); return false; } if (!perm_mask_for_reverse (vectype)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "negative step and reversing not supported."); return false; @@ -4527,7 +4527,7 @@ vectorizable_load (gimple stmt, gimple_stmt_iterat return true; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "transform load. 
ncopies = %d", ncopies); @@ -5334,7 +5334,7 @@ vectorizable_condition (gimple stmt, gimple_stmt_i /* FORNOW: not yet supported. */ if (STMT_VINFO_LIVE_P (stmt_info)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "value used after loop."); return false; @@ -5534,7 +5534,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect gimple pattern_stmt; gimple_seq pattern_def_seq; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "==> examining statement: "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); @@ -5542,7 +5542,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect if (gimple_has_volatile_ops (stmt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: stmt has volatile operands"); @@ -5575,7 +5575,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect /* Analyze PATTERN_STMT instead of the original stmt. */ stmt = pattern_stmt; stmt_info = vinfo_for_stmt (pattern_stmt); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "==> examining pattern statement: "); @@ -5584,7 +5584,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect } else { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "irrelevant."); return true; @@ -5597,7 +5597,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_stmt)))) { /* Analyze PATTERN_STMT too. */ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "==> examining pattern statement: "); @@ -5621,7 +5621,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect || STMT_VINFO_LIVE_P (vinfo_for_stmt (pattern_def_stmt))) { /* Analyze def stmt of STMT if it's a pattern stmt. 
*/ - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "==> examining pattern def statement: "); @@ -5660,7 +5660,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect gcc_assert (PURE_SLP_STMT (stmt_info)); scalar_type = TREE_TYPE (gimple_get_lhs (stmt)); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "get vectype for scalar type: "); @@ -5670,7 +5670,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect vectype = get_vectype_for_scalar_type (scalar_type); if (!vectype) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not SLPed: unsupported data-type "); @@ -5680,7 +5680,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect return false; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "vectype: "); dump_generic_expr (MSG_NOTE, TDF_SLIM, vectype); @@ -5724,7 +5724,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect if (!ok) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: relevant stmt not "); @@ -5746,7 +5746,7 @@ vect_analyze_stmt (gimple stmt, bool *need_to_vect if (!ok) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not vectorized: live stmt not "); @@ -5846,7 +5846,7 @@ vect_transform_stmt (gimple stmt, gimple_stmt_iter default: if (!STMT_VINFO_LIVE_P (stmt_info)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "stmt not supported."); gcc_unreachable (); @@ -5871,7 +5871,7 @@ vect_transform_stmt (gimple stmt, gimple_stmt_iter tree scalar_dest; gimple exit_phi; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "Record the vdef for outer-loop vectorization."); @@ -6108,7 +6108,7 @@ get_vectype_for_scalar_type_and_size (tree scalar_ return NULL_TREE; vectype = build_vector_type (scalar_type, nunits); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "get vectype with %d units of type ", nunits); @@ -6118,7 +6118,7 @@ get_vectype_for_scalar_type_and_size (tree scalar_ if (!vectype) return NULL_TREE; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "vectype: "); dump_generic_expr (MSG_NOTE, TDF_SLIM, vectype); @@ -6127,7 +6127,7 @@ get_vectype_for_scalar_type_and_size (tree scalar_ if (!VECTOR_MODE_P (TYPE_MODE (vectype)) && !INTEGRAL_MODE_P (TYPE_MODE (vectype))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "mode not supported by target."); return NULL_TREE; @@ -6198,7 +6198,7 @@ vect_is_simple_use (tree operand, gimple stmt, loo *def_stmt = NULL; *def = NULL_TREE; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "vect_is_simple_use: operand "); @@ -6220,14 +6220,14 @@ vect_is_simple_use (tree operand, gimple stmt, loo if (TREE_CODE (operand) == PAREN_EXPR) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "non-associatable copy."); operand = TREE_OPERAND (operand, 0); } if (TREE_CODE (operand) != SSA_NAME) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if 
(dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "not ssa-name."); return false; @@ -6236,13 +6236,13 @@ vect_is_simple_use (tree operand, gimple stmt, loo *def_stmt = SSA_NAME_DEF_STMT (operand); if (*def_stmt == NULL) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "no def_stmt."); return false; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "def_stmt: "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, *def_stmt, 0); @@ -6274,13 +6274,13 @@ vect_is_simple_use (tree operand, gimple stmt, loo && *dt == vect_double_reduction_def && gimple_code (stmt) != GIMPLE_PHI)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unsupported pattern."); return false; } - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "type of def: %d.", *dt); switch (gimple_code (*def_stmt)) @@ -6299,7 +6299,7 @@ vect_is_simple_use (tree operand, gimple stmt, loo break; /* FALLTHRU */ default: - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported defining stmt: "); return false; Index: tree-vect-slp.c =================================================================== --- tree-vect-slp.c (revision 192695) +++ tree-vect-slp.c (working copy) @@ -238,7 +238,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi &def, &dt) || (!def_stmt && dt != vect_constant_def)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: can't find def for "); @@ -263,7 +263,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi pattern = true; if (!first && !oprnd_info->first_pattern) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: some of the stmts" @@ -279,7 +279,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi if (dt == vect_unknown_def_type) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Unsupported pattern."); return false; @@ -296,7 +296,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi break; default: - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "unsupported defining stmt: "); return false; @@ -361,7 +361,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi { if (number_of_oprnds != 2) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different types "); @@ -388,7 +388,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi && !types_compatible_p (oprnd_info->first_def_type, TREE_TYPE (def_op0)))) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Swapping operands of "); @@ -400,7 +400,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi } else { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different types "); @@ -435,7 +435,7 @@ vect_get_and_check_slp_defs (loop_vec_info loop_vi default: /* FORNOW: Not supported. 
*/ - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: illegal type of def "); @@ -504,7 +504,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ /* For every stmt in NODE find its def stmt/s. */ FOR_EACH_VEC_ELT (gimple, stmts, i, stmt) { - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Build SLP for "); dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0); @@ -513,7 +513,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ /* Fail to vectorize statements marked as unvectorizable. */ if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unvectorizable statement "); @@ -527,7 +527,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ lhs = gimple_get_lhs (stmt); if (lhs == NULL_TREE) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: not GIMPLE_ASSIGN nor " @@ -544,7 +544,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ && (cond = gimple_assign_rhs1 (stmt)) && !COMPARISON_CLASS_P (cond)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: condition is not " @@ -560,7 +560,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ vectype = get_vectype_for_scalar_type (scalar_type); if (!vectype) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unsupported data-type "); @@ -591,7 +591,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ || !gimple_call_nothrow_p (stmt) || gimple_call_chain (stmt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unsupported call type "); @@ -631,7 +631,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ if (!optab) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: no optab."); vect_free_oprnd_info (&oprnds_info); @@ -640,7 +640,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ icode = (int) optab_handler (optab, vec_mode); if (icode == CODE_FOR_nothing) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: " "op not supported by target."); @@ -674,7 +674,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ || first_stmt_code == COMPONENT_REF || first_stmt_code == MEM_REF))) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different operation " @@ -689,7 +689,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ if (need_same_oprnds && !operand_equal_p (first_op1, gimple_assign_rhs2 (stmt), 0)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different shift " @@ -710,7 +710,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ || gimple_call_fntype (first_stmt) != gimple_call_fntype (stmt)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) 
{ dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different calls in "); @@ -749,7 +749,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ || (GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt && GROUP_GAP (vinfo_for_stmt (stmt)) != 1)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: grouped " @@ -767,7 +767,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ if (loop_vinfo && GROUP_SIZE (vinfo_for_stmt (stmt)) > ncopies * group_size) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: the number " @@ -792,7 +792,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ && rhs_code != REALPART_EXPR && rhs_code != IMAGPART_EXPR) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -817,7 +817,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ if (vect_supportable_dr_alignment (first_dr, false) == dr_unaligned_unsupported) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -857,7 +857,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ if (TREE_CODE_CLASS (rhs_code) == tcc_reference) { /* Not grouped load. */ - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: not grouped load "); @@ -875,7 +875,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ && rhs_code != COND_EXPR && rhs_code != CALL_EXPR) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: operation"); @@ -895,7 +895,7 @@ vect_build_slp_tree (loop_vec_info loop_vinfo, bb_ first_cond_code = TREE_CODE (cond_expr); else if (first_cond_code != TREE_CODE (cond_expr)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: different" @@ -1080,7 +1080,7 @@ vect_supported_slp_permutation_p (slp_instance ins /* Check that the loads are all in the same interleaving chain. 
*/ if (GROUP_FIRST_ELEMENT (vinfo_for_stmt (scalar_stmt)) != first_load) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unsupported data " @@ -1169,7 +1169,7 @@ vect_supported_load_permutation_p (slp_instance sl if (!slp_instn) return false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_NOTE, vect_location, "Load permutation "); FOR_EACH_VEC_ELT (int, load_permutation, i, next) @@ -1376,7 +1376,7 @@ vect_supported_load_permutation_p (slp_instance sl if (vect_supportable_dr_alignment (dr, false) == dr_unaligned_unsupported) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -1536,7 +1536,7 @@ vect_analyze_slp_instance (loop_vec_info loop_vinf if (!vectype) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unsupported data-type "); @@ -1556,7 +1556,7 @@ vect_analyze_slp_instance (loop_vec_info loop_vinf unrolling_factor = least_common_multiple (nunits, group_size) / group_size; if (unrolling_factor != 1 && !loop_vinfo) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unrolling required in basic" " block SLP"); @@ -1618,7 +1618,7 @@ vect_analyze_slp_instance (loop_vec_info loop_vinf if (unrolling_factor != 1 && !loop_vinfo) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unrolling required in basic" " block SLP"); @@ -1645,7 +1645,7 @@ vect_analyze_slp_instance (loop_vec_info loop_vinf if (!vect_supported_load_permutation_p (new_instance, group_size, load_permutation)) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) { dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Build SLP failed: unsupported load " @@ -1685,7 +1685,7 @@ vect_analyze_slp_instance (loop_vec_info loop_vinf VEC_safe_push (slp_instance, heap, BB_VINFO_SLP_INSTANCES (bb_vinfo), new_instance); - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) vect_print_slp_tree (MSG_NOTE, node); return true; @@ -1717,7 +1717,7 @@ vect_analyze_slp (loop_vec_info loop_vinfo, bb_vec gimple first_element; bool ok = false; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_analyze_slp ==="); if (loop_vinfo) @@ -1736,7 +1736,7 @@ vect_analyze_slp (loop_vec_info loop_vinfo, bb_vec if (bb_vinfo && !ok) { - if (dump_kind_p (MSG_MISSED_OPTIMIZATION)) + if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, "Failed to SLP the basic block."); @@ -1780,7 +1780,7 @@ vect_make_slp_decision (loop_vec_info loop_vinfo) slp_instance instance; int decided_to_slp = 0; - if (dump_kind_p (MSG_NOTE)) + if (dump_enabled_p ()) dump_printf_loc (MSG_NOTE, vect_location, "=== vect_make_slp_decision ===");