From patchwork Thu Apr 28 23:35:09 2016
X-Patchwork-Submitter: Peter Bergner
X-Patchwork-Id: 616539
Message-ID: <1461886509.4256.14.camel@vnet.ibm.com>
Subject: [PATCH] Fix PR tree-optimization/51513
From: Peter Bergner <bergner@vnet.ibm.com>
To: gcc-patches@gcc.gnu.org
Date: Thu, 28 Apr 2016 18:35:09 -0500
This patch fixes PR tree-optimization/51513, namely the generation of
wild branches for switch statements whose case statements only contain
calls to __builtin_unreachable ().  For example, compiling the following
with -O2 -fjump-tables --param case-values-threshold=1 (to easily expose
the bug):

  switch (which)
    {
    case 0:
      return v0;
    case 1:
      return v1;
    case 2:
      return v2;
    default:
      __builtin_unreachable ();
    }

we currently generate for powerpc64le-linux:

	cmpldi 7,3,2
	bgt 7,.L2		<- Invalid branch
	...
.L3:
	mr 3,4
	blr
	.p2align 4,,15
.L2:				<- Invalid branch target
	.long 0
	.byte 0,0,0,0,0,0,0,0
	.size	bug,.-bug

...and for x86_64-linux:

bug:
.LFB0:
	.cfi_startproc
	cmpq	$2, %rdi
	ja	.L2		<- Invalid branch
	jmp	*.L4(,%rdi,8)
	...
.L3:
	movq	%rsi, %rax
	ret
	.p2align 4,,10
	.p2align 3
.L2:				<- Invalid branch target
	.cfi_endproc
.LFE0:
	.size	bug, .-bug

The bug is that we end up deleting the unreachable block(s) from the
CFG, but we never remove the label(s) for the block(s) from the switch
jump table.  We fix this by removing the case labels and their
associated edges for unreachable blocks.  Normal CFG cleanup then
removes the unreachable blocks themselves.

This has passed bootstrap and regtesting on powerpc64le-linux and
x86_64-linux with no regressions.  Ok for trunk?

Peter

gcc/
	PR tree-optimization/51513
	* tree-cfg.c (gimple_unreachable_bb_p): New function.
	(assert_unreachable_fallthru_edge_p): Use it.
	(compress_case_label_vector): New function.
	(group_case_labels_stmt): Use it.
	(cleanup_dead_labels): Call gimple_unreachable_bb_p() and
	compress_case_label_vector().  Remove labels and edges leading
	to unreachable blocks.

gcc/testsuite/
	PR tree-optimization/51513
	* gcc.c-torture/execute/pr51513.c: New test.
Index: gcc/tree-cfg.c
===================================================================
--- gcc/tree-cfg.c	(revision 235531)
+++ gcc/tree-cfg.c	(working copy)
@@ -408,6 +408,33 @@ computed_goto_p (gimple *t)
 	  && TREE_CODE (gimple_goto_dest (t)) != LABEL_DECL);
 }
 
+/* Returns true if the basic block BB has no successors and only contains
+   a call to __builtin_unreachable ().  */
+
+static bool
+gimple_unreachable_bb_p (basic_block bb)
+{
+  gimple_stmt_iterator gsi;
+  gimple *stmt;
+
+  if (EDGE_COUNT (bb->succs) != 0)
+    return false;
+
+  gsi = gsi_after_labels (bb);
+  if (gsi_end_p (gsi))
+    return false;
+
+  stmt = gsi_stmt (gsi);
+  while (is_gimple_debug (stmt) || gimple_clobber_p (stmt))
+    {
+      gsi_next (&gsi);
+      if (gsi_end_p (gsi))
+	return false;
+      stmt = gsi_stmt (gsi);
+    }
+  return gimple_call_builtin_p (stmt, BUILT_IN_UNREACHABLE);
+}
+
 /* Returns true for edge E where e->src ends with a GIMPLE_COND and
    the other edge points to a bb with just __builtin_unreachable ().
    I.e. return true for C->M edge in:
@@ -431,23 +458,7 @@ assert_unreachable_fallthru_edge_p (edge
       basic_block other_bb = EDGE_SUCC (pred_bb, 0)->dest;
       if (other_bb == e->dest)
 	other_bb = EDGE_SUCC (pred_bb, 1)->dest;
-      if (EDGE_COUNT (other_bb->succs) == 0)
-	{
-	  gimple_stmt_iterator gsi = gsi_after_labels (other_bb);
-	  gimple *stmt;
-
-	  if (gsi_end_p (gsi))
-	    return false;
-	  stmt = gsi_stmt (gsi);
-	  while (is_gimple_debug (stmt) || gimple_clobber_p (stmt))
-	    {
-	      gsi_next (&gsi);
-	      if (gsi_end_p (gsi))
-		return false;
-	      stmt = gsi_stmt (gsi);
-	    }
-	  return gimple_call_builtin_p (stmt, BUILT_IN_UNREACHABLE);
-	}
+      return gimple_unreachable_bb_p (other_bb);
     }
   return false;
 }
@@ -1401,6 +1412,26 @@ cleanup_dead_labels_eh (void)
     }
 }
 
+/* Compress the case labels in the label vector to remove NULL labels,
+   and adjust the length of the vector.  */
+
+void
+compress_case_label_vector (gswitch *stmt, size_t new_size, size_t old_size)
+{
+  size_t i, j;
+
+  gcc_assert (new_size <= old_size);
+
+  for (i = 0, j = 0; i < new_size; i++)
+    {
+      while (!gimple_switch_label (stmt, j))
+	j++;
+      gimple_switch_set_label (stmt, i,
+			       gimple_switch_label (stmt, j++));
+    }
+
+  gimple_switch_set_num_labels (stmt, new_size);
+}
+
 /* Cleanup redundant labels.  This is a three-step process:
      1) Find the leading label for each block.
@@ -1485,17 +1516,81 @@ cleanup_dead_labels (void)
       case GIMPLE_SWITCH:
 	{
 	  gswitch *switch_stmt = as_a <gswitch *> (stmt);
-	  size_t i, n = gimple_switch_num_labels (switch_stmt);
-
-	  /* Replace all destination labels.  */
-	  for (i = 0; i < n; ++i)
+	  size_t i, old_size = gimple_switch_num_labels (switch_stmt);
+	  size_t new_size = old_size;
+	  basic_block prev_bb = NULL;
+	  bool prev_bb_empty = false;
+
+	  /* Replace all destination labels that do not lead to unreachable
+	     blocks.  Remove any unreachable blocks.  */
+	  for (i = 0; i < old_size; i++)
 	    {
 	      tree case_label = gimple_switch_label (switch_stmt, i);
 	      label = CASE_LABEL (case_label);
-	      new_label = main_block_label (label);
-	      if (new_label != label)
-		CASE_LABEL (case_label) = new_label;
+	      basic_block case_bb = label_to_block (label);
+
+	      /* The call to gimple_unreachable_bb_p () can be expensive, so
+		 we cache the previous block's result to catch the case
+		 where multiple labels all branch to the same block.
+		 See gcc.c-torture/compile/limits-caselabels.c for an
+		 extreme case where this caching helps a lot.  */
+	      if ((case_bb == prev_bb && prev_bb_empty)
+		  || (case_bb != prev_bb && gimple_unreachable_bb_p (case_bb)))
+		{
+		  /* Clear the unreachable case label from the switch's jump
+		     table and remove its edge in the CFG.  */
+		  gimple_switch_set_label (switch_stmt, i, NULL_TREE);
+		  if (case_bb != prev_bb)
+		    {
+		      edge e = find_edge (bb, case_bb);
+		      if (e != NULL)
+			remove_edge_and_dominated_blocks (e);
+		    }
+		  prev_bb = case_bb;
+		  prev_bb_empty = true;
+		  new_size--;
+		}
+	      else
+		{
+		  new_label = main_block_label (label);
+		  if (new_label != label)
+		    CASE_LABEL (case_label) = new_label;
+		  prev_bb = case_bb;
+		  prev_bb_empty = false;
+		}
 	    }
+
+	  /* If every case statement is unreachable, then remove the switch
+	     statement itself.  The jump table will be cleaned up below.  */
+	  if (new_size == 0)
+	    {
+	      gimple_stmt_iterator gsi = gsi_last_bb (bb);
+	      gsi_remove (&gsi, true);
+	    }
+	  else if (gimple_switch_label (switch_stmt, 0) == NULL_TREE)
+	    {
+	      /* If we deleted the default case block because it was
+		 unreachable, then add a new dummy default case label
+		 pointing at one of the other cases.  */
+	      tree default_label = NULL_TREE;
+	      for (i = 1; i < old_size; i++)
+		{
+		  default_label = gimple_switch_label (switch_stmt,
+						       old_size - i);
+		  if (default_label != NULL_TREE)
+		    {
+		      default_label = CASE_LABEL (default_label);
+		      break;
+		    }
+		}
+	      gcc_assert (default_label != NULL_TREE);
+	      tree default_case = build_case_label (NULL_TREE, NULL_TREE,
+						    default_label);
+	      gimple_switch_set_default_label (switch_stmt, default_case);
+	      new_size++;
+	    }
+
+	  if (new_size < old_size)
+	    compress_case_label_vector (switch_stmt, new_size, old_size);
 	  break;
 	}
@@ -1610,7 +1705,7 @@ void
 group_case_labels_stmt (gswitch *stmt)
 {
   int old_size = gimple_switch_num_labels (stmt);
-  int i, j, new_size = old_size;
+  int i, new_size = old_size;
   basic_block default_bb = NULL;
 
   default_bb = label_to_block (CASE_LABEL (gimple_switch_default_label (stmt)));
@@ -1668,18 +1763,7 @@ group_case_labels_stmt (gswitch *stmt)
 	}
     }
 
-  /* Compress the case labels in the label vector, and adjust the
-     length of the vector.  */
-  for (i = 0, j = 0; i < new_size; i++)
-    {
-      while (! gimple_switch_label (stmt, j))
-	j++;
-      gimple_switch_set_label (stmt, i,
-			       gimple_switch_label (stmt, j++));
-    }
-
-  gcc_assert (new_size <= old_size);
-  gimple_switch_set_num_labels (stmt, new_size);
+  compress_case_label_vector (stmt, new_size, old_size);
 }
 
 /* Look for blocks ending in a multiway branch (a GIMPLE_SWITCH),
Index: gcc/testsuite/gcc.c-torture/execute/pr51513.c
===================================================================
--- gcc/testsuite/gcc.c-torture/execute/pr51513.c	(revision 0)
+++ gcc/testsuite/gcc.c-torture/execute/pr51513.c	(working copy)
@@ -0,0 +1,36 @@
+/* PR tree-optimization/51513 */
+
+/* { dg-do run } */
+/* { dg-options "-fjump-tables --param case-values-threshold=1" } */
+
+long
+__attribute__ ((__noinline__))
+bug (long which, long v0, long v1, long v2)
+{
+  switch (which)
+    {
+    case 0:
+      return v0;
+    case 1:
+      return v1;
+    case 2:
+      return v2;
+    default:
+      __builtin_unreachable ();
+    }
+  __builtin_abort ();
+}
+
+int
+main (void)
+{
+  /* Purposely call bug() with an argument that will target the
+     switch's "unreachable" default case statement, to test whether
+     we generated a wild branch or not.  */
+  long ret = bug (2, 13, 13, 13);
+
+  if (ret != 13)
+    __builtin_abort ();
+
+  return 0;
+}