From patchwork Tue Jun 15 21:57:00 2021
X-Patchwork-Submitter: David Malcolm
X-Patchwork-Id: 1492488
From: David Malcolm <dmalcolm@redhat.com>
To: gcc-patches@gcc.gnu.org
Subject: [committed] analyzer: fix bitfield endianness issues [PR99212, PR101082]
Date: Tue, 15 Jun 2021 17:57:00 -0400
Message-Id: <20210615215700.430680-1-dmalcolm@redhat.com>

Looks like my patch for PR analyzer/99212 implicitly assumed
little-endian, which the following patch
fixes.

Successfully bootstrapped & regrtested on x86_64-pc-linux-gnu.

I attempted to test it on all targets using config-list.mk, but
repeatedly managed to delete everything by mistake, so for now I've
merely verified that this fixes bitfields-1.c on:
- armeb-none-linux-gnueabihf
- cris-elf
- powerpc64-darwin
- s390-linux-gnu

Pushed to trunk as r12-1491-gec3fafa9ec7d16b9d89076efd3bac1d1af0502b8.

Sorry about the breakage.

gcc/analyzer/ChangeLog:
	PR analyzer/99212
	PR analyzer/101082
	* engine.cc: Include "target.h".
	(impl_run_checkers): Log BITS_BIG_ENDIAN, BYTES_BIG_ENDIAN,
	and WORDS_BIG_ENDIAN.
	* region-model-manager.cc
	(region_model_manager::maybe_fold_binop): Move support for
	masking via ARG0 & CST into...
	(region_model_manager::maybe_undo_optimize_bit_field_compare):
	...this new function.  Flatten by converting from nested
	conditionals to a series of early return statements to reject
	failures.  Reject if type is not unsigned_char_type_node.
	Handle BYTES_BIG_ENDIAN when determining which bits are bound
	in the binding_map.
	* region-model.h
	(region_model_manager::maybe_undo_optimize_bit_field_compare):
	New decl.
	* store.cc (bit_range::dump): New function.
	* store.h (bit_range::dump): New decl.

Signed-off-by: David Malcolm <dmalcolm@redhat.com>
---
 gcc/analyzer/engine.cc               |  8 +++
 gcc/analyzer/region-model-manager.cc | 94 +++++++++++++++++-----------
 gcc/analyzer/region-model.h          |  3 +
 gcc/analyzer/store.cc                | 12 ++++
 gcc/analyzer/store.h                 |  1 +
 5 files changed, 83 insertions(+), 35 deletions(-)

diff --git a/gcc/analyzer/engine.cc b/gcc/analyzer/engine.cc
index df04b0ba5d6..529965af187 100644
--- a/gcc/analyzer/engine.cc
+++ b/gcc/analyzer/engine.cc
@@ -64,6 +64,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "analyzer/bar-chart.h"
 #include <zlib.h>
 #include "plugin.h"
+#include "target.h"
 
 /* For an overview, see gcc/doc/analyzer.texi.  */
 
@@ -4845,6 +4846,13 @@ impl_run_checkers (logger *logger)
 {
   LOG_SCOPE (logger);
 
+  if (logger)
+    {
+      logger->log ("BITS_BIG_ENDIAN: %i", BITS_BIG_ENDIAN ? 1 : 0);
+      logger->log ("BYTES_BIG_ENDIAN: %i", BYTES_BIG_ENDIAN ? 1 : 0);
+      logger->log ("WORDS_BIG_ENDIAN: %i", WORDS_BIG_ENDIAN ? 1 : 0);
+    }
+
   /* If using LTO, ensure that the cgraph nodes have function bodies.  */
   cgraph_node *node;
   FOR_EACH_FUNCTION_WITH_GIMPLE_BODY (node)
diff --git a/gcc/analyzer/region-model-manager.cc b/gcc/analyzer/region-model-manager.cc
index 0ca0c8ad02e..621eff0e139 100644
--- a/gcc/analyzer/region-model-manager.cc
+++ b/gcc/analyzer/region-model-manager.cc
@@ -431,6 +431,60 @@ region_model_manager::get_or_create_cast (tree type, const svalue *arg)
   return get_or_create_unaryop (type, op, arg);
 }
 
+/* Subroutine of region_model_manager::maybe_fold_binop for handling
+   (TYPE)(COMPOUND_SVAL BIT_AND_EXPR CST) that may have been generated by
+   optimize_bit_field_compare, where CST is from ARG1.
+
+   Support masking out bits from a compound_svalue for comparing a bitfield
+   against a value, as generated by optimize_bit_field_compare for
+   BITFIELD == VALUE.
+
+   If COMPOUND_SVAL has a value for the appropriate bits, return it,
+   shifted accordingly.
+   Otherwise return NULL.  */
+
+const svalue *
+region_model_manager::
+maybe_undo_optimize_bit_field_compare (tree type,
+				       const compound_svalue *compound_sval,
+				       tree cst,
+				       const svalue *arg1)
+{
+  if (type != unsigned_char_type_node)
+    return NULL;
+
+  const binding_map &map = compound_sval->get_map ();
+  unsigned HOST_WIDE_INT mask = TREE_INT_CST_LOW (cst);
+  /* If "mask" is a contiguous range of set bits, see if the
+     compound_sval has a value for those bits.  */
+  bit_range bits (0, 0);
+  if (!bit_range::from_mask (mask, &bits))
+    return NULL;
+
+  bit_range bound_bits (bits);
+  if (BYTES_BIG_ENDIAN)
+    bound_bits = bit_range (BITS_PER_UNIT - bits.get_next_bit_offset (),
+			    bits.m_size_in_bits);
+  const concrete_binding *conc
+    = get_store_manager ()->get_concrete_binding (bound_bits, BK_direct);
+  const svalue *sval = map.get (conc);
+  if (!sval)
+    return NULL;
+
+  /* We have a value;
+     shift it by the correct number of bits.  */
+  const svalue *lhs = get_or_create_cast (type, sval);
+  HOST_WIDE_INT bit_offset = bits.get_start_bit_offset ().to_shwi ();
+  tree shift_amt = build_int_cst (type, bit_offset);
+  const svalue *shift_sval = get_or_create_constant_svalue (shift_amt);
+  const svalue *shifted_sval = get_or_create_binop (type, LSHIFT_EXPR,
+						    lhs, shift_sval);
+  /* Reapply the mask (needed for negative
+     signed bitfields).  */
+  return get_or_create_binop (type, BIT_AND_EXPR,
+			      shifted_sval, arg1);
+}
+
 /* Subroutine of region_model_manager::get_or_create_binop.
    Attempt to fold the inputs and return a simpler svalue *.
    Otherwise, return NULL.  */
@@ -485,43 +539,13 @@ region_model_manager::maybe_fold_binop (tree type, enum tree_code op,
 	/* "(ARG0 & 0)" -> "0".  */
 	return get_or_create_constant_svalue (build_int_cst (type, 0));
 
-      /* Support masking out bits from a compound_svalue, as this
-	 is generated when accessing bitfields.  */
       if (const compound_svalue *compound_sval
	    = arg0->dyn_cast_compound_svalue ())
-	{
-	  const binding_map &map = compound_sval->get_map ();
-	  unsigned HOST_WIDE_INT mask = TREE_INT_CST_LOW (cst1);
-	  /* If "mask" is a contiguous range of set bits, see if the
-	     compound_sval has a value for those bits.  */
-	  bit_range bits (0, 0);
-	  if (bit_range::from_mask (mask, &bits))
-	    {
-	      const concrete_binding *conc
-		= get_store_manager ()->get_concrete_binding (bits,
-							      BK_direct);
-	      if (const svalue *sval = map.get (conc))
-		{
-		  /* We have a value;
-		     shift it by the correct number of bits.  */
-		  const svalue *lhs = get_or_create_cast (type, sval);
-		  HOST_WIDE_INT bit_offset
-		    = bits.get_start_bit_offset ().to_shwi ();
-		  tree shift_amt = build_int_cst (type, bit_offset);
-		  const svalue *shift_sval
-		    = get_or_create_constant_svalue (shift_amt);
-		  const svalue *shifted_sval
-		    = get_or_create_binop (type,
-					   LSHIFT_EXPR,
-					   lhs, shift_sval);
-		  /* Reapply the mask (needed for negative
-		     signed bitfields).  */
-		  return get_or_create_binop (type,
-					      BIT_AND_EXPR,
-					      shifted_sval, arg1);
-		}
-	    }
-	}
+	if (const svalue *sval
+	      = maybe_undo_optimize_bit_field_compare (type,
+						       compound_sval,
+						       cst1, arg1))
+	  return sval;
     }
     break;
   case TRUTH_ANDIF_EXPR:
diff --git a/gcc/analyzer/region-model.h b/gcc/analyzer/region-model.h
index 8b669df00be..7b12d35ab59 100644
--- a/gcc/analyzer/region-model.h
+++ b/gcc/analyzer/region-model.h
@@ -320,6 +320,9 @@ private:
   const svalue *maybe_fold_sub_svalue (tree type,
				       const svalue *parent_svalue,
				       const region *subregion);
+  const svalue *maybe_undo_optimize_bit_field_compare (tree type,
+						       const compound_svalue *compound_sval,
+						       tree cst, const svalue *arg1);
 
   unsigned m_next_region_id;
   root_region m_root_region;
diff --git a/gcc/analyzer/store.cc b/gcc/analyzer/store.cc
index 699de94cdb0..d75fb2c4c7a 100644
--- a/gcc/analyzer/store.cc
+++ b/gcc/analyzer/store.cc
@@ -249,6 +249,18 @@ bit_range::dump_to_pp (pretty_printer *pp) const
   pp_wide_int (pp, get_next_bit_offset (), SIGNED);
 }
 
+/* Dump this object to stderr.  */
+
+DEBUG_FUNCTION void
+bit_range::dump () const
+{
+  pretty_printer pp;
+  pp.buffer->stream = stderr;
+  dump_to_pp (&pp);
+  pp_newline (&pp);
+  pp_flush (&pp);
+}
+
 int
 bit_range::cmp (const bit_range &br1, const bit_range &br2)
 {
diff --git a/gcc/analyzer/store.h b/gcc/analyzer/store.h
index 7bd2824dba9..ca9ff696bca 100644
--- a/gcc/analyzer/store.h
+++ b/gcc/analyzer/store.h
@@ -275,6 +275,7 @@ struct bit_range
   {}
 
   void dump_to_pp (pretty_printer *pp) const;
+  void dump () const;
 
   bit_offset_t get_start_bit_offset () const
   {