From patchwork Fri May 7 19:04:13 2021
X-Patchwork-Submitter: Andrew MacLeod
X-Patchwork-Id: 1475655
To: gcc-patches
Subject: [PATCH] abstract the on entry cache.
Message-ID: <48c9ec3a-4cab-60f0-cdad-9cca9df29238@redhat.com>
Date: Fri, 7 May 2021 15:04:13 -0400
From: Andrew MacLeod
Reply-To: Andrew MacLeod

This patch cleans up the ssa_block_ranges class, which represents the bulk
of the on-entry cache storage.

PR 100299 shows that we need better control over the cache, and that
different approaches can be useful in different situations.  In preparation
for fixing that PR, this patch:

1) Abstracts ssa_block_ranges into a pure virtual API, removing the unused
set_bb_varying routine.  The API is now simply set, get, and query whether
a range is present.

2) Moves the original vector code into a derived class, sbr_vector, while
making some modest improvements such as caching a single varying and a
single undefined range.

3) Gives the irange_allocator obstack the ability to hand out raw memory
hunks.  This allows the on-entry cache to be allocated entirely from the
one obstack and eliminates any need for looping at destruction time: we can
just throw the entire obstack away and be done.

Even though this makes the class virtual, the end result is about a 0.4%
overall improvement in the performance of the pass (according to
callgrind), mostly due to the tweaks to the vector implementation.

This also paves the way for an alternate implementation when the CFG is
very large, or under other conditions.  I have a follow-up patch which
addresses the large-CFG issue and fixes that PR; I just wanted to get this
foundation in before other restructuring changes interfere with the
ability to easily apply it to the GCC 11 branch.

Bootstraps on x86_64-pc-linux-gnu with no testsuite regressions.  Pushed.

Andrew

commit 18f0495c021084fc98fd252798accada955c60dc
Author: Andrew MacLeod
Date:   Fri May 7 12:03:01 2021 -0400

Clean up and virtualize the on-entry cache interface.

Clean up and virtualize the ssa_block_ranges class, and implement the
current vector approach as a derived class.  Allow memory allocation from
the irange allocator obstack for easy freeing.

        * gimple-range-cache.cc (ssa_block_ranges): Virtualize.
        (sbr_vector): Renamed from ssa_block_cache.
        (sbr_vector::sbr_vector): Allocate from obstack and initialize.
        (ssa_block_ranges::~ssa_block_ranges): Remove.
        (sbr_vector::set_bb_range): Use varying and undefined cached values.
        (ssa_block_ranges::set_bb_varying): Remove.
        (sbr_vector::get_bb_range): Adjust assert.
        (sbr_vector::bb_range_p): Adjust assert.
        (~block_range_cache): No freeing loop required.
        (block_range_cache::get_block_ranges): Remove.
        (block_range_cache::set_bb_range): Inline get_block_ranges.
        (block_range_cache::set_bb_varying): Remove.
        * gimple-range-cache.h (set_bb_varying): Remove prototype.
        * value-range.h (irange_allocator::get_memory): New.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 9b401927bd6..60e5d66c52d 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -125,29 +125,53 @@ non_null_ref::process_name (tree name)
 
 // -------------------------------------------------------------------------
 
-// This class implements a cache of ranges indexed by basic block. It
-// represents all that is known about an SSA_NAME on entry to each
-// block. It caches a range-for-type varying range so it doesn't need
-// to be reformed all the time. If a range is ever always associated
-// with a type, we can use that instead. Whenever varying is being
-// set for a block, the cache simply points to this cached one rather
-// than create a new one each time.
+// This class represents the API into a cache of ranges for an SSA_NAME.
+// Routines must be implemented to set, get, and query if a value is set.
 
 class ssa_block_ranges
 {
 public:
-  ssa_block_ranges (tree t, irange_allocator *allocator);
-  ~ssa_block_ranges ();
-
-  void set_bb_range (const basic_block bb, const irange &r);
-  void set_bb_varying (const basic_block bb);
-  bool get_bb_range (irange &r, const basic_block bb);
-  bool bb_range_p (const basic_block bb);
+  virtual void set_bb_range (const basic_block bb, const irange &r) = 0;
+  virtual bool get_bb_range (irange &r, const basic_block bb) = 0;
+  virtual bool bb_range_p (const basic_block bb) = 0;
 
   void dump(FILE *f);
-private:
-  vec<irange *> m_tab;
-  irange *m_type_range;
+};
+
+// Print the list of known ranges for file F in a nice format.
+
+void
+ssa_block_ranges::dump (FILE *f)
+{
+  basic_block bb;
+  int_range_max r;
+
+  FOR_EACH_BB_FN (bb, cfun)
+    if (get_bb_range (r, bb))
+      {
+        fprintf (f, "BB%d -> ", bb->index);
+        r.dump (f);
+        fprintf (f, "\n");
+      }
+}
+
+// This class implements the range cache as a linear vector, indexed by BB.
+// It caches a varying and undefined range which are used instead of
+// allocating new ones each time.
+
+class sbr_vector : public ssa_block_ranges
+{
+public:
+  sbr_vector (tree t, irange_allocator *allocator);
+
+  virtual void set_bb_range (const basic_block bb, const irange &r) OVERRIDE;
+  virtual bool get_bb_range (irange &r, const basic_block bb) OVERRIDE;
+  virtual bool bb_range_p (const basic_block bb) OVERRIDE;
+protected:
+  irange **m_tab;       // Non growing vector.
+  int m_tab_size;
+  int_range<2> m_varying;
+  int_range<2> m_undefined;
   tree m_type;
   irange_allocator *m_irange_allocator;
 };
@@ -155,55 +179,43 @@ private:
 
 // Initialize a block cache for an ssa_name of type T.
 
-ssa_block_ranges::ssa_block_ranges (tree t, irange_allocator *allocator)
+sbr_vector::sbr_vector (tree t, irange_allocator *allocator)
 {
   gcc_checking_assert (TYPE_P (t));
   m_type = t;
   m_irange_allocator = allocator;
-
-  m_tab.create (0);
-  m_tab.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_tab_size = last_basic_block_for_fn (cfun) + 1;
+  m_tab = (irange **)allocator->get_memory (m_tab_size * sizeof (irange *));
+  memset (m_tab, 0, m_tab_size * sizeof (irange *));
 
   // Create the cached type range.
-  m_type_range = m_irange_allocator->allocate (2);
-  m_type_range->set_varying (t);
-
-  m_tab[ENTRY_BLOCK_PTR_FOR_FN (cfun)->index] = m_type_range;
-}
-
-// Destruct block range.
-
-ssa_block_ranges::~ssa_block_ranges ()
-{
-  m_tab.release ();
+  m_varying.set_varying (t);
+  m_undefined.set_undefined ();
 }
 
 // Set the range for block BB to be R.
 
 void
-ssa_block_ranges::set_bb_range (const basic_block bb, const irange &r)
+sbr_vector::set_bb_range (const basic_block bb, const irange &r)
 {
-  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
-  irange *m = m_irange_allocator->allocate (r);
+  irange *m;
+  gcc_checking_assert (bb->index < m_tab_size);
+  if (r.varying_p ())
+    m = &m_varying;
+  else if (r.undefined_p ())
+    m = &m_undefined;
+  else
+    m = m_irange_allocator->allocate (r);
   m_tab[bb->index] = m;
 }
 
-// Set the range for block BB to the range for the type.
-
-void
-ssa_block_ranges::set_bb_varying (const basic_block bb)
-{
-  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
-  m_tab[bb->index] = m_type_range;
-}
-
 // Return the range associated with block BB in R. Return false if
 // there is no range.
 
 bool
-ssa_block_ranges::get_bb_range (irange &r, const basic_block bb)
+sbr_vector::get_bb_range (irange &r, const basic_block bb)
 {
-  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
+  gcc_checking_assert (bb->index < m_tab_size);
   irange *m = m_tab[bb->index];
   if (m)
     {
@@ -216,30 +228,12 @@ ssa_block_ranges::get_bb_range (irange &r, const basic_block bb)
 // Return true if a range is present.
 
 bool
-ssa_block_ranges::bb_range_p (const basic_block bb)
+sbr_vector::bb_range_p (const basic_block bb)
 {
-  gcc_checking_assert ((unsigned) bb->index < m_tab.length ());
+  gcc_checking_assert (bb->index < m_tab_size);
   return m_tab[bb->index] != NULL;
 }
-
-// Print the list of known ranges for file F in a nice format.
-
-void
-ssa_block_ranges::dump (FILE *f)
-{
-  basic_block bb;
-  int_range_max r;
-
-  FOR_EACH_BB_FN (bb, cfun)
-    if (get_bb_range (r, bb))
-      {
-        fprintf (f, "BB%d -> ", bb->index);
-        r.dump (f);
-        fprintf (f, "\n");
-      }
-}
-
 // -------------------------------------------------------------------------
 
 // Initialize the block cache.
@@ -255,38 +249,36 @@ block_range_cache::block_range_cache ()
 
 block_range_cache::~block_range_cache ()
 {
-  unsigned x;
-  for (x = 0; x < m_ssa_ranges.length (); ++x)
-    {
-      if (m_ssa_ranges[x])
-        delete m_ssa_ranges[x];
-    }
   delete m_irange_allocator;
   // Release the vector itself.
   m_ssa_ranges.release ();
 }
 
-// Return a reference to the ssa_block_cache for NAME. If it has not been
-// accessed yet, allocate it first.
+// Set the range for NAME on entry to block BB to R.
+// If it has not been // accessed yet, allocate it first.
 
-ssa_block_ranges &
-block_range_cache::get_block_ranges (tree name)
+void
+block_range_cache::set_bb_range (tree name, const basic_block bb,
+                                 const irange &r)
 {
   unsigned v = SSA_NAME_VERSION (name);
   if (v >= m_ssa_ranges.length ())
     m_ssa_ranges.safe_grow_cleared (num_ssa_names + 1);
 
   if (!m_ssa_ranges[v])
-    m_ssa_ranges[v] = new ssa_block_ranges (TREE_TYPE (name),
+    {
+      void *r = m_irange_allocator->get_memory (sizeof (sbr_vector));
+      m_ssa_ranges[v] = new (r) sbr_vector (TREE_TYPE (name),
                                             m_irange_allocator);
-  return *(m_ssa_ranges[v]);
+    }
+  m_ssa_ranges[v]->set_bb_range (bb, r);
 }
 
 // Return a pointer to the ssa_block_cache for NAME. If it has not been
 // accessed yet, return NULL.
 
-ssa_block_ranges *
+inline ssa_block_ranges *
 block_range_cache::query_block_ranges (tree name)
 {
   unsigned v = SSA_NAME_VERSION (name);
@@ -295,22 +287,7 @@ block_range_cache::query_block_ranges (tree name)
   return m_ssa_ranges[v];
 }
 
-// Set the range for NAME on entry to block BB to R.
-
-void
-block_range_cache::set_bb_range (tree name, const basic_block bb,
-                                 const irange &r)
-{
-  return get_block_ranges (name).set_bb_range (bb, r);
-}
-
-// Set the range for NAME on entry to block BB to varying.
-
-void
-block_range_cache::set_bb_varying (tree name, const basic_block bb)
-{
-  return get_block_ranges (name).set_bb_varying (bb);
-}
 
 // Return the range for NAME on entry to BB in R. Return true if there
 // is one.
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 986a68a9e06..15e6d0c61c4 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -51,7 +51,6 @@ public:
   ~block_range_cache ();
 
   void set_bb_range (tree name, const basic_block bb, const irange &r);
-  void set_bb_varying (tree name, const basic_block bb);
   bool get_bb_range (irange &r, tree name, const basic_block bb);
   bool bb_range_p (tree name, const basic_block bb);
 
diff --git a/gcc/value-range.h b/gcc/value-range.h
index f63433a4e14..3e58dad4e90 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -639,6 +639,7 @@ public:
   // Return a copy of SRC with the minimum amount of sub-ranges needed
   // to represent it.
   irange *allocate (const irange &src);
+  void *get_memory (unsigned num_bytes);
 private:
   DISABLE_COPY_AND_ASSIGN (irange_allocator);
   struct obstack m_obstack;
@@ -656,6 +657,14 @@ irange_allocator::~irange_allocator ()
   obstack_free (&m_obstack, NULL);
 }
 
+// Provide a hunk of memory from the obstack
+inline void *
+irange_allocator::get_memory (unsigned num_bytes)
+{
+  void *r = obstack_alloc (&m_obstack, num_bytes);
+  return r;
+}
+
 // Return a new range with NUM_PAIRS.
 
 inline irange *
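
For illustration only, here is a minimal, self-contained C++ sketch of the
scheme the patch adopts: a pure virtual cache API, a vector implementation
whose table and entries all come from one allocator, a single shared object
reused for the common varying value, and teardown that is just releasing
the allocator rather than a per-object freeing loop.  The names (arena,
block_ranges, vector_ranges) and the plain ints standing in for irange
values are simplified stand-ins, not GCC code, and the new/delete-backed
arena stands in for the GNU obstack behind irange_allocator::get_memory.

#include <cassert>
#include <cstring>
#include <new>
#include <vector>

// Stand-in for the obstack behind irange_allocator::get_memory: hand out
// raw memory, and release every block at once when the arena is destroyed.
class arena
{
public:
  ~arena () { for (void *b : m_blocks) ::operator delete (b); }
  void *get_memory (unsigned bytes)
  {
    void *p = ::operator new (bytes);
    m_blocks.push_back (p);
    return p;
  }
private:
  std::vector<void *> m_blocks;
};

// The abstract API: set, get, and query a per-block value.
class block_ranges
{
public:
  virtual void set_bb_range (int bb, int val) = 0;
  virtual bool get_bb_range (int &val, int bb) = 0;
  virtual bool bb_range_p (int bb) = 0;
};

// Vector implementation, analogous to sbr_vector.  The table comes from the
// arena; a null slot means "nothing cached", and one shared object is reused
// for the common VARYING value instead of allocating a fresh copy each time.
class vector_ranges : public block_ranges
{
public:
  enum { VARYING = -1 };

  vector_ranges (int nblocks, arena *alloc) : m_size (nblocks), m_alloc (alloc)
  {
    m_varying = VARYING;
    m_tab = static_cast<int **> (alloc->get_memory (m_size * sizeof (int *)));
    memset (m_tab, 0, m_size * sizeof (int *));
  }

  void set_bb_range (int bb, int val) override
  {
    assert (bb >= 0 && bb < m_size);
    if (val == VARYING)
      m_tab[bb] = &m_varying;          // Reuse the single cached copy.
    else
      {
        int *p = static_cast<int *> (m_alloc->get_memory (sizeof (int)));
        *p = val;
        m_tab[bb] = p;
      }
  }

  bool get_bb_range (int &val, int bb) override
  {
    assert (bb >= 0 && bb < m_size);
    if (!m_tab[bb])
      return false;
    val = *m_tab[bb];
    return true;
  }

  bool bb_range_p (int bb) override
  {
    return bb >= 0 && bb < m_size && m_tab[bb];
  }

private:
  int **m_tab;       // Non-growing table indexed by block number.
  int m_size;
  int m_varying;     // Shared storage for the VARYING value.
  arena *m_alloc;
};

int main ()
{
  arena alloc;

  // Placement-new the cache into arena memory, as block_range_cache does for
  // sbr_vector; its destructor is never run, which is safe here because all
  // of its storage belongs to the arena.
  void *mem = alloc.get_memory (sizeof (vector_ranges));
  block_ranges *cache = new (mem) vector_ranges (8, &alloc);

  cache->set_bb_range (3, 42);
  cache->set_bb_range (5, vector_ranges::VARYING);

  int v;
  assert (cache->bb_range_p (3) && cache->get_bb_range (v, 3) && v == 42);
  assert (cache->get_bb_range (v, 5) && v == vector_ranges::VARYING);
  assert (!cache->bb_range_p (0));

  // No freeing loop: everything is reclaimed when the arena goes out of scope.
}

The point being mirrored is the one the patch relies on: because the table,
the cached entries, and the sbr_vector objects themselves all live in the
irange_allocator's obstack, ~block_range_cache can simply delete the
allocator instead of walking and deleting each per-name cache.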