From patchwork Tue Oct 24 10:50:03 2023
From: Richard Sandiford
To: jlaw@ventanamicro.com, gcc-patches@gcc.gnu.org
Cc: Richard Sandiford
Subject: [PATCH 3/6] rtl-ssa: Fix ICE when deleting memory clobbers
Date: Tue, 24 Oct 2023 11:50:03 +0100
Message-Id: <20231024105006.3337671-4-richard.sandiford@arm.com>
In-Reply-To: <20231024105006.3337671-1-richard.sandiford@arm.com>
References: <20231024105006.3337671-1-richard.sandiford@arm.com>
Sometimes an optimisation can remove a clobber of scratch registers
or scratch memory.  We then need to update the DU chains to reflect
the removed clobber.

For registers this isn't a problem.  Clobbers of registers are just
momentary blips in the register's lifetime.  They act as a barrier for
moving uses later or defs earlier, but otherwise they have no effect
on the semantics of other instructions.  Removing a clobber is
therefore a cheap, local operation.

In contrast, clobbers of memory are modelled as full sets.  This is
because (a) a clobber of memory does not invalidate *all* memory and
(b) it's a common idiom to use (clobber (mem ...)) in stack barriers.
But removing a set and redirecting all uses to a different set is a
linear operation.  Doing it for potentially every optimisation could
lead to quadratic behaviour.

This patch therefore refrains from removing sets of memory that appear
to be redundant.  There's an opportunity to clean this up in linear time
at the end of the pass, but as things stand, nothing would benefit from
that.

This is also a very rare event.  Usually we should try to optimise the
insn before the scratch memory has been allocated.

gcc/
	* rtl-ssa/changes.cc (function_info::finalize_new_accesses): If
	a change describes a set of memory, ensure that that set is kept,
	regardless of the insn pattern.
---
 gcc/rtl-ssa/changes.cc | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/gcc/rtl-ssa/changes.cc b/gcc/rtl-ssa/changes.cc
index c73c23c86fb..5800f9dba97 100644
--- a/gcc/rtl-ssa/changes.cc
+++ b/gcc/rtl-ssa/changes.cc
@@ -429,8 +429,18 @@ function_info::finalize_new_accesses (insn_change &change, insn_info *pos)
   // Also keep any explicitly-recorded call clobbers, which are deliberately
   // excluded from the vec_rtx_properties.  Calls shouldn't move, so we can
   // keep the definitions in their current position.
+  //
+  // If the change describes a set of memory, but the pattern doesn't
+  // reference memory, keep the set anyway.  This can happen if the
+  // old pattern was a parallel that contained a memory clobber, and if
+  // the new pattern was recognized without that clobber.  Keeping the
+  // set avoids a linear-complexity update to the set's users.
+  //
+  // ??? We could queue an update so that these bogus clobbers are
+  // removed later.
   for (def_info *def : change.new_defs)
-    if (def->m_has_been_superceded && def->is_call_clobber ())
+    if (def->m_has_been_superceded
+	&& (def->is_call_clobber () || def->is_mem ()))
       {
 	def->m_has_been_superceded = false;
 	def->set_insn (insn);
@@ -535,7 +545,7 @@
 	}
     }
 
-  // Install the new list of definitions in CHANGE.
+  // Install the new list of uses in CHANGE.
   sort_accesses (m_temp_uses);
   change.new_uses = use_array (temp_access_array (m_temp_uses));
   m_temp_uses.truncate (0);
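
For readers unfamiliar with the trade-off described above, here is a
small stand-alone C++ sketch of why deleting a full set is linear in
the number of its uses.  It is a toy model only: toy_def and toy_use
are illustrative names, not the rtl-ssa classes, and the real DU-chain
representation differs.  The point is just that every use of the
removed definition must be repointed at another definition, which is
the per-optimisation cost the patch avoids.

// Toy model (not rtl-ssa): each definition of "memory" records the
// uses that read from it.  Removing a definition means walking all of
// those uses and redirecting them to the previous definition, so the
// cost is O(number of uses).  Doing this for potentially every
// optimisation is what could become quadratic.

#include <cstdio>
#include <vector>

struct toy_def;

struct toy_use
{
  toy_def *def;   // the definition this use currently reads from
};

struct toy_def
{
  toy_def *prev_def = nullptr;    // previous definition of the resource
  std::vector<toy_use *> uses;    // all uses reading from this definition
};

// Remove DEF, redirecting its uses to the previous definition.
// Linear in the number of uses of DEF.
static void
remove_def (toy_def *def)
{
  for (toy_use *use : def->uses)
    {
      use->def = def->prev_def;
      if (def->prev_def)
	def->prev_def->uses.push_back (use);
    }
  def->uses.clear ();
}

int
main ()
{
  toy_def d1, d2;
  d2.prev_def = &d1;

  toy_use u1{&d2}, u2{&d2}, u3{&d2};
  d2.uses = {&u1, &u2, &u3};

  remove_def (&d2);
  std::printf ("uses now reading from d1: %zu\n", d1.uses.size ());
  return 0;
}

Register clobbers avoid this cost because they are not sets at all:
dropping one is a purely local update, which is why the patch only
needs special handling for the memory case.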