{"id":2226762,"url":"http://patchwork.ozlabs.org/api/patches/2226762/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.5@forge-stage.sourceware.org/","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.5@forge-stage.sourceware.org>","list_archive_url":null,"date":"2026-04-22T18:49:13","name":"[v1,05/10] fab/forwprop: Move optimize stack restore to forwprop [PR121762]","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"7a025fa4cc3a35bbd48219a3a87baed6ebd40529","submitter":{"id":93219,"url":"http://patchwork.ozlabs.org/api/people/93219/?format=json","name":"Andrew Pinski via Sourceware Forge","email":"forge-bot+pinskia@forge-stage.sourceware.org"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.5@forge-stage.sourceware.org/mbox/","series":[{"id":501092,"url":"http://patchwork.ozlabs.org/api/series/501092/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/list/?series=501092","date":"2026-04-22T18:49:11","name":"remove_fab","version":1,"mbox":"http://patchwork.ozlabs.org/series/501092/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2226762/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2226762/checks/","tags":{},"related":[],"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF 
authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=2620:52:6:3111::32; helo=vm01.sourceware.org;\n envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org; dmarc=none (p=none dis=none)\n header.from=forge-stage.sourceware.org","sourceware.org;\n spf=pass smtp.mailfrom=forge-stage.sourceware.org","server2.sourceware.org;\n arc=none smtp.remote-ip=38.145.34.39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org\n [IPv6:2620:52:6:3111::32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g18P401Nyz1yGs\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 23 Apr 2026 05:27:03 +1000 (AEST)","from vm01.sourceware.org (localhost [127.0.0.1])\n\tby sourceware.org (Postfix) with ESMTP id 20461450B9EF\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 19:26:59 +0000 (GMT)","from forge-stage.sourceware.org (vm08.sourceware.org [38.145.34.39])\n by sourceware.org (Postfix) with ESMTPS id 74A9C40A2C4D\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 18:50:43 +0000 (GMT)","from forge-stage.sourceware.org (localhost [IPv6:::1])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange x25519 server-signature ECDSA (prime256v1) server-digest SHA256)\n (No client certificate requested)\n by forge-stage.sourceware.org (Postfix) with ESMTPS id B677443594\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 18:50:41 +0000 (UTC)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org 20461450B9EF","OpenDKIM Filter v2.11.0 sourceware.org 74A9C40A2C4D"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 74A9C40A2C4D","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org 74A9C40A2C4D","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1776883843; cv=none;\n 
b=FTiMIDXANeI4xzckMrCX9BIghy/DD8OoIWibos9blcbCkFJLzNpfar+BKhRNJ9MSAnRwk+sYN4zHuakjoYHpdXJ2xKfwEwTfue6lDkFGONtSKPzY/aZSXLYWOYpQqHl4ls+TwUjfEsCI8q+mDEfrDqfheMSm7duh7vxGYDGvP8U=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1776883843; c=relaxed/simple;\n bh=gqMP1WkqAhaPqkW7X8StMypH9lqGa7G0jyAD4iHfZCU=;\n h=From:Date:Subject:To:Message-ID;\n b=Y0DalhZoiVkNXvRVPbWrG7FCOuFMmiKpxh+7JYd12cP+dwEObG5r5gmOFGvrwEmowt1OAWU4caqlly2sxfWKSqKA3ks5guII1vbOt/D5Lh+AxQJkwNHt1QtGDXEyPJJDIRDbk5WlyFRYgxWCX3MFAs+SJTKvDH6UKNUxAvzhAs4=","ARC-Authentication-Results":"i=1; server2.sourceware.org","From":"Andrew Pinski via Sourceware Forge\n <forge-bot+pinskia@forge-stage.sourceware.org>","Date":"Wed, 22 Apr 2026 18:49:13 +0000","Subject":"[PATCH v1 05/10] fab/forwprop: Move optimize stack restore to\n forwprop [PR121762]","To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>","Message-ID":"\n <bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.5@forge-stage.sourceware.org>","X-Mailer":"batrachomyomachia","X-Pull-Request-Organization":"gcc","X-Pull-Request-Repository":"gcc-TEST","X-Pull-Request":"https://forge.sourceware.org/gcc/gcc-TEST/pulls/84","References":"\n <bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.0@forge-stage.sourceware.org>","In-Reply-To":"\n <bmm.hhup4p5wp2.gcc.gcc-TEST.pinskia.84.1.0@forge-stage.sourceware.org>","X-Patch-URL":"\n https://forge.sourceware.org/pinskia/gcc-TEST/commit/978e5248cb2331f4997502eed34e2fedb9a8052d","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n 
<mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Reply-To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>,\n pinskia@gcc.gnu.org","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"From: Andrew Pinski <andrew.pinski@oss.qualcomm.com>\n\nThis moves the removal of redundant stack restore from fab to forwprop.\nThere is a possibility of removing some of the redundant stack restores\nbefore the last folding but that is for a different patch.\n\nBootstrapped and tested on x86_64-linux-gnu.\n\n\tPR tree-optimization/121762\ngcc/ChangeLog:\n\n\t* tree-ssa-ccp.cc (optimize_stack_restore): Move to tree-ssa-forwprop.cc.\n\t(pass_fold_builtins::execute): Don't call optimize_stack_restore.\n\t* tree-ssa-forwprop.cc (optimize_stack_restore): New function from\n\ttree-ssa-ccp.cc. Return bool instead of value, use replace_call_with_value\n\tinstead of returning integer_zero_node.\n\t(simplify_builtin_call): Call optimize_stack_restore.\n\nSigned-off-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>\n---\n gcc/tree-ssa-ccp.cc      | 113 ---------------------------------------\n gcc/tree-ssa-forwprop.cc | 109 +++++++++++++++++++++++++++++++++++++\n 2 files changed, 109 insertions(+), 113 deletions(-)","diff":"diff --git a/gcc/tree-ssa-ccp.cc b/gcc/tree-ssa-ccp.cc\nindex d2c133345cd7..739d3be91293 100644\n--- a/gcc/tree-ssa-ccp.cc\n+++ b/gcc/tree-ssa-ccp.cc\n@@ -3085,112 +3085,6 @@ make_pass_ccp (gcc::context *ctxt)\n   return new pass_ccp (ctxt);\n }\n \n-\n-\n-/* Try to optimize out __builtin_stack_restore.  Optimize it out\n-   if there is another __builtin_stack_restore in the same basic\n-   block and no calls or ASM_EXPRs are in between, or if this block's\n-   only outgoing edge is to EXIT_BLOCK and there are no calls or\n-   ASM_EXPRs after this __builtin_stack_restore.\n-   Note restore right before a noreturn function is not needed.\n-   And skip some cheap calls that will most likely become an instruction.  
*/\n-\n-static tree\n-optimize_stack_restore (gimple_stmt_iterator i)\n-{\n-  tree callee;\n-  gimple *stmt;\n-\n-  basic_block bb = gsi_bb (i);\n-  gimple *call = gsi_stmt (i);\n-\n-  if (gimple_code (call) != GIMPLE_CALL\n-      || gimple_call_num_args (call) != 1\n-      || TREE_CODE (gimple_call_arg (call, 0)) != SSA_NAME\n-      || !POINTER_TYPE_P (TREE_TYPE (gimple_call_arg (call, 0))))\n-    return NULL_TREE;\n-\n-  for (gsi_next (&i); !gsi_end_p (i); gsi_next (&i))\n-    {\n-      stmt = gsi_stmt (i);\n-      if (is_a<gasm*> (stmt))\n-\treturn NULL_TREE;\n-      gcall *call = dyn_cast<gcall*>(stmt);\n-      if (!call)\n-\tcontinue;\n-\n-      /* We can remove the restore in front of noreturn\n-\t calls.  Since the restore will happen either\n-\t via an unwind/longjmp or not at all. */\n-      if (gimple_call_noreturn_p (call))\n-\tbreak;\n-\n-      /* Internal calls are ok, to bypass\n-\t check first since fndecl will be null. */\n-      if (gimple_call_internal_p (call))\n-\tcontinue;\n-\n-      callee = gimple_call_fndecl (call);\n-      /* Non-builtin calls are not ok. */\n-      if (!callee\n-\t  || !fndecl_built_in_p (callee))\n-\treturn NULL_TREE;\n-\n-      /* Do not remove stack updates before strub leave.  */\n-      if (fndecl_built_in_p (callee, BUILT_IN___STRUB_LEAVE)\n-\t  /* Alloca calls are not ok either. */\n-\t  || fndecl_builtin_alloc_p (callee))\n-\treturn NULL_TREE;\n-\n-      if (fndecl_built_in_p (callee, BUILT_IN_STACK_RESTORE))\n-\tgoto second_stack_restore;\n-\n-      /* If not a simple or inexpensive builtin, then it is not ok either. */\n-      if (!is_simple_builtin (callee)\n-\t  && !is_inexpensive_builtin (callee))\n-\treturn NULL_TREE;\n-    }\n-\n-  /* Allow one successor of the exit block, or zero successors.  
*/\n-  switch (EDGE_COUNT (bb->succs))\n-    {\n-    case 0:\n-      break;\n-    case 1:\n-      if (single_succ_edge (bb)->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))\n-\treturn NULL_TREE;\n-      break;\n-    default:\n-      return NULL_TREE;\n-    }\n- second_stack_restore:\n-\n-  /* If there's exactly one use, then zap the call to __builtin_stack_save.\n-     If there are multiple uses, then the last one should remove the call.\n-     In any case, whether the call to __builtin_stack_save can be removed\n-     or not is irrelevant to removing the call to __builtin_stack_restore.  */\n-  if (has_single_use (gimple_call_arg (call, 0)))\n-    {\n-      gimple *stack_save = SSA_NAME_DEF_STMT (gimple_call_arg (call, 0));\n-      if (is_gimple_call (stack_save))\n-\t{\n-\t  callee = gimple_call_fndecl (stack_save);\n-\t  if (callee && fndecl_built_in_p (callee, BUILT_IN_STACK_SAVE))\n-\t    {\n-\t      gimple_stmt_iterator stack_save_gsi;\n-\t      tree rhs;\n-\n-\t      stack_save_gsi = gsi_for_stmt (stack_save);\n-\t      rhs = build_int_cst (TREE_TYPE (gimple_call_arg (call, 0)), 0);\n-\t      replace_call_with_value (&stack_save_gsi, rhs);\n-\t    }\n-\t}\n-    }\n-\n-  /* No effect, so the statement will be deleted.  
*/\n-  return integer_zero_node;\n-}\n-\n /* If va_list type is a simple pointer and nothing special is needed,\n    optimize __builtin_va_start (&ap, 0) into ap = __builtin_next_arg (0),\n    __builtin_va_end (&ap) out as NOP and __builtin_va_copy into a simple\n@@ -4342,13 +4236,6 @@ pass_fold_builtins::execute (function *fun)\n \t      switch (DECL_FUNCTION_CODE (callee))\n \t\t{\n \n-\t\tcase BUILT_IN_STACK_RESTORE:\n-\t\t  result = optimize_stack_restore (i);\n-\t\t  if (result)\n-\t\t    break;\n-\t\t  gsi_next (&i);\n-\t\t  continue;\n-\n \t\tcase BUILT_IN_UNREACHABLE:\n \t\t  if (optimize_unreachable (i))\n \t\t    cfg_changed = true;\ndiff --git a/gcc/tree-ssa-forwprop.cc b/gcc/tree-ssa-forwprop.cc\nindex 917f3d90f6c2..06ce34c6782c 100644\n--- a/gcc/tree-ssa-forwprop.cc\n+++ b/gcc/tree-ssa-forwprop.cc\n@@ -2132,6 +2132,113 @@ simplify_builtin_memcpy_memset (gimple_stmt_iterator *gsi_p, gcall *stmt2)\n     }\n }\n \n+\n+/* Try to optimize out __builtin_stack_restore.  Optimize it out\n+   if there is another __builtin_stack_restore in the same basic\n+   block and no calls or ASM_EXPRs are in between, or if this block's\n+   only outgoing edge is to EXIT_BLOCK and there are no calls or\n+   ASM_EXPRs after this __builtin_stack_restore.\n+   Note restore right before a noreturn function is not needed.\n+   And skip some cheap calls that will most likely become an instruction.  
*/\n+\n+static bool\n+optimize_stack_restore (gimple_stmt_iterator *gsi, gimple *call)\n+{\n+  if (!(cfun->curr_properties & PROP_last_full_fold))\n+    return false;\n+  tree callee;\n+  gimple *stmt;\n+\n+  basic_block bb = gsi_bb (*gsi);\n+\n+  if (gimple_call_num_args (call) != 1\n+      || TREE_CODE (gimple_call_arg (call, 0)) != SSA_NAME\n+      || !POINTER_TYPE_P (TREE_TYPE (gimple_call_arg (call, 0))))\n+    return false;\n+\n+  gimple_stmt_iterator i = *gsi;\n+  for (gsi_next (&i); !gsi_end_p (i); gsi_next (&i))\n+    {\n+      stmt = gsi_stmt (i);\n+      if (is_a<gasm*> (stmt))\n+\treturn false;\n+      gcall *call = dyn_cast<gcall*>(stmt);\n+      if (!call)\n+\tcontinue;\n+\n+      /* We can remove the restore in front of noreturn\n+\t calls.  Since the restore will happen either\n+\t via an unwind/longjmp or not at all. */\n+      if (gimple_call_noreturn_p (call))\n+\tbreak;\n+\n+      /* Internal calls are ok, to bypass\n+\t check first since fndecl will be null. */\n+      if (gimple_call_internal_p (call))\n+\tcontinue;\n+\n+      callee = gimple_call_fndecl (call);\n+      /* Non-builtin calls are not ok. */\n+      if (!callee\n+\t  || !fndecl_built_in_p (callee))\n+\treturn false;\n+\n+      /* Do not remove stack updates before strub leave.  */\n+      if (fndecl_built_in_p (callee, BUILT_IN___STRUB_LEAVE)\n+\t  /* Alloca calls are not ok either. */\n+\t  || fndecl_builtin_alloc_p (callee))\n+\treturn false;\n+\n+      if (fndecl_built_in_p (callee, BUILT_IN_STACK_RESTORE))\n+\tgoto second_stack_restore;\n+\n+      /* If not a simple or inexpensive builtin, then it is not ok either. */\n+      if (!is_simple_builtin (callee)\n+\t  && !is_inexpensive_builtin (callee))\n+\treturn false;\n+    }\n+\n+  /* Allow one successor of the exit block, or zero successors.  
*/\n+  switch (EDGE_COUNT (bb->succs))\n+    {\n+    case 0:\n+      break;\n+    case 1:\n+      if (single_succ_edge (bb)->dest != EXIT_BLOCK_PTR_FOR_FN (cfun))\n+\treturn false;\n+      break;\n+    default:\n+      return false;\n+    }\n+ second_stack_restore:\n+\n+  /* If there's exactly one use, then zap the call to __builtin_stack_save.\n+     If there are multiple uses, then the last one should remove the call.\n+     In any case, whether the call to __builtin_stack_save can be removed\n+     or not is irrelevant to removing the call to __builtin_stack_restore.  */\n+  if (has_single_use (gimple_call_arg (call, 0)))\n+    {\n+      gimple *stack_save = SSA_NAME_DEF_STMT (gimple_call_arg (call, 0));\n+      if (is_gimple_call (stack_save))\n+\t{\n+\t  callee = gimple_call_fndecl (stack_save);\n+\t  if (callee && fndecl_built_in_p (callee, BUILT_IN_STACK_SAVE))\n+\t    {\n+\t      gimple_stmt_iterator stack_save_gsi;\n+\t      tree rhs;\n+\n+\t      stack_save_gsi = gsi_for_stmt (stack_save);\n+\t      rhs = build_int_cst (TREE_TYPE (gimple_call_arg (call, 0)), 0);\n+\t      replace_call_with_value (&stack_save_gsi, rhs);\n+\t    }\n+\t}\n+    }\n+\n+  /* No effect, so the statement will be deleted.  */\n+  replace_call_with_value (gsi, NULL_TREE);\n+  return true;\n+}\n+\n /* *GSI_P is a GIMPLE_CALL to a builtin function.\n    Optimize\n    memcpy (p, \"abcd\", 4);\n@@ -2163,6 +2270,8 @@ simplify_builtin_call (gimple_stmt_iterator *gsi_p, tree callee2, bool full_walk\n \n   switch (DECL_FUNCTION_CODE (callee2))\n     {\n+    case BUILT_IN_STACK_RESTORE:\n+      return optimize_stack_restore (gsi_p, as_a<gcall*>(stmt2));\n     case BUILT_IN_MEMCMP:\n     case BUILT_IN_MEMCMP_EQ:\n       return simplify_builtin_memcmp (gsi_p, as_a<gcall*>(stmt2));\n","prefixes":["v1","05/10"]}