
libgcc, riscv: Add restore libcalls to be used by tail calling functions

Message ID: CAJd+PRN7SDCCtKT58YN08pJv5Qi9YfYF989tYs5ta8HpRrgDZg@mail.gmail.com
State: New

Commit Message

Lewis Revill March 29, 2022, 2:08 p.m. UTC
The existing libcalls for restoring registers require that they be tail
called by the parent function, so that they can safely return through
the restored return address register. This imposes the restriction that
the libcalls cannot be used if the parent function itself already ends
in a tail call, and this patch forms part of an effort to rectify that
situation.
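
For illustration (hypothetical epilogue, not part of the patch): the
existing routines end in `ret`, so a parent function must reach them
with its one and only tail call, which it cannot do if it already tail
calls a real callee.

```asm
# With the existing libcalls, the epilogue's tail call is consumed by
# the restore routine itself, which restores s0/ra and ends in `ret`:
epilogue:
    tail  __riscv_restore_0   # pops frame, then `jr ra` to our caller

# A function ending in `tail bar` has no way to also tail call the
# restore routine, hence the new __riscv_restore_tailcall_* variants.
```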

Patches already exist for LLVM and Compiler-RT to add the libcalls and
the capability for the compiler to generate them
(https://reviews.llvm.org/D91720 and https://reviews.llvm.org/D91719),
and the behaviour that we want to standardize across the compilers is
documented in the following pull request to the RISC-V toolchain
conventions repository:
https://github.com/riscv-non-isa/riscv-toolchain-conventions/pull/10

The libcalls added in this patch follow that documented behaviour and
are based on a suggested implementation provided by Jim Wilson in the
thread of that pull request. As with the existing restore libcalls,
restores are grouped according to the expected stack alignment, with
the 'upper' libcalls falling through to the lower libcalls and finally
returning through the temporary register t1.

libgcc/

* config/riscv/restore-tail.S: Add restore libcalls compatible
with use from functions ending in tail calls.
* config/riscv/t-elf: Add file restore-tail.S.
---
 libgcc/config/riscv/restore-tail.S | 279 +++++++++++++++++++++++++++++
 libgcc/config/riscv/t-elf          |   1 +
 2 files changed, 280 insertions(+)
 create mode 100644 libgcc/config/riscv/restore-tail.S

Comments

Palmer Dabbelt April 1, 2022, 2:57 a.m. UTC | #1
On Tue, 29 Mar 2022 07:08:35 PDT (-0700), lewis.revill@embecosm.com wrote:
> Currently the existing libcalls for restoring registers have the
> requirement that they must be tail called by the parent function, so
> that they can safely return through the restored return address
> register. This does impose the restriction that the libcalls cannot be
> used if there already exists a tail call at the end of the parent
> function in question, and as such this patch forms part of an effort to
> rectify this situation.
>
> There already exists patches to LLVM and Compiler-RT to add the libcalls
> and the capability for the compiler to generate them
> (https://reviews.llvm.org/D91720 and https://reviews.llvm.org/D91719),
> and the behaviour that we want to standardize across the compilers is
> documented in the following pull request to the RISC-V toolchain
> conventions repository:
> https://github.com/riscv-non-isa/riscv-toolchain-conventions/pull/10

This generally looks good to me, but the timing is awkward: we're in 
stage 4 (so features need an exception), but my bigger worry is that 
taking support for a draft spec so late in the cycle puts us at serious 
risk of shipping the draft and being stuck with it (which is bad for 
everyone).  It looks like the spec is just waiting on GCC, though, so 
maybe we're in that chicken-and-egg stage -- a bit of a headache for 
that to show up in stage 4 as there's no room for error, but this one 
seems manageable.

If this is aimed at GCC-13, then I think it's best to make sure we also 
have the GCC support for emitting calls to those routines -- otherwise 
it'll be very hard to test this.  The good news is that in that case 
there's time, it's just a chunk of extra work to do.  That should also 
make alignment with the spec timeline easy, as we'll have many months of 
slack.

Regardless, it seems like this is mostly Jim's code so I'll defer to him 
here.

Thanks!

> The libcalls added in this patch follow that documented behaviour and
> are based off a suggested implementation provided by Jim Wilson in the
> thread of that pull request. Similar to the existing restore libcalls,
> restores are grouped according to the expected stack alignment, and the
> 'upper' libcalls fall through to the lower libcalls, finally ending in
> return through the temporary register t1.
>
> libgcc/
>
> * config/riscv/restore-tail.S: Add restore libcalls compatible
> with use from functions ending in tail calls.
> * config/riscv/t-elf: Add file restore-tail.S.
> ---
>  libgcc/config/riscv/restore-tail.S | 279 +++++++++++++++++++++++++++++
>  libgcc/config/riscv/t-elf          |   1 +
>  2 files changed, 280 insertions(+)
>  create mode 100644 libgcc/config/riscv/restore-tail.S
>
> [patch body snipped]

Patch

diff --git a/libgcc/config/riscv/restore-tail.S
b/libgcc/config/riscv/restore-tail.S
new file mode 100644
index 00000000000..54116beff17
--- /dev/null
+++ b/libgcc/config/riscv/restore-tail.S
@@ -0,0 +1,279 @@ 
+/* Tail-call compatible callee-saved register restore routines for RISC-V.
+
+   Copyright (C) 2022 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+Under Section 7 of GPL version 3, you are granted additional
+permissions described in the GCC Runtime Library Exception, version
+3.1, as published by the Free Software Foundation.
+
+You should have received a copy of the GNU General Public License and
+a copy of the GCC Runtime Library Exception along with this program;
+see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+<http://www.gnu.org/licenses/>.  */
+
+#include "riscv-asm.h"
+
+  .text
+
+#if __riscv_xlen == 64
+
+FUNC_BEGIN (__riscv_restore_tailcall_12)
+  .cfi_startproc
+  .cfi_def_cfa_offset 112
+  .cfi_offset 27, -104
+  .cfi_offset 26, -96
+  .cfi_offset 25, -88
+  .cfi_offset 24, -80
+  .cfi_offset 23, -72
+  .cfi_offset 22, -64
+  .cfi_offset 21, -56
+  .cfi_offset 20, -48
+  .cfi_offset 19, -40
+  .cfi_offset 18, -32
+  .cfi_offset 9, -24
+  .cfi_offset 8, -16
+  .cfi_offset 1, -8
+  ld s11, 8(sp)
+  .cfi_restore 27
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_11)
+FUNC_BEGIN (__riscv_restore_tailcall_10)
+  .cfi_restore 27
+  .cfi_def_cfa_offset 96
+  ld s10, 0(sp)
+  .cfi_restore 26
+  ld s9, 8(sp)
+  .cfi_restore 25
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_9)
+FUNC_BEGIN (__riscv_restore_tailcall_8)
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 80
+  ld s8, 0(sp)
+  .cfi_restore 24
+  ld s7, 8(sp)
+  .cfi_restore 23
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_7)
+FUNC_BEGIN (__riscv_restore_tailcall_6)
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 64
+  ld s6, 0(sp)
+  .cfi_restore 22
+  ld s5, 8(sp)
+  .cfi_restore 21
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_5)
+FUNC_BEGIN (__riscv_restore_tailcall_4)
+  .cfi_restore 21
+  .cfi_restore 22
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 48
+  ld s4, 0(sp)
+  .cfi_restore 20
+  ld s3, 8(sp)
+  .cfi_restore 19
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_3)
+FUNC_BEGIN (__riscv_restore_tailcall_2)
+  .cfi_restore 19
+  .cfi_restore 20
+  .cfi_restore 21
+  .cfi_restore 22
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 32
+  ld s2, 0(sp)
+  .cfi_restore 18
+  ld s1, 8(sp)
+  .cfi_restore 9
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_1)
+FUNC_BEGIN (__riscv_restore_tailcall_0)
+  .cfi_restore 9
+  .cfi_restore 18
+  .cfi_restore 19
+  .cfi_restore 20
+  .cfi_restore 21
+  .cfi_restore 22
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 16
+  ld s0, 0(sp)
+  .cfi_restore 8
+  ld ra, 8(sp)
+  .cfi_restore 1
+  addi sp, sp, 16
+  .cfi_def_cfa_offset 0
+  jr t1
+  .cfi_endproc
+FUNC_END (__riscv_restore_tailcall_12)
+FUNC_END (__riscv_restore_tailcall_11)
+FUNC_END (__riscv_restore_tailcall_10)
+FUNC_END (__riscv_restore_tailcall_9)
+FUNC_END (__riscv_restore_tailcall_8)
+FUNC_END (__riscv_restore_tailcall_7)
+FUNC_END (__riscv_restore_tailcall_6)
+FUNC_END (__riscv_restore_tailcall_5)
+FUNC_END (__riscv_restore_tailcall_4)
+FUNC_END (__riscv_restore_tailcall_3)
+FUNC_END (__riscv_restore_tailcall_2)
+FUNC_END (__riscv_restore_tailcall_1)
+FUNC_END (__riscv_restore_tailcall_0)
+
+#else
+
+#ifdef __riscv_32e
+FUNC_BEGIN(__riscv_restore_tailcall_2)
+FUNC_BEGIN(__riscv_restore_tailcall_1)
+FUNC_BEGIN(__riscv_restore_tailcall_0)
+  .cfi_startproc
+  .cfi_def_cfa_offset 12
+  lw s1, 0(sp)
+  .cfi_restore 9
+  lw s0, 4(sp)
+  .cfi_restore 8
+  lw ra, 8(sp)
+  .cfi_restore 1
+  addi sp, sp, 12
+  .cfi_def_cfa_offset 0
+  jr t1
+  .cfi_endproc
+FUNC_END(__riscv_restore_tailcall_2)
+FUNC_END(__riscv_restore_tailcall_1)
+FUNC_END(__riscv_restore_tailcall_0)
+
+#else
+
+FUNC_BEGIN (__riscv_restore_tailcall_12)
+  .cfi_startproc
+  .cfi_def_cfa_offset 64
+  .cfi_offset 27, -52
+  .cfi_offset 26, -48
+  .cfi_offset 25, -44
+  .cfi_offset 24, -40
+  .cfi_offset 23, -36
+  .cfi_offset 22, -32
+  .cfi_offset 21, -28
+  .cfi_offset 20, -24
+  .cfi_offset 19, -20
+  .cfi_offset 18, -16
+  .cfi_offset 9, -12
+  .cfi_offset 8, -8
+  .cfi_offset 1, -4
+  lw s11, 12(sp)
+  .cfi_restore 27
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_11)
+FUNC_BEGIN (__riscv_restore_tailcall_10)
+FUNC_BEGIN (__riscv_restore_tailcall_9)
+FUNC_BEGIN (__riscv_restore_tailcall_8)
+  .cfi_restore 27
+  .cfi_def_cfa_offset 48
+  lw s10, 0(sp)
+  .cfi_restore 26
+  lw s9, 4(sp)
+  .cfi_restore 25
+  lw s8, 8(sp)
+  .cfi_restore 24
+  lw s7, 12(sp)
+  .cfi_restore 23
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_7)
+FUNC_BEGIN (__riscv_restore_tailcall_6)
+FUNC_BEGIN (__riscv_restore_tailcall_5)
+FUNC_BEGIN (__riscv_restore_tailcall_4)
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 32
+  lw s6, 0(sp)
+  .cfi_restore 22
+  lw s5, 4(sp)
+  .cfi_restore 21
+  lw s4, 8(sp)
+  .cfi_restore 20
+  lw s3, 12(sp)
+  .cfi_restore 19
+  addi sp, sp, 16
+
+FUNC_BEGIN (__riscv_restore_tailcall_3)
+FUNC_BEGIN (__riscv_restore_tailcall_2)
+FUNC_BEGIN (__riscv_restore_tailcall_1)
+FUNC_BEGIN (__riscv_restore_tailcall_0)
+  .cfi_restore 19
+  .cfi_restore 20
+  .cfi_restore 21
+  .cfi_restore 22
+  .cfi_restore 23
+  .cfi_restore 24
+  .cfi_restore 25
+  .cfi_restore 26
+  .cfi_restore 27
+  .cfi_def_cfa_offset 16
+  lw s2, 0(sp)
+  .cfi_restore 18
+  lw s1, 4(sp)
+  .cfi_restore 9
+  lw s0, 8(sp)
+  .cfi_restore 8
+  lw ra, 12(sp)
+  .cfi_restore 1
+  addi sp, sp, 16
+  .cfi_def_cfa_offset 0
+  jr t1
+  .cfi_endproc
+FUNC_END (__riscv_restore_tailcall_12)
+FUNC_END (__riscv_restore_tailcall_11)
+FUNC_END (__riscv_restore_tailcall_10)
+FUNC_END (__riscv_restore_tailcall_9)
+FUNC_END (__riscv_restore_tailcall_8)
+FUNC_END (__riscv_restore_tailcall_7)
+FUNC_END (__riscv_restore_tailcall_6)
+FUNC_END (__riscv_restore_tailcall_5)
+FUNC_END (__riscv_restore_tailcall_4)
+FUNC_END (__riscv_restore_tailcall_3)
+FUNC_END (__riscv_restore_tailcall_2)
+FUNC_END (__riscv_restore_tailcall_1)
+FUNC_END (__riscv_restore_tailcall_0)
+
+#endif /* __riscv_32e */
+
+#endif /* __riscv_xlen == 64 */
diff --git a/libgcc/config/riscv/t-elf b/libgcc/config/riscv/t-elf
index 415e1fffbe7..6b105b40d82 100644
--- a/libgcc/config/riscv/t-elf
+++ b/libgcc/config/riscv/t-elf
@@ -1,8 +1,9 @@ 
 LIB2ADD += $(srcdir)/config/riscv/save-restore.S \
+   $(srcdir)/config/riscv/restore-tail.S \
    $(srcdir)/config/riscv/muldi3.S \
    $(srcdir)/config/riscv/multi3.c \
    $(srcdir)/config/riscv/div.S \
    $(srcdir)/config/riscv/atomic.c \

 # Avoid the full unwinder being pulled along with the division libcalls.
 LIB2_DIVMOD_EXCEPTION_FLAGS := -fasynchronous-unwind-tables