kernel/module_64.c: Add REL24 relocation support of livepatch symbols

Message ID 1507008978-10145-1-git-send-email-kamalesh@linux.vnet.ibm.com
State Changes Requested
Series kernel/module_64.c: Add REL24 relocation support of livepatch symbols

Commit Message

Kamalesh Babulal Oct. 3, 2017, 5:36 a.m. UTC
With commit 425595a7fc20 ("livepatch: reuse module loader code to
write relocations"), livepatch uses the module loader to write
relocations for livepatch symbols, instead of managing them with the
arch-dependent klp_write_module_reloc() function.

Relocation entries managed by the livepatch module are written to
sections marked with the SHF_RELA_LIVEPATCH flag, and livepatch symbols
within those sections are marked with the SHN_LIVEPATCH symbol section
index. When the livepatch module is loaded, the livepatch symbols are
resolved before apply_relocate_add() is called to apply the relocations.
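
The overall flow at module load time looks roughly like this (a sketch
of the generic livepatch path; see kernel/livepatch/core.c for the real
code):

    for each section shdr in the livepatch module:
        if (shdr->sh_flags & SHF_RELA_LIVEPATCH)
            klp_resolve_symbols()    /* resolve SHN_LIVEPATCH symbols */
            apply_relocate_add()     /* arch code writes the relocations */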

The R_PPC64_REL24 relocation type resolves to a function address, which
may be local to the livepatch module or live in the kernel or another
module. For every such non-local function, apply_relocate_add()
constructs a stub (a.k.a. trampoline) to branch to the function. The
stub code is responsible for saving the TOC onto the stack before
calling the function via its global entry point. A NOP instruction is
expected after every non-local function branch, i.e. after the REL24
relocation, and apply_relocate_add() replaces that NOP with a
TOC-restore instruction.
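
For reference, the existing non-livepatch stub and call-site fixup look
roughly like this (a sketch based on the ELFv2 ppc64_stub_insns[]
sequence in module_64.c; 24(r1) is the ELFv2 TOC save slot):

    stub:
        addis r11,r2,<stub>@ha    # locate this stub via the module TOC
        addi  r11,r11,<stub>@l
        std   r2,24(r1)           # save the caller's TOC on the stack
        ld    r12,32(r11)         # target address from stub->funcdata
        mtctr r12
        bctr                      # branch to the target's global entry

    call site, after relocation:
        bl    stub
        ld    r2,24(r1)           # the expected NOP, rewritten to restore r2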

Livepatch symbols with the R_PPC64_REL24 relocation type may not be
reachable within the current TOC range, or may not have a NOP
instruction following the branch. The latter are symbols that were
local but have become global to the livepatched function. Per ABIv2,
local functions are called via their local entry point, which assumes
the caller shares the module's TOC value, so no TOC-restoring NOP is
emitted after such calls.
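
Schematically, an ELFv2 function with a TOC has two entry points (using
show_val_kb from the example below):

    show_val_kb:
        addis r2,r12,(.TOC.-show_val_kb)@ha   # global entry: set up own TOC
        addi  r2,r2,(.TOC.-show_val_kb)@l     # (expects r12 = entry address)
        .localentry show_val_kb, .-show_val_kb
        ...                                   # local entry: r2 already valid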

For example, consider the following livepatch relocations (taken from a
livepatch module generated by the kpatch tool):

Relocation section '.klp.rela.vmlinux..text.meminfo_proc_show' at offset 0x84530 contains 44 entries:
    Offset             Info             Type      Symbol's Value   Symbol's Name + Addend
0000000000000054  000000560000000a R_PPC64_REL24  0000000000000000 .klp.sym.vmlinux.si_swapinfo,0 + 0
000000000000007c  000000570000000a R_PPC64_REL24  0000000000000000 .klp.sym.vmlinux.total_swapcache_pages,0 + 0
00000000000000e8  000000580000000a R_PPC64_REL24  0000000000000000 .klp.sym.vmlinux.show_val_kb,1 + 0
[...]

1. .klp.sym.vmlinux.si_swapinfo and .klp.sym.vmlinux.total_swapcache_pages
   are not reachable within the livepatch module's TOC range.

2. .klp.sym.vmlinux.show_val_kb was previously local to
   fs/proc/meminfo.c::meminfo_proc_show() but is now referenced as a
   global symbol from the livepatch module.

When the livepatch module is loaded, the livepatch symbols in case 1
fail with the error:
[   74.485405] module_64: kpatch_meminfo_string: REL24 -1152921504751525976 out of range!

and the livepatch symbols in case 2 fail with the error:
[   24.568425] module_64: kpatch_meminfo_string: Expect noop after relocate, got 3d220000

(0x3d220000 decodes to "addis r9,r2,0"; the compiler emitted TOC-relative
code instead of a NOP because the original call was local.)

Both REL24 relocation failures can be resolved by constructing a new
livepatch stub. The new klp_stub mimics the functionality of
entry_64.S::livepatch_handler, introduced by commit 85baa095497f
("powerpc/livepatch: Add live patching support on ppc64le"), which
maintains a "livepatch stack" growing upwards from the base of the
regular stack and uses it to store/restore TOC/LR values around the
stub setup and branch. The additional instruction sequences needed to
handle the klp_stub increase the stub size, and the current
ppc64_stub_insns[] is not sufficient to hold them. This patch therefore
introduces struct ppc64le_klp_stub_entry, along with helpers to
find/allocate livepatch stubs.
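
For reference, the resulting livepatch stub layout (offsets computed
from the struct definition in the patch below; they match the
"ld r12,128(r11)" in the stub code):

    struct ppc64le_klp_stub_entry:
        offset   0: u32 jump[31]         /* klp_stub_insn..klp_stub_insn_end */
        offset 124: u32 magic            /* STUB_MAGIC, used by ftrace */
        offset 128: func_desc_t funcdata /* target function address */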

Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jessica Yu <jeyu@kernel.org>
Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
Cc: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: live-patching@vger.kernel.org
---
 arch/powerpc/include/asm/module.h              |   4 +
 arch/powerpc/kernel/module_64.c                | 119 ++++++++++++++++++++++++-
 arch/powerpc/kernel/trace/ftrace_64_mprofile.S |  77 ++++++++++++++++
 3 files changed, 197 insertions(+), 3 deletions(-)

Comments

Naveen N. Rao Oct. 3, 2017, 9:30 a.m. UTC | #1
Hi Kamalesh,

On 2017/10/03 05:36AM, Kamalesh Babulal wrote:
> [snip]

A few small nits focusing on just the trampoline...

> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index c98e90b..708a96d 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -249,6 +249,83 @@ livepatch_handler:
> 
>  	/* Return to original caller of live patched function */
>  	blr
> +
> +	/*
> +	 * This is the livepatch stub code, called from a livepatch module to
> +	 * jump into the kernel or other modules. It replicates the
> +	 * livepatch_handler code, except that it jumps to the trampoline
> +	 * instead of the patched function.
> +	 */
> +	.global klp_stub_insn
> +klp_stub_insn:
> +	CURRENT_THREAD_INFO(r12, r1)
> +
> +	/* Allocate 3 x 8 bytes */
> +	ld      r11, TI_livepatch_sp(r12)
> +	addi    r11, r11, 24
> +	std     r11, TI_livepatch_sp(r12)
> +
> +	/* Save toc & real LR on livepatch stack */
> +	std     r2,  -24(r11)
> +	mflr    r12
> +	std     r12, -16(r11)
> +
> +	/* Store stack end marker */
> +	lis     r12, STACK_END_MAGIC@h
> +	ori     r12, r12, STACK_END_MAGIC@l
> +	std     r12, -8(r11)

Seeing as this is the same as livepatch_handler() except for this part 
in the middle, does it make sense to reuse livepatch_handler() with 
appropriate labels added for your use?  You could patch in the below 5 
instructions using the macros from ppc-opcode.h...
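
A rough sketch of those 5 instructions as a patchable array, in the
style of the existing ppc64_stub_insns[] (the 128-byte funcdata offset
comes from this patch; the array name here is hypothetical):

    static u32 klp_branch_insns[] = {
        0x3d620000,    /* addis r11,r2,0     - patched with PPC_HA(reladdr) */
        0x396b0000,    /* addi  r11,r11,0    - patched with PPC_LO(reladdr) */
        0xe98b0080,    /* ld    r12,128(r11) - entry->funcdata */
        0x7d8903a6,    /* mtctr r12 */
        0x4e800421,    /* bctrl */
    };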

> +
> +	/*
> +	 * Stub memory is allocated dynamically, during the module load.
> +	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
> +	 * identifies the available free stub slot and loads the address into
> +	 * r11 with two instructions.
> +	 *
> +	 * addis r11, r2, stub_address@ha
> +	 * addi  r11, r11, stub_address@l
> +	 */
> +	.global klp_stub_entry
> +klp_stub_entry:
> +	addis   r11, r2, 0
> +	addi    r11, r11, 0
> +
> +	/* Load r12 with the called function's address from entry->funcdata */
> +	ld      r12, 128(r11)
> +
> +	/* Move r12 into ctr for global entry and branch there */
> +	mtctr   r12
> +	bctrl
> +
> +	/*
> +	 * Now we are returning to the patched function. We are free to
> +	 * use r11, r12 and we can use r2 until we restore it.
> +	 */
> +	CURRENT_THREAD_INFO(r12, r1)
> +
> +	ld      r11, TI_livepatch_sp(r12)
> +
> +	/* Check stack marker hasn't been trashed */
> +	lis     r2,  STACK_END_MAGIC@h
> +	ori     r2,  r2, STACK_END_MAGIC@l
> +	ld      r12, -8(r11)
> +2:	tdne    r12, r2
> +	EMIT_BUG_ENTRY 2b, __FILE__, __LINE__ - 1, 0

If you plan to keep this trampoline separate from livepatch_handler(), 
note that the above bug entry is not required since you copy only the 
text of this trampoline elsewhere and you won't have an associated bug 
entry for that new stub address.

- Naveen
Kamalesh Babulal Oct. 4, 2017, 8:15 a.m. UTC | #2
On Tuesday 03 October 2017 03:00 PM, Naveen N. Rao wrote:

Hi Naveen,

[snip]
> 
> A few small nits focusing on just the trampoline...
> 
>> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> index c98e90b..708a96d 100644
>> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
>> @@ -249,6 +249,83 @@ livepatch_handler:
>>
>>  	/* Return to original caller of live patched function */
>>  	blr
>> +
>> +	/*
>> +	 * This is the livepatch stub code, called from a livepatch module to
>> +	 * jump into the kernel or other modules. It replicates the
>> +	 * livepatch_handler code, except that it jumps to the trampoline
>> +	 * instead of the patched function.
>> +	 */
>> +	.global klp_stub_insn
>> +klp_stub_insn:
>> +	CURRENT_THREAD_INFO(r12, r1)
>> +
>> +	/* Allocate 3 x 8 bytes */
>> +	ld      r11, TI_livepatch_sp(r12)
>> +	addi    r11, r11, 24
>> +	std     r11, TI_livepatch_sp(r12)
>> +
>> +	/* Save toc & real LR on livepatch stack */
>> +	std     r2,  -24(r11)
>> +	mflr    r12
>> +	std     r12, -16(r11)
>> +
>> +	/* Store stack end marker */
>> +	lis     r12, STACK_END_MAGIC@h
>> +	ori     r12, r12, STACK_END_MAGIC@l
>> +	std     r12, -8(r11)
> 
> Seeing as this is the same as livepatch_handler() except for this part 
> in the middle, does it make sense to reuse livepatch_handler() with 
> appropriate labels added for your use?  You could patch in the below 5 
> instructions using the macros from ppc-opcode.h...

Thanks for the review. The current upstream livepatch_handler code
is a bit different. I have posted a bug fix at
https://lists.ozlabs.org/pipermail/linuxppc-dev/2017-September/163824.html
which alters livepatch_handler to have similar code. It's a good idea
to re-use the livepatch_handler code. I will re-spin v2 based on top
of the bug fix posted earlier.

> 
>> +
>> +	/*
>> +	 * Stub memory is allocated dynamically, during the module load.
>> +	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
>> +	 * identifies the available free stub slot and loads the address into
>> +	 * r11 with two instructions.
>> +	 *
>> +	 * addis r11, r2, stub_address@ha
>> +	 * addi  r11, r11, stub_address@l
>> +	 */
>> +	.global klp_stub_entry
>> +klp_stub_entry:
>> +	addis   r11, r2, 0
>> +	addi    r11, r11, 0
>> +
>> +	/* Load r12 with the called function's address from entry->funcdata */
>> +	ld      r12, 128(r11)
>> +
>> +	/* Move r12 into ctr for global entry and branch there */
>> +	mtctr   r12
>> +	bctrl
>> +
>> +	/*
>> +	 * Now we are returning to the patched function. We are free to
>> +	 * use r11, r12 and we can use r2 until we restore it.
>> +	 */
>> +	CURRENT_THREAD_INFO(r12, r1)
>> +
>> +	ld      r11, TI_livepatch_sp(r12)
>> +
>> +	/* Check stack marker hasn't been trashed */
>> +	lis     r2,  STACK_END_MAGIC@h
>> +	ori     r2,  r2, STACK_END_MAGIC@l
>> +	ld      r12, -8(r11)
>> +2:	tdne    r12, r2
>> +	EMIT_BUG_ENTRY 2b, __FILE__, __LINE__ - 1, 0
> 
> If you plan to keep this trampoline separate from livepatch_handler(), 
> note that the above bug entry is not required since you copy only the 
> text of this trampoline elsewhere and you won't have an associated bug 
> entry for that new stub address.
> 

Agreed, the klp_stub_entry trampoline will go away in v2.

Patch

diff --git a/arch/powerpc/include/asm/module.h b/arch/powerpc/include/asm/module.h
index 6c0132c..46d5eb0 100644
--- a/arch/powerpc/include/asm/module.h
+++ b/arch/powerpc/include/asm/module.h
@@ -44,6 +44,10 @@  struct mod_arch_specific {
 	unsigned long toc;
 	unsigned long tramp;
 #endif
+#ifdef CONFIG_LIVEPATCH
+	unsigned long klp_relocs;       /* Count of kernel livepatch relocations */
+#endif
+
 
 #else /* powerpc64 */
 	/* Indices of PLT sections within module. */
diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index 0b0f896..08e75ff 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -140,6 +140,25 @@  static u32 ppc64_stub_insns[] = {
 	0x4e800420			/* bctr */
 };
 
+#ifdef CONFIG_LIVEPATCH
+extern u32 klp_stub_insn[];
+extern u32 klp_stub_entry[];
+extern u32 klp_stub_insn_end[];
+
+struct ppc64le_klp_stub_entry {
+	/*
+	 * In addition to setting up the stub, the livepatch stub also needs
+	 * extra instructions to allocate the livepatch stack and to
+	 * store/restore TOC/LR values on/from the livepatch stack.
+	 */
+	u32 jump[31];
+	/* Used by ftrace to identify stubs */
+	u32 magic;
+	/* Data for the above code */
+	func_desc_t funcdata;
+};
+#endif
+
 #ifdef CONFIG_DYNAMIC_FTRACE
 int module_trampoline_target(struct module *mod, unsigned long addr,
 			     unsigned long *target)
@@ -239,10 +258,13 @@  static void relaswap(void *_x, void *_y, int size)
 
 /* Get size of potential trampolines required. */
 static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
-				    const Elf64_Shdr *sechdrs)
+				    const Elf64_Shdr *sechdrs,
+				    struct module *me)
 {
 	/* One extra reloc so it's always 0-funcaddr terminated */
 	unsigned long relocs = 1;
+	unsigned long sec_relocs = 0;
+	unsigned long klp_relocs = 0;
 	unsigned i;
 
 	/* Every relocated section... */
@@ -262,9 +284,20 @@  static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 			     sechdrs[i].sh_size / sizeof(Elf64_Rela),
 			     sizeof(Elf64_Rela), relacmp, relaswap);
 
-			relocs += count_relocs((void *)sechdrs[i].sh_addr,
+			sec_relocs = count_relocs((void *)sechdrs[i].sh_addr,
 					       sechdrs[i].sh_size
 					       / sizeof(Elf64_Rela));
+			/*
+			 * The livepatch stub is 28 instructions, whereas the
+			 * non-livepatch stub requires 7 instructions. Account for
+			 * the different stub sizes and track the livepatch relocation
+			 * count in me->arch.klp_relocs.
+			 */
+			relocs += sec_relocs;
+#ifdef CONFIG_LIVEPATCH
+			if (sechdrs[i].sh_flags & SHF_RELA_LIVEPATCH)
+				klp_relocs += sec_relocs;
+#endif
 		}
 	}
 
@@ -273,6 +306,15 @@  static unsigned long get_stubs_size(const Elf64_Ehdr *hdr,
 	relocs++;
 #endif
 
+	relocs -= klp_relocs;
+#ifdef CONFIG_LIVEPATCH
+	me->arch.klp_relocs = klp_relocs;
+
+	pr_debug("Looks like a total of %lu stubs (%lu livepatch stubs), max\n",
+				relocs, klp_relocs);
+	return (relocs * sizeof(struct ppc64_stub_entry) +
+		klp_relocs * sizeof(struct ppc64le_klp_stub_entry));
+#endif
 	pr_debug("Looks like a total of %lu stubs, max\n", relocs);
 	return relocs * sizeof(struct ppc64_stub_entry);
 }
@@ -369,7 +411,7 @@  int module_frob_arch_sections(Elf64_Ehdr *hdr,
 		me->arch.toc_section = me->arch.stubs_section;
 
 	/* Override the stubs size */
-	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs);
+	sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs, me);
 	return 0;
 }
 
@@ -415,6 +457,39 @@  static inline int create_stub(const Elf64_Shdr *sechdrs,
 	return 1;
 }
 
+#ifdef CONFIG_LIVEPATCH
+/* Patch stub to reference function and correct r2 value. */
+static inline int create_klp_stub(const Elf64_Shdr *sechdrs,
+				  struct ppc64le_klp_stub_entry *entry,
+				  unsigned long addr,
+				  struct module *me)
+{
+	long reladdr;
+	unsigned long klp_stub_size, klp_stub_entry_idx;
+
+	klp_stub_size = (klp_stub_insn_end - klp_stub_insn);
+	klp_stub_entry_idx = (klp_stub_entry - klp_stub_insn);
+
+	memcpy(entry->jump, klp_stub_insn, sizeof(u32) * klp_stub_size);
+
+	/* Stub uses address relative to r2. */
+	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
+	if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) {
+		pr_err("%s: Address %p of stub out of range of %p.\n",
+				me->name, (void *)reladdr, (void *)my_r2);
+		return 0;
+	}
+	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
+
+	entry->jump[klp_stub_entry_idx] |= PPC_HA(reladdr);
+	entry->jump[klp_stub_entry_idx + 1] |= PPC_LO(reladdr);
+	entry->funcdata = func_desc(addr);
+	entry->magic = STUB_MAGIC;
+
+	return 1;
+}
+#endif
+
 /* Create stub to jump to function described in this OPD/ptr: we need the
    stub to set up the TOC ptr (r2) for the function. */
 static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
@@ -441,6 +516,38 @@  static unsigned long stub_for_addr(const Elf64_Shdr *sechdrs,
 	return (unsigned long)&stubs[i];
 }
 
+#ifdef CONFIG_LIVEPATCH
+static unsigned long klp_stub_for_addr(const Elf64_Shdr *sechdrs,
+				       unsigned long addr,
+				       struct module *me)
+{
+	struct ppc64le_klp_stub_entry *klp_stubs;
+	unsigned int num_klp_stubs = me->arch.klp_relocs;
+	unsigned int i, num_stubs;
+
+	num_stubs = (sechdrs[me->arch.stubs_section].sh_size -
+		    (num_klp_stubs * sizeof(*klp_stubs))) /
+				sizeof(struct ppc64_stub_entry);
+
+	/*
+	 * Create livepatch stubs after the regular stubs.
+	 */
+	klp_stubs = (void *)sechdrs[me->arch.stubs_section].sh_addr +
+		    (num_stubs * sizeof(struct ppc64_stub_entry));
+	for (i = 0; stub_func_addr(klp_stubs[i].funcdata); i++) {
+		BUG_ON(i >= num_klp_stubs);
+
+		if (stub_func_addr(klp_stubs[i].funcdata) == func_addr(addr))
+			return (unsigned long)&klp_stubs[i];
+	}
+
+	if (!create_klp_stub(sechdrs, &klp_stubs[i], addr, me))
+		return 0;
+
+	return (unsigned long)&klp_stubs[i];
+}
+#endif
+
 #ifdef CC_USING_MPROFILE_KERNEL
 static bool is_early_mcount_callsite(u32 *instruction)
 {
@@ -622,6 +729,12 @@  int apply_relocate_add(Elf64_Shdr *sechdrs,
 					return -ENOEXEC;
 
 				squash_toc_save_inst(strtab + sym->st_name, value);
+#ifdef CONFIG_LIVEPATCH
+			} else if (sym->st_shndx == SHN_LIVEPATCH) {
+				value = klp_stub_for_addr(sechdrs, value, me);
+				if (!value)
+					return -ENOENT;
+#endif
 			} else
 				value += local_entry_offset(sym);
 
diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
index c98e90b..708a96d 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
+++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
@@ -249,6 +249,83 @@  livepatch_handler:
 
 	/* Return to original caller of live patched function */
 	blr
+
+	/*
+	 * This is the livepatch stub code, called from a livepatch module to
+	 * jump into the kernel or other modules. It replicates the
+	 * livepatch_handler code, except that it jumps to the trampoline
+	 * instead of the patched function.
+	 */
+	.global klp_stub_insn
+klp_stub_insn:
+	CURRENT_THREAD_INFO(r12, r1)
+
+	/* Allocate 3 x 8 bytes */
+	ld      r11, TI_livepatch_sp(r12)
+	addi    r11, r11, 24
+	std     r11, TI_livepatch_sp(r12)
+
+	/* Save toc & real LR on livepatch stack */
+	std     r2,  -24(r11)
+	mflr    r12
+	std     r12, -16(r11)
+
+	/* Store stack end marker */
+	lis     r12, STACK_END_MAGIC@h
+	ori     r12, r12, STACK_END_MAGIC@l
+	std     r12, -8(r11)
+
+	/*
+	 * Stub memory is allocated dynamically, during the module load.
+	 * Load TOC relative address into r11. module_64.c::klp_stub_for_addr()
+	 * identifies the available free stub slot and loads the address into
+	 * r11 with two instructions.
+	 *
+	 * addis r11, r2, stub_address@ha
+	 * addi  r11, r11, stub_address@l
+	 */
+	.global klp_stub_entry
+klp_stub_entry:
+	addis   r11, r2, 0
+	addi    r11, r11, 0
+
+	/* Load r12 with the called function's address from entry->funcdata */
+	ld      r12, 128(r11)
+
+	/* Move r12 into ctr for global entry and branch there */
+	mtctr   r12
+	bctrl
+
+	/*
+	 * Now we are returning to the patched function. We are free to
+	 * use r11, r12 and we can use r2 until we restore it.
+	 */
+	CURRENT_THREAD_INFO(r12, r1)
+
+	ld      r11, TI_livepatch_sp(r12)
+
+	/* Check stack marker hasn't been trashed */
+	lis     r2,  STACK_END_MAGIC@h
+	ori     r2,  r2, STACK_END_MAGIC@l
+	ld      r12, -8(r11)
+2:	tdne    r12, r2
+	EMIT_BUG_ENTRY 2b, __FILE__, __LINE__ - 1, 0
+
+	/* Restore LR & toc from livepatch stack */
+	ld      r12, -16(r11)
+	mtlr    r12
+	ld      r2,  -24(r11)
+
+	/* Pop livepatch stack frame */
+	CURRENT_THREAD_INFO(r12, r1)
+	subi    r11, r11, 24
+	std     r11, TI_livepatch_sp(r12)
+
+	/* Return to original caller of live patched function */
+	blr
+
+	.global klp_stub_insn_end
+klp_stub_insn_end:
 #endif /* CONFIG_LIVEPATCH */
 
 #endif /* CONFIG_DYNAMIC_FTRACE */