
uprobes/x86: emulate push insns for uprobe on x86

Message ID 20171109080155.359718-1-yhs@fb.com
State Not Applicable, archived
Delegated to: David Miller

Commit Message

Yonghong Song Nov. 9, 2017, 8:01 a.m. UTC
Uprobe is a tracing mechanism for userspace programs.
A typical uprobe incurs the overhead of two traps:
the first trap is caused by the breakpoint insn that
replaces the original insn, and the second trap is
taken to execute the original displaced insn out of
line in user space.

To reduce the overhead, the kernel provides hooks
for architectures to emulate the original insn
and skip the second trap. On x86, emulation
is done for certain branch insns.

This patch extends the emulation to "push <reg>"
insns. These insns typically appear at the
beginning of a function. For example, bcc
(https://github.com/iovisor/bcc) provides
tools to measure function latency (funclatency),
detect memory leaks (memleak), etc.
These tools place uprobes at the beginning of
a function and possibly uretprobes at the end.
This patch reduces the trap overhead for
a uprobe from 2 to 1.

Without this patch, a uretprobe typically incurs
three traps. With this patch, if the function starts
with a "push" insn, the number of traps can be
reduced from 3 to 2.

An experiment was conducted on two local VMs, a
fedora 26 64-bit VM and a 32-bit VM, each with 4
processors and 4GB of memory, booted with the latest
x86/urgent (plus this patch). The host is a MacBook
with an Intel i7 processor.

The test program looks like:
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <sys/time.h>

  static void test() __attribute__((noinline));
  void test() {}
  int main() {
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (int i = 0; i < 1000000; i++) {
      test();
    }
    gettimeofday(&end, NULL);

    printf("%ld\n", ((end.tv_sec * 1000000 + end.tv_usec)
                     - (start.tv_sec * 1000000 + start.tv_usec)));
    return 0;
  }

The program is compiled without optimization, and
the first insn of function "test" is "push %rbp".
The host is relatively idle.

Before the test run, the probe is inserted as below. For a uprobe:
  echo 'p <binary>:<test_func_offset>' > /sys/kernel/debug/tracing/uprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable
and for a uretprobe:
  echo 'r <binary>:<test_func_offset>' > /sys/kernel/debug/tracing/uprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable

Unit: microseconds (usec) per loop iteration

x86_64          W/ this patch   W/O this patch
uprobe          1.55            3.1
uretprobe       2.0             3.6

x86_32          W/ this patch   W/O this patch
uprobe          1.41            3.5
uretprobe       1.75            4.0

You can see that this patch significantly reduces the overhead:
50% for uprobe and 44% for uretprobe on x86_64, and even more
on x86_32.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/include/asm/uprobes.h |  10 ++++
 arch/x86/kernel/uprobes.c      | 115 +++++++++++++++++++++++++++++++++++------
 2 files changed, 109 insertions(+), 16 deletions(-)

Changelog:
  . Make commit subject more appropriate

Comments

Oleg Nesterov Nov. 9, 2017, 1:44 p.m. UTC | #1
On 11/09, Yonghong Song wrote:
>
> This patch extends the emulation to "push <reg>"
> insns. These insns typically appear at the
> beginning of a function. For example, bcc
> (https://github.com/iovisor/bcc) provides
> tools to measure function latency (funclatency),
> detect memory leaks (memleak), etc.
> These tools place uprobes at the beginning of
> a function and possibly uretprobes at the end.
> This patch reduces the trap overhead for
> a uprobe from 2 to 1.

OK, but to be honest I do not like the implementation; please see below.

> +enum uprobe_insn_t {
> +	UPROBE_BRANCH_INSN	= 0,
> +	UPROBE_PUSH_INSN	= 1,
> +};
> +
>  struct uprobe_xol_ops;
>
>  struct arch_uprobe {
> @@ -42,6 +47,7 @@ struct arch_uprobe {
>  	};
>
>  	const struct uprobe_xol_ops	*ops;
> +	enum uprobe_insn_t		insn_class;

Why?

I'd suggest leaving branch_xol_ops alone and adding a new push_xol_ops{};
the code will look much simpler.

The only thing they can share is branch_post_xol_op() which is just

	regs->sp += sizeof_long();
	return -ERESTART;

I think a bit of code duplication would be fine in this case.

And. Do you really need the ->post_xol() method to emulate "push"? Why can't
we simply execute it out-of-line if copy_to_user() fails?

branch_post_xol_op() is needed because we can't execute "call" out-of-line,
we need to restart and try again if copy_to_user() fails, but I do not
understand why it is needed to emulate "push".

Oleg.
Oleg Nesterov Nov. 9, 2017, 2:04 p.m. UTC | #2
On 11/09, Oleg Nesterov wrote:
>
> And. Do you really need the ->post_xol() method to emulate "push"? Why can't
> we simply execute it out-of-line if copy_to_user() fails?
>
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
> we need to restart and try again if copy_to_user() fails, but I do not
> understand why it is needed to emulate "push".

If I wasn't clear, please see the comment in branch_clear_offset().

Oleg.
Oleg Nesterov Nov. 9, 2017, 2:47 p.m. UTC | #3
On 11/09, Yonghong Song wrote:
>
> +	if (insn_class == UPROBE_PUSH_INSN) {
> +		src_ptr = get_push_reg_ptr(auprobe, regs);
> +		reg_width = sizeof_long();
> +		sp = regs->sp;
> +		if (copy_to_user((void __user *)(sp - reg_width), src_ptr, reg_width))
> +			return false;
> +
> +		regs->sp = sp - reg_width;
> +		regs->ip += 1 + (auprobe->push.rex_prefix != 0);
> +		return true;

Another nit... You can rename push_ret_address() and use it here

		src_ptr = ...;
		if (push_ret_address(regs, *src_ptr))
			return false;

		regs->ip += ...;
		return true;

and I think get_push_reg_ptr() should just return "unsigned long", not the
pointer.

And again, please make a separate method for this code. Let me repeat: the
main reason for branch_xol_ops/etc is that we simply cannot execute these
insns out-of-line; we have to emulate them. "push" differs: the only reason
why we may want to emulate it is optimization.

Oleg.
Yonghong Song Nov. 9, 2017, 9:53 p.m. UTC | #4
On 11/9/17 5:44 AM, Oleg Nesterov wrote:
> On 11/09, Yonghong Song wrote:
>>
>> This patch extends the emulation to "push <reg>"
>> insns. These insns typically appear at the
>> beginning of a function. For example, bcc
>> (https://github.com/iovisor/bcc) provides
>> tools to measure function latency (funclatency),
>> detect memory leaks (memleak), etc.
>> These tools place uprobes at the beginning of
>> a function and possibly uretprobes at the end.
>> This patch reduces the trap overhead for
>> a uprobe from 2 to 1.
> 
> OK, but to be honest I do not like the implementation; please see below.
> 
>> +enum uprobe_insn_t {
>> +	UPROBE_BRANCH_INSN	= 0,
>> +	UPROBE_PUSH_INSN	= 1,
>> +};
>> +
>>   struct uprobe_xol_ops;
>>
>>   struct arch_uprobe {
>> @@ -42,6 +47,7 @@ struct arch_uprobe {
>>   	};
>>
>>   	const struct uprobe_xol_ops	*ops;
>> +	enum uprobe_insn_t		insn_class;
> 
> Why?
> 
> I'd suggest leaving branch_xol_ops alone and adding a new push_xol_ops{};
> the code will look much simpler.
> 
> The only thing they can share is branch_post_xol_op() which is just
> 
> 	regs->sp += sizeof_long();
> 	return -ERESTART;
> 
> I think a bit of code duplication would be fine in this case.

Just prototyped. Agreed, having a separate uprobe_xol_ops for "push"
emulation is cleaner and better.

> 
> And. Do you really need the ->post_xol() method to emulate "push"? Why can't
> we simply execute it out-of-line if copy_to_user() fails?

Thanks for pointing it out. Agreed, we do not really need post_xol for
"push"; xol execution is just fine.

Will address your other comments as well in the next revision.

> 
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
> we need to restart and try again if copy_to_user() fails, but I do not
> understand why it is needed to emulate "push".
> 
> Oleg.
>

Patch

diff --git a/arch/x86/include/asm/uprobes.h b/arch/x86/include/asm/uprobes.h
index 74f4c2f..f9d2b43 100644
--- a/arch/x86/include/asm/uprobes.h
+++ b/arch/x86/include/asm/uprobes.h
@@ -33,6 +33,11 @@  typedef u8 uprobe_opcode_t;
 #define UPROBE_SWBP_INSN		0xcc
 #define UPROBE_SWBP_INSN_SIZE		   1
 
+enum uprobe_insn_t {
+	UPROBE_BRANCH_INSN	= 0,
+	UPROBE_PUSH_INSN	= 1,
+};
+
 struct uprobe_xol_ops;
 
 struct arch_uprobe {
@@ -42,6 +47,7 @@  struct arch_uprobe {
 	};
 
 	const struct uprobe_xol_ops	*ops;
+	enum uprobe_insn_t		insn_class;
 
 	union {
 		struct {
@@ -53,6 +59,10 @@  struct arch_uprobe {
 			u8	fixups;
 			u8	ilen;
 		} 			defparam;
+		struct {
+			u8	rex_prefix;
+			u8	opc1;
+		}			push;
 	};
 };
 
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index a3755d2..5ace65c 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -640,11 +640,71 @@  static bool check_jmp_cond(struct arch_uprobe *auprobe, struct pt_regs *regs)
 #undef	COND
 #undef	CASE_COND
 
-static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
+static unsigned long *get_push_reg_ptr(struct arch_uprobe *auprobe,
+				       struct pt_regs *regs)
 {
-	unsigned long new_ip = regs->ip += auprobe->branch.ilen;
-	unsigned long offs = (long)auprobe->branch.offs;
+#if defined(CONFIG_X86_64)
+	switch (auprobe->push.opc1) {
+	case 0x50:
+		return auprobe->push.rex_prefix ? &regs->r8 : &regs->ax;
+	case 0x51:
+		return auprobe->push.rex_prefix ? &regs->r9 : &regs->cx;
+	case 0x52:
+		return auprobe->push.rex_prefix ? &regs->r10 : &regs->dx;
+	case 0x53:
+		return auprobe->push.rex_prefix ? &regs->r11 : &regs->bx;
+	case 0x54:
+		return auprobe->push.rex_prefix ? &regs->r12 : &regs->sp;
+	case 0x55:
+		return auprobe->push.rex_prefix ? &regs->r13 : &regs->bp;
+	case 0x56:
+		return auprobe->push.rex_prefix ? &regs->r14 : &regs->si;
+	}
+
+	/* opc1 0x57 */
+	return auprobe->push.rex_prefix ? &regs->r15 : &regs->di;
+#else
+	switch (auprobe->push.opc1) {
+	case 0x50:
+		return &regs->ax;
+	case 0x51:
+		return &regs->cx;
+	case 0x52:
+		return &regs->dx;
+	case 0x53:
+		return &regs->bx;
+	case 0x54:
+		return &regs->sp;
+	case 0x55:
+		return &regs->bp;
+	case 0x56:
+		return &regs->si;
+	}
 
+	/* opc1 0x57 */
+	return &regs->di;
+#endif
+}
+
+static bool sstep_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	int reg_width, insn_class = auprobe->insn_class;
+	unsigned long *src_ptr, new_ip, offs, sp;
+
+	if (insn_class == UPROBE_PUSH_INSN) {
+		src_ptr = get_push_reg_ptr(auprobe, regs);
+		reg_width = sizeof_long();
+		sp = regs->sp;
+		if (copy_to_user((void __user *)(sp - reg_width), src_ptr, reg_width))
+			return false;
+
+		regs->sp = sp - reg_width;
+		regs->ip += 1 + (auprobe->push.rex_prefix != 0);
+		return true;
+	}
+
+	new_ip = regs->ip += auprobe->branch.ilen;
+	offs = (long)auprobe->branch.offs;
 	if (branch_is_call(auprobe)) {
 		/*
 		 * If it fails we execute this (mangled, see the comment in
@@ -665,14 +725,18 @@  static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 	return true;
 }
 
-static int branch_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
+static int sstep_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 {
-	BUG_ON(!branch_is_call(auprobe));
+	BUG_ON(auprobe->insn_class != UPROBE_PUSH_INSN &&
+	       !branch_is_call(auprobe));
 	/*
-	 * We can only get here if branch_emulate_op() failed to push the ret
-	 * address _and_ another thread expanded our stack before the (mangled)
-	 * "call" insn was executed out-of-line. Just restore ->sp and restart.
-	 * We could also restore ->ip and try to call branch_emulate_op() again.
+	 * We can only get here if
+	 * - for push operation, sstep_emulate_op() failed to push the stack, or
+	 * - for branch operation, sstep_emulate_op() failed to push the ret address
+	 *   _and_ another thread expanded our stack before the (mangled)
+	 *   "call" insn was executed out-of-line.
+	 * Just restore ->sp and restart. We could also restore ->ip and try to
+	 * call sstep_emulate_op() again.
 	 */
 	regs->sp += sizeof_long();
 	return -ERESTART;
@@ -698,17 +762,18 @@  static void branch_clear_offset(struct arch_uprobe *auprobe, struct insn *insn)
 		0, insn->immediate.nbytes);
 }
 
-static const struct uprobe_xol_ops branch_xol_ops = {
-	.emulate  = branch_emulate_op,
-	.post_xol = branch_post_xol_op,
+static const struct uprobe_xol_ops sstep_xol_ops = {
+	.emulate  = sstep_emulate_op,
+	.post_xol = sstep_post_xol_op,
 };
 
-/* Returns -ENOSYS if branch_xol_ops doesn't handle this insn */
-static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+/* Returns -ENOSYS if sstep_xol_ops doesn't handle this insn */
+static int sstep_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 {
 	u8 opc1 = OPCODE1(insn);
 	int i;
 
+	auprobe->insn_class = UPROBE_BRANCH_INSN;
 	switch (opc1) {
 	case 0xeb:	/* jmp 8 */
 	case 0xe9:	/* jmp 32 */
@@ -719,6 +784,23 @@  static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 		branch_clear_offset(auprobe, insn);
 		break;
 
+	case 0x50 ... 0x57:
+		if (insn->length > 2)
+			return -ENOSYS;
+		if (insn->length == 2) {
+			/* only support rex_prefix 0x41 (x64 only) */
+			if (insn->rex_prefix.nbytes != 1 ||
+			    insn->rex_prefix.bytes[0] != 0x41)
+				return -ENOSYS;
+			auprobe->push.rex_prefix = 0x41;
+		} else {
+			auprobe->push.rex_prefix = 0;
+		}
+
+		auprobe->insn_class = UPROBE_PUSH_INSN;
+		auprobe->push.opc1 = opc1;
+		goto set_ops;
+
 	case 0x0f:
 		if (insn->opcode.nbytes != 2)
 			return -ENOSYS;
@@ -746,7 +828,8 @@  static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 	auprobe->branch.ilen = insn->length;
 	auprobe->branch.offs = insn->immediate.value;
 
-	auprobe->ops = &branch_xol_ops;
+set_ops:
+	auprobe->ops = &sstep_xol_ops;
 	return 0;
 }
 
@@ -767,7 +850,7 @@  int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (ret)
 		return ret;
 
-	ret = branch_setup_xol_ops(auprobe, &insn);
+	ret = sstep_setup_xol_ops(auprobe, &insn);
 	if (ret != -ENOSYS)
 		return ret;