diff mbox series

[bpf] bpf: do not modify min/max bounds on scalars with constant values

Message ID 20180112192345.3589-1-daniel@iogearbox.net
State Changes Requested, archived
Delegated to: BPF Maintainers
Headers show
Series [bpf] bpf: do not modify min/max bounds on scalars with constant values | expand

Commit Message

Daniel Borkmann Jan. 12, 2018, 7:23 p.m. UTC
syzkaller generated a BPF proglet and triggered a warning with
the following:

  0: (b7) r0 = 0
  1: (d5) if r0 s<= 0x0 goto pc+0
   R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
  2: (1f) r0 -= r1
   R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
  verifier internal error: known but bad sbounds

What happens is that in the first insn, r0's min/max values are
both 0 due to the immediate assignment. Later, in the jsle test,
the bounds are updated for the min value in the false path,
meaning they yield smin_val = 1, smax_val = 0, and when the ctx
pointer is subtracted from r0, the verifier bails out with the
internal error and throws a WARN since smin_val != smax_val
for the known constant.

The fix is to not update any bounds of the register holding the
constant; thus in reg_set_min_max() and reg_set_min_max_inv()
we return right away, similarly to the case when the dst
register holds a pointer value. The reason for doing so is
straightforward: when we have a register holding a constant
as dst, then {s,u}min_val == {s,u}max_val, thus it cannot get
any more specific in terms of upper/lower bounds than this.

In reg_set_min_max() and reg_set_min_max_inv() it's fine to
only check false_reg, similarly to the __is_pointer_value()
check, since at this point in time false_reg and true_reg
both hold the same state, so we only need to pick one. This
fixes the bug and properly rejects the program with an error
of 'R0 tried to subtract pointer from scalar'.

I've been considering additionally rejecting arithmetic on the
ctx pointer in adjust_ptr_min_max_vals() right upfront as well,
since we reject the actual access in such a case later on anyway,
but there's a use case in tracing (in bcc) in combination
with passing such a ctx to bpf_probe_read(), so we cannot do
that part.

Reported-by: syzbot+6d362cadd45dc0a12ba4@syzkaller.appspotmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
---
 kernel/bpf/verifier.c                       | 11 ++++
 tools/testing/selftests/bpf/test_verifier.c | 95 +++++++++++++++++++++++++++++
 2 files changed, 106 insertions(+)

Comments

Edward Cree Jan. 12, 2018, 7:52 p.m. UTC | #1
On 12/01/18 19:23, Daniel Borkmann wrote:
> syzkaller generated a BPF proglet and triggered a warning with
> the following:
>
>   0: (b7) r0 = 0
>   1: (d5) if r0 s<= 0x0 goto pc+0
>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>   2: (1f) r0 -= r1
>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>   verifier internal error: known but bad sbounds
>
> What happens is that in the first insn, r0's min/max value are
> both 0 due to the immediate assignment, later in the jsle test
> the bounds are updated for the min value in the false path,
> meaning, they yield smin_val = 1 smax_val = 0,
reg_set_min_max() refines the existing bounds, it doesn't replace
 them, so all that's happened is that the jsle handling has
 demonstrated that this branch can't be taken.
That AFAICT isn't confined to known constants, one could e.g.
 obtain inconsistent bounds with two js* insns.  Updating the
 bounds in reg_set_min_max() is right, it's where we try to use
 those sbounds in adjust_ptr_min_max_vals() that's wrong imho;
 instead the 'known' paths should be using off_reg->var_off.value
 rather than smin_val everywhere.

Alternatively we could consider not following jumps/lack-thereof
 that produce inconsistent bounds, but that can make insns
 unreachable that previously weren't and thus reject programs
 that we previously considered valid, so we probably can't get
 away with that.

-Ed
Daniel Borkmann Jan. 13, 2018, 1:01 a.m. UTC | #2
On 01/12/2018 08:52 PM, Edward Cree wrote:
> On 12/01/18 19:23, Daniel Borkmann wrote:
>> syzkaller generated a BPF proglet and triggered a warning with
>> the following:
>>
>>   0: (b7) r0 = 0
>>   1: (d5) if r0 s<= 0x0 goto pc+0
>>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>>   2: (1f) r0 -= r1
>>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>>   verifier internal error: known but bad sbounds
>>
>> What happens is that in the first insn, r0's min/max value are
>> both 0 due to the immediate assignment, later in the jsle test
>> the bounds are updated for the min value in the false path,
>> meaning, they yield smin_val = 1 smax_val = 0,
> reg_set_min_max() refines the existing bounds, it doesn't replace
>  them, so all that's happened is that the jsle handling has
>  demonstrated that this branch can't be taken.

Correct. On the one hand one could argue that for known constants
it cannot get any more refined than this already, since we know
the precise constant value the register holds (thus we make the
range worse in this case fwiw); on the other hand, why should they
be treated any differently than registers with unknown scalars?
True, smin > smax means the branch won't be taken at runtime.

> That AFAICT isn't confined to known constants, one could e.g.
>  obtain inconsistent bounds with two js* insns.  Updating the
>  bounds in reg_set_min_max() is right, it's where we try to use
>  those sbounds in adjust_ptr_min_max_vals() that's wrong imho;
>  instead the 'known' paths should be using off_reg->var_off.value
>  rather than smin_val everywhere.

Ok, I'll check further on the side-effects and evaluate this
option of using var_off.value for a v2 as well, thanks!

> Alternatively we could consider not following jumps/lack-thereof
>  that produce inconsistent bounds, but that can make insns
>  unreachable that previously weren't and thus reject programs
>  that we previously considered valid, so we probably can't get
>  away with that.

Agree.

Thanks,
Daniel
Y Song Jan. 15, 2018, 6:38 a.m. UTC | #3
On Fri, Jan 12, 2018 at 11:23 AM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> syzkaller generated a BPF proglet and triggered a warning with
> the following:
>
>   0: (b7) r0 = 0
>   1: (d5) if r0 s<= 0x0 goto pc+0
>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>   2: (1f) r0 -= r1
>    R0=inv0 R1=ctx(id=0,off=0,imm=0) R10=fp0
>   verifier internal error: known but bad sbounds
>
> What happens is that in the first insn, r0's min/max value are
> both 0 due to the immediate assignment, later in the jsle test
> the bounds are updated for the min value in the false path,
> meaning, they yield smin_val = 1 smax_val = 0, and when ctx
> pointer is subtracted from r0, verifier bails out with the
> internal error and throwing a WARN since smin_val != smax_val
> for the known constant.
>
> Fix is to not update any bounds of the register holding the
> constant, thus in reg_set_min_max() and reg_set_min_max_inv()
> we return right away, similarly as in the case when the dst
> register holds a pointer value. Reason for doing so is rather
> straight forward: when we have a register holding a constant
> as dst, then {s,u}min_val == {s,u}max_val, thus it cannot get
> any more specific in terms of upper/lower bounds than this.
>
> In reg_set_min_max() and reg_set_min_max_inv() it's fine to
> only check false_reg similarly as with __is_pointer_value()
> check since at this point in time, false_reg and true_reg
> both hold the same state, so we only need to pick one. This
> fixes the bug and properly rejects the program with an error
> of 'R0 tried to subtract pointer from scalar'.
>
> I've been thinking to additionally reject arithmetic on ctx
> pointer in adjust_ptr_min_max_vals() right upfront as well
> since we reject actual access in such case later on anyway,
> but there's a use case in tracing (in bcc) in combination
> with passing such ctx to bpf_probe_read(), so we cannot do
> that part.

There is a reason why bcc does this. For example, suppose that we want
to trace the kernel tracepoint sched_process_exec:

TRACE_EVENT(sched_process_exec,

        TP_PROTO(struct task_struct *p, pid_t old_pid,
                 struct linux_binprm *bprm),

        TP_ARGS(p, old_pid, bprm),

        TP_STRUCT__entry(
                __string(       filename,       bprm->filename  )
                __field(        pid_t,          pid             )
                __field(        pid_t,          old_pid         )
        ),

        TP_fast_assign(
                __assign_str(filename, bprm->filename);
                __entry->pid            = p->pid;
                __entry->old_pid        = old_pid;
        ),

        TP_printk("filename=%s pid=%d old_pid=%d", __get_str(filename),
                  __entry->pid, __entry->old_pid)
);

This eventually yields the structure described at
/sys/kernel/debug/tracing/events/sched/sched_process_exec/format:
......
        field:__data_loc char[] filename;       offset:8;       size:4; signed:1;
        field:pid_t pid;        offset:12;      size:4; signed:1;
        field:pid_t old_pid;    offset:16;      size:4; signed:1;

and the "__data_loc filename" value encodes the offset in the record
where the "filename" data is stored.

Therefore, in bcc, the access sequence is:
    offset = args->filename;  /* field __data_loc filename */
    bpf_probe_read(&dst, len, (char *)args + offset);

For this kind of dynamic array in the tracepoint, the offset used to
access a certain field in ctx will be unknown at verification time.

So I suggest removing the above paragraph regarding the potential
ctx+offset rejection.

>
> Reported-by: syzbot+6d362cadd45dc0a12ba4@syzkaller.appspotmail.com
> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
> ---
>  kernel/bpf/verifier.c                       | 11 ++++
>  tools/testing/selftests/bpf/test_verifier.c | 95 +++++++++++++++++++++++++++++
>  2 files changed, 106 insertions(+)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b414d6b..6bf1275 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2617,6 +2617,14 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
>          */
>         if (__is_pointer_value(false, false_reg))
>                 return;
> +       /* Same goes for constant values. They have {s,u}min == {s,u}max,
> +        * it cannot get narrower than this. Same here, at the branch
> +        * point false_reg and true_reg have the same type, so we only
> +        * check false_reg here as well.
> +        */
> +       if (false_reg->type == SCALAR_VALUE &&
> +           tnum_is_const(false_reg->var_off))
> +               return;
>
>         switch (opcode) {
>         case BPF_JEQ:
> @@ -2689,6 +2697,9 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
>  {
>         if (__is_pointer_value(false, false_reg))
>                 return;
> +       if (false_reg->type == SCALAR_VALUE &&
> +           tnum_is_const(false_reg->var_off))
> +               return;
>
>         switch (opcode) {
>         case BPF_JEQ:
> diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
> index b510174..162d497 100644
> --- a/tools/testing/selftests/bpf/test_verifier.c
> +++ b/tools/testing/selftests/bpf/test_verifier.c
> @@ -8569,6 +8569,101 @@ static struct bpf_test tests[] = {
>                 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
>         },
>         {
> +               "check deducing bounds from const, 1",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "R0 tried to subtract pointer from scalar",
> +       },
> +       {
> +               "check deducing bounds from const, 2",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "R0 tried to subtract pointer from scalar",
> +       },
> +       {
> +               "check deducing bounds from const, 3",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
> +                       BPF_EXIT_INSN(),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "R0 tried to subtract pointer from scalar",
> +       },
> +       {
> +               "check deducing bounds from const, 4",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, ~0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
> +                       BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
> +                                   offsetof(struct __sk_buff, mark)),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "dereference of modified ctx ptr",
> +       },
> +       {
> +               "check deducing bounds from const, 5",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, ~0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
> +                       BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
> +                       BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
> +                                   offsetof(struct __sk_buff, mark)),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "dereference of modified ctx ptr",
> +       },
> +       {
> +               "check deducing bounds from const, 6",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "R0 tried to subtract pointer from scalar",
> +       },
> +       {
> +               "check deducing bounds from const, 7",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "R0 tried to subtract pointer from scalar",
> +       },
> +       {
> +               "check deducing bounds from const, 8",
> +               .insns = {
> +                       BPF_MOV64_IMM(BPF_REG_0, 0),
> +                       BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
> +                       /* Marks reg as unknown. */
> +                       BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0),
> +                       BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
> +                       BPF_EXIT_INSN(),
> +               },
> +               .result = REJECT,
> +               .errstr = "math between ctx pointer and register with unbounded min value is not allowed",
> +       },
> +       {
>                 "bpf_exit with invalid return code. test1",
>                 .insns = {
>                         BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
> --
> 2.9.5
>
Daniel Borkmann Jan. 15, 2018, 10:40 a.m. UTC | #4
On 01/15/2018 07:38 AM, Y Song wrote:
> On Fri, Jan 12, 2018 at 11:23 AM, Daniel Borkmann <daniel@iogearbox.net> wrote:
[...]
>>
>> I've been thinking to additionally reject arithmetic on ctx
>> pointer in adjust_ptr_min_max_vals() right upfront as well
>> since we reject actual access in such case later on anyway,
>> but there's a use case in tracing (in bcc) in combination
>> with passing such ctx to bpf_probe_read(), so we cannot do
>> that part.
> 
> There is a reason why bcc does this. For example, suppose that we want to
> trace kernel tracepoint, sched_process_exec,
> 
> TRACE_EVENT(sched_process_exec,
> 
>         TP_PROTO(struct task_struct *p, pid_t old_pid,
>                  struct linux_binprm *bprm),
> 
>         TP_ARGS(p, old_pid, bprm),
> 
>         TP_STRUCT__entry(
>                 __string(       filename,       bprm->filename  )
>                 __field(        pid_t,          pid             )
>                 __field(        pid_t,          old_pid         )
>         ),
> 
>         TP_fast_assign(
>                 __assign_str(filename, bprm->filename);
>                 __entry->pid            = p->pid;
>                 __entry->old_pid        = old_pid;
>         ),
> 
>         TP_printk("filename=%s pid=%d old_pid=%d", __get_str(filename),
>                   __entry->pid, __entry->old_pid)
> );
> 
> Eventually structure at
> /sys/kernel/debug/tracing/event/sched/sched_process_exec/format:
> ......
>         field:__data_loc char[] filename;       offset:8;
> size:4; signed:1;
>         field:pid_t pid;        offset:12;      size:4; signed:1;
>         field:pid_t old_pid;    offset:16;      size:4; signed:1;
> 
> and "data_loc filename" is the offset in the structure where
> "filename" is stored.
> 
> Therefore, in bcc, the access sequence is:
>     offset = args->filename;  /* field __data_loc filename */
>     bpf_probe_read(&dst, len, (char *)args + offset);
> 
> For this kind of dynamic array in the tracepoint, the offset to access
> certain field in ctx will be unknown at verification time.

Right, that is exactly what I said in the above paragraph.

> So I suggest to remove the above paragraph regarding to potential ctx+offset
> rejection.

I'm confused; I mentioned we cannot reject it exactly because of this
use case. I thought it's worth having it in the log for future
reference so we don't forget about it, since it's not too obvious.

Cheers,
Daniel
Y Song Jan. 15, 2018, 5:33 p.m. UTC | #5
On Mon, Jan 15, 2018 at 2:40 AM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> On 01/15/2018 07:38 AM, Y Song wrote:
>> On Fri, Jan 12, 2018 at 11:23 AM, Daniel Borkmann <daniel@iogearbox.net> wrote:
> [...]
>>>
>>> I've been thinking to additionally reject arithmetic on ctx
>>> pointer in adjust_ptr_min_max_vals() right upfront as well
>>> since we reject actual access in such case later on anyway,
>>> but there's a use case in tracing (in bcc) in combination
>>> with passing such ctx to bpf_probe_read(), so we cannot do
>>> that part.
>>
>> There is a reason why bcc does this. For example, suppose that we want to
>> trace kernel tracepoint, sched_process_exec,
>>
>> TRACE_EVENT(sched_process_exec,
>>
>>         TP_PROTO(struct task_struct *p, pid_t old_pid,
>>                  struct linux_binprm *bprm),
>>
>>         TP_ARGS(p, old_pid, bprm),
>>
>>         TP_STRUCT__entry(
>>                 __string(       filename,       bprm->filename  )
>>                 __field(        pid_t,          pid             )
>>                 __field(        pid_t,          old_pid         )
>>         ),
>>
>>         TP_fast_assign(
>>                 __assign_str(filename, bprm->filename);
>>                 __entry->pid            = p->pid;
>>                 __entry->old_pid        = old_pid;
>>         ),
>>
>>         TP_printk("filename=%s pid=%d old_pid=%d", __get_str(filename),
>>                   __entry->pid, __entry->old_pid)
>> );
>>
>> Eventually structure at
>> /sys/kernel/debug/tracing/event/sched/sched_process_exec/format:
>> ......
>>         field:__data_loc char[] filename;       offset:8;
>> size:4; signed:1;
>>         field:pid_t pid;        offset:12;      size:4; signed:1;
>>         field:pid_t old_pid;    offset:16;      size:4; signed:1;
>>
>> and "data_loc filename" is the offset in the structure where
>> "filename" is stored.
>>
>> Therefore, in bcc, the access sequence is:
>>     offset = args->filename;  /* field __data_loc filename */
>>     bpf_probe_read(&dst, len, (char *)args + offset);
>>
>> For this kind of dynamic array in the tracepoint, the offset to access
>> certain field in ctx will be unknown at verification time.
>
> Right, that is exactly what I said in above paragraph.
>
>> So I suggest to remove the above paragraph regarding to potential ctx+offset
>> rejection.
>
> I'm confused, I mentioned we cannot reject exactly because of this
> use-case, I thought it's worth having it in the log for future
> reference so we don't forget about it since it's not too obvious.

I thought it was irrelevant to the main subject of the patch (e.g., dealing with
constant min/max, etc.). But I agree that having it in the log for reference is
better since it is not too obvious.

Could you then describe the "use case in bcc" part in a little more detail?
Otherwise, it may not be really clear to some people.

Sorry for causing your confusion!

Thanks,

>
> Cheers,
> Daniel

Patch

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b414d6b..6bf1275 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2617,6 +2617,14 @@  static void reg_set_min_max(struct bpf_reg_state *true_reg,
 	 */
 	if (__is_pointer_value(false, false_reg))
 		return;
+	/* Same goes for constant values. They have {s,u}min == {s,u}max,
+	 * it cannot get narrower than this. Same here, at the branch
+	 * point false_reg and true_reg have the same type, so we only
+	 * check false_reg here as well.
+	 */
+	if (false_reg->type == SCALAR_VALUE &&
+	    tnum_is_const(false_reg->var_off))
+		return;
 
 	switch (opcode) {
 	case BPF_JEQ:
@@ -2689,6 +2697,9 @@  static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
 {
 	if (__is_pointer_value(false, false_reg))
 		return;
+	if (false_reg->type == SCALAR_VALUE &&
+	    tnum_is_const(false_reg->var_off))
+		return;
 
 	switch (opcode) {
 	case BPF_JEQ:
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index b510174..162d497 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -8569,6 +8569,101 @@  static struct bpf_test tests[] = {
 		.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
 	},
 	{
+		"check deducing bounds from const, 1",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "R0 tried to subtract pointer from scalar",
+	},
+	{
+		"check deducing bounds from const, 2",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "R0 tried to subtract pointer from scalar",
+	},
+	{
+		"check deducing bounds from const, 3",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+			BPF_EXIT_INSN(),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "R0 tried to subtract pointer from scalar",
+	},
+	{
+		"check deducing bounds from const, 4",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, ~0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "dereference of modified ctx ptr",
+	},
+	{
+		"check deducing bounds from const, 5",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, ~0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+			BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
+			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
+				    offsetof(struct __sk_buff, mark)),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "dereference of modified ctx ptr",
+	},
+	{
+		"check deducing bounds from const, 6",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "R0 tried to subtract pointer from scalar",
+	},
+	{
+		"check deducing bounds from const, 7",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "R0 tried to subtract pointer from scalar",
+	},
+	{
+		"check deducing bounds from const, 8",
+		.insns = {
+			BPF_MOV64_IMM(BPF_REG_0, 0),
+			BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
+			/* Marks reg as unknown. */
+			BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0),
+			BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
+			BPF_EXIT_INSN(),
+		},
+		.result = REJECT,
+		.errstr = "math between ctx pointer and register with unbounded min value is not allowed",
+	},
+	{
 		"bpf_exit with invalid return code. test1",
 		.insns = {
 			BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),