Message ID | 20180113025952.3451758-1-ast@kernel.org
---|---
State | Accepted, archived |
Delegated to: | BPF Maintainers |
Series | [bpf] bpf: fix 32-bit divide by zero
On Fri, Jan 12, 2018 at 06:59:52PM -0800, Alexei Starovoitov wrote:
> due to some JITs doing if (src_reg == 0) check in 64-bit mode
> for div/mod operations mask upper 32-bits of src register
> before doing the check
>
> Fixes: 622582786c9e ("net: filter: x86: internal BPF JIT")
> Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
> Reported-by: syzbot+48340bb518e88849e2e3@syzkaller.appspotmail.com
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
> arm64 jit seems to be ok
> haven't analyzed other JITs

s390 looks ok
mips64 looks buggy
arm32 ebpf jit doesn't have if src == 0 check
powerpc looks ok

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 20eb04fd155e..b7448347e6b6 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4445,6 +4445,24 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
>  	int i, cnt, delta = 0;
>
>  	for (i = 0; i < insn_cnt; i++, insn++) {
> +		if (insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
> +		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
> +			/* due to JIT bugs clear upper 32-bits of src register
> +			 * before div/mod operation
> +			 */
> +			insn_buf[0] = BPF_MOV32_REG(insn->src_reg, insn->src_reg);
> +			insn_buf[1] = *insn;

Long term such a mask allows us to insert 'if (src_reg == 0) goto error'
in the verifier and remove the corresponding branches from the JITs.
Without the mask such a comparison is not possible, because eBPF doesn't
have 32-bit compare-and-jump instructions. Furthermore the verifier
tracks values in the registers and in many cases knows that src_reg
cannot be zero, so insertion of the 'if (src_reg == 0)' safety check can
be made conditional.
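The failure mode the commit message describes can be modelled in plain C. The sketch below is an illustration of the JIT behaviour, not kernel code: a 64-bit zero test on the whole src register misses a divisor whose low 32 bits are zero but whose upper bits are set, while the BPF_MOV32_REG mask makes the check and the division agree.

```c
#include <stdint.h>

/* Model of a buggy JIT: the zero test looks at all 64 bits of the
 * source register, but a 32-bit BPF_DIV only consumes the low 32 bits.
 * A src_reg like 0x100000000 passes the check and then divides by 0. */
static uint32_t div32_buggy_jit(uint32_t dst, uint64_t src_reg)
{
	if (src_reg == 0)			/* 64-bit check */
		return 0;
	return dst / (uint32_t)src_reg;		/* may still divide by zero */
}

/* Model of the fixed behaviour: BPF_MOV32_REG(src, src) zero-extends
 * the low 32 bits, so the 64-bit check matches the 32-bit divisor. */
static uint32_t div32_fixed(uint32_t dst, uint64_t src_reg)
{
	src_reg = (uint32_t)src_reg;		/* mask upper 32 bits */
	if (src_reg == 0)
		return 0;
	return dst / (uint32_t)src_reg;
}
```

With src_reg = 1ULL << 32, div32_buggy_jit would execute a real divide by zero, while div32_fixed takes the guarded path and returns 0.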
> diff --git a/net/core/filter.c b/net/core/filter.c
> index d339ef170df6..1c0eb436671f 100644
> --- a/net/core/filter.c
> +++ b/net/core/filter.c
> @@ -458,6 +458,10 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
>  			    convert_bpf_extensions(fp, &insn))
>  				break;
>
> +			if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
> +			    fp->code == (BPF_ALU | BPF_MOD | BPF_X))
> +				*insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
> +
>  			*insn = BPF_RAW_INSN(fp->code, BPF_REG_A, BPF_REG_X, 0, fp->k);

This hunk is not strictly necessary, since in the classic->extended
conversion all operations are 32-bit and BPF_REG_X will have its upper
32 bits cleared before div/mod, so buggy JITs will be fine. But div/mod
are slow anyway and the extra BPF_MOV32_REG won't hurt performance, so I
prefer to keep this hunk to have fewer things to worry about.
On 01/13/2018 03:59 AM, Alexei Starovoitov wrote:
> due to some JITs doing if (src_reg == 0) check in 64-bit mode
> for div/mod operations mask upper 32-bits of src register
> before doing the check
>
> Fixes: 622582786c9e ("net: filter: x86: internal BPF JIT")
> Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
> Reported-by: syzbot+48340bb518e88849e2e3@syzkaller.appspotmail.com
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Applied to bpf as well, thanks Alexei!
On Fri, 2018-01-12 at 18:59 -0800, Alexei Starovoitov wrote:
> due to some JITs doing if (src_reg == 0) check in 64-bit mode
> for div/mod operations mask upper 32-bits of src register
> before doing the check

Is the plan to fix the JITs, and if they can all be fixed,
revert this patch?

The x86 patch would be something like:

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 87f214fbe66ec163d24b12b6defc7edab612ecc9..91e4ab69573e09f793eb1c1e29d1b5ffad1d5dc7 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -548,8 +548,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		if (BPF_SRC(insn->code) == BPF_X) {
 			/* if (src_reg == 0) return 0 */
 
-			/* cmp r11, 0 */
-			EMIT4(0x49, 0x83, 0xFB, 0x00);
+			if (BPF_CLASS(insn->code) == BPF_ALU64) {
+				/* cmp r11, 0 */
+				EMIT4(0x49, 0x83, 0xFB, 0x00);
+			} else {
+				/* cmp r11d, 0 */
+				EMIT4(0x41, 0x83, 0xFB, 0x00);
+			}
 
 			/* jne .+9 (skip over pop, pop, xor and jmp) */
 			EMIT2(X86_JNE, 1 + 1 + 2 + 5);
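The only difference between the two EMIT4 sequences in Eric's sketch is the REX prefix byte: 0x49 sets REX.W (64-bit operand size) plus REX.B (to reach r11), while 0x41 sets REX.B alone, so the same cmp opcode operates on the 32-bit r11d. A tiny decoder for the relevant REX bits (illustrative helper names, not kernel code):

```c
#include <stdint.h>

/* An x86-64 REX prefix has the form 0100WRXB. W selects 64-bit operand
 * size; B extends the ModRM.rm field so r8-r15 (here r11) are reachable. */
static int rex_w(uint8_t rex) { return (rex >> 3) & 1; }
static int rex_b(uint8_t rex) { return rex & 1; }
```

Decoding the two prefixes shows 0x49 and 0x41 differ only in the W bit, which is exactly the 64-bit vs 32-bit comparison width distinction the patch introduces.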
On 1/18/18 2:30 PM, Eric Dumazet wrote:
> On Fri, 2018-01-12 at 18:59 -0800, Alexei Starovoitov wrote:
>> due to some JITs doing if (src_reg == 0) check in 64-bit mode
>> for div/mod operations mask upper 32-bits of src register
>> before doing the check
>
> Is the plan to fix the JITs, and if they can all be fixed,
> revert this patch?

No need to fix the JITs. I'd rather remove 'cmp rX, 0' from the JITs
and let the verifier emit it. It's more generic and gives us the
flexibility to decide what to do with divide by zero and other
exceptions.
On 01/18/2018 11:40 PM, Alexei Starovoitov wrote:
> On 1/18/18 2:30 PM, Eric Dumazet wrote:
>> On Fri, 2018-01-12 at 18:59 -0800, Alexei Starovoitov wrote:
>>> due to some JITs doing if (src_reg == 0) check in 64-bit mode
>>> for div/mod operations mask upper 32-bits of src register
>>> before doing the check
>>
>> Is the plan to fix the JITs, and if they can all be fixed,
>> revert this patch?
>
> No need to fix the JITs. I'd rather remove 'cmp rX, 0' from the JITs
> and let the verifier emit it. It's more generic and gives us the
> flexibility to decide what to do with divide by zero and other
> exceptions.

Yeah, working on this, but the patch is most likely for the next
window. The 'return 0' is really suboptimal as an exception code for
some program types, so I'd like struct bpf_verifier_ops to define the
exception return code to be used, so that the verifier can apply it via
BPF insns directly.
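Daniel's point can be sketched in a few lines: instead of hard-coding 'return 0' on a divide by zero, the fault path would return a value chosen per program type (e.g. a drop verdict for networking programs). The function and parameter names below are hypothetical, purely to illustrate the idea:

```c
#include <stdint.h>

/* Hypothetical sketch: the exception return value for a faulting
 * div/mod comes from the program type instead of a hard-coded 0. */
static uint64_t div_or_exception(uint64_t dst, uint64_t src,
				 uint64_t exception_retval)
{
	if (src == 0)
		return exception_retval;	/* per-program-type exit code */
	return dst / src;
}
```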
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20eb04fd155e..b7448347e6b6 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4445,6 +4445,24 @@ static int fixup_bpf_calls(struct bpf_verifier_env *env)
 	int i, cnt, delta = 0;
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
+		if (insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
+		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+			/* due to JIT bugs clear upper 32-bits of src register
+			 * before div/mod operation
+			 */
+			insn_buf[0] = BPF_MOV32_REG(insn->src_reg, insn->src_reg);
+			insn_buf[1] = *insn;
+			cnt = 2;
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta += cnt - 1;
+			env->prog = prog = new_prog;
+			insn = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		if (insn->code != (BPF_JMP | BPF_CALL))
 			continue;
 
diff --git a/net/core/filter.c b/net/core/filter.c
index d339ef170df6..1c0eb436671f 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -458,6 +458,10 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 			    convert_bpf_extensions(fp, &insn))
 				break;
 
+			if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
+			    fp->code == (BPF_ALU | BPF_MOD | BPF_X))
+				*insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
+
 			*insn = BPF_RAW_INSN(fp->code, BPF_REG_A, BPF_REG_X, 0, fp->k);
 			break;
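The delta bookkeeping in the verifier hunk above can be modelled simply: each time one insn is replaced by cnt insns, every original index at or past the patch point shifts by cnt - 1, which is why the loop adds delta when re-resolving insn after bpf_patch_insn_data(). A minimal model of that index mapping (not the kernel code):

```c
/* After patching the insn at original index patch_at into cnt insns
 * (the original insn last, as with insn_buf[1] = *insn above), an
 * original index maps to its new position like this. */
static int patched_index(int orig_index, int patch_at, int cnt)
{
	return orig_index >= patch_at ? orig_index + cnt - 1 : orig_index;
}
```

For this patch cnt is 2, so the div/mod insn at index i lands at i + 1, right after the inserted BPF_MOV32_REG; indices before the patch point are untouched.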
due to some JITs doing if (src_reg == 0) check in 64-bit mode
for div/mod operations mask upper 32-bits of src register
before doing the check

Fixes: 622582786c9e ("net: filter: x86: internal BPF JIT")
Fixes: 7a12b5031c6b ("sparc64: Add eBPF JIT.")
Reported-by: syzbot+48340bb518e88849e2e3@syzkaller.appspotmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
arm64 jit seems to be ok
haven't analyzed other JITs

It works around the interpreter bug too, but I think the interpreter
is worth fixing anyway.
---
 kernel/bpf/verifier.c | 18 ++++++++++++++++++
 net/core/filter.c     |  4 ++++
 2 files changed, 22 insertions(+)
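The interpreter issue the changelog alludes to is, in spirit, the same width mismatch. A sketch of what a correct 32-bit BPF_DIV | BPF_X case must do (assumed semantics for illustration, not the kernel interpreter's code): both the zero test and the division operate on the low 32 bits of the source register, and a zero divisor makes the program return 0, matching the JITs' guard.

```c
#include <stdint.h>

/* Sketch of correct ALU32 div semantics: divisor is the low 32 bits
 * of src_reg, a zero divisor exits with 0, and the 32-bit quotient is
 * zero-extended into the 64-bit dst register. */
static uint64_t interp_alu32_div_x(uint64_t dst_reg, uint64_t src_reg)
{
	uint32_t divisor = (uint32_t)src_reg;

	if (divisor == 0)
		return 0;
	return (uint32_t)dst_reg / divisor;
}
```

Note that a source of 1ULL << 32 is treated as a zero divisor here, which is exactly the case the masking in this patch forces the JITs to handle.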