From patchwork Tue Jan 1 01:37:33 2019
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 1019726
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net, yhs@fb.com
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, Jakub Kicinski
Subject: [RFC bpf-next v4 02/12] bpf: verifier: hard wire branches to dead code
Date: Mon, 31 Dec 2018 17:37:33 -0800
Message-Id: <20190101013743.23935-3-jakub.kicinski@netronome.com>
In-Reply-To: <20190101013743.23935-1-jakub.kicinski@netronome.com>
References: <20190101013743.23935-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

Loading programs with dead code becomes more and more common, as people
begin to patch constants at load time.
Turn conditional jumps into unconditional ones, to avoid a potential
branch misprediction penalty. This optimization is enabled for
privileged users only.

For branches which just fall through we could just mark them as not
seen and have dead code removal take care of them, but that seems
less clean.

v2:
 - don't call capable(CAP_SYS_ADMIN) twice (Jiong).
v3:
 - fix GCC warning;

Signed-off-by: Jakub Kicinski
---
 kernel/bpf/verifier.c | 45 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 43 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 71d86e3024ae..30e2cd399b4a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6259,6 +6259,40 @@ static void sanitize_dead_code(struct bpf_verifier_env *env)
 	}
 }
 
+static bool insn_is_cond_jump(u8 code)
+{
+	u8 op;
+
+	if (BPF_CLASS(code) != BPF_JMP)
+		return false;
+
+	op = BPF_OP(code);
+	return op != BPF_JA && op != BPF_EXIT && op != BPF_CALL;
+}
+
+static void opt_hard_wire_dead_code_branches(struct bpf_verifier_env *env)
+{
+	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+	struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	struct bpf_insn *insn = env->prog->insnsi;
+	const int insn_cnt = env->prog->len;
+	int i;
+
+	for (i = 0; i < insn_cnt; i++, insn++) {
+		if (!insn_is_cond_jump(insn->code))
+			continue;
+
+		if (!aux_data[i + 1].seen)
+			ja.off = insn->off;
+		else if (!aux_data[i + 1 + insn->off].seen)
+			ja.off = 0;
+		else
+			continue;
+
+		memcpy(insn, &ja, sizeof(ja));
+	}
+}
+
 /* convert load instructions that access fields of a context type into a
  * sequence of instructions that access fields of the underlying structure:
  *     struct __sk_buff -> struct sk_buff
@@ -6899,6 +6933,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	struct bpf_verifier_env *env;
 	struct bpf_verifier_log *log;
 	int ret = -EINVAL;
+	bool is_priv;
 
 	/* no program is valid */
 	if (ARRAY_SIZE(bpf_verifier_ops) == 0)
@@ -6945,6 +6980,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	if (attr->prog_flags & BPF_F_ANY_ALIGNMENT)
 		env->strict_alignment = false;
 
+	is_priv = capable(CAP_SYS_ADMIN);
+	env->allow_ptr_leaks = is_priv;
+
 	ret = replace_map_fd_with_map_ptr(env);
 	if (ret < 0)
 		goto skip_full_check;
@@ -6962,8 +7000,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	if (!env->explored_states)
 		goto skip_full_check;
 
-	env->allow_ptr_leaks = capable(CAP_SYS_ADMIN);
-
 	ret = check_subprogs(env);
 	if (ret < 0)
 		goto skip_full_check;
@@ -6993,6 +7029,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	ret = check_max_stack_depth(env);
 
 	/* instruction rewrites happen after this point */
+	if (is_priv) {
+		if (ret == 0)
+			opt_hard_wire_dead_code_branches(env);
+	}
+
 	if (ret == 0)
 		sanitize_dead_code(env);