From patchwork Tue Mar 26 18:05:24 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1065880
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 01/16] bpf: turn "enum bpf_reg_liveness" into bit representation
Date: Tue, 26 Mar 2019 18:05:24 +0000
Message-Id: <1553623539-15474-2-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

"enum bpf_reg_liveness" is actually used as a set of bit flags, not as a plain integer.
For example:

  if (live & (REG_LIVE_READ | REG_LIVE_WRITTEN | REG_LIVE_DONE))

Using an enum to represent bits is error-prone; explicit bit macros are a
better fit.

Signed-off-by: Jiong Wang
---
 include/linux/bpf_verifier.h | 16 +++++++++-------
 kernel/bpf/verifier.c        |  5 ++---
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 7d8228d..f03c86a 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -34,12 +34,14 @@
  * but of the link between it and its parent.  See mark_reg_read() and
  * mark_stack_slot_read() in kernel/bpf/verifier.c.
  */
-enum bpf_reg_liveness {
-	REG_LIVE_NONE = 0, /* reg hasn't been read or written this branch */
-	REG_LIVE_READ, /* reg was read, so we're sensitive to initial value */
-	REG_LIVE_WRITTEN, /* reg was written first, screening off later reads */
-	REG_LIVE_DONE = 4, /* liveness won't be updating this register anymore */
-};
+/* Reg hasn't been read or written this branch. */
+#define REG_LIVE_NONE		0x0
+/* Reg was read, so we're sensitive to initial value. */
+#define REG_LIVE_READ		0x1
+/* Reg was written first, screening off later reads. */
+#define REG_LIVE_WRITTEN	0x2
+/* Liveness won't be updating this register anymore. */
+#define REG_LIVE_DONE		0x4

 struct bpf_reg_state {
 	/* Ordering of fields matters.  See states_equal() */
@@ -131,7 +133,7 @@ struct bpf_reg_state {
 	 * pointing to bpf_func_state.
 	 */
 	u32 frameno;
-	enum bpf_reg_liveness live;
+	u8 live;
 };

 enum bpf_stack_slot_type {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index dffeec3..6cc8c38 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -407,8 +407,7 @@ static char slot_type_char[] = {
 	[STACK_ZERO]	= '0',
 };

-static void print_liveness(struct bpf_verifier_env *env,
-			   enum bpf_reg_liveness live)
+static void print_liveness(struct bpf_verifier_env *env, u8 live)
 {
 	if (live & (REG_LIVE_READ | REG_LIVE_WRITTEN | REG_LIVE_DONE))
 		verbose(env, "_");
@@ -5687,8 +5686,8 @@ static bool check_ids(u32 old_id, u32 cur_id, struct idpair *idmap)
 static void clean_func_state(struct bpf_verifier_env *env,
 			     struct bpf_func_state *st)
 {
-	enum bpf_reg_liveness live;
 	int i, j;
+	u8 live;

 	for (i = 0; i < BPF_REG_FP; i++) {
 		live = st->regs[i].live;

From patchwork Tue Mar 26 18:05:25 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1065882
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 02/16] bpf: refactor propagate_live implementation
Date: Tue, 26 Mar 2019 18:05:25 +0000
Message-Id: <1553623539-15474-3-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

Some of the code in the current "propagate_liveness" implementation is
verbose. This patch refactors it so the code is simpler and clearer.

The redundant use of "vparent->frame[vstate->curframe]" is removed while we
are here. This is safe because "states_equal" guarantees that
vstate->curframe equals vparent->curframe.

Signed-off-by: Jiong Wang
---
 kernel/bpf/verifier.c | 44 ++++++++++++++++++++++++++++++--------------
 1 file changed, 30 insertions(+), 14 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6cc8c38..245bb3c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6050,6 +6050,22 @@ static bool states_equal(struct bpf_verifier_env *env,
 	return true;
 }

+static int propagate_liveness_reg(struct bpf_verifier_env *env,
+				  struct bpf_reg_state *reg,
+				  struct bpf_reg_state *parent_reg, u8 flag)
+{
+	int err;
+
+	if (parent_reg->live & flag || !(reg->live & flag))
+		return 0;
+
+	err = mark_reg_read(env, reg, parent_reg);
+	if (err)
+		return err;
+
+	return 1;
+}
+
 /* A write screens off any subsequent reads; but write marks come from the
  * straight-line code between a state and its parent.
When we arrive at an
 * equivalent state (jump target or such) we didn't arrive by the straight-line
@@ -6061,8 +6077,9 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 			      const struct bpf_verifier_state *vstate,
 			      struct bpf_verifier_state *vparent)
 {
-	int i, frame, err = 0;
+	struct bpf_reg_state *regs, *parent_regs;
 	struct bpf_func_state *state, *parent;
+	int i, frame, err = 0;

 	if (vparent->curframe != vstate->curframe) {
 		WARN(1, "propagate_live: parent frame %d current frame %d\n",
@@ -6071,16 +6088,13 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	}
 	/* Propagate read liveness of registers... */
 	BUILD_BUG_ON(BPF_REG_FP + 1 != MAX_BPF_REG);
+	parent_regs = vparent->frame[vparent->curframe]->regs;
+	regs = vstate->frame[vstate->curframe]->regs;
 	/* We don't need to worry about FP liveness because it's read-only */
 	for (i = 0; i < BPF_REG_FP; i++) {
-		if (vparent->frame[vparent->curframe]->regs[i].live & REG_LIVE_READ)
-			continue;
-		if (vstate->frame[vstate->curframe]->regs[i].live & REG_LIVE_READ) {
-			err = mark_reg_read(env, &vstate->frame[vstate->curframe]->regs[i],
-					    &vparent->frame[vstate->curframe]->regs[i]);
-			if (err)
-				return err;
-		}
+		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i]);
+		if (err < 0)
+			return err;
 	}

 	/* ... and stack slots */
@@ -6089,11 +6103,13 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 		parent = vparent->frame[frame];
 		for (i = 0; i < state->allocated_stack / BPF_REG_SIZE &&
 			    i < parent->allocated_stack / BPF_REG_SIZE; i++) {
-			if (parent->stack[i].spilled_ptr.live & REG_LIVE_READ)
-				continue;
-			if (state->stack[i].spilled_ptr.live & REG_LIVE_READ)
-				mark_reg_read(env, &state->stack[i].spilled_ptr,
-					      &parent->stack[i].spilled_ptr);
+			struct bpf_reg_state *parent_reg, *reg;
+
+			parent_reg = &parent->stack[i].spilled_ptr;
+			reg = &state->stack[i].spilled_ptr;
+			err = propagate_liveness_reg(env, reg, parent_reg);
+			if (err < 0)
+				return err;
 		}
 	}
 	return err;

From patchwork Tue Mar 26 18:05:26 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1065883
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject:
[PATCH/RFC bpf-next 03/16] bpf: split read liveness into REG_LIVE_READ64 and REG_LIVE_READ32
Date: Tue, 26 Mar 2019 18:05:26 +0000
Message-Id: <1553623539-15474-4-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

The previous patch split the register arg type for sub-register reads, but
didn't touch read liveness. This patch further splits read liveness into
REG_LIVE_READ64 and REG_LIVE_READ32, and updates the liveness propagation
code accordingly.

After this split, customized actions can be taken when propagating a full
register read (REG_LIVE_READ64) versus a sub-register read
(REG_LIVE_READ32).

Signed-off-by: Jiong Wang
---
 include/linux/bpf_verifier.h |  9 ++++++---
 kernel/bpf/verifier.c        | 30 +++++++++++++++++++++---------
 2 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index f03c86a..27761ab 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -37,11 +37,14 @@
 /* Reg hasn't been read or written this branch. */
 #define REG_LIVE_NONE		0x0
 /* Reg was read, so we're sensitive to initial value. */
-#define REG_LIVE_READ		0x1
+#define REG_LIVE_READ32		0x1
+/* Likewise, but full 64-bit content matters. */
+#define REG_LIVE_READ64		0x2
+#define REG_LIVE_READ		(REG_LIVE_READ32 | REG_LIVE_READ64)
 /* Reg was written first, screening off later reads. */
-#define REG_LIVE_WRITTEN	0x2
+#define REG_LIVE_WRITTEN	0x4
 /* Liveness won't be updating this register anymore. */
-#define REG_LIVE_DONE		0x4
+#define REG_LIVE_DONE		0x8

 struct bpf_reg_state {
 	/* Ordering of fields matters.  See states_equal() */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 245bb3c..b95c438 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1126,7 +1126,7 @@ static int check_subprogs(struct bpf_verifier_env *env)
  */
 static int mark_reg_read(struct bpf_verifier_env *env,
 			 const struct bpf_reg_state *state,
-			 struct bpf_reg_state *parent)
+			 struct bpf_reg_state *parent, bool dw_read)
 {
 	bool writes = parent == state->parent; /* Observe write marks */
@@ -1141,7 +1141,7 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 			return -EFAULT;
 		}
 		/* ... then we depend on parent's value */
-		parent->live |= REG_LIVE_READ;
+		parent->live |= dw_read ? REG_LIVE_READ64 : REG_LIVE_READ32;
 		state = parent;
 		parent = state->parent;
 		writes = true;
@@ -1170,7 +1170,7 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 		/* We don't need to worry about FP liveness because it's read-only */
 		if (regno != BPF_REG_FP)
 			return mark_reg_read(env, &regs[regno],
-					     regs[regno].parent);
+					     regs[regno].parent, true);
 	} else {
 		/* check whether register used as dest operand can be written to */
 		if (regno == BPF_REG_FP) {
@@ -1357,7 +1357,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
 			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
 		}
 		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
-			      reg_state->stack[spi].spilled_ptr.parent);
+			      reg_state->stack[spi].spilled_ptr.parent, true);
 		return 0;
 	} else {
 		int zeros = 0;
@@ -1374,7 +1374,8 @@ static int check_stack_read(struct bpf_verifier_env *env,
 			return -EACCES;
 		}
 		mark_reg_read(env, &reg_state->stack[spi].spilled_ptr,
-			      reg_state->stack[spi].spilled_ptr.parent);
+			      reg_state->stack[spi].spilled_ptr.parent,
+			      size == BPF_REG_SIZE);
 		if (value_regno >= 0) {
 			if (zeros == size) {
 				/* any size read into register is zero extended,
@@ -2220,7 +2221,8 @@ static int check_stack_boundary(struct bpf_verifier_env *env, int regno,
 		 * the whole slot to be marked as 'read'
 		 */
 		mark_reg_read(env,
			      &state->stack[spi].spilled_ptr,
-			      state->stack[spi].spilled_ptr.parent);
+			      state->stack[spi].spilled_ptr.parent,
+			      access_size == BPF_REG_SIZE);
 	}
 	return update_stack_depth(env, state, off);
 }
@@ -6059,7 +6061,7 @@ static int propagate_liveness_reg(struct bpf_verifier_env *env,
 	if (parent_reg->live & flag || !(reg->live & flag))
 		return 0;

-	err = mark_reg_read(env, reg, parent_reg);
+	err = mark_reg_read(env, reg, parent_reg, flag == REG_LIVE_READ64);
 	if (err)
 		return err;
@@ -6092,7 +6094,12 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	regs = vstate->frame[vstate->curframe]->regs;
 	/* We don't need to worry about FP liveness because it's read-only */
 	for (i = 0; i < BPF_REG_FP; i++) {
-		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i]);
+		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i],
+					     REG_LIVE_READ64);
+		if (err < 0)
+			return err;
+		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i],
+					     REG_LIVE_READ32);
 		if (err < 0)
 			return err;
 	}
@@ -6107,7 +6114,12 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 			parent_reg = &parent->stack[i].spilled_ptr;
 			reg = &state->stack[i].spilled_ptr;
-			err = propagate_liveness_reg(env, reg, parent_reg);
+			err = propagate_liveness_reg(env, reg, parent_reg,
+						     REG_LIVE_READ64);
+			if (err < 0)
+				return err;
+			err = propagate_liveness_reg(env, reg, parent_reg,
+						     REG_LIVE_READ32);
 			if (err < 0)
 				return err;
 		}

From patchwork Tue Mar 26 18:05:27 2019
X-Patchwork-Submitter: Jiong Wang
X-Patchwork-Id: 1065885
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 04/16] bpf: mark sub-register writes that really need zero extension to high bits
Date: Tue, 26 Mar 2019 18:05:27 +0000
Message-Id: <1553623539-15474-5-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

The eBPF ISA specification requires the high 32 bits to be cleared when the
low 32-bit sub-register is written, for example by the destination register
of an ALU32 instruction. JIT back-ends must guarantee this semantic when
doing code-gen.

x86-64 and arm64 have the same semantic in their native ISAs, so the
corresponding JIT back-ends don't need to do extra work. However, 32-bit
arches (arm, nfp, etc.) and some other 64-bit arches (powerpc, sparc, etc.)
need an explicit zero-extension sequence to meet the semantic.

This matters because, for code like the following:

  u64_value = (u64) u32_value
  ... other uses of u64_value

the compiler can exploit the semantic described above and omit the zero
extension when converting u32_value to u64_value. The hardware, runtime, or
BPF JIT back-end is then responsible for guaranteeing it.
Some benchmarks show that ~40% of all insns are sub-register writes, which
means ~40% extra code-gen (more on arches that need two shifts for zero
extension) because the JIT back-end has to emit the extra sequence for every
such instruction. However, this is unnecessary whenever u32_value is never
cast into a u64, which is quite common in real-life programs. So it would be
good to identify the places where such a cast happens and do zero extension
only for them, not for the others. This could save a lot of BPF code-gen.

Algo:
 - Record the indices of instructions that do a sub-register def (write).
   These indices need to stay with the function state so path pruning and
   bpf-to-bpf function calls can be handled properly. They are kept up to
   date during the insn walk.
 - A full register read on an active sub-register def marks the def insn as
   needing zero extension on its dst register.
 - A new sub-register write overrides the old one. A new full register write
   makes the register free of zero extension on its dst register.
 - When propagating read64 during path pruning, also mark the def insns of
   still-active sub-register defs if there is any read64 coming from the
   equivalent state.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 include/linux/bpf_verifier.h |  4 +++
 kernel/bpf/verifier.c        | 85 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 84 insertions(+), 5 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 27761ab..0ae9a3f 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -181,6 +181,9 @@ struct bpf_func_state {
 	 */
 	u32 subprogno;

+	/* tracks subreg definition. */
+	s32 subreg_def[MAX_BPF_REG];
+
 	/* The following fields should be last.
	   See copy_func_state() */
	int acquired_refs;
	struct bpf_reference_state *refs;
@@ -232,6 +235,7 @@ struct bpf_insn_aux_data {
 	int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
 	int sanitize_stack_off; /* stack slot to be cleared */
 	bool seen; /* this insn was processed by the verifier */
+	bool zext_dst; /* this insn zero extend dst reg */
 	u8 alu_state; /* used in combination with alu_limit */
 	unsigned int orig_idx; /* original instruction index */
 };
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b95c438..66e5e65 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -971,16 +971,19 @@ static void mark_reg_not_init(struct bpf_verifier_env *env,
 	__mark_reg_not_init(regs + regno);
 }

+#define DEF_NOT_SUBREG	(-1)
 static void init_reg_state(struct bpf_verifier_env *env,
 			   struct bpf_func_state *state)
 {
 	struct bpf_reg_state *regs = state->regs;
+	s32 *subreg_def = state->subreg_def;
 	int i;

 	for (i = 0; i < MAX_BPF_REG; i++) {
 		mark_reg_not_init(env, regs, i);
 		regs[i].live = REG_LIVE_NONE;
 		regs[i].parent = NULL;
+		subreg_def[i] = DEF_NOT_SUBREG;
 	}

 	/* frame pointer */
@@ -1149,18 +1152,66 @@ static int mark_reg_read(struct bpf_verifier_env *env,
 	return 0;
 }

+/* This function is supposed to be used by the following check_reg_arg only. */
+static bool insn_has_reg64(struct bpf_insn *insn)
+{
+	u8 code, class, op;
+
+	code = insn->code;
+	class = BPF_CLASS(code);
+	op = BPF_OP(code);
+
+	/* BPF_EXIT will reach here because of return value readability test for
+	 * "main" which has s32 return value.
+	 * BPF_CALL will reach here because of marking caller saved clobber with
+	 * DST_OP_NO_MARK for which we don't care the register def because they
+	 * are anyway marked as NOT_INIT already.
+	 *
+	 * So, return false for both.
+	 */
+	if (class == BPF_JMP && (op == BPF_EXIT || op == BPF_CALL))
+		return false;
+
+	if (class == BPF_ALU64 || class == BPF_JMP ||
+	    /* BPF_END always use BPF_ALU class. */
+	    (class == BPF_ALU && op == BPF_END && insn->imm == 64))
+		return true;
+
+	if (class == BPF_ALU || class == BPF_JMP32)
+		return false;
+
+	/* LD/ST/LDX/STX */
+	return BPF_SIZE(code) == BPF_DW;
+}
+
+static void mark_insn_zext(struct bpf_verifier_env *env,
+			   struct bpf_func_state *state, u8 regno)
+{
+	s32 def_idx = state->subreg_def[regno];
+
+	if (def_idx == DEF_NOT_SUBREG)
+		return;
+
+	env->insn_aux_data[def_idx].zext_dst = true;
+	/* The dst will be zero extended, so won't be sub-register anymore. */
+	state->subreg_def[regno] = DEF_NOT_SUBREG;
+}
+
 static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			 enum reg_arg_type t)
 {
 	struct bpf_verifier_state *vstate = env->cur_state;
 	struct bpf_func_state *state = vstate->frame[vstate->curframe];
+	struct bpf_insn *insn = env->prog->insnsi + env->insn_idx;
 	struct bpf_reg_state *regs = state->regs;
+	bool dw_reg;

 	if (regno >= MAX_BPF_REG) {
 		verbose(env, "R%d is invalid\n", regno);
 		return -EINVAL;
 	}

+	dw_reg = insn_has_reg64(insn);
 	if (t == SRC_OP) {
 		/* check whether register used as source operand can be read */
 		if (regs[regno].type == NOT_INIT) {
@@ -1168,9 +1219,12 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			return -EACCES;
 		}
 		/* We don't need to worry about FP liveness because it's read-only */
-		if (regno != BPF_REG_FP)
+		if (regno != BPF_REG_FP) {
+			if (dw_reg)
+				mark_insn_zext(env, state, regno);
 			return mark_reg_read(env, &regs[regno],
-					     regs[regno].parent, true);
+					     regs[regno].parent, dw_reg);
+		}
 	} else {
 		/* check whether register used as dest operand can be written to */
 		if (regno == BPF_REG_FP) {
@@ -1178,6 +1232,8 @@ static int check_reg_arg(struct bpf_verifier_env *env, u32 regno,
 			return -EACCES;
 		}
 		regs[regno].live |= REG_LIVE_WRITTEN;
+		state->subreg_def[regno] =
+			dw_reg ? DEF_NOT_SUBREG : env->insn_idx;
 		if (t == DST_OP)
 			mark_reg_unknown(env, regs, regno);
 	}
@@ -2360,6 +2416,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 	if (err)
 		return err;

+	/* arg_type doesn't differentiate 32 and 64-bit arg, always zext. */
+	mark_insn_zext(env, cur_func(env), regno);
+
 	if (arg_type == ARG_ANYTHING) {
 		if (is_pointer_value(env, regno)) {
 			verbose(env, "R%d leaks addr into helper function\n",
@@ -2880,10 +2939,13 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return err;

 	/* copy r1 - r5 args that callee can access. The copy includes parent
-	 * pointers, which connects us up to the liveness chain
+	 * pointers, which connects us up to the liveness chain. subreg_def for
+	 * them need to be copied as well.
 	 */
-	for (i = BPF_REG_1; i <= BPF_REG_5; i++)
+	for (i = BPF_REG_1; i <= BPF_REG_5; i++) {
 		callee->regs[i] = caller->regs[i];
+		callee->subreg_def[i] = caller->subreg_def[i];
+	}

 	/* after the call registers r0 - r5 were scratched */
 	for (i = 0; i < CALLER_SAVED_REGS; i++) {
@@ -2928,8 +2990,11 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
 	state->curframe--;
 	caller = state->frame[state->curframe];
-	/* return to the caller whatever r0 had in the callee */
+	/* return to the caller whatever r0 had in the callee, subreg_def should
+	 * be copied to caller as well.
+	 */
 	caller->regs[BPF_REG_0] = *r0;
+	caller->subreg_def[BPF_REG_0] = callee->subreg_def[BPF_REG_0];

 	/* Transfer references to the caller */
 	err = transfer_reference_state(caller, callee);
@@ -3118,6 +3183,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
 		check_reg_arg(env, caller_saved[i], DST_OP_NO_MARK);
 	}

+	/* helper call must return full 64-bit R0.
+	 */
+	cur_func(env)->subreg_def[BPF_REG_0] = DEF_NOT_SUBREG;
+
 	/* update return register (already marked as written above) */
 	if (fn->ret_type == RET_INTEGER) {
 		/* sets type to SCALAR_VALUE */
@@ -5114,6 +5182,8 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	 * Already marked as written above.
 	 */
 	mark_reg_unknown(env, regs, BPF_REG_0);
+	/* ld_abs load up to 32-bit skb data. */
+	cur_func(env)->subreg_def[BPF_REG_0] = env->insn_idx;
 	return 0;
 }
 
@@ -6092,12 +6162,17 @@ static int propagate_liveness(struct bpf_verifier_env *env,
 	BUILD_BUG_ON(BPF_REG_FP + 1 != MAX_BPF_REG);
 	parent_regs = vparent->frame[vparent->curframe]->regs;
 	regs = vstate->frame[vstate->curframe]->regs;
+	parent = vparent->frame[vparent->curframe];
 	/* We don't need to worry about FP liveness because it's read-only */
 	for (i = 0; i < BPF_REG_FP; i++) {
 		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i],
 					     REG_LIVE_READ64);
 		if (err < 0)
 			return err;
+
+		if (err > 0)
+			mark_insn_zext(env, parent, i);
+
 		err = propagate_liveness_reg(env, &regs[i], &parent_regs[i],
 					     REG_LIVE_READ32);
 		if (err < 0)

From patchwork Tue Mar 26 18:05:28 2019
From: Jiong Wang
Subject: [PATCH/RFC bpf-next 05/16] bpf: reduce false alarm by refining "enum bpf_arg_type"
Date: Tue, 26 Mar 2019 18:05:28 +0000
Message-Id: <1553623539-15474-6-git-send-email-jiong.wang@netronome.com>

Unlike a BPF-to-BPF function call, a BPF helper call is a call into native
instructions that the verifier cannot walk, so the verifier needs helper
prototype descriptions for its data-flow analysis. Such information already
exists, but it does not differentiate sub-register reads from full-register
reads. This patch splits "enum bpf_arg_type" for sub-register reads, and
updates the descriptions of several functions that show frequent usage in
one Cilium benchmark.
Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 include/linux/bpf.h   |  3 +++
 kernel/bpf/core.c     |  2 +-
 kernel/bpf/helpers.c  |  2 +-
 kernel/bpf/verifier.c | 19 +++++++++++++------
 net/core/filter.c     | 28 ++++++++++++++--------------
 5 files changed, 32 insertions(+), 22 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f628971..5616a58 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -190,9 +190,12 @@ enum bpf_arg_type {
 	ARG_CONST_SIZE,		/* number of bytes accessed from memory */
 	ARG_CONST_SIZE_OR_ZERO,	/* number of bytes accessed from memory or 0 */
+	ARG_CONST_SIZE32,	/* Likewise, but size fits into 32-bit */
+	ARG_CONST_SIZE32_OR_ZERO,	/* Ditto */
 
 	ARG_PTR_TO_CTX,		/* pointer to context */
 	ARG_ANYTHING,		/* any (initialized) argument is ok */
+	ARG_ANYTHING32,		/* Likewise, but it is a 32-bit argument */
 	ARG_PTR_TO_SPIN_LOCK,	/* pointer to bpf_spin_lock */
 	ARG_PTR_TO_SOCK_COMMON,	/* pointer to sock_common */
 };
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ff09d32..8834d80 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2065,7 +2065,7 @@ const struct bpf_func_proto bpf_tail_call_proto = {
 	.ret_type	= RET_VOID,
 	.arg1_type	= ARG_PTR_TO_CTX,
 	.arg2_type	= ARG_CONST_MAP_PTR,
-	.arg3_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_ANYTHING32,
 };
 
 /* Stub for JITs that only support cBPF. eBPF programs are interpreted.
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index a411fc1..6b7453e 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -218,7 +218,7 @@ const struct bpf_func_proto bpf_get_current_comm_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_UNINIT_MEM,
-	.arg2_type	= ARG_CONST_SIZE,
+	.arg2_type	= ARG_CONST_SIZE32,
 };
 
 #if defined(CONFIG_QUEUED_SPINLOCKS) || defined(CONFIG_BPF_ARCH_SPINLOCK)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 66e5e65..83448bb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2398,7 +2398,9 @@ static bool arg_type_is_mem_ptr(enum bpf_arg_type type)
 static bool arg_type_is_mem_size(enum bpf_arg_type type)
 {
 	return type == ARG_CONST_SIZE ||
-	       type == ARG_CONST_SIZE_OR_ZERO;
+	       type == ARG_CONST_SIZE_OR_ZERO ||
+	       type == ARG_CONST_SIZE32 ||
+	       type == ARG_CONST_SIZE32_OR_ZERO;
 }
 
 static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
@@ -2416,10 +2418,12 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 	if (err)
 		return err;
 
-	/* arg_type doesn't differentiate 32 and 64-bit arg, always zext. */
-	mark_insn_zext(env, cur_func(env), regno);
+	if (arg_type != ARG_ANYTHING32 &&
+	    arg_type != ARG_CONST_SIZE32 &&
+	    arg_type != ARG_CONST_SIZE32_OR_ZERO)
+		mark_insn_zext(env, cur_func(env), regno);
 
-	if (arg_type == ARG_ANYTHING) {
+	if (arg_type == ARG_ANYTHING || arg_type == ARG_ANYTHING32) {
 		if (is_pointer_value(env, regno)) {
 			verbose(env, "R%d leaks addr into helper function\n",
 				regno);
@@ -2442,7 +2446,9 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 		    type != expected_type)
 			goto err_type;
 	} else if (arg_type == ARG_CONST_SIZE ||
-		   arg_type == ARG_CONST_SIZE_OR_ZERO) {
+		   arg_type == ARG_CONST_SIZE_OR_ZERO ||
+		   arg_type == ARG_CONST_SIZE32 ||
+		   arg_type == ARG_CONST_SIZE32_OR_ZERO) {
 		expected_type = SCALAR_VALUE;
 		if (type != expected_type)
 			goto err_type;
@@ -2536,7 +2542,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
 					      meta->map_ptr->value_size, false,
 					      meta);
 	} else if (arg_type_is_mem_size(arg_type)) {
-		bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO);
+		bool zero_size_allowed = (arg_type == ARG_CONST_SIZE_OR_ZERO ||
+					  arg_type == ARG_CONST_SIZE32_OR_ZERO);
 
 		/* remember the mem_size which may be used later
 		 * to refine return values.
 		 */
diff --git a/net/core/filter.c b/net/core/filter.c
index 22eb2ed..3f6d8af 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -1693,9 +1693,9 @@ static const struct bpf_func_proto bpf_skb_store_bytes_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_PTR_TO_MEM,
-	.arg4_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_CONST_SIZE32,
 	.arg5_type	= ARG_ANYTHING,
 };
 
@@ -1724,9 +1724,9 @@ static const struct bpf_func_proto bpf_skb_load_bytes_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_PTR_TO_UNINIT_MEM,
-	.arg4_type	= ARG_CONST_SIZE,
+	.arg4_type	= ARG_CONST_SIZE32,
 };
 
 BPF_CALL_5(bpf_skb_load_bytes_relative, const struct sk_buff *, skb,
@@ -1875,7 +1875,7 @@ static const struct bpf_func_proto bpf_l3_csum_replace_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_ANYTHING,
 	.arg4_type	= ARG_ANYTHING,
 	.arg5_type	= ARG_ANYTHING,
@@ -1928,7 +1928,7 @@ static const struct bpf_func_proto bpf_l4_csum_replace_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_ANYTHING,
 	.arg4_type	= ARG_ANYTHING,
 	.arg5_type	= ARG_ANYTHING,
@@ -1967,9 +1967,9 @@ static const struct bpf_func_proto bpf_csum_diff_proto = {
 	.pkt_access	= true,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_MEM_OR_NULL,
-	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg2_type	= ARG_CONST_SIZE32_OR_ZERO,
 	.arg3_type	= ARG_PTR_TO_MEM_OR_NULL,
-	.arg4_type	= ARG_CONST_SIZE_OR_ZERO,
+	.arg4_type	= ARG_CONST_SIZE32_OR_ZERO,
 	.arg5_type	= ARG_ANYTHING,
 };
 
@@ -2150,7 +2150,7 @@ static const struct bpf_func_proto bpf_redirect_proto = {
 	.func		= bpf_redirect,
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
-	.arg1_type	= ARG_ANYTHING,
+	.arg1_type	= ARG_ANYTHING32,
 	.arg2_type	= ARG_ANYTHING,
 };
 
@@ -2928,7 +2928,7 @@ static const struct bpf_func_proto bpf_skb_change_proto_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_ANYTHING,
 };
 
@@ -2948,7 +2948,7 @@ static const struct bpf_func_proto bpf_skb_change_type_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 };
 
 static u32 bpf_skb_net_base_len(const struct sk_buff *skb)
@@ -3236,7 +3236,7 @@ static const struct bpf_func_proto bpf_skb_change_tail_proto = {
 	.gpl_only	= false,
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
-	.arg2_type	= ARG_ANYTHING,
+	.arg2_type	= ARG_ANYTHING32,
 	.arg3_type	= ARG_ANYTHING,
 };
 
@@ -3832,7 +3832,7 @@ static const struct bpf_func_proto bpf_skb_get_tunnel_key_proto = {
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
 	.arg2_type	= ARG_PTR_TO_UNINIT_MEM,
-	.arg3_type	= ARG_CONST_SIZE,
+	.arg3_type	= ARG_CONST_SIZE32,
 	.arg4_type	= ARG_ANYTHING,
 };
 
@@ -3941,7 +3941,7 @@ static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = {
 	.ret_type	= RET_INTEGER,
 	.arg1_type	= ARG_PTR_TO_CTX,
 	.arg2_type	= ARG_PTR_TO_MEM,
-	.arg3_type	= ARG_CONST_SIZE,
+	.arg3_type	= ARG_CONST_SIZE32,
 	.arg4_type	= ARG_ANYTHING,
 };

From patchwork Tue Mar 26 18:05:29 2019
From: Jiong Wang
Subject: [PATCH/RFC bpf-next 06/16] bpf: new sysctl "bpf_jit_32bit_opt"
Date: Tue, 26 Mar 2019 18:05:29 +0000
Message-Id: <1553623539-15474-7-git-send-email-jiong.wang@netronome.com>

After the previous patches, the verifier has marked those instructions whose
dst_reg really needs zero extension. It is then up to each back-end to decide
how to use that information to eliminate unnecessary zero-extension code-gen
during JIT compilation. One approach is:

  1. The verifier inserts explicit zero extensions for those instructions
     that need them.
  2. JIT back-ends stop generating zero extensions for sub-register writes
     altogether.

The benefit of this approach is that it requires no major change to the JIT
back-end interface, so every back-end can pick up the optimization. However,
only back-ends without hardware zero extension actually want it.
For back-ends like x86_64 and AArch64 there is hardware support, so the
optimization should be disabled there. This patch introduces a new sysctl,
"bpf_jit_32bit_opt", as the control variable for whether the optimization is
enabled. It is initialized from the target hook bpf_jit_hardware_zext, whose
default implementation returns true, meaning the underlying hardware zero
extends automatically and the optimization is therefore disabled. Offload
targets do not use this native target hook; they can instead obtain the
optimization results through bpf_prog_offload_ops.finalize. The user can
always enable or disable the optimization with:

  sysctl net/core/bpf_jit_32bit_opt=[0 | 1]

A brief diagram below shows how the optimization is controlled:

      arm                       ppc                    x86_64
  bpf_jit_hardware_zext()  bpf_jit_hardware_zext()  bpf_jit_hardware_zext()
          |                        |                        |
          V                        V                        V
        false                    false                    true
          |                        |                        |
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                   |
                                   V
                           bpf_jit_32bit_opt
                                  /\
                                 /  \
                              true   false -> disable optimization
                                |
                                V
                        enable optimization

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 Documentation/sysctl/net.txt | 15 +++++++++++++++
 include/linux/filter.h       |  2 ++
 kernel/bpf/core.c            | 16 ++++++++++++++++
 net/core/sysctl_net_core.c   |  9 +++++++++
 4 files changed, 42 insertions(+)

diff --git a/Documentation/sysctl/net.txt b/Documentation/sysctl/net.txt
index 2ae91d3..f820e3b 100644
--- a/Documentation/sysctl/net.txt
+++ b/Documentation/sysctl/net.txt
@@ -101,6 +101,21 @@ compiler in order to reject unprivileged JIT requests once it has been
 surpassed. bpf_jit_limit contains the value of the global limit in bytes.
 
+bpf_jit_32bit_opt
+-----------------
+
+This enables verifier optimizations related with sub-register access. These
+optimizations aim to help JIT back-ends doing code-gen efficiently. There is
+only one such optimization at the moment, the zero extension insertion pass.
+Once it is enabled, verifier will guarantee high bits clearance semantics
+when doing sub-register write whenever it is necessary. Without this, JIT
+back-ends always need to do code-gen for high bits clearance, which leads to
+significant redundancy.
+
+Values :
+	0 - disable these optimization passes
+	1 - enable these optimization passes
+
 dev_weight
 --------------
 
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 6074aa0..b66a4d9 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -819,6 +819,7 @@ u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog);
 void bpf_jit_compile(struct bpf_prog *prog);
+bool bpf_jit_hardware_zext(void);
 bool bpf_helper_changes_pkt_data(void *func);
 
 static inline bool bpf_dump_raw_ok(void)
@@ -905,6 +906,7 @@ extern int bpf_jit_enable;
 extern int bpf_jit_harden;
 extern int bpf_jit_kallsyms;
 extern long bpf_jit_limit;
+extern int bpf_jit_32bit_opt;
 
 typedef void (*bpf_jit_fill_hole_t)(void *area, unsigned int size);
 
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8834d80..cc7f0fd 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -524,6 +524,14 @@ int bpf_jit_enable __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_ALWAYS_ON);
 int bpf_jit_harden   __read_mostly;
 int bpf_jit_kallsyms __read_mostly;
 long bpf_jit_limit   __read_mostly;
+int bpf_jit_32bit_opt __read_mostly;
+
+static int __init bpf_jit_32bit_opt_init(void)
+{
+	bpf_jit_32bit_opt = !bpf_jit_hardware_zext();
+	return 0;
+}
+pure_initcall(bpf_jit_32bit_opt_init);
 
 static __always_inline void
 bpf_get_prog_addr_region(const struct bpf_prog *prog,
@@ -2089,6 +2097,14 @@ bool __weak bpf_helper_changes_pkt_data(void *func)
 	return false;
 }
 
+/* Return TRUE is the target hardware of JIT will do zero extension to high bits
+ * when writing to low 32-bit of one register. Otherwise, return FALSE.
+ */
+bool __weak bpf_jit_hardware_zext(void)
+{
+	return true;
+}
+
 /* To execute LD_ABS/LD_IND instructions __bpf_prog_run() may call
  * skb_copy_bits(), so provide a weak definition of it for NET-less config.
  */
diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
index 84bf286..68be151 100644
--- a/net/core/sysctl_net_core.c
+++ b/net/core/sysctl_net_core.c
@@ -416,6 +416,15 @@ static struct ctl_table net_core_table[] = {
 		.extra1		= &zero,
 		.extra2		= &one,
 	},
+	{
+		.procname	= "bpf_jit_32bit_opt",
+		.data		= &bpf_jit_32bit_opt,
+		.maxlen		= sizeof(int),
+		.mode		= 0600,
+		.proc_handler	= proc_dointvec_minmax_bpf_restricted,
+		.extra1		= &zero,
+		.extra2		= &one,
+	},
 # endif
 	{
 		.procname	= "bpf_jit_limit",

From patchwork Tue Mar 26 18:05:30 2019
From: Jiong Wang
Subject: [PATCH/RFC bpf-next 07/16] bpf: insert explicit zero extension instructions when bpf_jit_32bit_opt is true
Date: Tue, 26 Mar 2019 18:05:30 +0000
Message-Id: <1553623539-15474-8-git-send-email-jiong.wang@netronome.com>

This patch implements the zero extension insertion pass using the
bpf_patch_insn_data infrastructure. Once zero extensions are inserted, JIT
back-ends are told about it through the new boolean field "no_verifier_zext"
in bpf_prog_aux. This is needed because the user can enable or disable the
insertion pass at any time through the sysctl variable.

Reviewed-by: Jakub Kicinski
Signed-off-by: Jiong Wang
---
 include/linux/bpf.h   |  1 +
 kernel/bpf/verifier.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 5616a58..3336f93 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -359,6 +359,7 @@ struct bpf_prog_aux {
 	u32 id;
 	u32 func_cnt; /* used by non-func prog as the number of func progs */
 	u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */
+	bool no_verifier_zext; /* No zero extension insertion by verifier. */
 	bool offload_requested;
 	struct bpf_prog **func;
 	void *jit_data; /* JIT specific data. arch dependent */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 83448bb..57db451 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7224,6 +7224,38 @@ static int opt_remove_nops(struct bpf_verifier_env *env)
 	return 0;
 }
 
+static int opt_subreg_zext(struct bpf_verifier_env *env)
+{
+	struct bpf_insn_aux_data *aux = env->insn_aux_data;
+	int i, delta = 0, len = env->prog->len;
+	struct bpf_insn *insns = env->prog->insnsi;
+	struct bpf_insn zext_patch[3];
+	struct bpf_prog *new_prog;
+
+	zext_patch[1] = BPF_ALU64_IMM(BPF_LSH, 0, 32);
+	zext_patch[2] = BPF_ALU64_IMM(BPF_RSH, 0, 32);
+	for (i = 0; i < len; i++) {
+		struct bpf_insn insn;
+
+		if (!aux[i + delta].zext_dst)
+			continue;
+
+		insn = insns[i + delta];
+		zext_patch[0] = insn;
+		zext_patch[1].dst_reg = insn.dst_reg;
+		zext_patch[2].dst_reg = insn.dst_reg;
+		new_prog = bpf_patch_insn_data(env, i + delta, zext_patch, 3);
+		if (!new_prog)
+			return -ENOMEM;
+		env->prog = new_prog;
+		insns = new_prog->insnsi;
+		aux = env->insn_aux_data;
+		delta += 2;
+	}
+
+	return 0;
+}
+
 /* convert load instructions that access fields of a context type into a
  * sequence of instructions that access fields of the underlying structure:
  *     struct __sk_buff    -> struct sk_buff
@@ -8022,7 +8054,18 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
 	if (ret == 0)
 		ret = check_max_stack_depth(env);
 
-	/* instruction rewrites happen after this point */
+	/* Instruction rewrites happen after this point.
+	 * For offload target, finalize hook has all aux insn info, do any
+	 * customized work there.
+	 */
+	if (ret == 0 && bpf_jit_32bit_opt &&
+	    !bpf_prog_is_dev_bound(env->prog->aux)) {
+		ret = opt_subreg_zext(env);
+		env->prog->aux->no_verifier_zext = !!ret;
+	} else {
+		env->prog->aux->no_verifier_zext = true;
+	}
+
 	if (is_priv) {
 		if (ret == 0)
 			opt_hard_wire_dead_code_branches(env);

From patchwork Tue Mar 26 18:05:31 2019
From: Jiong Wang
Subject: [PATCH/RFC bpf-next 08/16] arm: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:31 +0000
Message-Id: <1553623539-15474-9-git-send-email-jiong.wang@netronome.com>

Cc: Shubham Bansal
Signed-off-by: Jiong Wang
---
 arch/arm/net/bpf_jit_32.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index c8bfbbf..8cecd06 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -736,7 +736,8 @@ static inline void emit_a32_alu_r64(const bool is64, const s8 dst[],
 		/* ALU operation */
 		emit_alu_r(rd[1], rs, true, false, op, ctx);
-		emit_a32_mov_i(rd[0], 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(rd[0], 0, ctx);
 	}
 
 	arm_bpf_put_reg64(dst, rd, ctx);
@@ -758,8 +759,9 @@ static inline void emit_a32_mov_r64(const bool is64, const s8 dst[],
 				  const s8 src[], struct jit_ctx *ctx) {
 	if (!is64) {
 		emit_a32_mov_r(dst_lo, src_lo, ctx);
-		/* Zero out high 4 bytes */
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			/* Zero out high 4 bytes */
+			emit_a32_mov_i(dst_hi, 0, ctx);
 	} else if (__LINUX_ARM_ARCH__ < 6 &&
 		   ctx->cpu_architecture < CPU_ARCH_ARMv5TE) {
 		/* complete 8 byte move */
@@ -1438,7 +1440,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		emit_udivmod(rd_lo, rd_lo, rt, ctx, BPF_OP(code));
 		arm_bpf_put_reg32(dst_lo, rd_lo, ctx);
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -1453,7 +1456,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 			return -EINVAL;
 		if (imm)
 			emit_a32_alu_i(dst_lo, imm, ctx, BPF_OP(code));
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	/* dst = dst << imm */
 	case BPF_ALU64 | BPF_LSH | BPF_K:
@@ -1488,7 +1492,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* dst = ~dst */
 	case BPF_ALU | BPF_NEG:
 		emit_a32_alu_i(dst_lo, 0, ctx, BPF_OP(code));
-		emit_a32_mov_i(dst_hi, 0, ctx);
+		if (ctx->prog->aux->no_verifier_zext)
+			emit_a32_mov_i(dst_hi, 0, ctx);
 		break;
 	/* dst = ~dst (64 bit) */
 	case BPF_ALU64 | BPF_NEG:
@@ -1838,6 +1843,11 @@ void bpf_jit_compile(struct bpf_prog *prog)
 	/* Nothing to do here. We support Internal BPF. */
}
 
+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_prog *tmp, *orig_prog = prog;
From patchwork Tue Mar 26 18:05:32 2019

From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Naveen N. Rao, Sandipan Das
Subject: [PATCH/RFC bpf-next 09/16] powerpc: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:32 +0000
Message-Id: <1553623539-15474-10-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>
References: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

Cc: Naveen N. Rao
Cc: Sandipan Das
Signed-off-by: Jiong Wang
---
 arch/powerpc/net/bpf_jit_comp64.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 21a1dcd..d10621b 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -559,7 +559,7 @@ static int bpf_jit_build_body(struct bpf_prog *fp, u32 *image,
 bpf_alu32_trunc:
 		/* Truncate to 32-bits */
-		if (BPF_CLASS(code) == BPF_ALU)
+		if (BPF_CLASS(code) == BPF_ALU && fp->aux->no_verifier_zext)
 			PPC_RLWINM(dst_reg, dst_reg, 0, 0, 31);
 		break;
@@ -1046,6 +1046,11 @@ struct powerpc64_jit_data {
 	struct codegen_context ctx;
 };

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 {
 	u32 proglen;
From patchwork Tue Mar 26 18:05:33 2019

From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Martin Schwidefsky, Heiko Carstens
Subject: [PATCH/RFC bpf-next 10/16] s390: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:33 +0000
Message-Id: <1553623539-15474-11-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>
References: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Jiong Wang
---
 arch/s390/net/bpf_jit_comp.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 51dd026..59592d7 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -299,9 +299,11 @@ static inline void reg_set_seen(struct bpf_jit *jit, u32 b1)

 #define EMIT_ZERO(b1)						\
 ({								\
-	/* llgfr %dst,%dst (zero extend to 64 bit) */		\
-	EMIT4(0xb9160000, b1, b1);				\
-	REG_SET_SEEN(b1);					\
+	if (fp->aux->no_verifier_zext) {			\
+		/* llgfr %dst,%dst (zero extend to 64 bit) */	\
+		EMIT4(0xb9160000, b1, b1);			\
+		REG_SET_SEEN(b1);				\
+	}							\
 })
@@ -1282,6 +1284,11 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp)
 	return 0;
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 /*
  * Compile eBPF program "fp"
  */
From patchwork Tue Mar 26 18:05:34 2019

From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, David S. Miller
Subject: [PATCH/RFC bpf-next 11/16] sparc: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:34 +0000
Message-Id: <1553623539-15474-12-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>
References: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

Cc: David S. Miller
Signed-off-by: Jiong Wang
---
 arch/sparc/net/bpf_jit_comp_64.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index 65428e7..e58f84e 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -1144,7 +1144,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;

 	do_alu32_trunc:
-		if (BPF_CLASS(code) == BPF_ALU)
+		if (BPF_CLASS(code) == BPF_ALU &&
+		    ctx->prog->aux->no_verifier_zext)
 			emit_alu_K(SRL, dst, 0, ctx);
 		break;
@@ -1432,6 +1433,11 @@ static void jit_fill_hole(void *area, unsigned int size)
 		*ptr++ = 0x91d02005; /* ta 5 */
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct sparc64_jit_data {
 	struct bpf_binary_header *header;
 	u8 *image;
From patchwork Tue Mar 26 18:05:35 2019

From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Wang YanQing
Subject: [PATCH/RFC bpf-next 12/16] x32: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:35 +0000
Message-Id: <1553623539-15474-13-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>
References: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

Cc: Wang YanQing
Signed-off-by: Jiong Wang
---
 arch/x86/net/bpf_jit_comp32.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 0d9cdff..8c6cf22 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -567,7 +567,7 @@ static inline void emit_ia32_alu_r(const bool is64, const bool hi, const u8 op,
 static inline void emit_ia32_alu_r64(const bool is64, const u8 op,
				     const u8 dst[], const u8 src[],
				     bool dstk, bool sstk,
-				     u8 **pprog)
+				     u8 **pprog, const struct bpf_prog_aux *aux)
 {
 	u8 *prog = *pprog;
@@ -575,7 +575,7 @@ static inline void emit_ia32_alu_r64(const bool is64, const u8 op,
 	if (is64)
 		emit_ia32_alu_r(is64, true, op, dst_hi, src_hi, dstk, sstk,
				&prog);
-	else
+	else if (aux->no_verifier_zext)
 		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 	*pprog = prog;
 }
@@ -666,7 +666,8 @@ static inline void emit_ia32_alu_i(const bool is64, const bool hi, const u8 op,
 /* ALU operation (64 bit) */
 static inline void emit_ia32_alu_i64(const bool is64, const u8 op,
				     const u8 dst[], const u32 val,
-				     bool dstk, u8 **pprog)
+				     bool dstk, u8 **pprog,
+				     const struct bpf_prog_aux *aux)
 {
 	u8 *prog = *pprog;
 	u32 hi = 0;
@@ -677,7 +678,7 @@ static inline void emit_ia32_alu_i64(const bool is64, const u8 op,
 	emit_ia32_alu_i(is64, false, op, dst_lo, val, dstk, &prog);
 	if (is64)
 		emit_ia32_alu_i(is64, true, op, dst_hi, hi, dstk, &prog);
-	else
+	else if (aux->no_verifier_zext)
 		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);

 	*pprog = prog;
@@ -1690,11 +1691,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		switch (BPF_SRC(code)) {
 		case BPF_X:
 			emit_ia32_alu_r64(is64, BPF_OP(code), dst,
-					  src, dstk, sstk, &prog);
+					  src, dstk, sstk, &prog,
+					  bpf_prog->aux);
 			break;
 		case BPF_K:
 			emit_ia32_alu_i64(is64, BPF_OP(code), dst,
-					  imm32, dstk, &prog);
+					  imm32, dstk, &prog,
+					  bpf_prog->aux);
 			break;
 		}
 		break;
@@ -1713,7 +1716,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
					  false, &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (bpf_prog->aux->no_verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
 	case BPF_ALU | BPF_RSH | BPF_X:
@@ -1733,7 +1737,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
					  &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (bpf_prog->aux->no_verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	/* dst = dst / src(imm) */
 	/* dst = dst % src(imm) */
@@ -1755,7 +1760,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
					  &prog);
 			break;
 		}
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (bpf_prog->aux->no_verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
@@ -1772,7 +1778,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		EMIT2_off32(0xC7, add_1reg(0xC0, IA32_ECX), imm32);
 		emit_ia32_shift_r(BPF_OP(code), dst_lo, IA32_ECX, dstk,
				  false, &prog);
-		emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
+		if (bpf_prog->aux->no_verifier_zext)
+			emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 		break;
 	/* dst = dst << imm */
 	case BPF_ALU64 | BPF_LSH | BPF_K:
@@ -2367,6 +2374,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 	return proglen;
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	struct bpf_binary_header *header = NULL;
From patchwork Tue Mar 26 18:05:36 2019

From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang, Björn Töpel
Subject: [PATCH/RFC bpf-next 13/16] riscv: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:36 +0000
Message-Id: <1553623539-15474-14-git-send-email-jiong.wang@netronome.com>
In-Reply-To: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>
References: <1553623539-15474-1-git-send-email-jiong.wang@netronome.com>

CC: Björn Töpel
Signed-off-by: Jiong Wang
---
 arch/riscv/net/bpf_jit_comp.c | 32 +++++++++++++++++++------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/riscv/net/bpf_jit_comp.c b/arch/riscv/net/bpf_jit_comp.c
index 80b12aa..9cba262 100644
--- a/arch/riscv/net/bpf_jit_comp.c
+++ b/arch/riscv/net/bpf_jit_comp.c
@@ -731,6 +731,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
 		    BPF_CLASS(insn->code) == BPF_JMP;
+	struct bpf_prog_aux *aux = ctx->prog->aux;
 	int rvoff, i = insn - ctx->prog->insnsi;
 	u8 rd = -1, rs = -1, code = insn->code;
 	s16 off = insn->off;
@@ -743,7 +744,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		emit(is64 ? rv_addi(rd, rs, 0) : rv_addiw(rd, rs, 0), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
@@ -771,19 +772,19 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit(is64 ? rv_mul(rd, rd, rs) : rv_mulw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit(is64 ? rv_divu(rd, rd, rs) : rv_divuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit(is64 ? rv_remu(rd, rd, rs) : rv_remuw(rd, rd, rs), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_X:
@@ -867,7 +868,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 	case BPF_ALU | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		emit_imm(rd, imm, ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
@@ -882,7 +883,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ? rv_add(rd, rd, RV_REG_T1) :
 			     rv_addw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_SUB | BPF_K:
@@ -895,7 +896,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit(is64 ? rv_sub(rd, rd, RV_REG_T1) :
 			     rv_subw(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_AND | BPF_K:
@@ -906,7 +907,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_and(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_OR | BPF_K:
@@ -917,7 +918,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_or(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_XOR | BPF_K:
@@ -928,7 +929,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_imm(RV_REG_T1, imm, ctx);
 			emit(rv_xor(rd, rd, RV_REG_T1), ctx);
 		}
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MUL | BPF_K:
@@ -936,7 +937,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_mul(rd, rd, RV_REG_T1) :
 		     rv_mulw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_DIV | BPF_K:
@@ -944,7 +945,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_divu(rd, rd, RV_REG_T1) :
 		     rv_divuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_MOD | BPF_K:
@@ -952,7 +953,7 @@ static int emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_remu(rd, rd, RV_REG_T1) :
 		     rv_remuw(rd, rd, RV_REG_T1), ctx);
-		if (!is64)
+		if (!is64 && aux->no_verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
 	case BPF_ALU | BPF_LSH | BPF_K:
@@ -1503,6 +1504,11 @@ static void bpf_flush_icache(void *start, void *end)
 	flush_icache_range((unsigned long)start, (unsigned long)end);
 }

+bool bpf_jit_hardware_zext(void)
+{
+	return false;
+}
+
 struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 {
 	bool tmp_blinded = false, extra_pass = false;
[209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44TJxM0Xvjz9sTH for ; Wed, 27 Mar 2019 05:07:03 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732551AbfCZSG5 (ORCPT ); Tue, 26 Mar 2019 14:06:57 -0400 Received: from mail-wr1-f65.google.com ([209.85.221.65]:45024 "EHLO mail-wr1-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732535AbfCZSGz (ORCPT ); Tue, 26 Mar 2019 14:06:55 -0400 Received: by mail-wr1-f65.google.com with SMTP id y7so11377871wrn.11 for ; Tue, 26 Mar 2019 11:06:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=qxA1L/gVWESQpZ3Mokilcchz/Sf49i0tM5B88NgcAXI=; b=FLuJJxwbL+uSyJnrcjkx78dkl/wUU88a/CG1dkr0F0YHaCLPpiQD82zkncVmkN9f1f w9ikUg6AgZS17D0qEbcdexuhiNfIWkGK5Rkxl1POE6EIGgWirCiiSpXq1/lxhRwyv7F0 a/41PYxxo4x64jryxf9Kg/jNjI6E6pYlXAB+xDtcRRMA6L1bd4cAjpakV5fkQxtyYbO6 o1rlLA8CFDNi7F06p7fR4irBcmIqN0w16AnqjQ9Ka9NTHiU36FXiYSef0yBbRw+emwAR /TDWNnvbp/2hEqX/sY49uup4ntYveRoFu50cxAVLw7Y//Ld1XsKk34zpn/NDvtwksUUa crBw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=qxA1L/gVWESQpZ3Mokilcchz/Sf49i0tM5B88NgcAXI=; b=Z6gZBP5TgLhIXt/cOOo9vtALlS0kfV4NUYulPsCo++677/GFfI4mIbe9bjCwidE+67 Z30D2PsAUb19hXVLxQydhyqNNCLQQlZNWGAhUOh3PlX1fdLojlXwORaFhoEppV9f3KuO miippVwCBkAp/9SDddMiamsunV7gSQYL4XcF2Y8Xjr3/MBTT7VzJBISFWQNvczt9msjv nWxtLVCdPsQ1bl8aR5A1+0IpH30dk8v2zQjI7E4fZ0daaysPJ1V/E1H54/lRdDhWgS0t SQWRXCyCXyuWLHU2dbe9BrFApm+6Ot2heZUHH+waPz0kHJLlJTqcM1LBt7h/XqROtZG9 DVAg== X-Gm-Message-State: APjAAAVXKKv0hggl8IN1ySmKJKWp7y3m18xsmn2BlyLzxdpcyVo9CPCI HszObIWjDoiGBdm2PAOZ1LuUOA== X-Google-Smtp-Source: APXvYqzdCQVID1/4X7UqpG5Lz7mhxs2lHPeZ2afTivMTKmUc6NTUk2Mb5ZsfdZ3Pxfo/bSqztnfMEA== X-Received: by 2002:a05:6000:1291:: with SMTP id 
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 14/16] nfp: bpf: eliminate zero extension code-gen
Date: Tue, 26 Mar 2019 18:05:37 +0000
Message-Id: <1553623539-15474-15-git-send-email-jiong.wang@netronome.com>

This patch eliminates zero extension code-gen for all instructions except load/store when possible. Elimination for load/store will be supported in upcoming patches.
Reviewed-by: Jakub Kicinski Signed-off-by: Jiong Wang --- drivers/net/ethernet/netronome/nfp/bpf/jit.c | 119 +++++++++++++--------- drivers/net/ethernet/netronome/nfp/bpf/main.h | 2 + drivers/net/ethernet/netronome/nfp/bpf/verifier.c | 12 +++ 3 files changed, 83 insertions(+), 50 deletions(-) diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c index f272247..eb30c52 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c @@ -612,6 +612,13 @@ static void wrp_immed(struct nfp_prog *nfp_prog, swreg dst, u32 imm) } static void +wrp_zext(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst) +{ + if (meta->flags & FLAG_INSN_DO_ZEXT) + wrp_immed(nfp_prog, reg_both(dst + 1), 0); +} + +static void wrp_immed_relo(struct nfp_prog *nfp_prog, swreg dst, u32 imm, enum nfp_relo_type relo) { @@ -847,7 +854,8 @@ static int nfp_cpp_memcpy(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) } static int -data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size) +data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, swreg offset, + u8 dst_gpr, int size) { unsigned int i; u16 shift, sz; @@ -870,14 +878,15 @@ data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size) wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i)); if (i < 2) - wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0); + wrp_zext(nfp_prog, meta, dst_gpr); return 0; } static int -data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr, - swreg lreg, swreg rreg, int size, enum cmd_mode mode) +data_ld_host_order(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 dst_gpr, swreg lreg, swreg rreg, int size, + enum cmd_mode mode) { unsigned int i; u8 mask, sz; @@ -900,33 +909,34 @@ data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr, wrp_mov(nfp_prog, reg_both(dst_gpr + i), reg_xfer(i)); if (i < 2) - wrp_immed(nfp_prog, reg_both(dst_gpr + 1), 0); + wrp_zext(nfp_prog, 
meta, dst_gpr); return 0; } static int -data_ld_host_order_addr32(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset, - u8 dst_gpr, u8 size) +data_ld_host_order_addr32(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 src_gpr, swreg offset, u8 dst_gpr, u8 size) { - return data_ld_host_order(nfp_prog, dst_gpr, reg_a(src_gpr), offset, - size, CMD_MODE_32b); + return data_ld_host_order(nfp_prog, meta, dst_gpr, reg_a(src_gpr), + offset, size, CMD_MODE_32b); } static int -data_ld_host_order_addr40(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset, - u8 dst_gpr, u8 size) +data_ld_host_order_addr40(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u8 src_gpr, swreg offset, u8 dst_gpr, u8 size) { swreg rega, regb; addr40_offset(nfp_prog, src_gpr, offset, ®a, ®b); - return data_ld_host_order(nfp_prog, dst_gpr, rega, regb, + return data_ld_host_order(nfp_prog, meta, dst_gpr, rega, regb, size, CMD_MODE_40b_BA); } static int -construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size) +construct_data_ind_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u16 offset, u16 src, u8 size) { swreg tmp_reg; @@ -942,10 +952,12 @@ construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size) emit_br_relo(nfp_prog, BR_BLO, BR_OFF_RELO, 0, RELO_BR_GO_ABORT); /* Load data */ - return data_ld(nfp_prog, imm_b(nfp_prog), 0, size); + return data_ld(nfp_prog, meta, imm_b(nfp_prog), 0, size); } -static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size) +static int +construct_data_ld(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + u16 offset, u8 size) { swreg tmp_reg; @@ -956,7 +968,7 @@ static int construct_data_ld(struct nfp_prog *nfp_prog, u16 offset, u8 size) /* Load data */ tmp_reg = re_load_imm_any(nfp_prog, offset, imm_b(nfp_prog)); - return data_ld(nfp_prog, tmp_reg, 0, size); + return data_ld(nfp_prog, meta, tmp_reg, 0, size); } static int @@ -1193,7 +1205,7 @@ mem_op_stack(struct nfp_prog 
*nfp_prog, struct nfp_insn_meta *meta, } if (clr_gpr && size < 8) - wrp_immed(nfp_prog, reg_both(gpr + 1), 0); + wrp_zext(nfp_prog, meta, gpr); while (size) { u32 slice_end; @@ -1294,9 +1306,10 @@ wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, enum alu_op alu_op) { const struct bpf_insn *insn = &meta->insn; + u8 dst = insn->dst_reg * 2; - wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm); - wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0); + wrp_alu_imm(nfp_prog, dst, alu_op, insn->imm); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -1308,7 +1321,7 @@ wrp_alu32_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst = meta->insn.dst_reg * 2, src = meta->insn.src_reg * 2; emit_alu(nfp_prog, reg_both(dst), reg_a(dst), alu_op, reg_b(src)); - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2385,12 +2398,14 @@ static int neg_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) u8 dst = meta->insn.dst_reg * 2; emit_alu(nfp_prog, reg_both(dst), reg_imm(0), ALU_OP_SUB, reg_b(dst)); - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } -static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +static int +__ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst, + u8 shift_amt) { if (shift_amt) { /* Set signedness bit (MSB of result). 
*/ @@ -2399,7 +2414,7 @@ static int __ashr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, reg_b(dst), SHF_SC_R_SHF, shift_amt); } - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2414,7 +2429,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) umin = meta->umin_src; umax = meta->umax_src; if (umin == umax) - return __ashr_imm(nfp_prog, dst, umin); + return __ashr_imm(nfp_prog, meta, dst, umin); src = insn->src_reg * 2; /* NOTE: the first insn will set both indirect shift amount (source A) @@ -2423,7 +2438,7 @@ static int ashr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_b(dst)); emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_ASHR, reg_b(dst), SHF_SC_R_SHF); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2433,15 +2448,17 @@ static int ashr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) const struct bpf_insn *insn = &meta->insn; u8 dst = insn->dst_reg * 2; - return __ashr_imm(nfp_prog, dst, insn->imm); + return __ashr_imm(nfp_prog, meta, dst, insn->imm); } -static int __shr_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +static int +__shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst, + u8 shift_amt) { if (shift_amt) emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE, reg_b(dst), SHF_SC_R_SHF, shift_amt); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2450,7 +2467,7 @@ static int shr_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) const struct bpf_insn *insn = &meta->insn; u8 dst = insn->dst_reg * 2; - return __shr_imm(nfp_prog, dst, insn->imm); + return __shr_imm(nfp_prog, meta, dst, insn->imm); } static int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) @@ -2463,22 +2480,24 @@ static 
int shr_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) umin = meta->umin_src; umax = meta->umax_src; if (umin == umax) - return __shr_imm(nfp_prog, dst, umin); + return __shr_imm(nfp_prog, meta, dst, umin); src = insn->src_reg * 2; emit_alu(nfp_prog, reg_none(), reg_a(src), ALU_OP_OR, reg_imm(0)); emit_shf_indir(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE, reg_b(dst), SHF_SC_R_SHF); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } -static int __shl_imm(struct nfp_prog *nfp_prog, u8 dst, u8 shift_amt) +static int +__shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, u8 dst, + u8 shift_amt) { if (shift_amt) emit_shf(nfp_prog, reg_both(dst), reg_none(), SHF_OP_NONE, reg_b(dst), SHF_SC_L_SHF, shift_amt); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2487,7 +2506,7 @@ static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) const struct bpf_insn *insn = &meta->insn; u8 dst = insn->dst_reg * 2; - return __shl_imm(nfp_prog, dst, insn->imm); + return __shl_imm(nfp_prog, meta, dst, insn->imm); } static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) @@ -2500,11 +2519,11 @@ static int shl_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) umin = meta->umin_src; umax = meta->umax_src; if (umin == umax) - return __shl_imm(nfp_prog, dst, umin); + return __shl_imm(nfp_prog, meta, dst, umin); src = insn->src_reg * 2; shl_reg64_lt32_low(nfp_prog, dst, src); - wrp_immed(nfp_prog, reg_both(dst + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2566,34 +2585,34 @@ static int imm_ld8(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) static int data_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return construct_data_ld(nfp_prog, meta->insn.imm, 1); + return construct_data_ld(nfp_prog, meta, meta->insn.imm, 1); } static int data_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return 
construct_data_ld(nfp_prog, meta->insn.imm, 2); + return construct_data_ld(nfp_prog, meta, meta->insn.imm, 2); } static int data_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return construct_data_ld(nfp_prog, meta->insn.imm, 4); + return construct_data_ld(nfp_prog, meta, meta->insn.imm, 4); } static int data_ind_ld1(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return construct_data_ind_ld(nfp_prog, meta->insn.imm, + return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm, meta->insn.src_reg * 2, 1); } static int data_ind_ld2(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return construct_data_ind_ld(nfp_prog, meta->insn.imm, + return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm, meta->insn.src_reg * 2, 2); } static int data_ind_ld4(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) { - return construct_data_ind_ld(nfp_prog, meta->insn.imm, + return construct_data_ind_ld(nfp_prog, meta, meta->insn.imm, meta->insn.src_reg * 2, 4); } @@ -2632,7 +2651,7 @@ static int mem_ldx_skb(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, return -EOPNOTSUPP; } - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2658,7 +2677,7 @@ static int mem_ldx_xdp(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, return -EOPNOTSUPP; } - wrp_immed(nfp_prog, reg_both(meta->insn.dst_reg * 2 + 1), 0); + wrp_zext(nfp_prog, meta, dst); return 0; } @@ -2671,7 +2690,7 @@ mem_ldx_data(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog)); - return data_ld_host_order_addr32(nfp_prog, meta->insn.src_reg * 2, + return data_ld_host_order_addr32(nfp_prog, meta, meta->insn.src_reg * 2, tmp_reg, meta->insn.dst_reg * 2, size); } @@ -2683,7 +2702,7 @@ mem_ldx_emem(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog)); - return 
data_ld_host_order_addr40(nfp_prog, meta->insn.src_reg * 2, + return data_ld_host_order_addr40(nfp_prog, meta, meta->insn.src_reg * 2, tmp_reg, meta->insn.dst_reg * 2, size); } @@ -2744,7 +2763,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog, wrp_reg_subpart(nfp_prog, dst_lo, src_lo, len_lo, off); if (!len_mid) { - wrp_immed(nfp_prog, dst_hi, 0); + wrp_zext(nfp_prog, meta, dst_gpr); return 0; } @@ -2752,7 +2771,7 @@ mem_ldx_data_from_pktcache_unaligned(struct nfp_prog *nfp_prog, if (size <= REG_WIDTH) { wrp_reg_or_subpart(nfp_prog, dst_lo, src_mid, len_mid, len_lo); - wrp_immed(nfp_prog, dst_hi, 0); + wrp_zext(nfp_prog, meta, dst_gpr); } else { swreg src_hi = reg_xfer(idx + 2); @@ -2783,10 +2802,10 @@ mem_ldx_data_from_pktcache_aligned(struct nfp_prog *nfp_prog, if (size < REG_WIDTH) { wrp_reg_subpart(nfp_prog, dst_lo, src_lo, size, 0); - wrp_immed(nfp_prog, dst_hi, 0); + wrp_zext(nfp_prog, meta, dst_gpr); } else if (size == REG_WIDTH) { wrp_mov(nfp_prog, dst_lo, src_lo); - wrp_immed(nfp_prog, dst_hi, 0); + wrp_zext(nfp_prog, meta, dst_gpr); } else { swreg src_hi = reg_xfer(idx + 1); diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h index b25a482..7369bdf 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/main.h +++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h @@ -249,6 +249,8 @@ struct nfp_bpf_reg_state { #define FLAG_INSN_SKIP_PREC_DEPENDENT BIT(4) /* Instruction is optimized by the verifier */ #define FLAG_INSN_SKIP_VERIFIER_OPT BIT(5) +/* Instruction needs to zero extend to high 32-bit */ +#define FLAG_INSN_DO_ZEXT BIT(6) #define FLAG_INSN_SKIP_MASK (FLAG_INSN_SKIP_NOOP | \ FLAG_INSN_SKIP_PREC_DEPENDENT | \ diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c index 36f56eb..e92ee51 100644 --- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c +++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c @@ -744,6 +744,17 @@ 
static unsigned int nfp_bpf_get_stack_usage(struct nfp_prog *nfp_prog) goto continue_subprog; } +static void nfp_bpf_insn_flag_zext(struct nfp_prog *nfp_prog, + struct bpf_insn_aux_data *aux) +{ + struct nfp_insn_meta *meta; + + list_for_each_entry(meta, &nfp_prog->insns, l) { + if (aux[meta->n].zext_dst) + meta->flags |= FLAG_INSN_DO_ZEXT; + } +} + int nfp_bpf_finalize(struct bpf_verifier_env *env) { struct bpf_subprog_info *info; @@ -784,6 +795,7 @@ int nfp_bpf_finalize(struct bpf_verifier_env *env) return -EOPNOTSUPP; } + nfp_bpf_insn_flag_zext(nfp_prog, env->insn_aux_data); return 0; } From patchwork Tue Mar 26 18:05:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1065902 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="lPjZbeZ8"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44TJxP0Y6hz9sTK for ; Wed, 27 Mar 2019 05:07:05 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732530AbfCZSHD (ORCPT ); Tue, 26 Mar 2019 14:07:03 -0400 Received: from mail-wm1-f68.google.com ([209.85.128.68]:38570 "EHLO mail-wm1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732323AbfCZSG5 (ORCPT ); Tue, 26 Mar 2019 14:06:57 -0400 Received: by mail-wm1-f68.google.com with SMTP id 
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 15/16] selftests: bpf: new field "xlated_insns" for insn scan test after verification
Date: Tue, 26 Mar 2019 18:05:38 +0000
Message-Id: <1553623539-15474-16-git-send-email-jiong.wang@netronome.com>

Instruction scan is needed to test the new zero extension insertion pass. This patch introduces the new "xlated_insns" field. Once it is set, instructions from "xlated_insns" are compared with the instruction sequence returned by the prog query syscall after verification. Failure is reported if there is a mismatch, meaning the transformations haven't happened as expected.

One thing to note is we want to always run such tests, but the test host does NOT necessarily have this optimization enabled. So, we need to set the sysctl variable "bpf_jit_32bit_opt" to true manually before running such tests and restore its value afterwards. Also, we disable JIT blinding, which could cause trouble when matching instructions. We only run insn scan tests under privileged mode.
Signed-off-by: Jiong Wang --- tools/testing/selftests/bpf/test_verifier.c | 220 ++++++++++++++++++++++++++-- 1 file changed, 210 insertions(+), 10 deletions(-) diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 19b5d03..aeb2566 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -65,7 +65,8 @@ static int skips; struct bpf_test { const char *descr; - struct bpf_insn insns[MAX_INSNS]; + struct bpf_insn insns[MAX_INSNS]; + struct bpf_insn xlated_insns[MAX_INSNS]; int fixup_map_hash_8b[MAX_FIXUPS]; int fixup_map_hash_48b[MAX_FIXUPS]; int fixup_map_hash_16b[MAX_FIXUPS]; @@ -257,14 +258,33 @@ static struct bpf_test tests[] = { #undef FILL_ARRAY }; -static int probe_filter_length(const struct bpf_insn *fp) +static int probe_filter_length_bidir(const struct bpf_insn *fp, bool reverse) { int len; - for (len = MAX_INSNS - 1; len > 0; --len) - if (fp[len].code != 0 || fp[len].imm != 0) + if (reverse) { + for (len = MAX_INSNS - 1; len > 0; --len) + if (fp[len].code != 0 || fp[len].imm != 0) + break; + return len + 1; + } + + for (len = 0; len < MAX_INSNS; len++) + if (fp[len].code == 0 && fp[len].imm == 0) break; - return len + 1; + + return len; +} + +static int probe_filter_length(const struct bpf_insn *fp) +{ + return probe_filter_length_bidir(fp, true); +} + +static int probe_xlated_filter_length(const struct bpf_insn *fp) +{ + /* Translated insn array is very likely to be empty. 
*/ + return probe_filter_length_bidir(fp, false); } static bool skip_unsupported_map(enum bpf_map_type map_type) @@ -698,13 +718,130 @@ static int do_prog_test_run(int fd_prog, bool unpriv, uint32_t expected_val, return 0; } +static inline __u64 ptr_to_u64(const void *ptr) +{ + return (__u64)(unsigned long)ptr; +} + +static int read_bpf_procfs(const char *name) +{ + char path[64], *endptr, *line = NULL; + size_t len = 0; + FILE *fd; + int res; + + snprintf(path, sizeof(path), "/proc/sys/net/core/%s", name); + + fd = fopen(path, "r"); + if (!fd) + return -1; + + res = getline(&line, &len, fd); + fclose(fd); + if (res < 0) + return -1; + + errno = 0; + res = strtol(line, &endptr, 10); + if (errno || *line == '\0' || *endptr != '\n') + res = -1; + free(line); + + return res; +} + +static int write_bpf_procfs(const char *name, const char value) +{ + char path[64]; + FILE *fd; + int res; + + snprintf(path, sizeof(path), "/proc/sys/net/core/%s", name); + + fd = fopen(path, "w"); + if (!fd) + return -1; + + res = fwrite(&value, 1, 1, fd); + fclose(fd); + if (res != 1) + return -1; + + return 0; +} + +static int check_xlated_insn(int fd_prog, struct bpf_test *test, int xlated_len) +{ + struct bpf_insn *xlated_insn_buf; + __u32 len, *member_len, buf_size; + struct bpf_prog_info info; + __u64 *member_ptr; + int err, idx; + + len = sizeof(info); + memset(&info, 0, sizeof(info)); + member_len = &info.xlated_prog_len; + member_ptr = &info.xlated_prog_insns; + err = bpf_obj_get_info_by_fd(fd_prog, &info, &len); + if (err) { + printf("FAIL\nFailed to get prog info '%s'!\n", + strerror(errno)); + return -1; + } + if (!*member_len) { + printf("FAIL\nNo xlated insn returned!\n"); + return -1; + } + buf_size = *member_len; + xlated_insn_buf = malloc(buf_size); + if (!xlated_insn_buf) { + printf("FAIL\nFailed to alloc xlated insn buffer!\n"); + return -1; + } + + memset(&info, 0, sizeof(info)); + *member_ptr = ptr_to_u64(xlated_insn_buf); + *member_len = buf_size; + err = 
bpf_obj_get_info_by_fd(fd_prog, &info, &len); + if (err) { + printf("FAIL\nFailed to get prog info '%s'!\n", + strerror(errno)); + return -1; + } + if (*member_len > buf_size) { + printf("FAIL\nToo many xlated insns returned!\n"); + return -1; + } + for (idx = 0; idx < xlated_len; idx++) { + struct bpf_insn expect_insn = test->xlated_insns[idx]; + struct bpf_insn got_insn = xlated_insn_buf[idx]; + bool match_fail; + + /* Verifier will rewrite call imm/offset, just compare code. */ + if (expect_insn.code == (BPF_JMP | BPF_CALL)) + match_fail = got_insn.code != expect_insn.code; + else /* Full match. */ + match_fail = memcmp(&got_insn, &expect_insn, + sizeof(struct bpf_insn)); + + if (match_fail) { + printf("FAIL\nFailed to match xlated insns[%d]\n", idx); + return -1; + } + } + + return 0; +} + static void do_test_single(struct bpf_test *test, bool unpriv, int *passes, int *errors) { - int fd_prog, expected_ret, alignment_prevented_execution; - int prog_len, prog_type = test->prog_type; + int fd_prog = -1, expected_ret, alignment_prevented_execution; + int original_jit_blind = 0, original_jit_32bit_opt = 0; + int xlated_len, prog_len, prog_type = test->prog_type; struct bpf_insn *prog = test->insns; int run_errs, run_successes; + bool has_xlated_insn_test; int map_fds[MAX_NR_MAPS]; const char *expected_err; int fixup_skips; @@ -724,6 +861,45 @@ static void do_test_single(struct bpf_test *test, bool unpriv, if (fixup_skips != skips) return; prog_len = probe_filter_length(prog); + xlated_len = probe_xlated_filter_length(test->xlated_insns); + expected_ret = unpriv && test->result_unpriv != UNDEF ? + test->result_unpriv : test->result; + has_xlated_insn_test = expected_ret == ACCEPT && xlated_len; + if (!unpriv) { + /* Disable 32-bit optimization for all the other tests. The + * inserted shifts could break some test assumption, for + * example, those hard coded map fixup insn indexes. 
+ */ + char opt_enable = '0'; + + original_jit_32bit_opt = read_bpf_procfs("bpf_jit_32bit_opt"); + if (original_jit_32bit_opt < 0) { + printf("FAIL\nRead jit 32bit opt proc info\n"); + goto fail; + } + /* Disable JIT blinding and enable 32-bit optimization when + * there is translated insn match test. + */ + if (has_xlated_insn_test) { + original_jit_blind = read_bpf_procfs("bpf_jit_harden"); + if (original_jit_blind < 0) { + printf("FAIL\nRead jit blinding proc info\n"); + goto fail; + } + err = write_bpf_procfs("bpf_jit_harden", '0'); + if (err < 0) { + printf("FAIL\nDisable jit blinding\n"); + goto fail; + } + + opt_enable = '1'; + } + err = write_bpf_procfs("bpf_jit_32bit_opt", opt_enable); + if (err < 0) { + printf("FAIL\nSetting jit 32-bit opt enablement\n"); + goto fail; + } + } pflags = 0; if (test->flags & F_LOAD_WITH_STRICT_ALIGNMENT) @@ -738,8 +914,6 @@ static void do_test_single(struct bpf_test *test, bool unpriv, goto close_fds; } - expected_ret = unpriv && test->result_unpriv != UNDEF ? - test->result_unpriv : test->result; expected_err = unpriv && test->errstr_unpriv ? test->errstr_unpriv : test->errstr; @@ -756,6 +930,31 @@ static void do_test_single(struct bpf_test *test, bool unpriv, (test->flags & F_NEEDS_EFFICIENT_UNALIGNED_ACCESS)) alignment_prevented_execution = 1; #endif + + if (!unpriv) { + /* Restore 32-bit optimization variable . */ + err = write_bpf_procfs("bpf_jit_32bit_opt", + '0' + original_jit_32bit_opt); + if (err < 0) { + printf("FAIL\nRestore jit 32-bit opt\n"); + goto fail; + } + if (has_xlated_insn_test) { + char c = '0' + original_jit_blind; + + /* Restore JIT blinding variable . */ + err = write_bpf_procfs("bpf_jit_harden", c); + if (err < 0) { + printf("FAIL\nRestore jit blinding\n"); + goto fail; + } + /* Do xlated insn comparisons. 
*/ + err = check_xlated_insn(fd_prog, test, + xlated_len); + if (err < 0) + goto fail; + } + } } else { if (fd_prog >= 0) { printf("FAIL\nUnexpected success to load!\n"); @@ -836,8 +1035,9 @@ static void do_test_single(struct bpf_test *test, bool unpriv, sched_yield(); return; fail_log: - (*errors)++; printf("%s", bpf_vlog); +fail: + (*errors)++; goto close_fds; } From patchwork Tue Mar 26 18:05:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jiong Wang X-Patchwork-Id: 1065896 X-Patchwork-Delegate: bpf@iogearbox.net Return-Path: X-Original-To: incoming-bpf@patchwork.ozlabs.org Delivered-To: patchwork-incoming-bpf@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=none (mailfrom) smtp.mailfrom=vger.kernel.org (client-ip=209.132.180.67; helo=vger.kernel.org; envelope-from=bpf-owner@vger.kernel.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=netronome.com Authentication-Results: ozlabs.org; dkim=pass (2048-bit key; unprotected) header.d=netronome-com.20150623.gappssmtp.com header.i=@netronome-com.20150623.gappssmtp.com header.b="ZcnN7tAx"; dkim-atps=neutral Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by ozlabs.org (Postfix) with ESMTP id 44TJxJ021yz9sTF for ; Wed, 27 Mar 2019 05:07:00 +1100 (AEDT) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732496AbfCZSG7 (ORCPT ); Tue, 26 Mar 2019 14:06:59 -0400 Received: from mail-wm1-f67.google.com ([209.85.128.67]:50295 "EHLO mail-wm1-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732550AbfCZSG7 (ORCPT ); Tue, 26 Mar 2019 14:06:59 -0400 Received: by mail-wm1-f67.google.com with SMTP id z11so13781791wmi.0 for ; Tue, 26 Mar 2019 11:06:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=netronome-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references; 
From: Jiong Wang
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org, oss-drivers@netronome.com, Jiong Wang
Subject: [PATCH/RFC bpf-next 16/16] selftests: bpf: unit testcases for zero extension insertion pass
Date: Tue, 26 Mar 2019 18:05:39 +0000
Message-Id: <1553623539-15474-17-git-send-email-jiong.wang@netronome.com>

This patch adds some unit testcases. There are a couple of code paths inside the verifier doing register read/write marking; these are the places that could trigger the zero extension insertion logic. Create one test for each of them. A couple of testcases for complex CFGs are also included. They cover register read propagation during path pruning etc.

Signed-off-by: Jiong Wang
---
 tools/testing/selftests/bpf/verifier/zext.c | 651 ++++++++++++++++++++++++++++
 1 file changed, 651 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/verifier/zext.c

diff --git a/tools/testing/selftests/bpf/verifier/zext.c b/tools/testing/selftests/bpf/verifier/zext.c
new file mode 100644
index 0000000..b45a429
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/zext.c
@@ -0,0 +1,651 @@
+/* There are a couple of code paths inside verifier doing register
+ * read/write marking. Create one test for each.
+ */ +{ + "zext: basic 1", + .insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), + BPF_EXIT_INSN(), + }, + .xlated_insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), + BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .retval = 0, +}, +{ + "zext: basic 2", + .insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL), + BPF_MOV32_IMM(BPF_REG_0, 1), + BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .xlated_insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL), + BPF_MOV32_IMM(BPF_REG_0, 1), + BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), + BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .retval = -1, +}, +{ + "zext: basic 3", + .insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), + BPF_EXIT_INSN(), + }, + .xlated_insns = { + BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL), + BPF_MOV32_IMM(BPF_REG_0, 0), + BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32), + BPF_EXIT_INSN(), + }, + .result = ACCEPT, + .retval = 0, +}, +{ + "zext: basic 4", + .insns = { + BPF_LD_IMM64(BPF_REG_1, 0x300000001ULL), + BPF_MOV32_IMM(BPF_REG_1, 1), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32), + BPF_MOV32_IMM(BPF_REG_2, 2), + BPF_JMP_REG(BPF_JSLE, BPF_REG_1, BPF_REG_2, 2), + BPF_MOV64_IMM(BPF_REG_0, 3), + BPF_EXIT_INSN(), + BPF_MOV32_IMM(BPF_REG_0, 4), + BPF_EXIT_INSN(), + }, + .xlated_insns = { + BPF_LD_IMM64(BPF_REG_1, 0x300000001ULL), + BPF_MOV32_IMM(BPF_REG_1, 1), + BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32), + BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32), + BPF_ALU64_IMM(BPF_RSH, 
+		      BPF_REG_1, 32),
+	BPF_MOV32_IMM(BPF_REG_2, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_2, 32),
+	BPF_JMP_REG(BPF_JSLE, BPF_REG_1, BPF_REG_2, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 3),
+	BPF_EXIT_INSN(),
+	BPF_MOV32_IMM(BPF_REG_0, 4),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 4,
+},
+{
+	"zext: basic 5",
+	.insns = {
+	BPF_LD_IMM64(BPF_REG_1, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_1, 0),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_LD_IMM64(BPF_REG_1, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	"zext: basic 6",
+	.insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_MOV32_IMM(BPF_REG_1, 1),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_MOV32_IMM(BPF_REG_1, 1),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+},
+{
+	"zext: ret from main",
+	.insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 1,
+},
+{
+	"zext: ret from helper",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0,
+		     0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	/* Shouldn't do zext. */
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	"zext: xadd",
+	.insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_STX_XADD(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
+	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.retval = 1,
+},
+{
+	"zext: ld_abs ind",
+	.insns = {
+	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+	BPF_LD_IMM64(BPF_REG_8, 0x100000000ULL),
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	BPF_LD_IND(BPF_B, BPF_REG_8, 0),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
+	BPF_LD_IMM64(BPF_REG_8, 0x100000000ULL),
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	},
+	.data = {
+	10, 20, 30, 40, 50,
+	},
+	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+	.result = ACCEPT,
+	.retval = 10,
+},
+{
+	"zext: multi paths, all 32-bit use",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		     BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV32_REG(BPF_REG_7, BPF_REG_8),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV32_REG(BPF_REG_7, BPF_REG_8),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 2,
+},
+{
+	"zext: multi paths, partial 64-bit use",
+	.insns = {
+	BPF_LD_IMM64(BPF_REG_8, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_8, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_LD_IMM64(BPF_REG_8, 0x100000001ULL),
+	BPF_MOV32_IMM(BPF_REG_8, 0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	"zext: multi paths, 32-bit def override",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV32_IMM(BPF_REG_8,
+		      2),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_8),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_8),
+	BPF_EXIT_INSN(),
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_8),
+	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 2,
+},
+{
+	/* Diamond CFG
+	 *
+	 *        -----
+	 *       | BB0 |
+	 *        -----
+	 *         /\
+	 *        /  \
+	 *       /    \
+	 *   -----    -----
+	 *  | BB1 |  | BB2 |  u32 def
+	 *   -----    -----
+	 *      \      /
+	 *       \    /  -> pruned, but u64 read should propagate backward
+	 *        \  /
+	 *        -----
+	 *       | BB3 |  u64 read
+	 *        -----
+	 */
+	"zext: complex cfg 1",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	/* BB3, 64-bit R8 read should be prop backward.
+	 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	/* BB3 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	/* Diamond CFG
+	 *
+	 *        -----
+	 *       | BB0 |  u32 def
+	 *        -----
+	 *         /\
+	 *        /  \
+	 *       /    \
+	 *   -----    -----
+	 *  | BB1 |  | BB2 |  u32 def
+	 *   -----    -----
+	 *      \      /
+	 *       \    /  -> pruned, but u64 read should propagate backward
+	 *        \  /
+	 *        -----
+	 *       | BB3 |  u64 read
+	 *        -----
+	 */
+	"zext: complex cfg 2",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	/* BB3, 64-bit R8 read should be prop backward.
+	 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	/* BB3 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 2,
+},
+{
+	/* Diamond CFG
+	 *
+	 *                 -----
+	 *                | BB0 |  u32 def A
+	 *                 -----
+	 *                  /\
+	 *                 /  \
+	 *                /    \
+	 *            -----    -----
+	 * u64 def A | BB1 |  | BB2 |  u32 def B
+	 *            -----    -----
+	 *               \      /
+	 *                \    /
+	 *                 \  /
+	 *                 -----
+	 *                | BB3 |  u64 read A and B
+	 *                 -----
+	 */
+	"zext: complex cfg 3",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_MOV64_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	/* BB3, 64-bit R8 read should be prop backward.
+	 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_MOV64_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	/* BB3 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 2,
+},
+{
+	/* Diamond CFG
+	 *
+	 *                 -----
+	 *                | BB0 |  u32 def A
+	 *                 -----
+	 *                  /\
+	 *                 /  \
+	 *                /    \
+	 *            -----    -----
+	 * u64 def A | BB1 |  | BB2 |  u64 def A and u32 def B
+	 *            -----    -----
+	 *               \      /
+	 *                \    /
+	 *                 \  /
+	 *                 -----
+	 *                | BB3 |  u64 read A and B
+	 *                 -----
+	 */
+	"zext: complex cfg 4",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_MOV64_IMM(BPF_REG_6, 3),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_MOV64_IMM(BPF_REG_6, 3),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	/* BB3, 64-bit R8 read should be prop backward.
+	 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+	BPF_MOV32_IMM(BPF_REG_6, 2),
+	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	/* BB1 */
+	BPF_MOV64_IMM(BPF_REG_8, 2),
+	BPF_MOV64_IMM(BPF_REG_6, 3),
+	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
+	/* BB2 */
+	BPF_MOV32_IMM(BPF_REG_8, 2),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_8, 32),
+	BPF_MOV64_IMM(BPF_REG_6, 3),
+	/* BB3 */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+	.retval = 3,
+},
+{
+	"zext: callee-saved",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_6, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	/* callee */
+	BPF_MOV32_IMM(BPF_REG_6, 1),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	/* caller u32 def should be zero extended. */
+	BPF_MOV32_IMM(BPF_REG_6, 0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	/* u64 use. */
+	BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	/* callee u32 def shouldn't be affected. */
+	BPF_MOV32_IMM(BPF_REG_6, 1),
+	BPF_EXIT_INSN(),
+	},
+	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
+	.result_unpriv = REJECT,
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	"zext: arg regs",
+	.insns = {
+	BPF_MOV32_IMM(BPF_REG_1, 0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	/* callee */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	/* caller u32 def should be zero extended.
+	 */
+	BPF_MOV32_IMM(BPF_REG_1, 0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	/* u64 use. */
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_EXIT_INSN(),
+	/* callee u64 use on caller-saved reg. */
+	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
+	BPF_EXIT_INSN(),
+	},
+	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
+	.result_unpriv = REJECT,
+	.result = ACCEPT,
+	.retval = 0,
+},
+{
+	"zext: return arg",
+	.insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_MOV32_REG(BPF_REG_6, BPF_REG_0),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	/* callee 1 */
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* callee 2 */
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.xlated_insns = {
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 7),
+	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 1),
+	BPF_MOV32_REG(BPF_REG_6, BPF_REG_0),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_6, 32),
+	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
+	BPF_EXIT_INSN(),
+	/* callee 1 */
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	/* callee 2 */
+	BPF_MOV32_IMM(BPF_REG_0, 1),
+	BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32),
+	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
+	BPF_EXIT_INSN(),
+	},
+	.errstr_unpriv = "function calls to other bpf functions are allowed for root only",
+	.result_unpriv = REJECT,
+	.result = ACCEPT,
+	.retval = 3,
+},