From patchwork Fri Nov  3 20:56:25 2017
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 834099
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: netdev@vger.kernel.org
Cc: oss-drivers@netronome.com, alexei.starovoitov@gmail.com,
	daniel@iogearbox.net, Jakub Kicinski
Subject: [PATCH net-next v2 10/15] nfp: bpf: refactor offload logic
Date: Fri, 3 Nov 2017 13:56:25 -0700
Message-Id: <20171103205630.1083-11-jakub.kicinski@netronome.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171103205630.1083-1-jakub.kicinski@netronome.com>
References: <20171103205630.1083-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

We currently create a fake cls_bpf offload object when we want to
offload XDP.  Simplify and clarify the code by moving the TC/XDP
specific logic out of the common offload code.  This is easy now that
we don't support legacy TC actions.  We only need the bpf program and
the state of the skip_sw flag.

Temporarily set @code to NULL in nfp_net_bpf_offload(); compilers seem
to have trouble recognizing it's always initialized.  The next patches
will eliminate that variable.

Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
---
 drivers/net/ethernet/netronome/nfp/bpf/main.c    | 67 +++++++++++-----------
 drivers/net/ethernet/netronome/nfp/bpf/main.h    |  4 +-
 drivers/net/ethernet/netronome/nfp/bpf/offload.c | 73 ++++++++++--------------
 3 files changed, 67 insertions(+), 77 deletions(-)
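
As a quick reference (paraphrased from the hunks below, not quoted from
the tree; the parameter comments are a summary, not the author's), the
refactored entry point boils down to:

    /* prog        - BPF program to offload, or NULL to only unload
     * old_prog    - a program is already loaded and must be stopped first
     * sw_fallback - the stack can still run the program in software,
     *               i.e. TCA_CLS_FLAGS_SKIP_SW was not requested
     */
    int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
                            bool old_prog, bool sw_fallback);

The XDP path calls this as nfp_net_bpf_offload(nn, prog, running, true),
while the cls_bpf block callback maps REPLACE, ADD and DESTROY onto
(cls_bpf->prog, true, !skip_sw), (cls_bpf->prog, false, !skip_sw) and
(NULL, true, !skip_sw) respectively, as the first hunk below shows.
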
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.c b/drivers/net/ethernet/netronome/nfp/bpf/main.c
index 2ff97f12c160..9e1286346d42 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.c
@@ -54,28 +54,25 @@ static int
 nfp_bpf_xdp_offload(struct nfp_app *app, struct nfp_net *nn,
                     struct bpf_prog *prog)
 {
-        struct tc_cls_bpf_offload cmd = {
-                .prog = prog,
-        };
+        bool running, xdp_running;
         int ret;
 
         if (!nfp_net_ebpf_capable(nn))
                 return -EINVAL;
 
-        if (nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF) {
-                if (!nn->dp.bpf_offload_xdp)
-                        return prog ? -EBUSY : 0;
-                cmd.command = prog ? TC_CLSBPF_REPLACE : TC_CLSBPF_DESTROY;
-        } else {
-                if (!prog)
-                        return 0;
-                cmd.command = TC_CLSBPF_ADD;
-        }
+        running = nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF;
+        xdp_running = running && nn->dp.bpf_offload_xdp;
+
+        if (!prog && !xdp_running)
+                return 0;
+        if (prog && running && !xdp_running)
+                return -EBUSY;
 
-        ret = nfp_net_bpf_offload(nn, &cmd);
+        ret = nfp_net_bpf_offload(nn, prog, running, true);
         /* Stop offload if replace not possible */
-        if (ret && cmd.command == TC_CLSBPF_REPLACE)
+        if (ret && prog)
                 nfp_bpf_xdp_offload(app, nn, NULL);
+
         nn->dp.bpf_offload_xdp = prog && !ret;
         return ret;
 }
@@ -96,27 +93,33 @@ static int nfp_bpf_setup_tc_block_cb(enum tc_setup_type type,
 {
         struct tc_cls_bpf_offload *cls_bpf = type_data;
         struct nfp_net *nn = cb_priv;
+        bool skip_sw;
+
+        if (type != TC_SETUP_CLSBPF ||
+            !tc_can_offload(nn->dp.netdev) ||
+            !nfp_net_ebpf_capable(nn) ||
+            cls_bpf->common.protocol != htons(ETH_P_ALL) ||
+            cls_bpf->common.chain_index)
+                return -EOPNOTSUPP;
+        if (nn->dp.bpf_offload_xdp)
+                return -EBUSY;
 
-        if (!tc_can_offload(nn->dp.netdev))
+        /* Only support TC direct action */
+        if (!cls_bpf->exts_integrated ||
+            tcf_exts_has_actions(cls_bpf->exts)) {
+                nn_err(nn, "only direct action with no legacy actions supported\n");
                 return -EOPNOTSUPP;
+        }
 
-        switch (type) {
-        case TC_SETUP_CLSBPF:
-                if (!nfp_net_ebpf_capable(nn) ||
-                    cls_bpf->common.protocol != htons(ETH_P_ALL) ||
-                    cls_bpf->common.chain_index)
-                        return -EOPNOTSUPP;
-                if (nn->dp.bpf_offload_xdp)
-                        return -EBUSY;
-
-                /* Only support TC direct action */
-                if (!cls_bpf->exts_integrated ||
-                    tcf_exts_has_actions(cls_bpf->exts)) {
-                        nn_err(nn, "only direct action with no legacy actions supported\n");
-                        return -EOPNOTSUPP;
-                }
-
-                return nfp_net_bpf_offload(nn, cls_bpf);
+        skip_sw = !!(cls_bpf->gen_flags & TCA_CLS_FLAGS_SKIP_SW);
+
+        switch (cls_bpf->command) {
+        case TC_CLSBPF_REPLACE:
+                return nfp_net_bpf_offload(nn, cls_bpf->prog, true, !skip_sw);
+        case TC_CLSBPF_ADD:
+                return nfp_net_bpf_offload(nn, cls_bpf->prog, false, !skip_sw);
+        case TC_CLSBPF_DESTROY:
+                return nfp_net_bpf_offload(nn, NULL, true, !skip_sw);
         default:
                 return -EOPNOTSUPP;
         }
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index 9f0df6a9786d..6dddab95d57a 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -181,8 +181,8 @@ nfp_bpf_jit(struct bpf_prog *filter, void *prog,
 int nfp_prog_verify(struct nfp_prog *nfp_prog, struct bpf_prog *prog);
 
 struct nfp_net;
-struct tc_cls_bpf_offload;
 
-int nfp_net_bpf_offload(struct nfp_net *nn, struct tc_cls_bpf_offload *cls_bpf);
+int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
+                        bool old_prog, bool sw_fallback);
 
 #endif
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
index 268ba1ba82db..c09efa1a9649 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
@@ -52,8 +52,7 @@
 #include "../nfp_net.h"
 
 static int
-nfp_net_bpf_offload_prepare(struct nfp_net *nn,
-                            struct tc_cls_bpf_offload *cls_bpf,
+nfp_net_bpf_offload_prepare(struct nfp_net *nn, struct bpf_prog *prog,
                             struct nfp_bpf_result *res,
                             void **code, dma_addr_t *dma_addr, u16 max_instr)
 {
@@ -73,9 +72,9 @@ nfp_net_bpf_offload_prepare(struct nfp_net *nn,
         done_off = nn_readw(nn, NFP_NET_CFG_BPF_DONE);
 
         stack_size = nn_readb(nn, NFP_NET_CFG_BPF_STACK_SZ) * 64;
-        if (cls_bpf->prog->aux->stack_depth > stack_size) {
+        if (prog->aux->stack_depth > stack_size) {
                 nn_info(nn, "stack too large: program %dB > FW stack %dB\n",
-                        cls_bpf->prog->aux->stack_depth, stack_size);
+                        prog->aux->stack_depth, stack_size);
                 return -EOPNOTSUPP;
         }
 
@@ -83,8 +82,7 @@ nfp_net_bpf_offload_prepare(struct nfp_net *nn,
         if (!*code)
                 return -ENOMEM;
 
-        ret = nfp_bpf_jit(cls_bpf->prog, *code, start_off, done_off,
-                          max_instr, res);
+        ret = nfp_bpf_jit(prog, *code, start_off, done_off, max_instr, res);
         if (ret)
                 goto out;
 
@@ -96,13 +94,13 @@ nfp_net_bpf_offload_prepare(struct nfp_net *nn,
 }
 
 static void
-nfp_net_bpf_load_and_start(struct nfp_net *nn, u32 tc_flags,
+nfp_net_bpf_load_and_start(struct nfp_net *nn, bool sw_fallback,
                            void *code, dma_addr_t dma_addr,
                            unsigned int code_sz, unsigned int n_instr)
 {
         int err;
 
-        nn->dp.bpf_offload_skip_sw = !!(tc_flags & TCA_CLS_FLAGS_SKIP_SW);
+        nn->dp.bpf_offload_skip_sw = !sw_fallback;
 
         nn_writew(nn, NFP_NET_CFG_BPF_SIZE, n_instr);
         nn_writeq(nn, NFP_NET_CFG_BPF_ADDR, dma_addr);
@@ -134,7 +132,8 @@ static int nfp_net_bpf_stop(struct nfp_net *nn)
         return nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_GEN);
 }
 
-int nfp_net_bpf_offload(struct nfp_net *nn, struct tc_cls_bpf_offload *cls_bpf)
+int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
+                        bool old_prog, bool sw_fallback)
 {
         struct nfp_bpf_result res;
         dma_addr_t dma_addr;
@@ -142,49 +141,37 @@ int nfp_net_bpf_offload(struct nfp_net *nn, struct tc_cls_bpf_offload *cls_bpf)
         void *code;
         int err;
 
+        /* There is nothing stopping us from implementing seamless
+         * replace but the simple method of loading I adopted in
+         * the firmware does not handle atomic replace (i.e. we have to
+         * stop the BPF offload and re-enable it). Leaking-in a few
+         * frames which didn't have BPF applied in the hardware should
+         * be fine if software fallback is available, though.
+         */
+        if (prog && old_prog && nn->dp.bpf_offload_skip_sw)
+                return -EBUSY;
+
+        /* Something else is loaded, different program type? */
+        if (!old_prog && nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
+                return -EBUSY;
+
         max_instr = nn_readw(nn, NFP_NET_CFG_BPF_MAX_LEN);
+        code = NULL;
 
-        switch (cls_bpf->command) {
-        case TC_CLSBPF_REPLACE:
-                /* There is nothing stopping us from implementing seamless
-                 * replace but the simple method of loading I adopted in
-                 * the firmware does not handle atomic replace (i.e. we have to
-                 * stop the BPF offload and re-enable it). Leaking-in a few
-                 * frames which didn't have BPF applied in the hardware should
-                 * be fine if software fallback is available, though.
-                 */
-                if (nn->dp.bpf_offload_skip_sw)
-                        return -EBUSY;
-
-                err = nfp_net_bpf_offload_prepare(nn, cls_bpf, &res, &code,
+        if (prog) {
+                err = nfp_net_bpf_offload_prepare(nn, prog, &res, &code,
                                                   &dma_addr, max_instr);
                 if (err)
                         return err;
+        }
 
+        if (old_prog)
                 nfp_net_bpf_stop(nn);
-                nfp_net_bpf_load_and_start(nn, cls_bpf->gen_flags, code,
-                                           dma_addr, max_instr * sizeof(u64),
-                                           res.n_instr);
-                return 0;
-
-        case TC_CLSBPF_ADD:
-                if (nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
-                        return -EBUSY;
-
-                err = nfp_net_bpf_offload_prepare(nn, cls_bpf, &res, &code,
-                                                  &dma_addr, max_instr);
-                if (err)
-                        return err;
 
-                nfp_net_bpf_load_and_start(nn, cls_bpf->gen_flags, code,
+        if (prog)
+                nfp_net_bpf_load_and_start(nn, sw_fallback, code,
                                            dma_addr, max_instr * sizeof(u64),
                                            res.n_instr);
-                return 0;
 
-        case TC_CLSBPF_DESTROY:
-                return nfp_net_bpf_stop(nn);
-
-        default:
-                return -EOPNOTSUPP;
-        }
+        return 0;
 }
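
To make the final control flow easier to see than the interleaved hunk
lines, nfp_net_bpf_offload() should end up reading roughly as follows
once the patch is applied (reassembled from the added and context lines
above; declarations that do not appear in the hunks, such as the
u16 max_instr local, are inferred from their use rather than quoted):

    int nfp_net_bpf_offload(struct nfp_net *nn, struct bpf_prog *prog,
                            bool old_prog, bool sw_fallback)
    {
            struct nfp_bpf_result res;
            dma_addr_t dma_addr;
            u16 max_instr;
            void *code;
            int err;

            /* FW load is not atomic, so replacing while skip_sw is set
             * would leak frames; only allow it with a SW fallback.
             */
            if (prog && old_prog && nn->dp.bpf_offload_skip_sw)
                    return -EBUSY;

            /* Something else is loaded, different program type? */
            if (!old_prog && nn->dp.ctrl & NFP_NET_CFG_CTRL_BPF)
                    return -EBUSY;

            max_instr = nn_readw(nn, NFP_NET_CFG_BPF_MAX_LEN);
            code = NULL;

            if (prog) {
                    err = nfp_net_bpf_offload_prepare(nn, prog, &res, &code,
                                                      &dma_addr, max_instr);
                    if (err)
                            return err;
            }

            if (old_prog)
                    nfp_net_bpf_stop(nn);

            if (prog)
                    nfp_net_bpf_load_and_start(nn, sw_fallback, code,
                                               dma_addr,
                                               max_instr * sizeof(u64),
                                               res.n_instr);

            return 0;
    }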