From patchwork Fri Jan 12 04:29:16 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 859489
X-Patchwork-Delegate: bpf@iogearbox.net
From: Jakub Kicinski
To: alexei.starovoitov@gmail.com, daniel@iogearbox.net, davem@davemloft.net
Cc: netdev@vger.kernel.org, oss-drivers@netronome.com, tehnerd@fb.com,
 Jakub Kicinski
Subject: [PATCH bpf-next v2 14/15] nfp: bpf: add support for reading map memory
Date: Thu, 11 Jan 2018 20:29:16 -0800
Message-Id: <20180112042917.10348-15-jakub.kicinski@netronome.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180112042917.10348-1-jakub.kicinski@netronome.com>
References: <20180112042917.10348-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

Map memory needs to use 40 bit addressing.  Add handling of such
accesses.  Since 40 bit addresses are formed using both 32 bit
operands, we need to pre-calculate the actual address instead of
adding the offset inside the instruction, as we do in 32 bit mode.

Signed-off-by: Jakub Kicinski
Reviewed-by: Quentin Monnet
---
 drivers/net/ethernet/netronome/nfp/bpf/jit.c      | 77 ++++++++++++++++++++---
 drivers/net/ethernet/netronome/nfp/bpf/verifier.c |  8 +++
 2 files changed, 76 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index 77a5f35d7809..cdc949fabe98 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -553,27 +553,51 @@ wrp_reg_subpart(struct nfp_prog *nfp_prog, swreg dst, swreg src, u8 field_len,
 	emit_ld_field_any(nfp_prog, dst, mask, src, sc, offset * 8, true);
 }
 
+static void
+addr40_offset(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
+	      swreg *rega, swreg *regb)
+{
+	if (offset == reg_imm(0)) {
+		*rega = reg_a(src_gpr);
+		*regb = reg_b(src_gpr + 1);
+		return;
+	}
+
+	emit_alu(nfp_prog, imm_a(nfp_prog), reg_a(src_gpr), ALU_OP_ADD, offset);
+	emit_alu(nfp_prog, imm_b(nfp_prog), reg_b(src_gpr + 1), ALU_OP_ADD_C,
+		 reg_imm(0));
+	*rega = imm_a(nfp_prog);
+	*regb = imm_b(nfp_prog);
+}
+
 /* NFP has Command Push Pull bus which supports bluk memory operations. */
 static int nfp_cpp_memcpy(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta)
 {
 	bool descending_seq = meta->ldst_gather_len < 0;
 	s16 len = abs(meta->ldst_gather_len);
 	swreg src_base, off;
+	bool src_40bit_addr;
 	unsigned int i;
 	u8 xfer_num;
 
 	off = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
+	src_40bit_addr = meta->ptr.type == PTR_TO_MAP_VALUE;
 	src_base = reg_a(meta->insn.src_reg * 2);
 	xfer_num = round_up(len, 4) / 4;
 
+	if (src_40bit_addr)
+		addr40_offset(nfp_prog, meta->insn.src_reg, off, &src_base,
+			      &off);
+
 	/* Setup PREV_ALU fields to override memory read length. */
 	if (len > 32)
 		wrp_immed(nfp_prog, reg_none(),
 			  CMD_OVE_LEN | FIELD_PREP(CMD_OV_LEN, xfer_num - 1));
 
 	/* Memory read from source addr into transfer-in registers. */
-	emit_cmd_any(nfp_prog, CMD_TGT_READ32_SWAP, CMD_MODE_32b, 0, src_base,
-		     off, xfer_num - 1, true, len > 32);
+	emit_cmd_any(nfp_prog, CMD_TGT_READ32_SWAP,
+		     src_40bit_addr ? CMD_MODE_40b_BA : CMD_MODE_32b, 0,
+		     src_base, off, xfer_num - 1, true, len > 32);
 
 	/* Move from transfer-in to transfer-out. */
 	for (i = 0; i < xfer_num; i++)
@@ -711,20 +735,20 @@ data_ld(struct nfp_prog *nfp_prog, swreg offset, u8 dst_gpr, int size)
 }
 
 static int
-data_ld_host_order(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
-		   u8 dst_gpr, int size)
+data_ld_host_order(struct nfp_prog *nfp_prog, u8 dst_gpr,
+		   swreg lreg, swreg rreg, int size, enum cmd_mode mode)
 {
 	unsigned int i;
 	u8 mask, sz;
 
-	/* We load the value from the address indicated in @offset and then
+	/* We load the value from the address indicated in rreg + lreg and then
 	 * mask out the data we don't need.  Note: this is little endian!
 	 */
 	sz = max(size, 4);
 	mask = size < 4 ? GENMASK(size - 1, 0) : 0;
 
-	emit_cmd(nfp_prog, CMD_TGT_READ32_SWAP, CMD_MODE_32b, 0,
-		 reg_a(src_gpr), offset, sz / 4 - 1, true);
+	emit_cmd(nfp_prog, CMD_TGT_READ32_SWAP, mode, 0,
+		 lreg, rreg, sz / 4 - 1, true);
 
 	i = 0;
 	if (mask)
@@ -740,6 +764,26 @@ data_ld_host_order(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
 	return 0;
 }
 
+static int
+data_ld_host_order_addr32(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
+			  u8 dst_gpr, u8 size)
+{
+	return data_ld_host_order(nfp_prog, dst_gpr, reg_a(src_gpr), offset,
+				  size, CMD_MODE_32b);
+}
+
+static int
+data_ld_host_order_addr40(struct nfp_prog *nfp_prog, u8 src_gpr, swreg offset,
+			  u8 dst_gpr, u8 size)
+{
+	swreg rega, regb;
+
+	addr40_offset(nfp_prog, src_gpr, offset, &rega, &regb);
+
+	return data_ld_host_order(nfp_prog, dst_gpr, rega, regb,
+				  size, CMD_MODE_40b_BA);
+}
+
 static int
 construct_data_ind_ld(struct nfp_prog *nfp_prog, u16 offset, u16 src, u8 size)
 {
@@ -1778,8 +1822,20 @@ mem_ldx_data(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 
 	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
 
-	return data_ld_host_order(nfp_prog, meta->insn.src_reg * 2, tmp_reg,
-				  meta->insn.dst_reg * 2, size);
+	return data_ld_host_order_addr32(nfp_prog, meta->insn.src_reg * 2,
+					 tmp_reg, meta->insn.dst_reg * 2, size);
+}
+
+static int
+mem_ldx_emem(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
+	     unsigned int size)
+{
+	swreg tmp_reg;
+
+	tmp_reg = re_load_imm_any(nfp_prog, meta->insn.off, imm_b(nfp_prog));
+
+	return data_ld_host_order_addr40(nfp_prog, meta->insn.src_reg * 2,
+					 tmp_reg, meta->insn.dst_reg * 2, size);
 }
 
 static int
@@ -1803,6 +1859,9 @@ mem_ldx(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 		return mem_ldx_stack(nfp_prog, meta, size,
 				     meta->ptr.off + meta->ptr.var_off.value);
 
+	if (meta->ptr.type == PTR_TO_MAP_VALUE)
+		return mem_ldx_emem(nfp_prog, meta, size);
+
 	return -EOPNOTSUPP;
 }
 
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index f867577cc315..741438896cc7 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -249,6 +249,7 @@ nfp_bpf_check_ptr(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 
 	if (reg->type != PTR_TO_CTX &&
 	    reg->type != PTR_TO_STACK &&
+	    reg->type != PTR_TO_MAP_VALUE &&
 	    reg->type != PTR_TO_PACKET) {
 		pr_vlog(env, "unsupported ptr type: %d\n", reg->type);
 		return -EINVAL;
@@ -260,6 +261,13 @@ nfp_bpf_check_ptr(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta,
 			return err;
 	}
 
+	if (reg->type == PTR_TO_MAP_VALUE) {
+		if (is_mbpf_store(meta)) {
+			pr_info("map writes not supported\n");
+			return -EOPNOTSUPP;
+		}
+	}
+
 	if (meta->ptr.type != NOT_INIT && meta->ptr.type != reg->type) {
 		pr_vlog(env, "ptr type changed for instruction %d -> %d\n",
 			meta->ptr.type, reg->type);
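
A side note for readers without the NFP data sheet at hand: the 40 bit pointer
lives in a register pair, low 32 bits in src_gpr and the upper bits in
src_gpr + 1, so adding the instruction offset needs an explicit carry into the
high word, which is what addr40_offset() emits as ALU_OP_ADD followed by
ALU_OP_ADD_C. The stand-alone C sketch below is not part of the patch (the
function name and test values are made up for illustration); it only models
that carry propagation on the host:

/* Illustrative only, not driver code: a 40-bit address held as two
 * 32-bit operands (low word plus the upper 8 bits), with the explicit
 * carry that addr40_offset() generates via ALU_OP_ADD / ALU_OP_ADD_C.
 */
#include <stdint.h>
#include <stdio.h>

static void addr40_add_offset(uint32_t lo, uint32_t hi, uint32_t off,
			      uint32_t *new_lo, uint32_t *new_hi)
{
	uint32_t sum = lo + off;

	*new_lo = sum;
	*new_hi = hi + (sum < lo);	/* carry out of the low word */
}

int main(void)
{
	uint32_t lo, hi;

	/* 0x7ffffffff8 + 0x10 crosses the 32-bit boundary. */
	addr40_add_offset(0xfffffff8, 0x7f, 0x10, &lo, &hi);
	printf("40-bit address: 0x%02x%08x\n",
	       (unsigned int)(hi & 0xff), (unsigned int)lo);
	return 0;
}

The early return in addr40_offset() for an offset of zero simply hands back
the original register pair and avoids burning the two immediate registers
when no add is needed.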