From patchwork Wed Sep 21 10:43:55 2016
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 672804
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: netdev@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, Jakub Kicinski
Subject: [PATCHv7 net-next 03/15] net: cls_bpf: add support for marking filters as hardware-only
Date: Wed, 21 Sep 2016 11:43:55 +0100
Message-Id: <1474454647-20137-4-git-send-email-jakub.kicinski@netronome.com>
In-Reply-To: <1474454647-20137-1-git-send-email-jakub.kicinski@netronome.com>
References: <1474454647-20137-1-git-send-email-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

Add cls_bpf support for the TCA_CLS_FLAGS_SKIP_SW flag.
Signed-off-by: Jakub Kicinski
Acked-by: Daniel Borkmann
---
 net/sched/cls_bpf.c | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index ebf01f7c1470..1becc2fe1bc5 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -28,7 +28,7 @@ MODULE_DESCRIPTION("TC BPF based classifier");
 #define CLS_BPF_NAME_LEN	256
 
 #define CLS_BPF_SUPPORTED_GEN_FLAGS		\
-	TCA_CLS_FLAGS_SKIP_HW
+	(TCA_CLS_FLAGS_SKIP_HW | TCA_CLS_FLAGS_SKIP_SW)
 
 struct cls_bpf_head {
 	struct list_head plist;
@@ -96,7 +96,9 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 
 		qdisc_skb_cb(skb)->tc_classid = prog->res.classid;
 
-		if (at_ingress) {
+		if (tc_skip_sw(prog->gen_flags)) {
+			filter_res = prog->exts_integrated ? TC_ACT_UNSPEC : 0;
+		} else if (at_ingress) {
 			/* It is safe to push/pull even if skb_shared() */
 			__skb_push(skb, skb->mac_len);
 			bpf_compute_data_end(skb);
@@ -164,32 +166,42 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 				       tp->protocol, &offload);
 }
 
-static void cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
-			    struct cls_bpf_prog *oldprog)
+static int cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
+			   struct cls_bpf_prog *oldprog)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct cls_bpf_prog *obj = prog;
 	enum tc_clsbpf_command cmd;
+	bool skip_sw;
+	int ret;
+
+	skip_sw = tc_skip_sw(prog->gen_flags) ||
+		(oldprog && tc_skip_sw(oldprog->gen_flags));
 
 	if (oldprog && oldprog->offloaded) {
 		if (tc_should_offload(dev, tp, prog->gen_flags)) {
 			cmd = TC_CLSBPF_REPLACE;
-		} else {
+		} else if (!tc_skip_sw(prog->gen_flags)) {
 			obj = oldprog;
 			cmd = TC_CLSBPF_DESTROY;
+		} else {
+			return -EINVAL;
 		}
 	} else {
 		if (!tc_should_offload(dev, tp, prog->gen_flags))
-			return;
+			return skip_sw ? -EINVAL : 0;
 		cmd = TC_CLSBPF_ADD;
 	}
 
-	if (cls_bpf_offload_cmd(tp, obj, cmd))
-		return;
+	ret = cls_bpf_offload_cmd(tp, obj, cmd);
+	if (ret)
+		return skip_sw ? ret : 0;
 
 	obj->offloaded = true;
 	if (oldprog)
 		oldprog->offloaded = false;
+
+	return 0;
 }
 
 static void cls_bpf_stop_offload(struct tcf_proto *tp,
@@ -498,7 +510,11 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
 	if (ret < 0)
 		goto errout;
 
-	cls_bpf_offload(tp, prog, oldprog);
+	ret = cls_bpf_offload(tp, prog, oldprog);
+	if (ret) {
+		cls_bpf_delete_prog(tp, prog);
+		return ret;
+	}
 
 	if (oldprog) {
 		list_replace_rcu(&oldprog->link, &prog->link);