From patchwork Sun Sep 18 15:09:13 2016
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 671410
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski <jakub.kicinski@netronome.com>
To: netdev@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net
Subject: [PATCHv6 net-next 03/15] net: cls_bpf: add support for marking filters as hardware-only
Date: Sun, 18 Sep 2016 16:09:13 +0100
Message-Id: <1474211365-20088-4-git-send-email-jakub.kicinski@netronome.com>
In-Reply-To: <1474211365-20088-1-git-send-email-jakub.kicinski@netronome.com>
List-ID: netdev@vger.kernel.org

Add cls_bpf support for the TCA_CLS_FLAGS_SKIP_SW flag.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
---
 net/sched/cls_bpf.c | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
index 46af423a8a8f..18f9869cd4da 100644
--- a/net/sched/cls_bpf.c
+++ b/net/sched/cls_bpf.c
@@ -28,7 +28,7 @@ MODULE_DESCRIPTION("TC BPF based classifier");
 #define CLS_BPF_NAME_LEN	256
 
 #define CLS_BPF_SUPPORTED_GEN_FLAGS		\
-	TCA_CLS_FLAGS_SKIP_HW
+	(TCA_CLS_FLAGS_SKIP_HW | TCA_CLS_FLAGS_SKIP_SW)
 
 struct cls_bpf_head {
 	struct list_head plist;
@@ -95,7 +95,9 @@ static int cls_bpf_classify(struct sk_buff *skb, const struct tcf_proto *tp,
 
 		qdisc_skb_cb(skb)->tc_classid = prog->res.classid;
 
-		if (at_ingress) {
+		if (tc_skip_sw(prog->gen_flags)) {
+			filter_res = prog->exts_integrated ? TC_ACT_UNSPEC : 0;
+		} else if (at_ingress) {
 			/* It is safe to push/pull even if skb_shared() */
 			__skb_push(skb, skb->mac_len);
 			bpf_compute_data_end(skb);
@@ -163,32 +165,42 @@ static int cls_bpf_offload_cmd(struct tcf_proto *tp, struct cls_bpf_prog *prog,
 				       tp->protocol, &offload);
 }
 
-static void cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
-			    struct cls_bpf_prog *oldprog)
+static int cls_bpf_offload(struct tcf_proto *tp, struct cls_bpf_prog *prog,
+			   struct cls_bpf_prog *oldprog)
 {
 	struct net_device *dev = tp->q->dev_queue->dev;
 	struct cls_bpf_prog *obj = prog;
 	enum tc_clsbpf_command cmd;
+	bool skip_sw;
+	int ret;
+
+	skip_sw = tc_skip_sw(prog->gen_flags) ||
+		(oldprog && tc_skip_sw(oldprog->gen_flags));
 
 	if (oldprog && oldprog->offloaded) {
 		if (tc_should_offload(dev, tp, prog->gen_flags)) {
 			cmd = TC_CLSBPF_REPLACE;
-		} else {
+		} else if (!tc_skip_sw(prog->gen_flags)) {
 			obj = oldprog;
 			cmd = TC_CLSBPF_DESTROY;
+		} else {
+			return -EINVAL;
 		}
 	} else {
 		if (!tc_should_offload(dev, tp, prog->gen_flags))
-			return;
+			return skip_sw ? -EINVAL : 0;
 		cmd = TC_CLSBPF_ADD;
 	}
 
-	if (cls_bpf_offload_cmd(tp, obj, cmd))
-		return;
+	ret = cls_bpf_offload_cmd(tp, obj, cmd);
+	if (ret)
+		return skip_sw ? ret : 0;
 
 	obj->offloaded = true;
 	if (oldprog)
 		oldprog->offloaded = false;
+
+	return 0;
 }
 
 static void cls_bpf_stop_offload(struct tcf_proto *tp,
@@ -496,7 +508,11 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
 	if (ret < 0)
 		goto errout;
 
-	cls_bpf_offload(tp, prog, oldprog);
+	ret = cls_bpf_offload(tp, prog, oldprog);
+	if (ret) {
+		cls_bpf_delete_prog(tp, prog);
+		return ret;
+	}
 
 	if (oldprog) {
 		list_replace_rcu(&oldprog->link, &prog->link);