From patchwork Mon Jun 25 04:34:27 2018
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 934056
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: davem@davemloft.net, jiri@resnulli.us
Cc: xiyou.wangcong@gmail.com, jhs@mojatatu.com, gerlitz.or@gmail.com,
 netdev@vger.kernel.org, oss-drivers@netronome.com, John Hurley,
 Jakub Kicinski
Subject: [PATCH net-next 3/7] net: sched: cls_flower: implement offload tcf_proto_op
Date: Sun, 24 Jun 2018 21:34:27 -0700
Message-Id: <20180625043431.13413-4-jakub.kicinski@netronome.com>
In-Reply-To: <20180625043431.13413-1-jakub.kicinski@netronome.com>
References: <20180625043431.13413-1-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

From: John Hurley

Add the reoffload tcf_proto_op in flower to generate an offload message
for each filter in the given tcf_proto. Call the specified callback with
this new offload message. The function only returns an error if the
callback rejects adding a 'hardware only' rule.

A filter contains a flag to indicate if it is in hardware or not. To
ensure the reoffload function properly maintains this flag, keep a
reference counter for the number of instances of the filter that are in
hardware. Only update the flag when this counter changes from or to 0.
Add a generic helper function to implement this behaviour.

Signed-off-by: John Hurley
Signed-off-by: Jakub Kicinski
---
 include/net/sch_generic.h | 15 +++++++++++++
 net/sched/cls_flower.c    | 44 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 88ed64f60056..c0bd11a928ed 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -336,6 +336,21 @@ static inline void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
 	block->offloadcnt--;
 }
 
+static inline void
+tc_cls_offload_cnt_update(struct tcf_block *block, u32 *cnt, u32 *flags,
+			  bool add)
+{
+	if (add) {
+		if (!*cnt)
+			tcf_block_offload_inc(block, flags);
+		(*cnt)++;
+	} else {
+		(*cnt)--;
+		if (!*cnt)
+			tcf_block_offload_dec(block, flags);
+	}
+}
+
 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
 {
 	struct qdisc_skb_cb *qcb;
diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
index 9e8b26a80fb3..919bbcfd629b 100644
--- a/net/sched/cls_flower.c
+++ b/net/sched/cls_flower.c
@@ -87,6 +87,7 @@ struct cls_fl_filter {
 	struct list_head list;
 	u32 handle;
 	u32 flags;
+	u32 in_hw_count;
 	struct rcu_work rwork;
 	struct net_device *hw_dev;
 };
@@ -289,6 +290,7 @@ static int fl_hw_replace_filter(struct tcf_proto *tp,
 		fl_hw_destroy_filter(tp, f, NULL);
 		return err;
 	} else if (err > 0) {
+		f->in_hw_count = err;
 		tcf_block_offload_inc(block, &f->flags);
 	}
 
@@ -1087,6 +1089,47 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg)
 	}
 }
 
+static int fl_reoffload(struct tcf_proto *tp, bool add, tc_setup_cb_t *cb,
+			void *cb_priv, struct netlink_ext_ack *extack)
+{
+	struct cls_fl_head *head = rtnl_dereference(tp->root);
+	struct tc_cls_flower_offload cls_flower = {};
+	struct tcf_block *block = tp->chain->block;
+	struct fl_flow_mask *mask;
+	struct cls_fl_filter *f;
+	int err;
+
+	list_for_each_entry(mask, &head->masks, list) {
+		list_for_each_entry(f, &mask->filters, list) {
+			if (tc_skip_hw(f->flags))
+				continue;
+
+			tc_cls_common_offload_init(&cls_flower.common, tp,
+						   f->flags, extack);
+			cls_flower.command = add ?
+				TC_CLSFLOWER_REPLACE : TC_CLSFLOWER_DESTROY;
+			cls_flower.cookie = (unsigned long)f;
+			cls_flower.dissector = &mask->dissector;
+			cls_flower.mask = &f->mkey;
+			cls_flower.key = &f->key;
+			cls_flower.exts = &f->exts;
+			cls_flower.classid = f->res.classid;
+
+			err = cb(TC_SETUP_CLSFLOWER, &cls_flower, cb_priv);
+			if (err) {
+				if (add && tc_skip_sw(f->flags))
+					return err;
+				continue;
+			}
+
+			tc_cls_offload_cnt_update(block, &f->in_hw_count,
+						  &f->flags, add);
+		}
+	}
+
+	return 0;
+}
+
 static int fl_dump_key_val(struct sk_buff *skb,
 			   void *val, int val_type,
 			   void *mask, int mask_type, int len)
@@ -1438,6 +1481,7 @@ static struct tcf_proto_ops cls_fl_ops __read_mostly = {
 	.change = fl_change,
 	.delete = fl_delete,
 	.walk = fl_walk,
+	.reoffload = fl_reoffload,
 	.dump = fl_dump,
 	.bind_class = fl_bind_class,
 	.owner = THIS_MODULE,
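
A note for driver authors (editorial addition, not part of the patch
itself): reoffload messages arrive through the same tc_setup_cb_t
callback a driver already registers for the block, so an existing
TC_SETUP_CLSFLOWER handler usually needs no changes to support replay.
Below is a minimal sketch of how such a callback consumes them; the
foo_* names are hypothetical placeholders, not from this series or any
in-tree driver.

	/* Hypothetical driver block callback; foo_priv, foo_install_rule
	 * and foo_remove_rule are illustrative placeholders.
	 */
	static int foo_block_cb(enum tc_setup_type type, void *type_data,
				void *cb_priv)
	{
		struct tc_cls_flower_offload *f = type_data;
		struct foo_priv *priv = cb_priv;

		if (type != TC_SETUP_CLSFLOWER)
			return -EOPNOTSUPP;

		switch (f->command) {
		case TC_CLSFLOWER_REPLACE:
			/* Install the rule keyed by f->cookie; a non-zero
			 * return makes fl_reoffload() fail for skip_sw
			 * ('hardware only') filters.
			 */
			return foo_install_rule(priv, f);
		case TC_CLSFLOWER_DESTROY:
			/* Tear down the rule keyed by f->cookie. */
			foo_remove_rule(priv, f->cookie);
			return 0;
		default:
			return -EOPNOTSUPP;
		}
	}

Returning 0 for TC_CLSFLOWER_REPLACE is what feeds
tc_cls_offload_cnt_update() above: f->in_hw_count is incremented and the
in-hardware flag set when the count leaves zero, while the DESTROY path
decrements it and clears the flag when it returns to zero.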