From patchwork Wed Jun  8 19:11:04 2016
X-Patchwork-Submitter: Jakub Kicinski
X-Patchwork-Id: 632466
X-Patchwork-Delegate: davem@davemloft.net
From: Jakub Kicinski
To: netdev@vger.kernel.org
Cc: john.r.fastabend@intel.com, sridhar.samudrala@intel.com, Jakub Kicinski
Subject: [PATCH net 2/2] net: cls_u32: be more strict about skip-sw flag for knodes
Date: Wed, 8 Jun 2016 20:11:04 +0100
Message-Id: <1465413064-32196-3-git-send-email-jakub.kicinski@netronome.com>
In-Reply-To: <1465413064-32196-1-git-send-email-jakub.kicinski@netronome.com>
References: <1465413064-32196-1-git-send-email-jakub.kicinski@netronome.com>
X-Mailing-List: netdev@vger.kernel.org

Return an error if the user requested skip-sw and the underlying hardware
cannot handle tc offloads (or offloads are disabled). This patch fixes the
knode handling.
Signed-off-by: Jakub Kicinski
---
 net/sched/cls_u32.c | 37 +++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
index 54ab32a8ff4c..ffe593efe930 100644
--- a/net/sched/cls_u32.c
+++ b/net/sched/cls_u32.c
@@ -508,27 +508,28 @@ static int u32_replace_hw_knode(struct tcf_proto *tp,
 	offload.type = TC_SETUP_CLSU32;
 	offload.cls_u32 = &u32_offload;
 
-	if (tc_should_offload(dev, tp, flags)) {
-		offload.cls_u32->command = TC_CLSU32_REPLACE_KNODE;
-		offload.cls_u32->knode.handle = n->handle;
-		offload.cls_u32->knode.fshift = n->fshift;
+	if (!tc_should_offload(dev, tp, flags))
+		return tc_skip_sw(flags) ? -EINVAL : 0;
+
+	offload.cls_u32->command = TC_CLSU32_REPLACE_KNODE;
+	offload.cls_u32->knode.handle = n->handle;
+	offload.cls_u32->knode.fshift = n->fshift;
 #ifdef CONFIG_CLS_U32_MARK
-		offload.cls_u32->knode.val = n->val;
-		offload.cls_u32->knode.mask = n->mask;
+	offload.cls_u32->knode.val = n->val;
+	offload.cls_u32->knode.mask = n->mask;
 #else
-		offload.cls_u32->knode.val = 0;
-		offload.cls_u32->knode.mask = 0;
+	offload.cls_u32->knode.val = 0;
+	offload.cls_u32->knode.mask = 0;
 #endif
-		offload.cls_u32->knode.sel = &n->sel;
-		offload.cls_u32->knode.exts = &n->exts;
-		if (n->ht_down)
-			offload.cls_u32->knode.link_handle = n->ht_down->handle;
-
-		err = dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle,
-						    tp->protocol, &offload);
-		if (tc_skip_sw(flags))
-			return err;
-	}
+	offload.cls_u32->knode.sel = &n->sel;
+	offload.cls_u32->knode.exts = &n->exts;
+	if (n->ht_down)
+		offload.cls_u32->knode.link_handle = n->ht_down->handle;
+
+	err = dev->netdev_ops->ndo_setup_tc(dev, tp->q->handle,
+					    tp->protocol, &offload);
+	if (tc_skip_sw(flags))
+		return err;
 
 	return 0;
 }