From patchwork Wed Jun 5 13:20:03 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110486
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Jesper Dangaard Brouer, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 1/7] net: page_pool: add helper function to retrieve dma addresses
Date: Wed, 5 Jun 2019 16:20:03 +0300
Message-Id: <20190605132009.10734-2-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

From: Ilias Apalodimas

In a previous patch the DMA address was stored in 'struct page'. Use that
to retrieve the DMA addresses used by network drivers.

Signed-off-by: Ilias Apalodimas
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Ivan Khoronzhuk
---
 include/net/page_pool.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 694d055e01ef..b885d86cb7a1 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -132,6 +132,11 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	__page_pool_put_page(pool, page, true);
 }
 
+static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
+{
+	return page->dma_addr;
+}
+
 static inline bool is_page_pool_compiled_in(void)
 {
 #ifdef CONFIG_PAGE_POOL
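For context, a minimal usage sketch (not part of the patch): how a driver RX
path might consume the new helper. my_rx_alloc_buf and the surrounding driver
are hypothetical; only page_pool_dev_alloc_pages() and the helper added above
are real API.

#include <net/page_pool.h>

/* Hypothetical driver helper: allocate one RX buffer from a pool that was
 * created with PP_FLAG_DMA_MAP.  The pool performed dma_map_page() at
 * allocation time, so the driver only reads the stored address back and
 * programs it into a hardware descriptor.
 */
static dma_addr_t my_rx_alloc_buf(struct page_pool *pool, struct page **pagep)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return 0;

	*pagep = page;
	return page_pool_get_dma_addr(page);	/* helper added above */
}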
From patchwork Wed Jun 5 13:20:04 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110487
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Jesper Dangaard Brouer, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 2/7] net: page_pool: add helper function to unmap dma addresses
Date: Wed, 5 Jun 2019 16:20:04 +0300
Message-Id: <20190605132009.10734-3-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

From: Ilias Apalodimas

In a previous patch the DMA address was stored in 'struct page'.
Use that to unmap the DMA addresses used by network drivers.

Signed-off-by: Ilias Apalodimas
Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: Ivan Khoronzhuk
---
 include/net/page_pool.h | 1 +
 net/core/page_pool.c    | 7 +++++++
 2 files changed, 8 insertions(+)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index b885d86cb7a1..ad218cef88c5 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -110,6 +110,7 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
 struct page_pool *page_pool_create(const struct page_pool_params *params);
 
 void page_pool_destroy(struct page_pool *pool);
+void page_pool_unmap_page(struct page_pool *pool, struct page *page);
 
 /* Never call this directly, use helpers below */
 void __page_pool_put_page(struct page_pool *pool,

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 5b2252c6d49b..205af7bd6d09 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -190,6 +190,13 @@ static void __page_pool_clean_page(struct page_pool *pool,
 	page->dma_addr = 0;
 }
 
+/* unmap the page and clean our state */
+void page_pool_unmap_page(struct page_pool *pool, struct page *page)
+{
+	__page_pool_clean_page(pool, page);
+}
+EXPORT_SYMBOL(page_pool_unmap_page);
+
 /* Return a page to the page allocator, cleaning up our state */
 static void __page_pool_return_page(struct page_pool *pool, struct page *page)
 {
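A sketch of the intended call site, modelled on how patch 7/7 of this series
uses the helper: once a pool page is handed to the regular netstack as an
skb, the stack will eventually free it behind the pool's back, so the driver
unmaps it first. build_skb() and page_address() are real kernel API;
my_rx_to_skb is a made-up wrapper.

#include <linux/skbuff.h>
#include <linux/mm.h>
#include <net/page_pool.h>

/* Hypothetical driver helper: convert a pool-owned RX page into an skb.
 * There is no page_pool recycling in the skb path yet, so the page must
 * be unmapped here; the stack later frees it as an ordinary page.
 */
static struct sk_buff *my_rx_to_skb(struct page_pool *pool, struct page *page,
				    unsigned int len, unsigned int truesize)
{
	struct sk_buff *skb = build_skb(page_address(page), truesize);

	if (!skb)
		return NULL;

	skb_put(skb, len);
	page_pool_unmap_page(pool, page);	/* helper added above */
	return skb;
}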
From patchwork Wed Jun 5 13:20:05 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110485
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 3/7] net: ethernet: ti: cpsw: use cpsw as drv data
Date: Wed, 5 Jun 2019 16:20:05 +0300
Message-Id: <20190605132009.10734-4-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

There is no need to store the ndev as drvdata when it is mainly the cpsw
reference that is needed, so correct this legacy decision and store the
cpsw_common structure instead.
Reviewed-by: Grygorii Strashko
Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 6d3f1f3f90cb..3430503e1053 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2265,8 +2265,7 @@ static int cpsw_probe_dt(struct cpsw_platform_data *data,
 
 static void cpsw_remove_dt(struct platform_device *pdev)
 {
-	struct net_device *ndev = platform_get_drvdata(pdev);
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
+	struct cpsw_common *cpsw = platform_get_drvdata(pdev);
 	struct cpsw_platform_data *data = &cpsw->data;
 	struct device_node *node = pdev->dev.of_node;
 	struct device_node *slave_node;
@@ -2477,7 +2476,7 @@ static int cpsw_probe(struct platform_device *pdev)
 		goto clean_cpts;
 	}
 
-	platform_set_drvdata(pdev, ndev);
+	platform_set_drvdata(pdev, cpsw);
 	priv = netdev_priv(ndev);
 	priv->cpsw = cpsw;
 	priv->ndev = ndev;
@@ -2570,9 +2569,8 @@ static int cpsw_probe(struct platform_device *pdev)
 
 static int cpsw_remove(struct platform_device *pdev)
 {
-	struct net_device *ndev = platform_get_drvdata(pdev);
-	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
-	int ret;
+	struct cpsw_common *cpsw = platform_get_drvdata(pdev);
+	int i, ret;
 
 	ret = pm_runtime_get_sync(&pdev->dev);
 	if (ret < 0) {
@@ -2580,9 +2578,9 @@ static int cpsw_remove(struct platform_device *pdev)
 		return ret;
 	}
 
-	if (cpsw->data.dual_emac)
-		unregister_netdev(cpsw->slaves[1].ndev);
-	unregister_netdev(ndev);
+	for (i = 0; i < cpsw->data.slaves; i++)
+		if (cpsw->slaves[i].ndev)
+			unregister_netdev(cpsw->slaves[i].ndev);
 
 	cpts_release(cpsw->cpts);
 	cpdma_ctlr_destroy(cpsw->dma);
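For readers less familiar with the driver-model side, the pattern being
corrected is simply which object probe stores for later retrieval. A generic
sketch under that assumption (my_common and the function names are
illustrative, not from the driver):

#include <linux/platform_device.h>

struct my_common { int nslaves; };	/* stand-in for cpsw_common */

static int my_probe(struct platform_device *pdev)
{
	struct my_common *common = devm_kzalloc(&pdev->dev, sizeof(*common),
						GFP_KERNEL);
	if (!common)
		return -ENOMEM;

	/* store the object most callbacks actually need... */
	platform_set_drvdata(pdev, common);
	return 0;
}

static int my_remove(struct platform_device *pdev)
{
	/* ...so remove() gets it back without a detour through an ndev */
	struct my_common *common = platform_get_drvdata(pdev);

	(void)common;
	return 0;
}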
From patchwork Wed Jun 5 13:20:06 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110484
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 4/7] net: ethernet: ti: cpsw_ethtool: simplify slave loops
Date: Wed, 5 Jun 2019 16:20:06 +0300
Message-Id: <20190605132009.10734-5-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

Purely for consistency, iterate over the slaves the same way as the main
cpsw.c module does: use the ndev reference directly rather than going
through the slave structure.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw_ethtool.c | 40 ++++++++++++++------------
 1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index a4a7ec0d2531..3d5ae3fa5a8f 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -458,7 +458,6 @@ int cpsw_nway_reset(struct net_device *ndev)
 static void cpsw_suspend_data_pass(struct net_device *ndev)
 {
 	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
-	struct cpsw_slave *slave;
 	int i;
 
 	/* Disable NAPI scheduling */
@@ -467,12 +466,13 @@ static void cpsw_suspend_data_pass(struct net_device *ndev)
 	/* Stop all transmit queues for every network device.
 	 * Disable re-using rx descriptors with dormant_on.
 	 */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
-		if (!(slave->ndev && netif_running(slave->ndev)))
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (!(ndev && netif_running(ndev)))
 			continue;
 
-		netif_tx_stop_all_queues(slave->ndev);
-		netif_dormant_on(slave->ndev);
+		netif_tx_stop_all_queues(ndev);
+		netif_dormant_on(ndev);
 	}
 
 	/* Handle rest of tx packets and stop cpdma channels */
@@ -483,13 +483,14 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
 	int i, ret;
 
 	/* Allow rx packets handling */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
-		if (slave->ndev && netif_running(slave->ndev))
-			netif_dormant_off(slave->ndev);
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (ndev && netif_running(ndev))
+			netif_dormant_off(ndev);
+	}
 
 	/* After this receive is started */
 	if (cpsw->usage_count) {
@@ -502,9 +503,11 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
 	}
 
 	/* Resume transmit for every affected interface */
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
-		if (slave->ndev && netif_running(slave->ndev))
-			netif_tx_start_all_queues(slave->ndev);
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		ndev = cpsw->slaves[i].ndev;
+		if (ndev && netif_running(ndev))
+			netif_tx_start_all_queues(ndev);
+	}
 
 	return 0;
 }
@@ -587,7 +590,7 @@ int cpsw_set_channels_common(struct net_device *ndev,
 {
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct cpsw_slave *slave;
+	struct net_device *sl_ndev;
 	int i, ret;
 
 	ret = cpsw_check_ch_settings(cpsw, chs);
@@ -604,20 +607,19 @@ int cpsw_set_channels_common(struct net_device *ndev,
 	if (ret)
 		goto err;
 
-	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
-		if (!(slave->ndev && netif_running(slave->ndev)))
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		sl_ndev = cpsw->slaves[i].ndev;
+		if (!(sl_ndev && netif_running(sl_ndev)))
 			continue;
 
 		/* Inform stack about new count of queues */
-		ret = netif_set_real_num_tx_queues(slave->ndev,
-						   cpsw->tx_ch_num);
+		ret = netif_set_real_num_tx_queues(sl_ndev, cpsw->tx_ch_num);
 		if (ret) {
 			dev_err(priv->dev, "cannot set real number of tx queues\n");
 			goto err;
 		}
 
-		ret = netif_set_real_num_rx_queues(slave->ndev,
-						   cpsw->rx_ch_num);
+		ret = netif_set_real_num_rx_queues(sl_ndev, cpsw->rx_ch_num);
 		if (ret) {
 			dev_err(priv->dev, "cannot set real number of rx queues\n");
 			goto err;

From patchwork Wed Jun 5 13:20:08 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110483
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 6/7] net: ethernet: ti: davinci_cpdma: return handler status
Date: Wed, 5 Jun 2019 16:20:08 +0300
Message-Id: <20190605132009.10734-7-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

This change is needed so that the RX handler can report a flush status,
which is used to flush redirected XDP frames after the channel's packets
have been processed. It is done as a separate patch for simplicity.
Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/cpsw.c          | 23 +++++++++++------
 drivers/net/ethernet/ti/cpsw_ethtool.c  |  2 +-
 drivers/net/ethernet/ti/cpsw_priv.h     |  2 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 34 +++++++++++++++----------
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 +--
 drivers/net/ethernet/ti/davinci_emac.c  | 18 ++++++++-----
 6 files changed, 50 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 3430503e1053..d89ad428315c 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -337,7 +337,7 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
 	return;
 }
 
-void cpsw_tx_handler(void *token, int len, int status)
+int cpsw_tx_handler(void *token, int len, int status)
 {
 	struct netdev_queue	*txq;
 	struct sk_buff		*skb = token;
@@ -355,6 +355,7 @@ void cpsw_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 static void cpsw_rx_vlan_encap(struct sk_buff *skb)
@@ -400,7 +401,7 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
 	}
 }
 
-static void cpsw_rx_handler(void *token, int len, int status)
+static int cpsw_rx_handler(void *token, int len, int status)
 {
 	struct cpdma_chan	*ch;
 	struct sk_buff		*skb = token;
@@ -434,7 +435,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 
 		/* the interface is going down, skbs are purged */
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
@@ -459,7 +460,7 @@ static void cpsw_rx_handler(void *token, int len, int status)
 requeue:
 	if (netif_dormant(ndev)) {
 		dev_kfree_skb_any(new_skb);
-		return;
+		return 0;
 	}
 
 	ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
@@ -467,6 +468,8 @@ static void cpsw_rx_handler(void *token, int len, int status)
 			       skb_tailroom(new_skb), 0);
 	if (WARN_ON(ret < 0))
 		dev_kfree_skb_any(new_skb);
+
+	return 0;
 }
 
 void cpsw_split_res(struct cpsw_common *cpsw)
@@ -605,7 +608,8 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 		else
 			cur_budget = txv->budget;
 
-		num_tx += cpdma_chan_process(txv->ch, cur_budget);
+		cpdma_chan_process(txv->ch, &cur_budget);
+		num_tx += cur_budget;
 		if (num_tx >= budget)
 			break;
 	}
@@ -623,7 +627,8 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
 	int num_tx;
 
-	num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget);
+	num_tx = budget;
+	cpdma_chan_process(cpsw->txv[0].ch, &num_tx);
 	if (num_tx < budget) {
 		napi_complete(napi_tx);
 		writel(0xff, &cpsw->wr_regs->tx_en);
@@ -655,7 +660,8 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 		else
 			cur_budget = rxv->budget;
 
-		num_rx += cpdma_chan_process(rxv->ch, cur_budget);
+		cpdma_chan_process(rxv->ch, &cur_budget);
+		num_rx += cur_budget;
 		if (num_rx >= budget)
 			break;
 	}
@@ -673,7 +679,8 @@ static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
 	int num_rx;
 
-	num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget);
+	num_rx = budget;
+	cpdma_chan_process(cpsw->rxv[0].ch, &num_rx);
 	if (num_rx < budget) {
 		napi_complete_done(napi_rx, num_rx);
 		writel(0xff, &cpsw->wr_regs->rx_en);

diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index 3d5ae3fa5a8f..94f8f5ab46a5 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -538,8 +538,8 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
 			       cpdma_handler_fn rx_handler)
 {
 	struct cpsw_common *cpsw = priv->cpsw;
-	void (*handler)(void *, int, int);
 	struct netdev_queue *queue;
+	cpdma_handler_fn handler;
 	struct cpsw_vector *vec;
 	int ret, *ch, vch;
 

diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 04795b97ee71..2ecb3af59fe9 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -390,7 +390,7 @@ void cpsw_split_res(struct cpsw_common *cpsw);
 int cpsw_fill_rx_channels(struct cpsw_priv *priv);
 void cpsw_intr_enable(struct cpsw_common *cpsw);
 void cpsw_intr_disable(struct cpsw_common *cpsw);
-void cpsw_tx_handler(void *token, int len, int status);
+int cpsw_tx_handler(void *token, int len, int status);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 7f89b2299f05..a59011d315d5 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -1137,15 +1137,16 @@ bool cpdma_check_free_tx_desc(struct cpdma_chan *chan)
 	return free_tx_desc;
 }
 
-static void __cpdma_chan_free(struct cpdma_chan *chan,
-			      struct cpdma_desc __iomem *desc,
-			      int outlen, int status)
+static int __cpdma_chan_free(struct cpdma_chan *chan,
+			     struct cpdma_desc __iomem *desc, int outlen,
+			     int status)
 {
 	struct cpdma_ctlr		*ctlr = chan->ctlr;
 	struct cpdma_desc_pool		*pool = ctlr->pool;
 	dma_addr_t			buff_dma;
 	int				origlen;
 	uintptr_t			token;
+	int				ret;
 
 	token      = desc_read(desc, sw_token);
 	origlen    = desc_read(desc, sw_len);
@@ -1160,14 +1161,16 @@ static void __cpdma_chan_free(struct cpdma_chan *chan,
 	}
 
 	cpdma_desc_free(pool, desc, 1);
-	(*chan->handler)((void *)token, outlen, status);
+	ret = (*chan->handler)((void *)token, outlen, status);
+
+	return ret;
 }
 
 static int __cpdma_chan_process(struct cpdma_chan *chan)
 {
+	int status, outlen, ret;
 	struct cpdma_ctlr *ctlr = chan->ctlr;
 	struct cpdma_desc __iomem *desc;
-	int status, outlen;
 	int cb_status = 0;
 	struct cpdma_desc_pool *pool = ctlr->pool;
 	dma_addr_t desc_dma;
@@ -1178,7 +1181,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	desc = chan->head;
 	if (!desc) {
 		chan->stats.empty_dequeue++;
-		status = -ENOENT;
+		ret = -ENOENT;
 		goto unlock_ret;
 	}
 	desc_dma = desc_phys(pool, desc);
@@ -1187,7 +1190,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	outlen	= status & 0x7ff;
 	if (status & CPDMA_DESC_OWNER) {
 		chan->stats.busy_dequeue++;
-		status = -EBUSY;
+		ret = -EBUSY;
 		goto unlock_ret;
 	}
 
@@ -1213,28 +1216,31 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
 	else
 		cb_status = status;
 
-	__cpdma_chan_free(chan, desc, outlen, cb_status);
-	return status;
+	ret = __cpdma_chan_free(chan, desc, outlen, cb_status);
+	return ret;
 
 unlock_ret:
 	spin_unlock_irqrestore(&chan->lock, flags);
-	return status;
+	return ret;
 }
 
-int cpdma_chan_process(struct cpdma_chan *chan, int quota)
+int cpdma_chan_process(struct cpdma_chan *chan, int *quota)
 {
-	int used = 0, ret = 0;
+	int used = 0, ret = 0, res = 0;
 
 	if (chan->state != CPDMA_STATE_ACTIVE)
 		return -EINVAL;
 
-	while (used < quota) {
+	while (used < *quota) {
 		ret = __cpdma_chan_process(chan);
 		if (ret < 0)
 			break;
+		res |= ret;
 		used++;
 	}
-	return used;
+
+	*quota = used;
+	return res;
 }
 
 int cpdma_chan_start(struct cpdma_chan *chan)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index 8f6f27185c63..56543d375923 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -61,7 +61,7 @@ struct cpdma_chan_stats {
 struct cpdma_ctlr;
 struct cpdma_chan;
 
-typedef void (*cpdma_handler_fn)(void *token, int len, int status);
+typedef int (*cpdma_handler_fn)(void *token, int len, int status);
 
 struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params);
 int cpdma_ctlr_destroy(struct cpdma_ctlr *ctlr);
@@ -81,7 +81,7 @@ int cpdma_chan_submit_mapped(struct cpdma_chan *chan, void *token,
 			     dma_addr_t data, int len, int directed);
 int cpdma_chan_submit(struct cpdma_chan *chan, void *token, void *data,
 		      int len, int directed);
-int cpdma_chan_process(struct cpdma_chan *chan, int quota);
+int cpdma_chan_process(struct cpdma_chan *chan, int *quota);
 
 int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
 void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr, u32 value);

diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 4bf65cab79e6..3592690b8dd8 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -860,7 +860,7 @@ static struct sk_buff *emac_rx_alloc(struct emac_priv *priv)
 	return skb;
 }
 
-static void emac_rx_handler(void *token, int len, int status)
+static int emac_rx_handler(void *token, int len, int status)
 {
 	struct sk_buff		*skb = token;
 	struct net_device	*ndev = skb->dev;
@@ -871,7 +871,7 @@ static void emac_rx_handler(void *token, int len, int status)
 	/* free and bail if we are shutting down */
 	if (unlikely(!netif_running(ndev))) {
 		dev_kfree_skb_any(skb);
-		return;
+		return 0;
 	}
 
 	/* recycle on receive error */
@@ -892,7 +892,7 @@ static void emac_rx_handler(void *token, int len, int status)
 	if (!skb) {
 		if (netif_msg_rx_err(priv) && net_ratelimit())
 			dev_err(emac_dev, "failed rx buffer alloc\n");
-		return;
+		return 0;
 	}
 
 recycle:
@@ -902,9 +902,11 @@ static void emac_rx_handler(void *token, int len, int status)
 	WARN_ON(ret == -ENOMEM);
 	if (unlikely(ret < 0))
 		dev_kfree_skb_any(skb);
+
+	return 0;
 }
 
-static void emac_tx_handler(void *token, int len, int status)
+static int emac_tx_handler(void *token, int len, int status)
 {
 	struct sk_buff		*skb = token;
 	struct net_device	*ndev = skb->dev;
@@ -917,6 +919,7 @@ static void emac_tx_handler(void *token, int len, int status)
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
 	dev_kfree_skb_any(skb);
+	return 0;
 }
 
 /**
@@ -1237,8 +1240,8 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_TX_INT_VEC;
 
 	if (status & mask) {
-		num_tx_pkts = cpdma_chan_process(priv->txchan,
-						 EMAC_DEF_TX_MAX_SERVICE);
+		num_tx_pkts = EMAC_DEF_TX_MAX_SERVICE;
+		cpdma_chan_process(priv->txchan, &num_tx_pkts);
 	} /* TX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_RX_INT_VEC;
@@ -1247,7 +1250,8 @@ static int emac_poll(struct napi_struct *napi, int budget)
 		mask = EMAC_DM646X_MAC_IN_VECTOR_RX_INT_VEC;
 
 	if (status & mask) {
-		num_rx_pkts = cpdma_chan_process(priv->rxchan, budget);
+		num_rx_pkts = budget;
+		cpdma_chan_process(priv->rxchan, &num_rx_pkts);
 	} /* RX processing */
 
 	mask = EMAC_DM644X_MAC_IN_VECTOR_HOST_INT;
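To make the new calling convention concrete, a hedged sketch of a NAPI RX
poll using it; struct my_priv, my_rx_poll and MY_FLUSH_FLAG are illustrative
stand-ins (cpsw_rx_poll in patch 7/7, with CPSW_FLUSH_XDP_MAP, is the real
user):

#include <linux/netdevice.h>
#include <linux/filter.h>	/* xdp_do_flush_map() */
#include "davinci_cpdma.h"

#define MY_FLUSH_FLAG	BIT(0)	/* illustrative handler status bit */

struct my_priv {
	struct napi_struct napi;
	struct cpdma_chan *rxch;
};

static int my_rx_poll(struct napi_struct *napi, int budget)
{
	struct my_priv *priv = container_of(napi, struct my_priv, napi);
	int num_rx = budget;	/* in: budget; out: packets processed */
	int res;

	/* the return value is now the OR of the rx handler statuses */
	res = cpdma_chan_process(priv->rxch, &num_rx);
	if (res & MY_FLUSH_FLAG)
		xdp_do_flush_map();

	if (num_rx < budget)
		napi_complete_done(napi, num_rx);

	return num_rx;
}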
From patchwork Wed Jun 5 13:20:09 2019
X-Patchwork-Submitter: Ivan Khoronzhuk
X-Patchwork-Id: 1110482
X-Patchwork-Delegate: davem@davemloft.net
From: Ivan Khoronzhuk
To: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net
Cc: ast@kernel.org, linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
 xdp-newbies@vger.kernel.org, ilias.apalodimas@linaro.org,
 netdev@vger.kernel.org, daniel@iogearbox.net, jakub.kicinski@netronome.com,
 john.fastabend@gmail.com, Ivan Khoronzhuk
Subject: [PATCH v3 net-next 7/7] net: ethernet: ti: cpsw: add XDP support
Date: Wed, 5 Jun 2019 16:20:09 +0300
Message-Id: <20190605132009.10734-8-ivan.khoronzhuk@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>
References: <20190605132009.10734-1-ivan.khoronzhuk@linaro.org>

Add XDP support based on the RX page_pool allocator, with one frame per
page.
The page pool allocator is used under the assumption that only one
rx_handler runs at a time. DMA map/unmap is reused from the page pool even
though there is no need to map the whole page.

Due to the specifics of cpsw, the same TX/RX handler can be used by two
network devices, so special fields are added to the buffer to identify the
interface a frame is destined for. Thus XDP works for both interfaces,
which makes it easy to test XDP redirect between the two. Also, each ndev
and its RX queues have their own page pools.

The XDP prog is common for all channels until appropriate changes are added
to the XDP infrastructure. Also, once page_pool recycling becomes part of
the skb netstack, some simplifications can be made; those places are marked
with comments.

Signed-off-by: Ivan Khoronzhuk
---
 drivers/net/ethernet/ti/Kconfig        |   1 +
 drivers/net/ethernet/ti/cpsw.c         | 524 ++++++++++++++++++++++---
 drivers/net/ethernet/ti/cpsw_ethtool.c |  58 ++-
 drivers/net/ethernet/ti/cpsw_priv.h    |   7 +
 4 files changed, 523 insertions(+), 67 deletions(-)

diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index bd05a977ee7e..3cb8c5214835 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -50,6 +50,7 @@ config TI_CPSW
 	depends on ARCH_DAVINCI || ARCH_OMAP2PLUS || COMPILE_TEST
 	select TI_DAVINCI_MDIO
 	select MFD_SYSCON
+	select PAGE_POOL
 	select REGMAP
 	---help---
 	  This driver supports TI's CPSW Ethernet Switch.

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d89ad428315c..391f2378a0c3 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -31,6 +31,10 @@
 #include <linux/if_vlan.h>
 #include <linux/kmemleak.h>
 #include <linux/sys_soc.h>
+#include <net/page_pool.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/filter.h>
 
 #include <linux/pinctrl/consumer.h>
 #include <linux/clk.h>
@@ -60,6 +64,10 @@ static int descs_pool_size = CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT;
 module_param(descs_pool_size, int, 0444);
 MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
 
+/* The buf includes headroom compatible with both skb and xdpf */
+#define CPSW_HEADROOM_NA (max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + NET_IP_ALIGN)
+#define CPSW_HEADROOM  ALIGN(CPSW_HEADROOM_NA, sizeof(long))
+
 #define for_each_slave(priv, func, arg...)				\
 	do {								\
 		struct cpsw_slave *slave;				\
@@ -74,6 +82,13 @@ MODULE_PARM_DESC(descs_pool_size, "Number of CPDMA CPPI descriptors in pool");
 			(func)(slave++, ##arg);				\
 	} while (0)
 
+#define CPSW_XMETA_OFFSET	ALIGN(sizeof(struct xdp_frame), sizeof(long))
+
+#define CPSW_XDP_CONSUMED		1
+#define CPSW_XDP_CONSUMED_FLUSH		2
+#define CPSW_XDP_PASS			0
+#define CPSW_FLUSH_XDP_MAP		1
+
 static int cpsw_ndo_vlan_rx_add_vid(struct net_device *ndev, __be16 proto,
 				    u16 vid);
 
@@ -337,24 +352,58 @@ void cpsw_intr_disable(struct cpsw_common *cpsw)
 	return;
 }
 
+static int cpsw_is_xdpf_handle(void *handle)
+{
+	return (unsigned long)handle & BIT(0);
+}
+
+static void *cpsw_xdpf_to_handle(struct xdp_frame *xdpf)
+{
+	return (void *)((unsigned long)xdpf | BIT(0));
+}
+
+static struct xdp_frame *cpsw_handle_to_xdpf(void *handle)
+{
+	return (struct xdp_frame *)((unsigned long)handle & ~BIT(0));
+}
+
+struct __aligned(sizeof(long)) cpsw_meta_xdp {
+	struct net_device *ndev;
+	int ch;
+};
+
 int cpsw_tx_handler(void *token, int len, int status)
 {
+	struct cpsw_meta_xdp	*xmeta;
+	struct xdp_frame	*xdpf;
+	struct net_device	*ndev;
 	struct netdev_queue	*txq;
-	struct sk_buff		*skb = token;
-	struct net_device	*ndev = skb->dev;
-	struct cpsw_common	*cpsw = ndev_to_cpsw(ndev);
+	struct sk_buff		*skb;
+	int			ch;
+
+	if (cpsw_is_xdpf_handle(token)) {
+		xdpf = cpsw_handle_to_xdpf(token);
+		xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+		ndev = xmeta->ndev;
+		ch = xmeta->ch;
+		xdp_return_frame_rx_napi(xdpf);
+	} else {
+		skb = token;
+		ndev = skb->dev;
+		ch = skb_get_queue_mapping(skb);
+		cpts_tx_timestamp(ndev_to_cpsw(ndev)->cpts, skb);
+		dev_kfree_skb_any(skb);
+	}
 
 	/* Check whether the queue is stopped due to stalled tx dma, if the
 	 * queue is stopped then start the queue as we have free desc for tx
 	 */
-	txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));
+	txq = netdev_get_tx_queue(ndev, ch);
 	if (unlikely(netif_tx_queue_stopped(txq)))
 		netif_tx_wake_queue(txq);
 
-	cpts_tx_timestamp(cpsw->cpts, skb);
 	ndev->stats.tx_packets++;
 	ndev->stats.tx_bytes += len;
-	dev_kfree_skb_any(skb);
 	return 0;
 }
 
@@ -401,25 +450,246 @@ static void cpsw_rx_vlan_encap(struct sk_buff *skb)
 	}
 }
 
+static int cpsw_xdp_tx_frame(struct cpsw_priv *priv, struct xdp_frame *xdpf,
+			     struct page *page)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_meta_xdp *xmeta;
+	struct netdev_queue *txq;
+	struct cpdma_chan *txch;
+	dma_addr_t dma;
+	int ret, port;
+
+	xmeta = (void *)xdpf + CPSW_XMETA_OFFSET;
+	xmeta->ndev = priv->ndev;
+	xmeta->ch = 0;
+	txch = cpsw->txv[0].ch;
+
+	port = priv->emac_port + cpsw->data.dual_emac;
+	if (page) {
+		dma = page_pool_get_dma_addr(page);
+		dma += xdpf->data - (void *)xdpf;
+		ret = cpdma_chan_submit_mapped(txch, cpsw_xdpf_to_handle(xdpf),
+					       dma, xdpf->len, port);
+	} else {
+		if (sizeof(*xmeta) > xdpf->headroom) {
+			xdp_return_frame_rx_napi(xdpf);
+			return -EINVAL;
+		}
+
+		ret = cpdma_chan_submit(txch, cpsw_xdpf_to_handle(xdpf),
+					xdpf->data, xdpf->len, port);
+	}
+
+	if (ret) {
+		xdp_return_frame_rx_napi(xdpf);
+		goto stop;
+	}
+
+	/* no tx desc - stop sending us tx frames */
+	if (unlikely(!cpdma_check_free_tx_desc(txch)))
+		goto stop;
+
+	return ret;
+stop:
+	txq = netdev_get_tx_queue(priv->ndev, 0);
+	netif_tx_stop_queue(txq);
+
+	/* Barrier, so that stop_queue visible to other cpus */
+	smp_mb__after_atomic();
+
+	if (cpdma_check_free_tx_desc(txch))
+		netif_tx_wake_queue(txq);
+
+	return ret;
+}
+
+static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
+			struct page *page)
+{
+	struct net_device *ndev = priv->ndev;
+	int ret = CPSW_XDP_CONSUMED;
+	struct xdp_frame *xdpf;
+	struct bpf_prog *prog;
+	u32 act;
+
+	rcu_read_lock();
+
+	prog = READ_ONCE(priv->xdp_prog);
+	if (!prog) {
+		ret = CPSW_XDP_PASS;
+		goto out;
+	}
+
+	act = bpf_prog_run_xdp(prog, xdp);
+	switch (act) {
+	case XDP_PASS:
+		ret = CPSW_XDP_PASS;
+		break;
+	case XDP_TX:
+		xdpf = convert_to_xdp_frame(xdp);
+		if (unlikely(!xdpf))
+			goto drop;
+
+		cpsw_xdp_tx_frame(priv, xdpf, page);
+		break;
+	case XDP_REDIRECT:
+		if (xdp_do_redirect(ndev, xdp, prog))
+			goto drop;
+
+		ret = CPSW_XDP_CONSUMED_FLUSH;
+		break;
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(ndev, prog, act);
+		/* fall through -- handle aborts by dropping packet */
+	case XDP_DROP:
+		goto drop;
+	}
+out:
+	rcu_read_unlock();
+	return ret;
+drop:
+	rcu_read_unlock();
+	page_pool_recycle_direct(priv->page_pool[ch], page);
+	return ret;
+}
+
+static unsigned int cpsw_rxbuf_total_len(unsigned int len)
+{
+	len += CPSW_HEADROOM;
+	len += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	return SKB_DATA_ALIGN(len);
+}
+
+static void cpsw_destroy_rx_pool(struct cpsw_priv *priv, int ch)
+{
+	if (!xdp_rxq_info_is_reg(&priv->xdp_rxq[ch]))
+		return;
+
+	xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);
+	page_pool_destroy(priv->page_pool[ch]);
+	priv->page_pool[ch] = NULL;
+}
+
+struct page_pool *cpsw_create_page_pool(struct cpsw_common *cpsw, int size)
+{
+	struct page_pool_params pp_params;
+	struct page_pool *pool;
+
+	pp_params.order = 0;
+	pp_params.flags = PP_FLAG_DMA_MAP;
+	pp_params.pool_size = size;
+	pp_params.nid = NUMA_NO_NODE;
+	pp_params.dma_dir = DMA_BIDIRECTIONAL;
+	pp_params.dev = cpsw->dev;
+
+	pool = page_pool_create(&pp_params);
+	if (IS_ERR(pool))
+		dev_err(cpsw->dev, "cannot create rx page pool\n");
+
+	return pool;
+}
+
+static int cpsw_create_rx_pool(struct cpsw_priv *priv, int ch)
+{
+	struct xdp_rxq_info *xdp_rxq = &priv->xdp_rxq[ch];
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct page_pool *pool;
+	int ret, pool_size;
+
+	ret = xdp_rxq_info_reg(xdp_rxq, priv->ndev, ch);
+	if (ret)
+		return ret;
+
+	pool_size = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
+	pool = cpsw_create_page_pool(cpsw, pool_size);
+	if (IS_ERR(pool)) {
+		ret = PTR_ERR(pool);
+		xdp_rxq_info_unreg(xdp_rxq);
+		return ret;
+	}
+
+	priv->page_pool[ch] = pool;
+	ret = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL, pool);
+	if (ret)
+		cpsw_destroy_rx_pool(priv, ch);
+
+	return ret;
+}
+
+void cpsw_ndev_destroy_rx_pools(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	int i;
+
+	for (i = 0; i < cpsw->rx_ch_num; i++)
+		cpsw_destroy_rx_pool(priv, i);
+}
+
+int cpsw_ndev_create_rx_pools(struct cpsw_priv *priv)
+{
+	struct cpsw_common *cpsw = priv->cpsw;
+	int i, ret;
+
+	for (i = 0; i < cpsw->rx_ch_num; i++) {
+		ret = cpsw_create_rx_pool(priv, i);
+		if (ret)
+			goto err_cleanup;
+	}
+
+	return 0;
+
+err_cleanup:
+	cpsw_ndev_destroy_rx_pools(priv);
+
+	return ret;
+}
+
 static int cpsw_rx_handler(void *token, int len, int status)
 {
-	struct cpdma_chan	*ch;
-	struct sk_buff		*skb = token;
-	struct sk_buff		*new_skb;
-	struct net_device	*ndev = skb->dev;
-	int			ret = 0, port;
-	struct cpsw_common	*cpsw = ndev_to_cpsw(ndev);
+	struct page *new_page, *page = token;
+	void *pa = page_address(page);
+	struct cpsw_meta_xdp *xmeta = pa + CPSW_XMETA_OFFSET;
+	struct cpsw_common *cpsw = ndev_to_cpsw(xmeta->ndev);
+	int pkt_size = cpsw->rx_packet_max;
+	int ret = 0, port, ch = xmeta->ch;
+	int headroom = CPSW_HEADROOM;
+	struct net_device *ndev = xmeta->ndev;
+	int res = 0;
 	struct cpsw_priv *priv;
+	struct page_pool *pool;
+	struct sk_buff *skb;
+	struct xdp_buff xdp;
+	dma_addr_t dma;
 
-	if (cpsw->data.dual_emac) {
+	if (cpsw->data.dual_emac && status >= 0) {
 		port = CPDMA_RX_SOURCE_PORT(status);
-		if (port) {
+		if (port)
 			ndev = cpsw->slaves[--port].ndev;
-			skb->dev = ndev;
-		}
 	}
 
+	priv = netdev_priv(ndev);
+	pool = priv->page_pool[ch];
 	if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
+		if (cpsw->data.dual_emac && !pool) {
+			/* In dual mac mode while going down the descriptors
+			 * can have pointer on netdev that has been down, so
+			 * find active device and its page pool.
+			 */
+			for (port = 0; port < cpsw->data.slaves; port++) {
+				ndev = cpsw->slaves[port].ndev;
+				priv = netdev_priv(ndev);
+				if (priv->page_pool[ch]) {
+					pool = priv->page_pool[ch];
+					break;
+				}
+			}
+		}
+
 		/* In dual emac mode check for all interfaces */
 		if (cpsw->data.dual_emac && cpsw->usage_count &&
 		    (status >= 0)) {
@@ -427,49 +697,97 @@ static int cpsw_rx_handler(void *token, int len, int status)
 			 * is already down and the other interface is up
 			 * and running, instead of freeing which results
 			 * in reducing of the number of rx descriptor in
-			 * DMA engine, requeue skb back to cpdma.
+			 * DMA engine, requeue page back to cpdma.
 			 */
-			new_skb = skb;
+			new_page = page;
 			goto requeue;
 		}
 
-		/* the interface is going down, skbs are purged */
-		dev_kfree_skb_any(skb);
+		/* the interface is going down, pages are purged */
+		page_pool_recycle_direct(pool, page);
 		return 0;
 	}
 
-	new_skb = netdev_alloc_skb_ip_align(ndev, cpsw->rx_packet_max);
-	if (new_skb) {
-		skb_copy_queue_mapping(new_skb, skb);
-		skb_put(skb, len);
-		if (status & CPDMA_RX_VLAN_ENCAP)
-			cpsw_rx_vlan_encap(skb);
-		priv = netdev_priv(ndev);
-		if (priv->rx_ts_enabled)
-			cpts_rx_timestamp(cpsw->cpts, skb);
-		skb->protocol = eth_type_trans(skb, ndev);
-		netif_receive_skb(skb);
-		ndev->stats.rx_bytes += len;
-		ndev->stats.rx_packets++;
-		kmemleak_not_leak(new_skb);
-	} else {
+	new_page = page_pool_dev_alloc_pages(pool);
+	if (unlikely(!new_page)) {
+		new_page = page;
 		ndev->stats.rx_dropped++;
-		new_skb = skb;
+		goto requeue;
 	}
 
+	if (priv->xdp_prog) {
+		if (status & CPDMA_RX_VLAN_ENCAP) {
+			xdp.data = pa + CPSW_HEADROOM +
+				   CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+			xdp.data_end = xdp.data + len -
+				       CPSW_RX_VLAN_ENCAP_HDR_SIZE;
+		} else {
+			xdp.data = pa + CPSW_HEADROOM;
+			xdp.data_end = xdp.data + len;
+		}
+
+		xdp_set_data_meta_invalid(&xdp);
+
+		xdp.data_hard_start = pa;
+		xdp.rxq = &priv->xdp_rxq[ch];
+
+		ret = cpsw_run_xdp(priv, ch, &xdp, page);
+		if (ret != CPSW_XDP_PASS) {
+			if (ret == CPSW_XDP_CONSUMED_FLUSH)
+				res = CPSW_FLUSH_XDP_MAP;
+
+			goto requeue;
+		}
+
+		/* XDP prog might have changed packet data and boundaries */
+		len = xdp.data_end - xdp.data;
+		headroom = xdp.data - xdp.data_hard_start;
+
+		/* XDP prog can modify vlan tag, so can't use encap header */
+		status &= ~CPDMA_RX_VLAN_ENCAP;
+	}
+
+	/* pass skb to netstack if no XDP prog or returned XDP_PASS */
+	skb = build_skb(pa, cpsw_rxbuf_total_len(pkt_size));
+	if (!skb) {
+		ndev->stats.rx_dropped++;
+		page_pool_recycle_direct(pool, page);
+		goto requeue;
+	}
+
+	skb_reserve(skb, headroom);
+	skb_put(skb, len);
+	skb->dev = ndev;
+	if (status & CPDMA_RX_VLAN_ENCAP)
+		cpsw_rx_vlan_encap(skb);
+	if (priv->rx_ts_enabled)
+		cpts_rx_timestamp(cpsw->cpts, skb);
+	skb->protocol = eth_type_trans(skb, ndev);
+
+	/* unmap page as no netstack skb page recycling */
+	page_pool_unmap_page(pool, page);
+
+	netif_receive_skb(skb);
+
+	ndev->stats.rx_bytes += len;
+	ndev->stats.rx_packets++;
+
 requeue:
 	if (netif_dormant(ndev)) {
-		dev_kfree_skb_any(new_skb);
-		return 0;
+		page_pool_recycle_direct(pool, new_page);
+		return res;
 	}
 
-	ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
-	ret = cpdma_chan_submit(ch, new_skb, new_skb->data,
-				skb_tailroom(new_skb), 0);
+	xmeta = page_address(new_page) + CPSW_XMETA_OFFSET;
+	xmeta->ndev = ndev;
+	xmeta->ch = ch;
+
+	dma = page_pool_get_dma_addr(new_page) + CPSW_HEADROOM;
+	ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, new_page, dma,
+				       pkt_size, 0);
 	if (WARN_ON(ret < 0))
-		dev_kfree_skb_any(new_skb);
+		page_pool_recycle_direct(pool, new_page);
 
-	return 0;
+	return res;
 }
 
 void cpsw_split_res(struct cpsw_common *cpsw)
@@ -644,8 +962,8 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
 static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 {
 	u32 ch_map;
-	int num_rx, cur_budget, ch;
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
+	int num_rx, cur_budget, ch, res;
 	struct cpsw_vector *rxv;
 
 	/* process every unprocessed channel */
@@ -660,8 +978,12 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 		else
 			cur_budget = rxv->budget;
 
-		cpdma_chan_process(rxv->ch, &cur_budget);
+		res = cpdma_chan_process(rxv->ch, &cur_budget);
 		num_rx += cur_budget;
+
+		if (res & CPSW_FLUSH_XDP_MAP)
+			xdp_do_flush_map();
+
 		if (num_rx >= budget)
 			break;
 	}
@@ -677,10 +999,15 @@ static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 {
 	struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
-	int num_rx;
+	struct cpsw_vector *rxv;
+	int num_rx, res;
 
 	num_rx = budget;
-	cpdma_chan_process(cpsw->rxv[0].ch, &num_rx);
+	rxv = &cpsw->rxv[0];
+	res = cpdma_chan_process(rxv->ch, &num_rx);
+	if (res & CPSW_FLUSH_XDP_MAP)
+		xdp_do_flush_map();
+
 	if (num_rx < budget) {
 		napi_complete_done(napi_rx, num_rx);
 		writel(0xff, &cpsw->wr_regs->rx_en);
@@ -1042,33 +1369,38 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
 int cpsw_fill_rx_channels(struct cpsw_priv *priv)
 {
 	struct cpsw_common *cpsw = priv->cpsw;
-	struct sk_buff *skb;
+	struct cpsw_meta_xdp *xmeta;
+	struct page_pool *pool;
+	struct page *page;
 	int ch_buf_num;
 	int ch, i, ret;
+	dma_addr_t dma;
 
 	for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
+		pool = priv->page_pool[ch];
 		ch_buf_num = cpdma_chan_get_rx_buf_num(cpsw->rxv[ch].ch);
 		for (i = 0; i < ch_buf_num; i++) {
-			skb = __netdev_alloc_skb_ip_align(priv->ndev,
-							  cpsw->rx_packet_max,
-							  GFP_KERNEL);
-			if (!skb) {
-				cpsw_err(priv, ifup, "cannot allocate skb\n");
+			page = page_pool_dev_alloc_pages(pool);
+			if (!page) {
+				cpsw_err(priv, ifup, "allocate rx page err\n");
 				return -ENOMEM;
 			}
 
-			skb_set_queue_mapping(skb, ch);
-			ret = cpdma_chan_submit(cpsw->rxv[ch].ch, skb,
-						skb->data, skb_tailroom(skb),
-						0);
+			xmeta = page_address(page) + CPSW_XMETA_OFFSET;
+			xmeta->ndev = priv->ndev;
+			xmeta->ch = ch;
+
+			dma = page_pool_get_dma_addr(page) + CPSW_HEADROOM;
+			ret = cpdma_chan_submit_mapped(cpsw->rxv[ch].ch, page,
						       dma, cpsw->rx_packet_max,
						       0);
 			if (ret < 0) {
 				cpsw_err(priv, ifup,
-					 "cannot submit skb to channel %d rx, error %d\n",
+					 "cannot submit page to channel %d rx, error %d\n",
 					 ch, ret);
-				kfree_skb(skb);
+				page_pool_recycle_direct(pool, page);
 				return ret;
 			}
-			kmemleak_not_leak(skb);
 		}
 
 		cpsw_info(priv, ifup, "ch %d rx, submitted %d descriptors\n",
@@ -1380,6 +1712,10 @@ static int cpsw_ndo_open(struct net_device *ndev)
@@ -1380,6 +1712,10 @@ static int cpsw_ndo_open(struct net_device *ndev)
 		cpsw_ale_add_vlan(cpsw, cpsw->data.default_vlan,
 				  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
+	ret = cpsw_ndev_create_rx_pools(priv);
+	if (ret)
+		goto err_cleanup;
+
 	/* initialize shared resources for every ndev */
 	if (!cpsw->usage_count) {
 		/* disable priority elevation */
@@ -1430,11 +1766,11 @@ static int cpsw_ndo_open(struct net_device *ndev)
 	return 0;
 
 err_cleanup:
-	if (!cpsw->usage_count) {
+	if (!cpsw->usage_count)
 		cpdma_ctlr_stop(cpsw->dma);
-		for_each_slave(priv, cpsw_slave_stop, cpsw);
-	}
+
+	cpsw_ndev_destroy_rx_pools(priv);
+	for_each_slave(priv, cpsw_slave_stop, cpsw);
 	pm_runtime_put_sync(cpsw->dev);
 	netif_carrier_off(priv->ndev);
 	return ret;
@@ -1463,6 +1799,8 @@ static int cpsw_ndo_stop(struct net_device *ndev)
 	if (cpsw_need_resplit(cpsw))
 		cpsw_split_res(cpsw);
 
+	cpsw_ndev_destroy_rx_pools(priv);
+
 	cpsw->usage_count--;
 	pm_runtime_put_sync(cpsw->dev);
 	return 0;
@@ -2014,6 +2352,64 @@ static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
 	}
 }
 
+static int cpsw_xdp_prog_setup(struct cpsw_priv *priv, struct netdev_bpf *bpf)
+{
+	struct bpf_prog *prog = bpf->prog;
+
+	if (!priv->xdpi.prog && !prog)
+		return 0;
+
+	if (!xdp_attachment_flags_ok(&priv->xdpi, bpf))
+		return -EBUSY;
+
+	WRITE_ONCE(priv->xdp_prog, prog);
+
+	xdp_attachment_setup(&priv->xdpi, bpf);
+
+	return 0;
+}
+
+static int cpsw_ndo_bpf(struct net_device *ndev, struct netdev_bpf *bpf)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+
+	switch (bpf->command) {
+	case XDP_SETUP_PROG:
+		return cpsw_xdp_prog_setup(priv, bpf);
+
+	case XDP_QUERY_PROG:
+		return xdp_attachment_query(&priv->xdpi, bpf);
+
+	default:
+		return -EINVAL;
+	}
+}
+
+static int cpsw_ndo_xdp_xmit(struct net_device *ndev, int n,
+			     struct xdp_frame **frames, u32 flags)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct xdp_frame *xdpf;
+	int i, drops = 0;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	for (i = 0; i < n; i++) {
+		xdpf = frames[i];
+		if (xdpf->len < CPSW_MIN_PACKET_SIZE) {
+			xdp_return_frame_rx_napi(xdpf);
+			drops++;
+			continue;
+		}
+
+		if (cpsw_xdp_tx_frame(priv, xdpf, NULL))
+			drops++;
+	}
+
+	return n - drops;
+}
+
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void cpsw_ndo_poll_controller(struct net_device *ndev)
 {
@@ -2042,6 +2438,8 @@ static const struct net_device_ops cpsw_netdev_ops = {
 	.ndo_vlan_rx_add_vid	= cpsw_ndo_vlan_rx_add_vid,
 	.ndo_vlan_rx_kill_vid	= cpsw_ndo_vlan_rx_kill_vid,
 	.ndo_setup_tc		= cpsw_ndo_setup_tc,
+	.ndo_bpf		= cpsw_ndo_bpf,
+	.ndo_xdp_xmit		= cpsw_ndo_xdp_xmit,
 };
 
 static void cpsw_get_drvinfo(struct net_device *ndev,
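The ndo_open()/ndo_stop() hunks above and the ethtool paths below only call cpsw_ndev_create_rx_pools()/cpsw_ndev_destroy_rx_pools(); their bodies are introduced elsewhere in this series. For orientation, here is a sketch of the usual shape of such a creator (a per-channel page_pool plus xdp_rxq_info registration). Every detail in it, including the flags, sizing, and the page_pool_destroy() unwind, is an assumption about that code, not a quote of it.

/* Hedged sketch of a per-channel pool creator; not the series' code. */
#include <linux/dma-mapping.h>
#include <net/page_pool.h>
#include <net/xdp.h>

#include "cpsw_priv.h"	/* for struct cpsw_priv, assumed available */

static int cpsw_create_rx_pool_sketch(struct cpsw_priv *priv, int ch,
				      int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,			/* one page per frame */
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages for us */
		.pool_size	= pool_size,		/* descriptors on this channel */
		.nid		= NUMA_NO_NODE,
		.dma_dir	= DMA_BIDIRECTIONAL,	/* rx and XDP_TX share pages */
		.dev		= priv->cpsw->dev,
	};
	struct page_pool *pool;
	int ret;

	pool = page_pool_create(&pp_params);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	ret = xdp_rxq_info_reg(&priv->xdp_rxq[ch], priv->ndev, ch);
	if (ret)
		goto err_pool;

	/* let the XDP core return redirected pages to this pool */
	ret = xdp_rxq_info_reg_mem_model(&priv->xdp_rxq[ch],
					 MEM_TYPE_PAGE_POOL, pool);
	if (ret) {
		xdp_rxq_info_unreg(&priv->xdp_rxq[ch]);
		goto err_pool;
	}

	priv->page_pool[ch] = pool;
	return 0;

err_pool:
	page_pool_destroy(pool);	/* freeing API of this era, assumed */
	return ret;
}

The destroy helper would unwind in reverse: unregister each channel's xdp_rxq_info (which releases the registered memory model) and clear priv->page_pool[ch].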
diff --git a/drivers/net/ethernet/ti/cpsw_ethtool.c b/drivers/net/ethernet/ti/cpsw_ethtool.c
index 94f8f5ab46a5..71ccef9d1984 100644
--- a/drivers/net/ethernet/ti/cpsw_ethtool.c
+++ b/drivers/net/ethernet/ti/cpsw_ethtool.c
@@ -584,6 +584,41 @@ static int cpsw_update_channels_res(struct cpsw_priv *priv, int ch_num, int rx,
 	return 0;
 }
 
+static void cpsw_destroy_rx_pools(struct cpsw_common *cpsw)
+{
+	struct cpsw_priv *priv;
+	int i;
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		priv = netdev_priv(cpsw->slaves[i].ndev);
+		if (priv->ndev && netif_running(priv->ndev))
+			cpsw_ndev_destroy_rx_pools(priv);
+	}
+}
+
+static int cpsw_create_rx_pools(struct cpsw_common *cpsw)
+{
+	struct cpsw_priv *priv;
+	int i, ret;
+
+	for (i = 0; i < cpsw->data.slaves; i++) {
+		priv = netdev_priv(cpsw->slaves[i].ndev);
+		if (!(priv->ndev && netif_running(priv->ndev)))
+			continue;
+
+		ret = cpsw_ndev_create_rx_pools(priv);
+		if (ret)
+			goto err_cleanup;
+	}
+
+	return 0;
+
+err_cleanup:
+	cpsw_destroy_rx_pools(cpsw);
+
+	return ret;
+}
+
 int cpsw_set_channels_common(struct net_device *ndev,
 			     struct ethtool_channels *chs,
 			     cpdma_handler_fn rx_handler)
@@ -591,7 +626,7 @@ int cpsw_set_channels_common(struct net_device *ndev,
 	struct cpsw_priv *priv = netdev_priv(ndev);
 	struct cpsw_common *cpsw = priv->cpsw;
 	struct net_device *sl_ndev;
-	int i, ret;
+	int i, new_pools, ret;
 
 	ret = cpsw_check_ch_settings(cpsw, chs);
 	if (ret < 0)
@@ -599,6 +634,10 @@ int cpsw_set_channels_common(struct net_device *ndev,
 
 	cpsw_suspend_data_pass(ndev);
 
+	new_pools = (chs->rx_count != cpsw->rx_ch_num) && cpsw->usage_count;
+	if (new_pools)
+		cpsw_destroy_rx_pools(cpsw);
+
 	ret = cpsw_update_channels_res(priv, chs->rx_count, 1, rx_handler);
 	if (ret)
 		goto err;
@@ -629,6 +668,12 @@ int cpsw_set_channels_common(struct net_device *ndev,
 	if (cpsw->usage_count)
 		cpsw_split_res(cpsw);
 
+	if (new_pools) {
+		ret = cpsw_create_rx_pools(cpsw);
+		if (ret)
+			goto err;
+	}
+
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
 		return 0;
@@ -654,8 +699,7 @@ void cpsw_get_ringparam(struct net_device *ndev,
 int cpsw_set_ringparam(struct net_device *ndev,
 		       struct ethtool_ringparam *ering)
 {
-	struct cpsw_priv *priv = netdev_priv(ndev);
-	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
 	int ret;
 
 	/* ignore ering->tx_pending - only rx_pending adjustment is supported */
@@ -670,15 +714,21 @@ int cpsw_set_ringparam(struct net_device *ndev,
 
 	cpsw_suspend_data_pass(ndev);
 
+	cpsw_destroy_rx_pools(cpsw);
+
 	cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
 
 	if (cpsw->usage_count)
 		cpdma_chan_split_pool(cpsw->dma);
 
+	ret = cpsw_create_rx_pools(cpsw);
+	if (ret)
+		goto err;
+
 	ret = cpsw_resume_data_pass(ndev);
 	if (!ret)
 		return 0;
-
+err:
 	dev_err(cpsw->dev, "cannot set ring params, closing device\n");
 	dev_close(ndev);
 	return ret;
diff --git a/drivers/net/ethernet/ti/cpsw_priv.h b/drivers/net/ethernet/ti/cpsw_priv.h
index 2ecb3af59fe9..b428875fedfe 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.h
+++ b/drivers/net/ethernet/ti/cpsw_priv.h
@@ -360,6 +360,11 @@ struct cpsw_priv {
 	int shp_cfg_speed;
 	int tx_ts_enabled;
 	int rx_ts_enabled;
+	struct bpf_prog *xdp_prog;
+	struct xdp_rxq_info xdp_rxq[CPSW_MAX_QUEUES];
+	struct page_pool *page_pool[CPSW_MAX_QUEUES];
+	struct xdp_attachment_info xdpi;
+
 	u32 emac_port;
 	struct cpsw_common *cpsw;
 };
@@ -391,6 +396,8 @@ int cpsw_fill_rx_channels(struct cpsw_priv *priv);
 void cpsw_intr_enable(struct cpsw_common *cpsw);
 void cpsw_intr_disable(struct cpsw_common *cpsw);
 int cpsw_tx_handler(void *token, int len, int status);
+int cpsw_ndev_create_rx_pools(struct cpsw_priv *priv);
+void cpsw_ndev_destroy_rx_pools(struct cpsw_priv *priv);
 
 /* ethtool */
 u32 cpsw_get_msglevel(struct net_device *ndev);
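To close, a hypothetical user-space smoke test for the new .ndo_bpf hook: load a compiled XDP object and attach it to a cpsw interface in native (driver) mode. The object name xdp_pass.o and the default interface name are placeholders; the helpers are the era's libbpf calls (bpf_prog_load()/bpf_set_link_xdp_fd()), built with -lbpf.

/* Attach an XDP program to a cpsw netdev; usage: ./xdp_attach [ifname] */
#include <stdio.h>
#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "eth0";
	struct bpf_object *obj;
	int ifindex, prog_fd;

	ifindex = if_nametoindex(ifname);
	if (!ifindex) {
		perror("if_nametoindex");
		return 1;
	}

	/* xdp_pass.o is a placeholder for any compiled XDP object */
	if (bpf_prog_load("xdp_pass.o", BPF_PROG_TYPE_XDP, &obj, &prog_fd)) {
		fprintf(stderr, "failed to load xdp_pass.o\n");
		return 1;
	}

	/* XDP_FLAGS_DRV_MODE requests the native implementation added
	 * by this patch rather than generic (skb-based) XDP.
	 */
	if (bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_DRV_MODE) < 0) {
		fprintf(stderr, "failed to attach prog to %s\n", ifname);
		return 1;
	}

	printf("XDP program attached to %s\n", ifname);
	return 0;
}

With a program attached, changing the channel count via ethtool -L exercises the pool destroy/create path in cpsw_set_channels_common() above, since the per-channel pools must be rebuilt to match the new rx queue layout.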