From patchwork Mon May 9 09:13:02 2022
X-Patchwork-Submitter: Jian Hui Lee
X-Patchwork-Id: 1628416
From: Jian Hui Lee
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Focal:linux-intel-iotg-5.15][PATCH 1/1] net: stmmac: Add GFP_DMA32 for rx buffers if no 64 capability
Date: Mon, 9 May 2022 17:13:02 +0800
Message-Id: <20220509091302.39424-2-jianhui.lee@canonical.com>
In-Reply-To: <20220509091302.39424-1-jianhui.lee@canonical.com>
References: <20220509091302.39424-1-jianhui.lee@canonical.com>

From: David Wu

BugLink: https://launchpad.net/bugs/1956413

Use page_pool_alloc_pages() instead of page_pool_dev_alloc_pages() so that
the gfp parameter can be passed explicitly. When the hardware does not
support 64-bit addressing, allocating the rx buffers from 32-bit addressable
memory (GFP_DMA32) avoids a bounce copy through swiotlb.

Signed-off-by: David Wu
Signed-off-by: David S. Miller
(cherry picked from commit 884d2b845477cd0a18302444dc20fe2d9a01743e)
Signed-off-by: Jian Hui Lee
---
 .../net/ethernet/stmicro/stmmac/stmmac_main.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 9376c4e28626..6b14dd5b4637 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -1463,16 +1463,20 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
 {
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
 	struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];
+	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+	if (priv->dma_cap.addr64 <= 32)
+		gfp |= GFP_DMA32;
 
 	if (!buf->page) {
-		buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+		buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 		if (!buf->page)
 			return -ENOMEM;
 		buf->page_offset = stmmac_rx_offset(priv);
 	}
 
 	if (priv->sph && !buf->sec_page) {
-		buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
+		buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 		if (!buf->sec_page)
 			return -ENOMEM;
 
@@ -4496,6 +4500,10 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 	struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];
 	int dirty = stmmac_rx_dirty(priv, queue);
 	unsigned int entry = rx_q->dirty_rx;
+	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+	if (priv->dma_cap.addr64 <= 32)
+		gfp |= GFP_DMA32;
 
 	while (dirty-- > 0) {
 		struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];
@@ -4508,13 +4516,13 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv, u32 queue)
 		p = rx_q->dma_rx + entry;
 
 		if (!buf->page) {
-			buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);
+			buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 			if (!buf->page)
 				break;
 		}
 
 		if (priv->sph && !buf->sec_page) {
-			buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);
+			buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);
 			if (!buf->sec_page)
 				break;
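
For reviewers, the sketch below summarises the allocation pattern the patch
applies in both stmmac_init_rx_buffers() and stmmac_rx_refill(). It is an
illustration only, not part of the patch, and the helper name
stmmac_rx_fill_gfp() is hypothetical; priv->dma_cap.addr64, GFP_DMA32 and
page_pool_alloc_pages() are the real fields and APIs used in the hunks above.

/* Illustrative sketch only, assuming the 5.15 page_pool API
 * (include/net/page_pool.h); the helper name is hypothetical.
 */
#include <linux/gfp.h>
#include <net/page_pool.h>
#include "stmmac.h"

static gfp_t stmmac_rx_fill_gfp(struct stmmac_priv *priv)
{
	/* rx refill runs in atomic context, so no sleeping allocations */
	gfp_t gfp = GFP_ATOMIC | __GFP_NOWARN;

	/* If the DMA engine cannot address more than 32 bits, request
	 * pages below 4 GiB so swiotlb does not have to bounce them.
	 */
	if (priv->dma_cap.addr64 <= 32)
		gfp |= GFP_DMA32;

	return gfp;
}

/* Usage (hypothetical):
 *	buf->page = page_pool_alloc_pages(rx_q->page_pool,
 *					  stmmac_rx_fill_gfp(priv));
 */

The patch open-codes this mask computation at the top of each function rather
than adding a helper, which keeps the backport identical to the upstream
commit.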