From patchwork Wed Feb 13 18:51:02 2019
From: Anirudh Venkataramanan
To: intel-wired-lan@lists.osuosl.org
Date: Wed, 13 Feb 2019 10:51:02 -0800
Message-Id: <20190213185115.25877-3-anirudh.venkataramanan@intel.com>
In-Reply-To: <20190213185115.25877-1-anirudh.venkataramanan@intel.com>
References: <20190213185115.25877-1-anirudh.venkataramanan@intel.com>
Subject: [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto separate function

From: Maciej Fijalkowski

Introduce ice_can_reuse_rx_page which
will verify whether the page can be reused and return the boolean result
to the caller.

Signed-off-by: Maciej Fijalkowski
Signed-off-by: Anirudh Venkataramanan
Tested-by: Andrew Bowers
---
[Anirudh Venkataramanan cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 80 +++++++++++++++++--------------
 1 file changed, 45 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8c0a8b63670b..d1f4aef9bcc2 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -496,6 +496,48 @@ static bool ice_page_is_reserved(struct page *page)
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
+/**
+ * ice_can_reuse_rx_page - Determine if page can be reused for another rx
+ * @rx_buf: buffer containing the page
+ * @truesize: the offset that needs to be applied to page
+ *
+ * If page is reusable, we have a green light for calling ice_reuse_rx_page,
+ * which will assign the current buffer to the buffer that next_to_alloc is
+ * pointing to; otherwise, the dma mapping needs to be destroyed and
+ * page freed
+ */
+static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
+				  unsigned int truesize)
+{
+	struct page *page = rx_buf->page;
+
+	/* avoid re-using remote pages */
+	if (unlikely(ice_page_is_reserved(page)))
+		return false;
+
+#if (PAGE_SIZE < 8192)
+	/* if we are only owner of page we can reuse it */
+	if (unlikely(page_count(page) != 1))
+		return false;
+
+	/* flip page offset to other buffer */
+	rx_buf->page_offset ^= truesize;
+#else
+	/* move offset up to the next cache line */
+	rx_buf->page_offset += truesize;
+
+	if (rx_buf->page_offset > PAGE_SIZE - ICE_RXBUF_2048)
+		return false;
+#endif /* PAGE_SIZE < 8192) */
+
+	/* Even if we own the page, we are not allowed to use atomic_set()
+	 * This would break get_page_unless_zero() users.
+	 */
+	get_page(page);
+
+	return true;
+}
+
 /**
  * ice_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_buf: buffer containing page to add
@@ -517,17 +559,9 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = ICE_RXBUF_2048;
 #else
-	unsigned int last_offset = PAGE_SIZE - ICE_RXBUF_2048;
-	unsigned int truesize;
+	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
 #endif /* PAGE_SIZE < 8192) */
-
-	struct page *page;
-
-	page = rx_buf->page;
-
-#if (PAGE_SIZE >= 8192)
-	truesize = ALIGN(size, L1_CACHE_BYTES);
-#endif /* PAGE_SIZE >= 8192) */
+	struct page *page = rx_buf->page;
 
 	/* will the data fit in the skb we allocated? if so, just
 	 * copy it as it is pretty small anyway
@@ -549,31 +583,7 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
 			rx_buf->page_offset, size, truesize);
 
-	/* avoid re-using remote pages */
-	if (unlikely(ice_page_is_reserved(page)))
-		return false;
-
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely(page_count(page) != 1))
-		return false;
-
-	/* flip page offset to other buffer */
-	rx_buf->page_offset ^= truesize;
-#else
-	/* move offset up to the next cache line */
-	rx_buf->page_offset += truesize;
-
-	if (rx_buf->page_offset > last_offset)
-		return false;
-#endif /* PAGE_SIZE < 8192) */
-
-	/* Even if we own the page, we are not allowed to use atomic_set()
-	 * This would break get_page_unless_zero() users.
-	 */
-	get_page(rx_buf->page);
-
-	return true;
+	return ice_can_reuse_rx_page(rx_buf, truesize);
 }
 
 /**
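
For readers skimming the diff, here is a minimal standalone sketch (not part of
the patch, illustration only) of the half-page flip that ice_can_reuse_rx_page
performs on systems with 4 KB pages; the 2048-byte buffer size mirrors
ICE_RXBUF_2048, and the program is ordinary user-space C, not driver code:

/* Illustration only -- not driver code. Assumes 4 KB pages and
 * 2 KB Rx buffers (the ICE_RXBUF_2048 case). XOR-ing page_offset
 * with truesize alternates between the two halves of the page, so
 * the half just handed up the stack can rest while the other half
 * receives the next frame.
 */
#include <stdio.h>

int main(void)
{
	unsigned int page_offset = 0;
	unsigned int truesize = 2048;	/* assumed ICE_RXBUF_2048 */
	int i;

	for (i = 0; i < 4; i++) {
		printf("rx %d uses page bytes [%u, %u)\n",
		       i, page_offset, page_offset + truesize);
		page_offset ^= truesize;	/* the flip done in ice_can_reuse_rx_page */
	}

	return 0;
}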