From patchwork Mon Mar 25 20:29:43 2019
X-Patchwork-Submitter: "Kirsher, Jeffrey T"
X-Patchwork-Id: 1064790
X-Patchwork-Delegate: davem@davemloft.net
From: Jeff Kirsher
To: davem@davemloft.net
Cc: Maciej Fijalkowski, netdev@vger.kernel.org, nhorman@redhat.com,
    sassmann@redhat.com, Anirudh Venkataramanan, Andrew Bowers, Jeff Kirsher
Subject: [net-next 05/15] ice: Retrieve rx_buf in separate function
Date: Mon, 25 Mar 2019 13:29:43 -0700
Message-Id: <20190325202953.32095-6-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20190325202953.32095-1-jeffrey.t.kirsher@intel.com>
References: <20190325202953.32095-1-jeffrey.t.kirsher@intel.com>
X-Mailing-List: netdev@vger.kernel.org

From: Maciej Fijalkowski

Introduce ice_get_rx_buf, which will fetch the Rx buffer and do the DMA
synchronization. The packet length that hardware writes into the Rx
descriptor is now read in ice_clean_rx_irq, so we can feed it to
ice_get_rx_buf and drop the rx_desc argument from ice_fetch_rx_buf and
ice_add_rx_frag.

Signed-off-by: Maciej Fijalkowski
Signed-off-by: Anirudh Venkataramanan
Tested-by: Andrew Bowers
Signed-off-by: Jeff Kirsher
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 75 +++++++++++++----------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 64baedee6336..8c0a8b63670b 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -499,8 +499,8 @@ static bool ice_page_is_reserved(struct page *page)
 /**
  * ice_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_buf: buffer containing page to add
- * @rx_desc: descriptor containing length of buffer written by hardware
  * @skb: sk_buf to place the data into
+ * @size: the length of the packet
  *
  * This function will add the data contained in rx_buf->page to the skb.
 * This is done either through a direct copy if the data in the buffer is
@@ -511,8 +511,8 @@ static bool ice_page_is_reserved(struct page *page)
  * true if the buffer can be reused by the adapter.
  */
 static bool
-ice_add_rx_frag(struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *rx_desc,
-		struct sk_buff *skb)
+ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
+		unsigned int size)
 {
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = ICE_RXBUF_2048;
@@ -522,10 +522,6 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *rx_desc,
 #endif /* PAGE_SIZE < 8192) */
 
 	struct page *page;
-	unsigned int size;
-
-	size = le16_to_cpu(rx_desc->wb.pkt_len) &
-		ICE_RX_FLX_DESC_PKT_LEN_M;
 
 	page = rx_buf->page;
 
@@ -603,10 +599,35 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
 	*new_buf = *old_buf;
 }
 
+/**
+ * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
+ * @rx_ring: Rx descriptor ring to transact packets on
+ * @size: size of buffer to add to skb
+ *
+ * This function will pull an Rx buffer from the ring and synchronize it
+ * for use by the CPU.
+ */
+static struct ice_rx_buf *
+ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
+{
+	struct ice_rx_buf *rx_buf;
+
+	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
+	prefetchw(rx_buf->page);
+
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
+				      rx_buf->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	return rx_buf;
+}
+
 /**
  * ice_fetch_rx_buf - Allocate skb and populate it
  * @rx_ring: Rx descriptor ring to transact packets on
- * @rx_desc: descriptor containing info written by hardware
+ * @rx_buf: Rx buffer to pull data from
+ * @size: the length of the packet
  *
  * This function allocates an skb on the fly, and populates it with the page
  * data from the current receive descriptor, taking care to set up the skb
@@ -614,20 +635,14 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
  * necessary.
  */
 static struct sk_buff *
-ice_fetch_rx_buf(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
+ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
+		 unsigned int size)
 {
-	struct ice_rx_buf *rx_buf;
-	struct sk_buff *skb;
-	struct page *page;
-
-	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
-	page = rx_buf->page;
-	prefetchw(page);
-
-	skb = rx_buf->skb;
+	struct sk_buff *skb = rx_buf->skb;
 
 	if (likely(!skb)) {
-		u8 *page_addr = page_address(page) + rx_buf->page_offset;
+		u8 *page_addr = page_address(rx_buf->page) +
+				rx_buf->page_offset;
 
 		/* prefetch first cache line of first page */
 		prefetch(page_addr);
@@ -644,25 +659,13 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
 			return NULL;
 		}
 
-		/* we will be copying header into skb->data in
-		 * pskb_may_pull so it is in our interest to prefetch
-		 * it now to avoid a possible cache miss
-		 */
-		prefetchw(skb->data);
-
 		skb_record_rx_queue(skb, rx_ring->q_index);
 	} else {
-		/* we are reusing so sync this buffer for CPU use */
-		dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
-					      rx_buf->page_offset,
-					      ICE_RXBUF_2048,
-					      DMA_FROM_DEVICE);
-
 		rx_buf->skb = NULL;
 	}
 
 	/* pull page into skb */
-	if (ice_add_rx_frag(rx_buf, rx_desc, skb)) {
+	if (ice_add_rx_frag(rx_buf, skb, size)) {
 		/* hand second half of page back to the ring */
 		ice_reuse_rx_page(rx_ring, rx_buf);
 		rx_ring->rx_stats.page_reuse_count++;
@@ -963,7 +966,9 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	/* start the loop to process RX packets bounded by 'budget' */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
+		struct ice_rx_buf *rx_buf;
 		struct sk_buff *skb;
+		unsigned int size;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
 		u8 rx_ptype;
@@ -993,8 +998,12 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		 */
 		dma_rmb();
 
+		size = le16_to_cpu(rx_desc->wb.pkt_len) &
+			ICE_RX_FLX_DESC_PKT_LEN_M;
+
+		rx_buf = ice_get_rx_buf(rx_ring, size);
 		/* allocate (if needed) and populate skb */
-		skb = ice_fetch_rx_buf(rx_ring, rx_desc);
+		skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);
 		if (!skb)
 			break;
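
For readers following the refactor, the per-packet flow in ice_clean_rx_irq
after this patch can be condensed as in the sketch below. This is only an
illustrative excerpt of the hunks above, not standalone or complete driver
code: descriptor lookup, the dma_rmb() barrier, stats, and the page-recycling
path are elided.

	/* condensed sketch of the post-patch per-packet steps in
	 * ice_clean_rx_irq(), once a written-back descriptor is found
	 */
	size = le16_to_cpu(rx_desc->wb.pkt_len) &
		ICE_RX_FLX_DESC_PKT_LEN_M;	/* length read once, up front */

	rx_buf = ice_get_rx_buf(rx_ring, size);	/* fetch buffer, DMA-sync for CPU */

	/* build or extend the skb; rx_desc is no longer passed down */
	skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);
	if (!skb)
		break;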