Re: [net-next PATCH] ixgbe: add counter for times rx pages gets allocated, not recycled

From: Alexander Duyck <alexander.duyck@gmail.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Netdev <netdev@vger.kernel.org>, Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Date: Fri, 1 Sep 2017 08:24:01 -0700
Message-ID: <CAKgT0Uc2N-nxyB3AQm+VZ_j1Y1TDgBjTVLQQ5uvPKHe7EpdL3g@mail.gmail.com>
Archive: http://patchwork.ozlabs.org/comment/1761760/

On Fri, Sep 1, 2017 at 3:54 AM, Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
> The ixgbe driver has a page recycle scheme based around the RX-ring
> queue, where an RX page is shared between two packets. Based on the
> refcnt, the driver can determine if the RX page is currently used by
> only a single packet; if so, it can directly refill/recycle the
> RX slot with the opposite "side" of the page.
>
> While this is a clever trick, it is hard to determine when this
> recycling succeeds and when it fails.  Add a counter, available via
> ethtool --statistics as 'alloc_rx_page', which counts the number of
> times the recycle fails and the real page allocator is invoked.  When
> interpreting the stats, remember that every alloc will serve two
> packets.
>
> The counter is collected per rx_ring, but is summed and exported via
> ethtool as 'alloc_rx_page'.  It would be relevant to know which
> rx_ring cannot keep up, but that can be exported later if someone
> experiences a need for it.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  drivers/net/ethernet/intel/ixgbe/ixgbe.h         |    2 ++
>  drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c |    1 +
>  drivers/net/ethernet/intel/ixgbe/ixgbe_main.c    |    4 ++++
>  3 files changed, 7 insertions(+)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> index dd5578756ae0..008d0085e01f 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
> @@ -275,6 +275,7 @@ struct ixgbe_rx_queue_stats {
>         u64 rsc_count;
>         u64 rsc_flush;
>         u64 non_eop_descs;
> +       u64 alloc_rx_page;
>         u64 alloc_rx_page_failed;
>         u64 alloc_rx_buff_failed;
>         u64 csum_err;
> @@ -655,6 +656,7 @@ struct ixgbe_adapter {
>         u64 rsc_total_count;
>         u64 rsc_total_flush;
>         u64 non_eop_descs;
> +       u32 alloc_rx_page;
>         u32 alloc_rx_page_failed;
>         u32 alloc_rx_buff_failed;
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
> index 72c565712a5f..d96d9d6c3492 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
> @@ -104,6 +104,7 @@ static const struct ixgbe_stats ixgbe_gstrings_stats[] = {
>         {"tx_flow_control_xoff", IXGBE_STAT(stats.lxofftxc)},
>         {"rx_flow_control_xoff", IXGBE_STAT(stats.lxoffrxc)},
>         {"rx_csum_offload_errors", IXGBE_STAT(hw_csum_rx_error)},
> +       {"alloc_rx_page", IXGBE_STAT(alloc_rx_page)},
>         {"alloc_rx_page_failed", IXGBE_STAT(alloc_rx_page_failed)},
>         {"alloc_rx_buff_failed", IXGBE_STAT(alloc_rx_buff_failed)},
>         {"rx_no_dma_resources", IXGBE_STAT(hw_rx_no_dma_resources)},
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> index d962368d08d0..7d2e4b08cdf4 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
> @@ -1598,6 +1598,7 @@ static bool ixgbe_alloc_mapped_page(struct ixgbe_ring *rx_ring,
>                 rx_ring->rx_stats.alloc_rx_page_failed++;
>                 return false;
>         }
> +       rx_ring->rx_stats.alloc_rx_page++;

So this line should be moved down past the DMA page mapping, as that
mapping can fail on some architectures. My personal preference would be
to have it placed in the lines just after the "bi" members are updated,
and before the return.

>         /* map page for use */
>         dma = dma_map_page_attrs(rx_ring->dev, page, 0,
> @@ -6771,6 +6772,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
>         u32 i, missed_rx = 0, mpc, bprc, lxon, lxoff, xon_off_tot;
>         u64 non_eop_descs = 0, restart_queue = 0, tx_busy = 0;
>         u64 alloc_rx_page_failed = 0, alloc_rx_buff_failed = 0;
> +       u64 alloc_rx_page = 0;
>         u64 bytes = 0, packets = 0, hw_csum_rx_error = 0;
>
>         if (test_bit(__IXGBE_DOWN, &adapter->state) ||
> @@ -6791,6 +6793,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
>         for (i = 0; i < adapter->num_rx_queues; i++) {
>                 struct ixgbe_ring *rx_ring = adapter->rx_ring[i];
>                 non_eop_descs += rx_ring->rx_stats.non_eop_descs;
> +               alloc_rx_page += rx_ring->rx_stats.alloc_rx_page;
>                 alloc_rx_page_failed += rx_ring->rx_stats.alloc_rx_page_failed;
>                 alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed;
>                 hw_csum_rx_error += rx_ring->rx_stats.csum_err;
> @@ -6798,6 +6801,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
>                 packets += rx_ring->stats.packets;
>         }
>         adapter->non_eop_descs = non_eop_descs;
> +       adapter->alloc_rx_page = alloc_rx_page;
>         adapter->alloc_rx_page_failed = alloc_rx_page_failed;
>         adapter->alloc_rx_buff_failed = alloc_rx_buff_failed;
>         adapter->hw_csum_rx_error = hw_csum_rx_error;


Re: [net-next PATCH] ixgbe: add counter for times rx pages gets allocated, not recycled

From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>, netdev@vger.kernel.org
Cc: Alexander Duyck <alexander.duyck@gmail.com>
Date: Fri, 01 Sep 2017 12:09:53 -0700
Message-ID: <1504292993.3922.25.camel@intel.com>
Archive: http://patchwork.ozlabs.org/comment/1761924/

On Fri, 2017-09-01 at 12:54 +0200, Jesper Dangaard Brouer wrote:
> [patch description trimmed; quoted in full above]
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

Since Alex has a suggested change for this patch, when you resubmit v2,
can you make sure you CC the intel-wired-lan mailing list, so that my
patchwork project picks up this patch?  Thanks in advance, Jesper.