
[net-next] bnx2: Increase max rx ring size from 1K to 2K

Message ID 1287448254-14173-1-git-send-email-mchan@broadcom.com
State Accepted, archived
Delegated to: David Miller

Commit Message

Michael Chan Oct. 19, 2010, 12:30 a.m. UTC
A number of customers are reporting packet loss under certain workloads
(e.g. heavy bursts of small packets) with flow control disabled.  A larger
rx ring helps to prevent these losses.

No change in default rx ring size and memory consumption.

Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Acked-by: John Feeney <jfeeney@redhat.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
---
 drivers/net/bnx2.h |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

Comments

David Miller Oct. 19, 2010, 8:14 a.m. UTC | #1
From: "Michael Chan" <mchan@broadcom.com>
Date: Mon, 18 Oct 2010 17:30:54 -0700

> A number of customers are reporting packet loss under certain workloads
> (e.g. heavy bursts of small packets) with flow control disabled.  A larger
> rx ring helps to prevent these losses.
> 
> No change in default rx ring size and memory consumption.
> 
> Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
> Acked-by: John Feeney <jfeeney@redhat.com>
> Signed-off-by: Michael Chan <mchan@broadcom.com>

I don't see how it's any better to queue things more deeply in
hardware rather than simply using hardware flow control.  That is what
flow control is for: it pushes the queuing back into the sender's
networking stack, in software, which in the end performs better and
gives better feedback to the source of the data.

These huge RX queue sizes are absolutely ridiculous, and I've
complained about this before.

And instead of seeing less of this, I keep seeing more of this stuff.
Please exert some pushback on these folks who are doing such insane
things.

Thanks.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Michael Chan Oct. 19, 2010, 5:03 p.m. UTC | #2
On Tue, 2010-10-19 at 01:14 -0700, David Miller wrote:
> From: "Michael Chan" <mchan@broadcom.com>
> Date: Mon, 18 Oct 2010 17:30:54 -0700
> 
> > A number of customers are reporting packet loss under certain workloads
> > (e.g. heavy bursts of small packets) with flow control disabled.  A larger
> > rx ring helps to prevent these losses.
> > 
> > No change in default rx ring size and memory consumption.
> > 
> > Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
> > Acked-by: John Feeney <jfeeney@redhat.com>
> > Signed-off-by: Michael Chan <mchan@broadcom.com>
> 
> I don't see how it's any better to queue things more deeply in
> hardware rather than simply using hardware flow control.  That is what
> flow control is for: it pushes the queuing back into the sender's
> networking stack, in software, which in the end performs better and
> gives better feedback to the source of the data.

There are situations where flow control is not desirable.  For example,
if there are many multicast receivers in the network, you may not want a
few slow receivers to slow down the entire network with flow control.

> 
> These huge RX queue sizes are absolutely ridiculous, and I've
> complained about this before.

Yes, you have, and I was initially hesitant to post this patch.  But
the customer sees that many other 1G drivers in the tree support bigger
ring sizes and, as a result, they see fewer packet drops with those
other devices.  In fact, 2K is still much smaller than the maximums of
many other 1G drivers in the tree.  Please also note that this does not
add any extra memory to the default configuration.  Thanks.

> 
> And instead of seeing less of this, I keep seeing more of this stuff.
> Please exert some pushback on these folks who are doing such insane
> things.
> 
> Thanks.
> 


David Miller Oct. 21, 2010, 10:13 a.m. UTC | #3
From: "Michael Chan" <mchan@broadcom.com>
Date: Mon, 18 Oct 2010 17:30:54 -0700

> A number of customers are reporting packet loss under certain workloads
> (e.g. heavy bursts of small packets) with flow control disabled.  A larger
> rx ring helps to prevent these losses.
> 
> No change in default rx ring size and memory consumption.
> 
> Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
> Acked-by: John Feeney <jfeeney@redhat.com>
> Signed-off-by: Michael Chan <mchan@broadcom.com>

Ok, since the new limit is not the default, applied.

Thanks for the explanation Michael.

Patch

diff --git a/drivers/net/bnx2.h b/drivers/net/bnx2.h
index efdfbc2..62ac83e 100644
--- a/drivers/net/bnx2.h
+++ b/drivers/net/bnx2.h
@@ -6502,8 +6502,8 @@  struct l2_fhdr {
 #define TX_DESC_CNT  (BCM_PAGE_SIZE / sizeof(struct tx_bd))
 #define MAX_TX_DESC_CNT (TX_DESC_CNT - 1)
 
-#define MAX_RX_RINGS	4
-#define MAX_RX_PG_RINGS	16
+#define MAX_RX_RINGS	8
+#define MAX_RX_PG_RINGS	32
 #define RX_DESC_CNT  (BCM_PAGE_SIZE / sizeof(struct rx_bd))
 #define MAX_RX_DESC_CNT (RX_DESC_CNT - 1)
 #define MAX_TOTAL_RX_DESC_CNT (MAX_RX_DESC_CNT * MAX_RX_RINGS)