
net/davinci: do not use all descriptors for tx packets

Message ID 1325604467-15122-1-git-send-email-s.hauer@pengutronix.de
State Accepted, archived
Delegated to: David Miller

Commit Message

Sascha Hauer Jan. 3, 2012, 3:27 p.m. UTC
The driver uses a shared pool for both rx and tx descriptors.
During open it queues a fixed number of 128 descriptors for
receive packets. For each received packet it tries to queue
another descriptor. If this fails, the descriptor is lost for rx.
The driver has no limit on the number of tx descriptors it uses,
so during an nmap / ping -f attack it can allocate all descriptors
for tx and lose all rx descriptors. The driver then stops working.
To fix this, limit the number of tx descriptors to half of the
available descriptors; the rx path uses the other half.

Tested on a custom board using nmap / ping -f to the board from
two different hosts.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
---
 drivers/net/ethernet/ti/davinci_emac.c |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)
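
The fix follows the standard netdev tx flow-control pattern: count the tx
descriptors currently in flight, stop the queue once the cap is reached, and
wake the queue again from the tx-completion handler. A minimal sketch of that
pattern is below; the names (my_priv, my_xmit, my_tx_done, MY_TX_LIMIT) are
illustrative only and not taken from the driver, and the actual descriptor
submission is elided.

/* Sketch only -- illustrative names, not the driver's code. */
#include <linux/atomic.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define MY_TX_LIMIT 128			/* tx cap; the patch uses 128, matching the rx share */

struct my_priv {
	atomic_t cur_tx;		/* tx descriptors currently in flight */
};

/* xmit path: account for one more tx descriptor, stop the queue at the cap */
static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	struct my_priv *priv = netdev_priv(ndev);

	/* ... hand skb to the hardware here ... */

	if (atomic_inc_return(&priv->cur_tx) >= MY_TX_LIMIT)
		netif_stop_queue(ndev);

	return NETDEV_TX_OK;
}

/* tx-completion path: release the descriptor and wake the queue if needed */
static void my_tx_done(struct net_device *ndev)
{
	struct my_priv *priv = netdev_priv(ndev);

	atomic_dec(&priv->cur_tx);
	if (netif_queue_stopped(ndev))
		netif_start_queue(ndev);
}

The atomic counter keeps the completion-path accounting lock-free, and the
netif_stop_queue()/netif_start_queue() pairing is what prevents the stack from
submitting more packets than the tx share of the shared pool can hold.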

Comments

David Miller Jan. 3, 2012, 6:51 p.m. UTC | #1
From: Sascha Hauer <s.hauer@pengutronix.de>
Date: Tue,  3 Jan 2012 16:27:47 +0100

> The driver uses a shared pool for both rx and tx descriptors.
> During open it queues a fixed number of 128 descriptors for
> receive packets. For each received packet it tries to queue
> another descriptor. If this fails, the descriptor is lost for rx.
> The driver has no limit on the number of tx descriptors it uses,
> so during an nmap / ping -f attack it can allocate all descriptors
> for tx and lose all rx descriptors. The driver then stops working.
> To fix this, limit the number of tx descriptors to half of the
> available descriptors; the rx path uses the other half.
> 
> Tested on a custom board using nmap / ping -f to the board from
> two different hosts.
> 
> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>

Applied to net-next, thanks.

Well, at this point there is no logical reason to have a shared
descriptor pool unless the hardware requires it.  Does it?
Sascha Hauer Jan. 4, 2012, 8:50 a.m. UTC | #2
On Tue, Jan 03, 2012 at 01:51:41PM -0500, David Miller wrote:
> From: Sascha Hauer <s.hauer@pengutronix.de>
> Date: Tue,  3 Jan 2012 16:27:47 +0100
> 
> > The driver uses a shared pool for both rx and tx descriptors.
> > During open it queues a fixed number of 128 descriptors for
> > receive packets. For each received packet it tries to queue
> > another descriptor. If this fails, the descriptor is lost for rx.
> > The driver has no limit on the number of tx descriptors it uses,
> > so during an nmap / ping -f attack it can allocate all descriptors
> > for tx and lose all rx descriptors. The driver then stops working.
> > To fix this, limit the number of tx descriptors to half of the
> > available descriptors; the rx path uses the other half.
> > 
> > Tested on a custom board using nmap / ping -f to the board from
> > two different hosts.
> > 
> > Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
> 
> Applied to net-next, thanks.
> 
> Well, at this point there is no logical reason to have a shared
> descriptor pool unless the hardware requires it.  Does it?

I don't have enough knowledge of the hardware to give a definitive
answer, but it would seem like a very strange piece of hardware if it
could only receive *or* transmit depending on the type of the next
descriptor. I think the driver needs some cleanup in this area, but
that is out of scope for me at the moment.

Sascha

Patch

diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 815c797..794ac30 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -115,6 +115,7 @@  static const char emac_version_string[] = "TI DaVinci EMAC Linux v6.1";
 #define EMAC_DEF_TX_CH			(0) /* Default 0th channel */
 #define EMAC_DEF_RX_CH			(0) /* Default 0th channel */
 #define EMAC_DEF_RX_NUM_DESC		(128)
+#define EMAC_DEF_TX_NUM_DESC		(128)
 #define EMAC_DEF_MAX_TX_CH		(1) /* Max TX channels configured */
 #define EMAC_DEF_MAX_RX_CH		(1) /* Max RX channels configured */
 #define EMAC_POLL_WEIGHT		(64) /* Default NAPI poll weight */
@@ -336,6 +337,7 @@  struct emac_priv {
 	u32 mac_hash2;
 	u32 multicast_hash_cnt[EMAC_NUM_MULTICAST_BITS];
 	u32 rx_addr_type;
+	atomic_t cur_tx;
 	const char *phy_id;
 	struct phy_device *phydev;
 	spinlock_t lock;
@@ -1044,6 +1046,9 @@  static void emac_tx_handler(void *token, int len, int status)
 {
 	struct sk_buff		*skb = token;
 	struct net_device	*ndev = skb->dev;
+	struct emac_priv	*priv = netdev_priv(ndev);
+
+	atomic_dec(&priv->cur_tx);
 
 	if (unlikely(netif_queue_stopped(ndev)))
 		netif_start_queue(ndev);
@@ -1092,6 +1097,9 @@  static int emac_dev_xmit(struct sk_buff *skb, struct net_device *ndev)
 		goto fail_tx;
 	}
 
+	if (atomic_inc_return(&priv->cur_tx) >= EMAC_DEF_TX_NUM_DESC)
+		netif_stop_queue(ndev);
+
 	return NETDEV_TX_OK;
 
 fail_tx: