diff mbox series

3c59x: fix missing dma_mapping_error check

Message ID 20171229164010.1991-1-nhorman@tuxdriver.com
State Changes Requested, archived
Delegated to: David Miller
Headers show
Series 3c59x: fix missing dma_mapping_error check

Commit Message

Neil Horman Dec. 29, 2017, 4:40 p.m. UTC
A few spots in 3c59x are missing dma_mapping_error checks, causing
WARN_ONs to trigger.  Clean those up.

Signed-off-by: Neil Horman <nhorman@redhat.com>
CC: Steffen Klassert <klassert@mathematik.tu-chemnitz.de>
CC: "David S. Miller" <davem@davemloft.net>
Reported-by: tedheadster@gmail.com
---
 drivers/net/ethernet/3com/3c59x.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

Comments

David Miller Jan. 3, 2018, 2:48 a.m. UTC | #1
From: Neil Horman <nhorman@tuxdriver.com>
Date: Fri, 29 Dec 2017 11:40:10 -0500

> @@ -2067,6 +2072,9 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  		int len = (skb->len + 3) & ~3;
>  		vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
>  						PCI_DMA_TODEVICE);
> +		if (dma_mapping_error(&VORTEX_PCI(vp)->dev, vp->tx_skb_dma))
> +			return NETDEV_TX_OK;
> +

This leaks the SKB, right?

And for the RX cases, it allows the RX ring to deplete to empty which
tends to hang most chips.  You need to make the DMA failure detection
early and recycle the RX buffer back to the chip instead of passing
it up to the stack.
Neil Horman Jan. 3, 2018, 10:42 a.m. UTC | #2
On Tue, Jan 02, 2018 at 09:48:27PM -0500, David Miller wrote:
> From: Neil Horman <nhorman@tuxdriver.com>
> Date: Fri, 29 Dec 2017 11:40:10 -0500
> 
> > @@ -2067,6 +2072,9 @@ vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
> >  		int len = (skb->len + 3) & ~3;
> >  		vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
> >  						PCI_DMA_TODEVICE);
> > +		if (dma_mapping_error(&VORTEX_PCI(vp)->dev, vp->tx_skb_dma))
> > +			return NETDEV_TX_OK;
> > +
> 
> This leaks the SKB, right?
> 
Crud, I think you're right, I'll respin this to address today.

> And for the RX cases, it allows the RX ring to deplete to empty which
> tends to hang most chips.  You need to make the DMA failure detection
> early and recycle the RX buffer back to the chip instead of passing
> it up to the stack.
> 
Strictly speaking, I think we're ok here, because the dirty_rx counter creates a
contiguous area to refill, and we will just pick up where we left off on the
next napi poll.

That said, it can still deplete if we get enough consecutive
dma_mapping_error failures over a long enough period that the ring never
refills, so yeah, recycling will be better.  That will take a bigger rewrite
of this code.  I'll post something asap.

Neil
David Miller Jan. 3, 2018, 2:53 p.m. UTC | #3
From: Neil Horman <nhorman@tuxdriver.com>
Date: Wed, 3 Jan 2018 05:42:04 -0500

> On Tue, Jan 02, 2018 at 09:48:27PM -0500, David Miller wrote:
>> And for the RX cases, it allows the RX ring to deplete to empty which
>> tends to hang most chips.  You need to make the DMA failure detection
>> early and recycle the RX buffer back to the chip instead of passing
>> it up to the stack.
>> 
> Strictly speaking, I think we're ok here, because the dirty_rx counter creates a
> contiguous area to refill, and we will just pick up where we left off on the
> next napi poll.

If you continually fail the mappings, even across NAPI polls, eventually
the RX ring will empty.

I don't think we're ok here.

Patch

diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
index f4e13a7014bd..6be9212f9093 100644
--- a/drivers/net/ethernet/3com/3c59x.c
+++ b/drivers/net/ethernet/3com/3c59x.c
@@ -1729,6 +1729,7 @@  vortex_open(struct net_device *dev)
 	struct vortex_private *vp = netdev_priv(dev);
 	int i;
 	int retval;
+	dma_addr_t dma;
 
 	/* Use the now-standard shared IRQ implementation. */
 	if ((retval = request_irq(dev->irq, vp->full_bus_master_rx ?
@@ -1753,7 +1754,11 @@  vortex_open(struct net_device *dev)
 				break;			/* Bad news!  */
 
 			skb_reserve(skb, NET_IP_ALIGN);	/* Align IP on 16 byte boundaries */
-			vp->rx_ring[i].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
+			dma = pci_map_single(VORTEX_PCI(vp), skb->data,
+					     PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+			if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma))
+				break;
+			vp->rx_ring[i].addr = cpu_to_le32(dma);
 		}
 		if (i != RX_RING_SIZE) {
 			pr_emerg("%s: no memory for rx ring\n", dev->name);
@@ -2067,6 +2072,9 @@  vortex_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		int len = (skb->len + 3) & ~3;
 		vp->tx_skb_dma = pci_map_single(VORTEX_PCI(vp), skb->data, len,
 						PCI_DMA_TODEVICE);
+		if (dma_mapping_error(&VORTEX_PCI(vp)->dev, vp->tx_skb_dma))
+			return NETDEV_TX_OK;
+
 		spin_lock_irq(&vp->window_lock);
 		window_set(vp, 7);
 		iowrite32(vp->tx_skb_dma, ioaddr + Wn7_MasterAddr);
@@ -2594,6 +2602,7 @@  boomerang_rx(struct net_device *dev)
 	void __iomem *ioaddr = vp->ioaddr;
 	int rx_status;
 	int rx_work_limit = vp->dirty_rx + RX_RING_SIZE - vp->cur_rx;
+	dma_addr_t dma;
 
 	if (vortex_debug > 5)
 		pr_debug("boomerang_rx(): status %4.4x\n", ioread16(ioaddr+EL3_STATUS));
@@ -2673,7 +2682,11 @@  boomerang_rx(struct net_device *dev)
 				break;			/* Bad news!  */
 			}
 
-			vp->rx_ring[entry].addr = cpu_to_le32(pci_map_single(VORTEX_PCI(vp), skb->data, PKT_BUF_SZ, PCI_DMA_FROMDEVICE));
+			dma = pci_map_single(VORTEX_PCI(vp), skb->data,
+					     PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+			if (dma_mapping_error(&VORTEX_PCI(vp)->dev, dma))
+				break;
+			vp->rx_ring[entry].addr = cpu_to_le32(dma);
 			vp->rx_skbuff[entry] = skb;
 		}
 		vp->rx_ring[entry].status = 0;	/* Clear complete bit. */