Message ID: 6b84ddc047f72f8a66cc587a1813fbe9.squirrel@kondor.etf.bg.ac.rs
State: Changes Requested, archived
Delegated to: David Miller
On Friday, 2 December 2011 at 17:31 +0100, "Igor Maravić" wrote:
> > Quite frankly, any queued skb is eventually freed, so
> > netdev_reset_queue() could be avoided.
>
> If I didn't add netdev_reset_queue(), how would dql reset its parameters to 0?

They are initted at device setup.

As I said, if every transmitted skb is correctly freed (and BQL accounted), there is no need for netdev_reset_queue().

> I added spin_locks around netdev_completed_queue and netdev_sent_queue

Oh well, no, don't do that. BQL must be lightweight.
--
To unsubscribe from this list: send the line "unsubscribe netdev"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
> They are initted at device setup.

What about when the device is reset?

> Oh well, no, don't do that. BQL must be lightweight.

Forcedeth driver utilizes spin_locks. :)
On Friday, 2 December 2011 at 18:00 +0100, Igor Maravić wrote:
> > They are initted at device setup.
>
> What about when the device is reset?

What happens to the queued skbs? Are they LOST? Think about it.

> > Oh well, no, don't do that. BQL must be lightweight.
>
> Forcedeth driver utilizes spin_locks. :)

No added spin_locks in the BQL patch.

Reread what I said: "BQL must be lightweight"
Not: "No lock should be used"

OK?
> Reread what I said: "BQL must be lightweight"
> Not: "No lock should be used"
> OK?

I'm out of ideas.

Do you think that, if I remove netdev_reset_queue(tp->dev); from rtl8169_init_ring_indexes, and the spin_locks of course, that would be a good solution?

As far as I could see in marvell/sky2.c, sfc/tx.c and intel/e1000e/netdev.c, netdev_completed_queue is called without any lock. Please correct me if I'm wrong.

BR
Igor
On Friday, 2 December 2011 at 22:54 +0100, Igor Maravić wrote:
> > Reread what I said: "BQL must be lightweight"
> > Not: "No lock should be used"
> > OK?
>
> I'm out of ideas.
>
> Do you think that, if I remove netdev_reset_queue(tp->dev); from
> rtl8169_init_ring_indexes, and the spin_locks of course, that would
> be a good solution?
>
> As far as I could see in marvell/sky2.c, sfc/tx.c and intel/e1000e/netdev.c,
> netdev_completed_queue is called without any lock.
> Please correct me if I'm wrong.

These drivers have clean and separate start_xmit() and xmit completion paths, each one correctly serialized. No extra lock is needed.

In the case of r8169, we are still trying to get the driver into a clean state (without races). Then we'll add BQL, and it will be as easy as in the other drivers.
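[Editor's note: a sketch of the call pattern being described, assuming a driver with cleanly separated paths; this is illustrative pseudocode, not the eventual r8169 patch, and the foo_* names are invented. The point is that each BQL hook runs under the serialization its path already has, so no new lock is introduced.]

```
/* Transmit side: already serialized per TX queue by the core. */
static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* ... map skb and post it to the TX ring ... */
	netdev_sent_queue(dev, skb->len);	/* no extra lock */
	return NETDEV_TX_OK;
}

/* Completion side: already serialized by the NAPI poll / TX IRQ path. */
static void foo_tx_complete(struct net_device *dev)
{
	unsigned int pkts = 0, bytes = 0;

	/* ... walk completed descriptors, accumulate pkts/bytes,
	 *     free the skbs ... */
	netdev_completed_queue(dev, pkts, bytes);	/* no extra lock */
}
```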
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index e5a6d8e..e46d4e6 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -3773,6 +3773,7 @@ static void rtl_init_rxcfg(struct rtl8169_private *tp)
 static void rtl8169_init_ring_indexes(struct rtl8169_private *tp)
 {
 	tp->dirty_tx = tp->dirty_rx = tp->cur_tx = tp->cur_rx = 0;
+	netdev_reset_queue(tp->dev);
 }
 
 static void rtl_hw_jumbo_enable(struct rtl8169_private *tp)
@@ -5422,6 +5423,7 @@ static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
 	unsigned int cur_frag, entry;
 	struct TxDesc * uninitialized_var(txd);
 	struct device *d = &tp->pci_dev->dev;
+	unsigned long flag;
 
 	entry = tp->cur_tx;
 	for (cur_frag = 0; cur_frag < info->nr_frags; cur_frag++) {
@@ -5458,6 +5460,9 @@ static int rtl8169_xmit_frags(struct rtl8169_private *tp, struct sk_buff *skb,
 		tp->tx_skb[entry].skb = skb;
 		txd->opts1 |= cpu_to_le32(LastFrag);
 	}
+	spin_lock_irqsave(&tp->lock, flag);
+	netdev_sent_queue(tp->dev, skb->len);
+	spin_unlock_irqrestore(&tp->lock, flag);
 
 	return cur_frag;
 
@@ -5623,6 +5628,9 @@ static void rtl8169_tx_interrupt(struct net_device *dev,
 				 void __iomem *ioaddr)
 {
 	unsigned int dirty_tx, tx_left;
+	unsigned int bytes_compl = 0;
+	int tx_compl = 0;
+	unsigned long flag;
 
 	dirty_tx = tp->dirty_tx;
 	smp_rmb();
@@ -5641,8 +5649,8 @@ static void rtl8169_tx_interrupt(struct net_device *dev,
 			rtl8169_unmap_tx_skb(&tp->pci_dev->dev, tx_skb,
 					     tp->TxDescArray + entry);
 			if (status & LastFrag) {
-				dev->stats.tx_packets++;
-				dev->stats.tx_bytes += tx_skb->skb->len;
+				bytes_compl += tx_skb->skb->len;
+				tx_compl++;
 				dev_kfree_skb(tx_skb->skb);
 				tx_skb->skb = NULL;
 			}
@@ -5650,6 +5658,11 @@ static void rtl8169_tx_interrupt(struct net_device *dev,
 		tx_left--;
 	}
 
+	dev->stats.tx_packets += tx_compl;
+	dev->stats.tx_bytes += bytes_compl;
+	spin_lock_irqsave(&tp->lock, flag);
+	netdev_completed_queue(dev, tx_compl, bytes_compl);
+	spin_unlock_irqrestore(&tp->lock, flag);
 
 	if (tp->dirty_tx != dirty_tx) {
 		tp->dirty_tx = dirty_tx;
 		smp_wmb();