Message ID:   4B0BADA6.7080602@gmail.com
State:        Changes Requested, archived
Delegated to: David Miller
On Tue, 2009-11-24 at 02:55 -0700, Eric Dumazet wrote:
> Peter P Waskiewicz Jr wrote:
>
> > You might have this elsewhere, but it sounds like you're connecting back
> > to back with another 82599 NIC. Our optics in that NIC are dual-rate,
> > and the software mechanism that tries to "autoneg" link speed gets out
> > of sync easily in back-to-back setups.
> >
> > If it's really annoying, and you're willing to run with a local patch to
> > disable the autotry mechanism, try this:
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
> > index a5036f7..62c0915 100644
> > --- a/drivers/net/ixgbe/ixgbe_main.c
> > +++ b/drivers/net/ixgbe/ixgbe_main.c
> > @@ -4670,6 +4670,10 @@ static void ixgbe_multispeed_fiber_task(struct work_struct *work)
> >  	autoneg = hw->phy.autoneg_advertised;
> >  	if ((!autoneg) && (hw->mac.ops.get_link_capabilities))
> >  		hw->mac.ops.get_link_capabilities(hw, &autoneg, &negotiation);
> > +
> > +	/* force 10G only */
> > +	autoneg = IXGBE_LINK_SPEED_10GB_FULL;
> > +
> >  	if (hw->mac.ops.setup_link)
> >  		hw->mac.ops.setup_link(hw, autoneg, negotiation, true);
> >  	adapter->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
>
> Thanks! This did the trick :)
>
> If I am not mistaken, the number of TX queues should be capped by the
> number of possible CPUs?
>
> It's currently a fixed 128 value, allocating 128 * 128 = 16384 bytes,
> and polluting "tc -s -d class show dev fiber0" output.

Yes, this is a stupid issue we haven't gotten around to fixing yet.
This looks fine to me. Thanks for putting it together.

> [PATCH net-next-2.6] ixgbe: Do not allocate too many netdev txqueues
>
> Instead of allocating 128 struct netdev_queue per device, use the minimum
> value between 128 and the number of possible CPUs, to reduce RAM usage and
> "tc -s -d class show dev ..." output.
>
> diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
> index ebcec30..ec2508d 100644
> --- a/drivers/net/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ixgbe/ixgbe_main.c
> @@ -5582,7 +5583,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
>  	pci_set_master(pdev);
>  	pci_save_state(pdev);
>
> -	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), MAX_TX_QUEUES);
> +	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter),
> +				   min_t(unsigned int,
> +					 MAX_TX_QUEUES,
> +					 num_possible_cpus()));
>  	if (!netdev) {
>  		err = -ENOMEM;
>  		goto err_alloc_etherdev;

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Peter P Waskiewicz Jr wrote:
> On Tue, 2009-11-24 at 02:55 -0700, Eric Dumazet wrote:
>> Peter P Waskiewicz Jr wrote:
>>
>>> You might have this elsewhere, but it sounds like you're connecting back
>>> to back with another 82599 NIC. Our optics in that NIC are dual-rate,
>>> and the software mechanism that tries to "autoneg" link speed gets out
>>> of sync easily in back-to-back setups.
>>>
>>> If it's really annoying, and you're willing to run with a local patch to
>>> disable the autotry mechanism, try this:
>>>
>>> diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
>>> index a5036f7..62c0915 100644
>>> --- a/drivers/net/ixgbe/ixgbe_main.c
>>> +++ b/drivers/net/ixgbe/ixgbe_main.c
>>> @@ -4670,6 +4670,10 @@ static void ixgbe_multispeed_fiber_task(struct work_struct *work)
>>>  	autoneg = hw->phy.autoneg_advertised;
>>>  	if ((!autoneg) && (hw->mac.ops.get_link_capabilities))
>>>  		hw->mac.ops.get_link_capabilities(hw, &autoneg, &negotiation);
>>> +
>>> +	/* force 10G only */
>>> +	autoneg = IXGBE_LINK_SPEED_10GB_FULL;
>>> +
>>>  	if (hw->mac.ops.setup_link)
>>>  		hw->mac.ops.setup_link(hw, autoneg, negotiation, true);
>>>  	adapter->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
>>
>> Thanks! This did the trick :)
>>
>> If I am not mistaken, the number of TX queues should be capped by the
>> number of possible CPUs?
>>
>> It's currently a fixed 128 value, allocating 128 * 128 = 16384 bytes,
>> and polluting "tc -s -d class show dev fiber0" output.
>
> Yes, this is a stupid issue we haven't gotten around to fixing yet.
> This looks fine to me. Thanks for putting it together.

I believe the patch below will break DCB and FCoE though; both features
have the potential to set real_num_tx_queues to greater than the number
of CPUs. This could result in real_num_tx_queues > num_tx_queues.

The current solution isn't that great either; maybe we should set the
count to the minimum of MAX_TX_QUEUES and num_possible_cpus() * 2 + 8.
That should cover the maximum possible queues for DCB, FCoE and their
combinations:

general multiq = num_possible_cpus()
DCB            = 8 tx queues
FCoE           = 2 * num_possible_cpus()
FCoE + DCB     = 8 tx queues + num_possible_cpus()

thanks,
john.

>> [PATCH net-next-2.6] ixgbe: Do not allocate too many netdev txqueues
>>
>> Instead of allocating 128 struct netdev_queue per device, use the minimum
>> value between 128 and the number of possible CPUs, to reduce RAM usage and
>> "tc -s -d class show dev ..." output.
>>
>> diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
>> index ebcec30..ec2508d 100644
>> --- a/drivers/net/ixgbe/ixgbe_main.c
>> +++ b/drivers/net/ixgbe/ixgbe_main.c
>> @@ -5582,7 +5583,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
>>  	pci_set_master(pdev);
>>  	pci_save_state(pdev);
>>
>> -	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), MAX_TX_QUEUES);
>> +	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter),
>> +				   min_t(unsigned int,
>> +					 MAX_TX_QUEUES,
>> +					 num_possible_cpus()));
>>  	if (!netdev) {
>>  		err = -ENOMEM;
>>  		goto err_alloc_etherdev;
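As a rough sketch of John's arithmetic (plain userspace C for illustration only; the helper name is made up and this is not code from any of the patches in this thread), the proposed budget of min(MAX_TX_QUEUES, num_possible_cpus() * 2 + 8) works out like this:

```c
#include <assert.h>

/* Value taken from the thread: ixgbe allocates up to 128 TX queues. */
#define MAX_TX_QUEUES 128U

/*
 * Hypothetical helper mirroring John's worst-case figures: plain
 * multiqueue wants one queue per CPU, FCoE can want 2 * num_cpus,
 * DCB wants 8, and FCoE + DCB wants num_cpus + 8 -- so
 * 2 * num_cpus + 8 covers every combination.
 */
static unsigned int ixgbe_queue_budget(unsigned int num_cpus)
{
    unsigned int worst_case = 2U * num_cpus + 8U;

    /* Never exceed the hardware/driver maximum. */
    return worst_case < MAX_TX_QUEUES ? worst_case : MAX_TX_QUEUES;
}
```

On a 4-CPU machine this would allocate 16 queues instead of 128; only machines with 60 or more possible CPUs would still hit the 128 cap.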
Eric Dumazet wrote:
> Peter P Waskiewicz Jr wrote:
>
>> You might have this elsewhere, but it sounds like you're connecting back
>> to back with another 82599 NIC. Our optics in that NIC are dual-rate,
>> and the software mechanism that tries to "autoneg" link speed gets out
>> of sync easily in back-to-back setups.
>>
>> If it's really annoying, and you're willing to run with a local patch to
>> disable the autotry mechanism, try this:
>>
>> diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
>> index a5036f7..62c0915 100644
>> --- a/drivers/net/ixgbe/ixgbe_main.c
>> +++ b/drivers/net/ixgbe/ixgbe_main.c
>> @@ -4670,6 +4670,10 @@ static void ixgbe_multispeed_fiber_task(struct work_struct *work)
>>  	autoneg = hw->phy.autoneg_advertised;
>>  	if ((!autoneg) && (hw->mac.ops.get_link_capabilities))
>>  		hw->mac.ops.get_link_capabilities(hw, &autoneg, &negotiation);
>> +
>> +	/* force 10G only */
>> +	autoneg = IXGBE_LINK_SPEED_10GB_FULL;
>> +
>>  	if (hw->mac.ops.setup_link)
>>  		hw->mac.ops.setup_link(hw, autoneg, negotiation, true);
>>  	adapter->flags |= IXGBE_FLAG_NEED_LINK_UPDATE;
>
> Thanks! This did the trick :)
>
> If I am not mistaken, the number of TX queues should be capped by the
> number of possible CPUs?
>
> It's currently a fixed 128 value, allocating 128 * 128 = 16384 bytes,
> and polluting "tc -s -d class show dev fiber0" output.
>
> [PATCH net-next-2.6] ixgbe: Do not allocate too many netdev txqueues
>
> Instead of allocating 128 struct netdev_queue per device, use the minimum
> value between 128 and the number of possible CPUs, to reduce RAM usage and
> "tc -s -d class show dev ..." output.
>
> diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
> index ebcec30..ec2508d 100644
> --- a/drivers/net/ixgbe/ixgbe_main.c
> +++ b/drivers/net/ixgbe/ixgbe_main.c
> @@ -5582,7 +5583,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
>  	pci_set_master(pdev);
>  	pci_save_state(pdev);
>
> -	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), MAX_TX_QUEUES);
> +	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter),
> +				   min_t(unsigned int,
> +					 MAX_TX_QUEUES,
> +					 num_possible_cpus()));
>  	if (!netdev) {
>  		err = -ENOMEM;
>  		goto err_alloc_etherdev;

This also fixes the long TC rule loading time for me.

Tested-by: Badalian Vyacheslav <slavon.net@gmail.com>
From: John Fastabend <john.r.fastabend@intel.com>
Date: Tue, 24 Nov 2009 13:14:12 +0000

> I believe the patch below will break DCB and FCoE though; both features
> have the potential to set real_num_tx_queues to greater than the number
> of CPUs. This could result in real_num_tx_queues > num_tx_queues.
>
> The current solution isn't that great either; maybe we should set the
> count to the minimum of MAX_TX_QUEUES and num_possible_cpus() * 2 + 8.
>
> That should cover the maximum possible queues for DCB, FCoE and their
> combinations:
>
> general multiq = num_possible_cpus()
> DCB            = 8 tx queues
> FCoE           = 2 * num_possible_cpus()
> FCoE + DCB     = 8 tx queues + num_possible_cpus()

Eric, I'm tossing your patch because of this problem, just FYI.
David Miller wrote:
> From: John Fastabend <john.r.fastabend@intel.com>
> Date: Tue, 24 Nov 2009 13:14:12 +0000
>
>> I believe the patch below will break DCB and FCoE though; both features
>> have the potential to set real_num_tx_queues to greater than the number
>> of CPUs. This could result in real_num_tx_queues > num_tx_queues.
>>
>> The current solution isn't that great either; maybe we should set the
>> count to the minimum of MAX_TX_QUEUES and num_possible_cpus() * 2 + 8.
>>
>> That should cover the maximum possible queues for DCB, FCoE and their
>> combinations:
>>
>> general multiq = num_possible_cpus()
>> DCB            = 8 tx queues
>> FCoE           = 2 * num_possible_cpus()
>> FCoE + DCB     = 8 tx queues + num_possible_cpus()
>
> Eric, I'm tossing your patch because of this problem, just FYI.

Sure, I guess we need a more generic way to handle this.
Eric Dumazet wrote:
> David Miller wrote:
>
>> From: John Fastabend <john.r.fastabend@intel.com>
>> Date: Tue, 24 Nov 2009 13:14:12 +0000
>>
>>> I believe the patch below will break DCB and FCoE though; both features
>>> have the potential to set real_num_tx_queues to greater than the number
>>> of CPUs. This could result in real_num_tx_queues > num_tx_queues.
>>>
>>> The current solution isn't that great either; maybe we should set the
>>> count to the minimum of MAX_TX_QUEUES and num_possible_cpus() * 2 + 8.
>>>
>>> That should cover the maximum possible queues for DCB, FCoE and their
>>> combinations:
>>>
>>> general multiq = num_possible_cpus()
>>> DCB            = 8 tx queues
>>> FCoE           = 2 * num_possible_cpus()
>>> FCoE + DCB     = 8 tx queues + num_possible_cpus()
>>
>> Eric, I'm tossing your patch because of this problem, just FYI.
>
> Sure, I guess we need a more generic way to handle this.

Eric, I'll resubmit your patch with a small update to fix my concerns soon.

thanks,
john.
diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index ebcec30..ec2508d 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -5582,7 +5583,10 @@ static int __devinit ixgbe_probe(struct pci_dev *pdev,
 	pci_set_master(pdev);
 	pci_save_state(pdev);
 
-	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), MAX_TX_QUEUES);
+	netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter),
+				   min_t(unsigned int,
+					 MAX_TX_QUEUES,
+					 num_possible_cpus()));
 	if (!netdev) {
 		err = -ENOMEM;
 		goto err_alloc_etherdev;
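For a back-of-the-envelope view of what this patch saves, here is a userspace C sketch (illustration only, not kernel code; the helper names are made up, and the 128-byte queue size is inferred from Eric's 128 * 128 = 16384 figure earlier in the thread):

```c
#include <assert.h>

#define MAX_TX_QUEUES     128U
#define NETDEV_QUEUE_SIZE 128U  /* inferred from 128 queues * 128 = 16384 bytes */

/* Queue count after the patch: min_t(unsigned int, MAX_TX_QUEUES,
 * num_possible_cpus()), written as a plain ternary. */
static unsigned int capped_queues(unsigned int possible_cpus)
{
    return possible_cpus < MAX_TX_QUEUES ? possible_cpus : MAX_TX_QUEUES;
}

/* Bytes of struct netdev_queue storage no longer allocated per device. */
static unsigned int bytes_saved(unsigned int possible_cpus)
{
    return (MAX_TX_QUEUES - capped_queues(possible_cpus)) * NETDEV_QUEUE_SIZE;
}
```

An 8-CPU box would allocate 8 queues instead of 128, saving 15360 of the 16384 bytes; machines with 128 or more possible CPUs see no change.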