Message ID:    20090831195847.GA6506@hmsreliant.think-freely.org
State:         Changes Requested, archived
Delegated to:  David Miller
From: Neil Horman <nhorman@tuxdriver.com>
Date: Mon, 31 Aug 2009 15:58:47 -0400

> It was recently pointed out to me that the last_rx field of the net_device
> structure wasn't updated regularly. In fact, only the bonding driver really
> uses it currently. Since the drop_monitor code relies on the last_rx field to
> detect drops on receive in hardware, we need to find a more reliable way to
> rate limit our drop checks (so that we don't check for drops on every frame
> received, which would be inefficient). This patch makes a last_rx timestamp
> that is private to the drop monitor code and is updated for every device that
> we track.
>
> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

Neil, this doesn't apply to net-next-2.6:

> diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
> index 9d66fa9..34a05ce 100644
> --- a/net/core/drop_monitor.c
> +++ b/net/core/drop_monitor.c
...
> @@ -179,18 +180,21 @@ static void trace_napi_poll_hit(struct napi_struct *napi)
>  {
>  	struct dm_hw_stat_delta *new_stat;
>  
> -	/*
> -	 * Ratelimit our check time to dm_hw_check_delta jiffies
> -	 */
> -	if (!time_after(jiffies, napi->dev->last_rx + dm_hw_check_delta))
> -		return;
>  
>  	rcu_read_lock();
>  	list_for_each_entry_rcu(new_stat, &hw_stats_list, list) {

In net-next-2.6 this test reads:

	/*
	 * Ratelimit our check time to dm_hw_check_delta jiffies
	 */
	if (!napi->dev ||
	    !time_after(jiffies, napi->dev->last_rx + dm_hw_check_delta))
		return;

and you must retain the napi->dev NULL check there as otherwise
the list traversal tests will blindly dereference that pointer.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Sep 01, 2009 at 06:21:27PM -0700, David Miller wrote:
> From: Neil Horman <nhorman@tuxdriver.com>
> Date: Mon, 31 Aug 2009 15:58:47 -0400
>
> > It was recently pointed out to me that the last_rx field of the net_device
> > structure wasn't updated regularly. In fact, only the bonding driver really
> > uses it currently. Since the drop_monitor code relies on the last_rx field
> > to detect drops on receive in hardware, we need to find a more reliable way
> > to rate limit our drop checks (so that we don't check for drops on every
> > frame received, which would be inefficient). This patch makes a last_rx
> > timestamp that is private to the drop monitor code and is updated for every
> > device that we track.
> >
> > Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
>
> Neil, this doesn't apply to net-next-2.6:
>
> > diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
> > index 9d66fa9..34a05ce 100644
> > --- a/net/core/drop_monitor.c
> > +++ b/net/core/drop_monitor.c
> ...
> > @@ -179,18 +180,21 @@ static void trace_napi_poll_hit(struct napi_struct *napi)
> >  {
> >  	struct dm_hw_stat_delta *new_stat;
> >  
> > -	/*
> > -	 * Ratelimit our check time to dm_hw_check_delta jiffies
> > -	 */
> > -	if (!time_after(jiffies, napi->dev->last_rx + dm_hw_check_delta))
> > -		return;
> >  
> >  	rcu_read_lock();
> >  	list_for_each_entry_rcu(new_stat, &hw_stats_list, list) {
>
> In net-next-2.6 this test reads:
>
> 	/*
> 	 * Ratelimit our check time to dm_hw_check_delta jiffies
> 	 */
> 	if (!napi->dev ||
> 	    !time_after(jiffies, napi->dev->last_rx + dm_hw_check_delta))
> 		return;
>
> and you must retain the napi->dev NULL check there as otherwise
> the list traversal tests will blindly dereference that pointer.

Apologies, this raced with Xiao's fix to trace_napi_poll_hit, which
introduced that NULL check. I'll rediff and repost shortly.
Neil
diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
index 9d66fa9..34a05ce 100644
--- a/net/core/drop_monitor.c
+++ b/net/core/drop_monitor.c
@@ -52,6 +52,7 @@ struct per_cpu_dm_data {
 
 struct dm_hw_stat_delta {
 	struct net_device *dev;
+	unsigned long last_rx;
 	struct list_head list;
 	struct rcu_head rcu;
 	unsigned long last_drop_val;
@@ -179,18 +180,21 @@ static void trace_napi_poll_hit(struct napi_struct *napi)
 {
 	struct dm_hw_stat_delta *new_stat;
 
-	/*
-	 * Ratelimit our check time to dm_hw_check_delta jiffies
-	 */
-	if (!time_after(jiffies, napi->dev->last_rx + dm_hw_check_delta))
-		return;
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(new_stat, &hw_stats_list, list) {
+		/*
+		 * only add a note to our monitor buffer if:
+		 * 1) this is the dev we received on
+		 * 2) its after the last_rx delta
+		 * 3) our rx_dropped count has gone up
+		 */
 		if ((new_stat->dev == napi->dev) &&
+		    (time_after(jiffies, new_stat->last_rx + dm_hw_check_delta)) &&
 		    (napi->dev->stats.rx_dropped != new_stat->last_drop_val)) {
 			trace_drop_common(NULL, NULL);
 			new_stat->last_drop_val = napi->dev->stats.rx_dropped;
+			new_stat->last_rx = jiffies;
 			break;
 		}
 	}
@@ -286,6 +290,7 @@ static int dropmon_net_event(struct notifier_block *ev_block,
 		goto out;
 
 	new_stat->dev = dev;
+	new_stat->last_rx = jiffies;
 	INIT_RCU_HEAD(&new_stat->rcu);
 	spin_lock(&trace_state_lock);
 	list_add_rcu(&new_stat->list, &hw_stats_list);
Hey all-
	It was recently pointed out to me that the last_rx field of the net_device
structure wasn't updated regularly. In fact, only the bonding driver really uses
it currently. Since the drop_monitor code relies on the last_rx field to detect
drops on receive in hardware, we need to find a more reliable way to rate limit
our drop checks (so that we don't check for drops on every frame received, which
would be inefficient). This patch makes a last_rx timestamp that is private to
the drop monitor code and is updated for every device that we track.

Neil

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>

 drop_monitor.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)