Patchwork [net-next-2.6] ixgbe: fix multi-ring polling [V2]

Submitter Andy Gospodarek
Date June 16, 2009, 9:42 p.m.
Message ID <20090616214213.GC8515@gospo.rdu.redhat.com>
Permalink /patch/28751/
State RFC
Delegated to: David Miller

Comments

Andy Gospodarek - June 16, 2009, 9:42 p.m.
On Tue, Jun 16, 2009 at 02:11:45PM -0700, Brandeburg, Jesse wrote:
> 
> 
> On Tue, 16 Jun 2009, Andy Gospodarek wrote:
> 
> > 
> > When looking at ixgbe_clean_rxtx_many I noticed two small problems.
> > 
> >  - work_done needs to be cleared before calling ixgbe_clean_rx_irq since
> >    it will exit without cleaning any buffers when work_done is greater
> >    than budget (which could happen pretty often after the first ring is
> >    cleaned).  A total count will ensure we still return the correct
> >    number of frames processed.
> 
> but (not seen in the below patch) the budget is divided by the number of 
> rings on this vector before it is passed to rx_clean, so each ring will 
> always get a chance to clean at least one buffer.
> 

Not by my reading of ixgbe_clean_rx_irq (which could be wrong, but I've
looked again to be sure).  If we have 2 rings that need to be cleaned
and a budget of 64 passed into ixgbe_clean_rxtx_many, then budget will
be 32 inside the loop.  If the system is busy and there are more than 32
buffers that need to be cleaned on the first ring ixgbe_clean_rx_irq
will not break until work_done is 32.  When the second ring is polled,
work_done is already 32 and so is budget.  The while loop in
ixgbe_clean_rx_irq will break immediately and nothing will be cleaned on
the second ring.  Let me know if I'm missing something.

> >  - napi_complete should only be called if all rings associated with this
> >    napi instance were cleaned completely.  It seems wise to stay on the
> >    poll-list if not completely cleaned.
> 
> that part I agree to.
> 
> > 
> > This has been compile tested only.
> 
> we can test it but I think we need a V2, see below...
> 
> > 
> > Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
> > ---
> > 
> >  ixgbe_main.c |   11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
> > index a551a96..c009642 100644
> > --- a/drivers/net/ixgbe/ixgbe_main.c
> > +++ b/drivers/net/ixgbe/ixgbe_main.c
> > @@ -1362,9 +1362,9 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> >  	                       container_of(napi, struct ixgbe_q_vector, napi);
> >  	struct ixgbe_adapter *adapter = q_vector->adapter;
> >  	struct ixgbe_ring *ring = NULL;
> > -	int work_done = 0, i;
> > +	int work_done = 0, total_work = 0, i;
> >  	long r_idx;
> > -	bool tx_clean_complete = true;
> > +	bool rx_clean_complete = true, tx_clean_complete = true;
> >  
> >  	r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
> >  	for (i = 0; i < q_vector->txr_count; i++) {
> > @@ -1384,12 +1384,15 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> >  	budget = max(budget, 1);
> >  	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
> >  	for (i = 0; i < q_vector->rxr_count; i++) {
> > +		work_done = 0;
> >  		ring = &(adapter->rx_ring[r_idx]);
> >  #ifdef CONFIG_IXGBE_DCA
> >  		if (adapter->flags & IXGBE_FLAG_DCA_ENABLED)
> >  			ixgbe_update_rx_dca(adapter, ring);
> >  #endif
> >  		ixgbe_clean_rx_irq(q_vector, ring, &work_done, budget);
> > +		total_work += work_done;
> > +		rx_clean_complete &= (work_done < budget);
> >  		r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
> >  		                      r_idx + 1);
> >  	}
> > @@ -1397,7 +1400,7 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> >  	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
> >  	ring = &(adapter->rx_ring[r_idx]);
> >  	/* If all Rx work done, exit the polling mode */
> > -	if (work_done < budget) {
> > +	if (rx_clean_complete && tx_clean_complete) {
> >  		napi_complete(napi);
> >  		if (adapter->itr_setting & 1)
> >  			ixgbe_set_itr_msix(q_vector);
> > @@ -1407,7 +1410,7 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> >  		return 0;
> >  	}
> >  
> > -	return work_done;
> > +	return total_work;
> 
> I don't think you can return total_work here unless it is always
>  == budget, otherwise NAPI will hang.
> 
> before the only values that would ever be returned were:
>  
> 1) napi_complete, return work_done (work_done < budget)
> 2) return work_done (work_done >= budget)
> 
> I'm not sure if the > case is even valid in the new napi model, the only 
> one we ever use is the work_done == budget to continue polling.
> 

Adding a check is no problem, but that means we need to save the
original budget.  It would be good to do that to avoid the WARN_ON_ONCE
in net_rx_action as well, but should we be cheating like that?  Here's
the new patch:



[PATCH net-next-2.6] ixgbe: fix multi-ring polling

When looking at ixgbe_clean_rxtx_many I noticed two small problems.

 - work_done needs to be cleared before calling ixgbe_clean_rx_irq since
   it will exit without cleaning any buffers when work_done is greater
   than budget (which could happen pretty often after the first ring is
   cleaned).  A total count will ensure we still return the correct
   number of frames processed.
 - napi_complete should only be called if all rings associated with this
   napi instance were cleaned completely.  It seems wise to stay on the
   poll-list if not completely cleaned.

This has been compile tested only.

Signed-off-by: Andy Gospodarek <andy@greyhouse.net>

---

 ixgbe_main.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Jesse Brandeburg - June 16, 2009, 10:02 p.m.
On Tue, 16 Jun 2009, Andy Gospodarek wrote:
> > > When looking at ixgbe_clean_rxtx_many I noticed two small problems.
> > > 
> > >  - work_done needs to be cleared before calling ixgbe_clean_rx_irq since
> > >    it will exit without cleaning any buffers when work_done is greater
> > >    than budget (which could happen pretty often after the first ring is
> > >    cleaned).  A total count will ensure we still return the correct
> > >    number of frames processed.
> > 
> > but (not seen in the below patch) the budget is divided by the number of 
> > rings on this vector before it is passed to rx_clean, so each ring will 
> > always get a chance to clean at least one buffer.
> > 
> 
> Not by my reading of ixgbe_clean_rx_irq (which could be wrong, but I've
> looked again to be sure).  If we have 2 rings that need to be cleaned
> and a budget of 64 passed into ixgbe_clean_rxtx_many, then budget will
> be 32 inside the loop.  If the system is busy and there are more than 32
> buffers that need to be cleaned on the first ring ixgbe_clean_rx_irq
> will not break until work_done is 32.  When the second ring is polled,
> work_done is already 32 and so is budget.  The while loop in
> ixgbe_clean_rx_irq will break immediately and nothing will be cleaned on
> the second ring.  Let me know if I'm missing something.

ah, you're right, the second loop will already have work_done set, so it 
should be cleared each loop as your patch does.

> > >  - napi_complete should only be called if all rings associated with this
> > >    napi instance were cleaned completely.  It seems wise to stay on the
> > >    poll-list if not completely cleaned.
> > 
> > that part I agree to.
> > 
> > > 
> > > This has been compile tested only.
> > 
> > we can test it but I think we need a V2, see below...
> > 
> > > 
> > > Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
> > > ---
> > > 
> > >  ixgbe_main.c |   11 +++++++----
> > >  1 file changed, 7 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
> > > index a551a96..c009642 100644
> > > --- a/drivers/net/ixgbe/ixgbe_main.c
> > > +++ b/drivers/net/ixgbe/ixgbe_main.c
> > > @@ -1362,9 +1362,9 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> > >  	                       container_of(napi, struct ixgbe_q_vector, napi);
> > >  	struct ixgbe_adapter *adapter = q_vector->adapter;
> > >  	struct ixgbe_ring *ring = NULL;
> > > -	int work_done = 0, i;
> > > +	int work_done = 0, total_work = 0, i;
> > >  	long r_idx;
> > > -	bool tx_clean_complete = true;
> > > +	bool rx_clean_complete = true, tx_clean_complete = true;
> > >  
> > >  	r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
> > >  	for (i = 0; i < q_vector->txr_count; i++) {
> > > @@ -1384,12 +1384,15 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> > >  	budget = max(budget, 1);
> > >  	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
> > >  	for (i = 0; i < q_vector->rxr_count; i++) {
> > > +		work_done = 0;
> > >  		ring = &(adapter->rx_ring[r_idx]);
> > >  #ifdef CONFIG_IXGBE_DCA
> > >  		if (adapter->flags & IXGBE_FLAG_DCA_ENABLED)
> > >  			ixgbe_update_rx_dca(adapter, ring);
> > >  #endif
> > >  		ixgbe_clean_rx_irq(q_vector, ring, &work_done, budget);
> > > +		total_work += work_done;
> > > +		rx_clean_complete &= (work_done < budget);
> > >  		r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
> > >  		                      r_idx + 1);
> > >  	}
> > > @@ -1397,7 +1400,7 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> > >  	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
> > >  	ring = &(adapter->rx_ring[r_idx]);
> > >  	/* If all Rx work done, exit the polling mode */
> > > -	if (work_done < budget) {
> > > +	if (rx_clean_complete && tx_clean_complete) {
> > >  		napi_complete(napi);
> > >  		if (adapter->itr_setting & 1)
> > >  			ixgbe_set_itr_msix(q_vector);
> > > @@ -1407,7 +1410,7 @@ static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
> > >  		return 0;
> > >  	}
> > >  
> > > -	return work_done;
> > > +	return total_work;
> > 
> > I don't think you can return total_work here unless it is always
> >  == budget, otherwise NAPI will hang.
> > 
> > before the only values that would ever be returned were:
> >  
> > 1) napi_complete, return work_done (work_done < budget)
> > 2) return work_done (work_done >= budget)
> > 
> > I'm not sure if the > case is even valid in the new napi model, the only 
> > one we ever use is the work_done == budget to continue polling.
> > 
> 
> Adding a check is no problem, but that means we need to save the
> original budget.  It would be good to do that to avoid the WARN_ON_ONCE
> in net_rx_action as well, but should we be cheating like that?  Here's
> the new patch:

I hope davem can comment on that.

> [PATCH net-next-2.6] ixgbe: fix multi-ring polling V2

I think technically I'm okay with V2, the outstanding questions about what 
exactly we should return need to be answered.

David Miller - June 17, 2009, 11:40 a.m.
From: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>
Date: Tue, 16 Jun 2009 15:02:24 -0700 (Pacific Daylight Time)

> On Tue, 16 Jun 2009, Andy Gospodarek wrote:
>> Adding a check is no problem, but that means we need to save the
>> original budget.  It would be good to do that to avoid the WARN_ON_ONCE
>> in net_rx_action as well, but should we be cheating like that?  Here's
>> the new patch:
> 
> I hope davem can comment on that.
> 
>> [PATCH net-next-2.6] ixgbe: fix multi-ring polling V2
> 
> I think technically I'm okay with V2, the outstanding questions about what 
> exactly we should return need to be answered.

If you aren't going to complete the NAPI run, you must indicate
to the caller of ->poll() that you've consumed the entire budget.

This is the second driver where the multi-queue-in-one-irq "issue"
has been noticed.  Eric Dumazet posted a similar patch for NIU.

There are a few other ways to approach this problem, now that I've
thought about it for some time:

1) Use multiple NAPI contexts to represent the queues even if
   they are backed by a single interrupt.

2) Use only "1" queue if you only have "1" interrupt.  (replace
   "1" with "N" for all valid values of "N" :-)

Those approaches are a lot cleaner and keep us from needing all
of this gross starvation-avoidance and budget faking code.
Andy Gospodarek - June 17, 2009, 9:06 p.m.
On Wed, Jun 17, 2009 at 04:40:26AM -0700, David Miller wrote:
> From: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>
> Date: Tue, 16 Jun 2009 15:02:24 -0700 (Pacific Daylight Time)
> 
> > On Tue, 16 Jun 2009, Andy Gospodarek wrote:
> >> Adding a check is no problem, but that means we need to save the
> >> original budget.  It would be good to do that to avoid the WARN_ON_ONCE
> >> in net_rx_action as well, but should we be cheating like that?  Here's
> >> the new patch:
> > 
> > I hope davem can comment on that.
> > 
> >> [PATCH net-next-2.6] ixgbe: fix multi-ring polling V2
> > 
> > I think technically I'm okay with V2, the outstanding questions about what 
> > exactly we should return need to be answered.
> 
> If you aren't going to complete the NAPI run, you must indicate
> to the caller of ->poll() that you've consumed the entire budget.

By 'complete the NAPI run' do you mean call napi_complete?  Looking at
net_rx_action I don't see where it really matters how much work was done
by ->poll as long as it's not more than the device weight (since that
will spring the WARNing).

> This is the second driver where the multi-queue-in-one-irq "issue"
> has been noticed.  Eric Dumazet posted a similar patch for NIU.
> 
> There are a few other ways to approach this problem, now that I've
> thought about it for some time:
> 
> 1) Use multiple NAPI contexts to represent the queues even if
>    they are backed by a single interrupt.

So multiple calls to napi_schedule in a single interrupt handler?
Interesting....


> 2) Use only "1" queue if you only have "1" interrupt.  (replace
>    "1" with "N" for all valid values of "N" :-)
> 
> Those approaches are a lot cleaner and keeps us from needing all
> of this gross starvation-avoidance and budget faking code.

I agree.  It also seems much cleaner to do it that way because then each
queue or device gets the full weight.


Patch

diff --git a/drivers/net/ixgbe/ixgbe_main.c b/drivers/net/ixgbe/ixgbe_main.c
index a551a96..a5c8bf1 100644
--- a/drivers/net/ixgbe/ixgbe_main.c
+++ b/drivers/net/ixgbe/ixgbe_main.c
@@ -1362,9 +1362,9 @@  static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
 	                       container_of(napi, struct ixgbe_q_vector, napi);
 	struct ixgbe_adapter *adapter = q_vector->adapter;
 	struct ixgbe_ring *ring = NULL;
-	int work_done = 0, i;
+	int work_done = 0, total_work = 0, i, old_budget = budget;
 	long r_idx;
-	bool tx_clean_complete = true;
+	bool rx_clean_complete = true, tx_clean_complete = true;
 
 	r_idx = find_first_bit(q_vector->txr_idx, adapter->num_tx_queues);
 	for (i = 0; i < q_vector->txr_count; i++) {
@@ -1384,12 +1384,15 @@  static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
 	budget = max(budget, 1);
 	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
 	for (i = 0; i < q_vector->rxr_count; i++) {
+		work_done = 0;
 		ring = &(adapter->rx_ring[r_idx]);
 #ifdef CONFIG_IXGBE_DCA
 		if (adapter->flags & IXGBE_FLAG_DCA_ENABLED)
 			ixgbe_update_rx_dca(adapter, ring);
 #endif
 		ixgbe_clean_rx_irq(q_vector, ring, &work_done, budget);
+		total_work += work_done;
+		rx_clean_complete &= (work_done < budget);
 		r_idx = find_next_bit(q_vector->rxr_idx, adapter->num_rx_queues,
 		                      r_idx + 1);
 	}
@@ -1397,7 +1400,7 @@  static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
 	r_idx = find_first_bit(q_vector->rxr_idx, adapter->num_rx_queues);
 	ring = &(adapter->rx_ring[r_idx]);
 	/* If all Rx work done, exit the polling mode */
-	if (work_done < budget) {
+	if (rx_clean_complete && tx_clean_complete) {
 		napi_complete(napi);
 		if (adapter->itr_setting & 1)
 			ixgbe_set_itr_msix(q_vector);
@@ -1407,7 +1410,7 @@  static int ixgbe_clean_rxtx_many(struct napi_struct *napi, int budget)
 		return 0;
 	}
 
-	return work_done;
+	return total_work > old_budget ? old_budget : total_work;
 }
 
 /**