[net-next,1/7] r8152: adjust rx_bottom

Message ID 1394712342-15778-119-Taiwan-albertk@realtek.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Hayes Wang Jan. 19, 2015, 7:13 a.m. UTC
If an error occurs when submitting rx, skip the remaining submissions
and try to submit them again next time.

Signed-off-by: Hayes Wang <hayeswang@realtek.com>
---
 drivers/net/usb/r8152.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

Comments

Scott Feldman Jan. 19, 2015, 8:03 a.m. UTC | #1
On Sun, Jan 18, 2015 at 11:13 PM, Hayes Wang <hayeswang@realtek.com> wrote:
> If an error occurs when submitting rx, skip the remaining submissions
> and try to submit them again next time.
>
> Signed-off-by: Hayes Wang <hayeswang@realtek.com>
> ---
>  drivers/net/usb/r8152.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
> index 2e22442..78a8917 100644
> --- a/drivers/net/usb/r8152.c
> +++ b/drivers/net/usb/r8152.c
> @@ -1655,7 +1655,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
>  {
>         unsigned long flags;
>         struct list_head *cursor, *next, rx_queue;
> -       int work_done = 0;
> +       int ret = 0, work_done = 0;
>
>         if (!skb_queue_empty(&tp->rx_queue)) {
>                 while (work_done < budget) {
> @@ -1746,7 +1746,18 @@ find_next_rx:
>                 }
>
>  submit:
> -               r8152_submit_rx(tp, agg, GFP_ATOMIC);
> +               if (!ret) {
> +                       ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
> +               } else {
> +                       urb->actual_length = 0;
> +                       list_add_tail(&agg->list, next);

Do you need a spin_lock_irqsave(&tp->rx_lock, flags) around this?

> +               }
> +       }
> +
> +       if (!list_empty(&rx_queue)) {
> +               spin_lock_irqsave(&tp->rx_lock, flags);
> +               list_splice_tail(&rx_queue, &tp->rx_done);
> +               spin_unlock_irqrestore(&tp->rx_lock, flags);
>         }
>
>  out1:
> --
> 2.1.0
>
David Miller Jan. 19, 2015, 9:13 p.m. UTC | #2
From: Scott Feldman <sfeldma@gmail.com>
Date: Mon, 19 Jan 2015 00:03:42 -0800

> On Sun, Jan 18, 2015 at 11:13 PM, Hayes Wang <hayeswang@realtek.com> wrote:
>> @@ -1746,7 +1746,18 @@ find_next_rx:
>>                 }
>>
>>  submit:
>> -               r8152_submit_rx(tp, agg, GFP_ATOMIC);
>> +               if (!ret) {
>> +                       ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
>> +               } else {
>> +                       urb->actual_length = 0;
>> +                       list_add_tail(&agg->list, next);
> 
> Do you need a spin_lock_irqsave(&tp->rx_lock, flags) around this?

Indeed, and rtl_start_rx() seems to also access agg->list without
proper locking.
Hayes Wang Jan. 20, 2015, 2:48 a.m. UTC | #3
David Miller [mailto:davem@davemloft.net] 
> Sent: Tuesday, January 20, 2015 5:14 AM
[...]
> >> -               r8152_submit_rx(tp, agg, GFP_ATOMIC);
> >> +               if (!ret) {
> >> +                       ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
> >> +               } else {
> >> +                       urb->actual_length = 0;
> >> +                       list_add_tail(&agg->list, next);
> > 
> > Do you need a spin_lock_irqsave(&tp->rx_lock, flags) around this?
> 
> Indeed, and rtl_start_rx() seems to also access agg->list without
> proper locking.

It is unnecessary because I deal with them in a local list_head. My steps are
   1. Move the whole list from tp->rx_done to the local rx_queue. (with spin lock)
   2. Dequeue/enqueue the entries in rx_queue.
   3. Move the entries in rx_queue back to tp->rx_done if necessary. (with spin lock)
For step 2, there wouldn't be a race, because the list_head is local and no other
function would change it. Therefore, I don't think it needs the spin lock.

rtl_start_rx() also uses a similar approach.
 
Best Regards,
Hayes
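
As a minimal sketch of the pattern Hayes describes (hypothetical helper and
names, condensed and not the exact driver code), the shared tp->rx_done is
only touched while tp->rx_lock is held, and the per-entry work happens on a
stack-local list_head:

/* Sketch only: the shared list is detached and reattached under the lock;
 * the local rx_queue is reachable from this context alone. */
static void rx_process_done_sketch(struct r8152 *tp)
{
	struct list_head rx_queue, *cursor, *next;
	unsigned long flags;

	INIT_LIST_HEAD(&rx_queue);

	/* step 1: move everything off the shared list under the lock */
	spin_lock_irqsave(&tp->rx_lock, flags);
	list_splice_init(&tp->rx_done, &rx_queue);
	spin_unlock_irqrestore(&tp->rx_lock, flags);

	/* step 2: dequeue/enqueue freely; no other context can see rx_queue,
	 * so no lock is needed here */
	list_for_each_safe(cursor, next, &rx_queue) {
		struct rx_agg *agg = list_entry(cursor, struct rx_agg, list);

		list_del_init(cursor);
		/* ... process agg; on failure it may be re-queued locally
		 * with list_add_tail(&agg->list, next) ... */
	}

	/* step 3: hand anything left over back to the shared list */
	if (!list_empty(&rx_queue)) {
		spin_lock_irqsave(&tp->rx_lock, flags);
		list_splice_tail(&rx_queue, &tp->rx_done);
		spin_unlock_irqrestore(&tp->rx_lock, flags);
	}
}
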
David Miller Jan. 20, 2015, 2:52 a.m. UTC | #4
From: Hayes Wang <hayeswang@realtek.com>
Date: Tue, 20 Jan 2015 02:48:50 +0000

>> >> +                       urb->actual_length = 0;
>> >> +                       list_add_tail(&agg->list, next);
>> > 
>> > Do you need a spin_lock_irqsave(&tp->rx_lock, flags) around this?
>> 
>> Indeed, and rtl_start_rx() seems to also access agg->list without
>> proper locking.
> 
> It is unnecessary because I deal with them in a local list_head. My steps are
>    1. Move the whole list from tp->rx_done to the local rx_queue. (with spin lock)
>    2. Dequeue/enqueue the entries in rx_queue.
>    3. Move the entries in rx_queue back to tp->rx_done if necessary. (with spin lock)
> For step 2, there wouldn't be a race, because the list_head is local and no other
> function would change it. Therefore, I don't think it needs the spin lock.
> 
> rtl_start_rx() also uses a similar approach.

agg->list is not local; you have to use a spinlock to protect
modifications to it. Some other sites which modify agg->list do take
the lock properly.

You cannot modify a list like agg->list without proper locking.

Hayes Wang Jan. 20, 2015, 3:24 a.m. UTC | #5
David Miller [mailto:davem@davemloft.net] 
> Sent: Tuesday, January 20, 2015 10:52 AM
[...]
> agg->list is not local, you have to use a spinlock to protect
> modifications to it, some other sites which modify agg->list do take
> the lock properly.
> 
> You cannot modify a list like agg->list without proper locking.

Excuse me. I don't understand.

Before step 1
   tp->rx_done->listA->listB->listC->listD->...
   rx_queue->
Because other functions may change tp->rx_done,
I need to move the entries with the spin lock held.

After step 1
   tp->rx_done->
   rx_queue->listA->listB->listC->listD->...

Now I dequeue one of the entries from the local list_head and
deal with it.
   tp->rx_done->
   rx_queue->listA->listC->listD->...
                    listB

Then, if I want to put it back on rx_queue, why would I have to
use the spin lock? No other function would change
rx_queue or the items in it.
 
Best Regards,
Hayes
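
In list terms, the diagram corresponds roughly to the sketch below
(hypothetical names; try_to_consume() is made up for illustration). Here
cursor points at listB, next points at listC, and putting listB back only
rewrites pointers inside the local rx_queue chain:

	/* rx_queue and the nodes linked through it are only reachable from
	 * this context, so no lock is taken here. */
	list_for_each_safe(cursor, next, &rx_queue) {
		struct rx_agg *agg = list_entry(cursor, struct rx_agg, list);

		list_del_init(cursor);			/* rx_queue: A->C->D->... */

		if (!try_to_consume(agg))		/* hypothetical */
			list_add_tail(&agg->list, next);/* back in front of listC */
	}
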
David Miller Jan. 25, 2015, 6:43 a.m. UTC | #6
From: Hayes Wang <hayeswang@realtek.com>
Date: Tue, 20 Jan 2015 03:24:52 +0000

>  David Miller [mailto:davem@davemloft.net] 
>> Sent: Tuesday, January 20, 2015 10:52 AM
> [...]
>> agg->list is not local, you have to use a spinlock to protect
>> modifications to it, some other sites which modify agg->list do take
>> the lock properly.
>> 
>> You cannot modify a list like agg->list without proper locking.
> 
> Excuse me. I don't understand.
> 
> Before step1
>    tp_rx_done->listA->listB->listC->listD->...
>    rx_queue->
> Because the other function would chage tp->rx_done,
> I need move the lists with spin lock.
> 
> After step1
>    tp_rx_done->
>    rx_queue->listA->listB->listC->listD->...
> 
> Now I dequeue one of the lists from the list_head and
> deal with it.
>    tp_rx_done->
>    rx_queue->listA->listC->listD->...
>                     listB
> 
> Then, if I want to put it back to rx_queue, I have to
> use spin lock. Why? No other function would chage
> rx_queue and the items in it.

What keeps rtl_start_rx() from running in parallel with
r8152_submit_rx(), or any other accessor of the RX agg->list?

You also keep using different terminology from me when
discussing what lists do or do not need protection, and that
is going to make it difficult for anyone to follow our
conversation at all.

We're talking specifically about RX agg->list objects and
whether access to them needs synchronization or not.

Hayes Wang Jan. 26, 2015, 7:14 a.m. UTC | #7
David Miller [mailto:davem@davemloft.net] 
> Sent: Sunday, January 25, 2015 2:44 PM
[...]
> What keeps rtl_start_rx() from running in parallel with
> r8152_submit_rx(), or any other accessor of the RX agg->list?

Forgive my poor English. I will try to describe it clearly.
The steps for the rx agg->list are
1. Carrier on or autoresume occurs.
2. rtl_start_rx() is called.
3. The rx agg->list flows between the device and tp->rx_done.
4. Carrier off or autosuspend occurs.
5. rtl_stop_rx() is called.

rtl_start_rx() is only called when the link status changes from
down to up or autoresume occurs, and it reinitializes tp->rx_done
and all of the rx agg->list entries. After step 2, each rx agg->list
flows between the usb host controller and the driver. If
r8152_submit_rx() succeeds, the driver doesn't own the rx agg->list
until it is returned by the usb host controller. If
r8152_submit_rx() fails, the driver still owns the rx agg->list and
queues it to tp->rx_done, with the spin lock held, for the next try.

While things stay in step 3, only rx_bottom() submits the rx agg.
rtl_start_rx() cannot be called suddenly, unless link down or
autosuspend occurs first and link up or autoresume occurs again. If
link down or autosuspend occurs, rtl_stop_rx() is called (step 5).
After that, rx_bottom() doesn't submit rx, and all rx agg->list
entries stop flowing. That is, tp->rx_done and the rx agg->list
entries aren't changed until the next rtl_start_rx() is called.

Therefore, the flow for each rx agg->list is
a. submitted by rtl_start_rx().
b. goto step c if successful, otherwise goto step d.
c. completed by the usb host controller.
d. queued to tp->rx_done with the spin lock held.
e. dequeued from tp->rx_done, with the spin lock held, by rx_bottom().
f. goto step i if the link is down, otherwise goto step g.
g. submitted by rx_bottom().
h. goto step b.
i. goto step a when the link comes up.

And the patch changes step g to g1.
g1. submitted by rx_bottom() if (!ret), otherwise goto step d.

Best Regards,
Hayes
> 
> You also keep using different terminology from me when
> discussing what lists do or do not need protection, and that
> is going to make it difficult for anyone to follow our
> conversation at all.
> 
> We're talking specifically about RX agg->list objects and
> whether access to them needs synchronization or not.
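
Mapping the flow above onto the patch, the tail of the rx_bottom() loop ends
up roughly as follows (condensed from the diff, with explanatory comments
added):

submit:
		if (!ret) {
			/* the first failure latches ret, so the remaining
			 * aggs in this pass skip the submit (step g1) */
			ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
		} else {
			/* the driver still owns the agg: clear the stale
			 * length and park it on the local list again */
			urb->actual_length = 0;
			list_add_tail(&agg->list, next);
		}
	}

	/* whatever was not handed to the usb host controller goes back to
	 * tp->rx_done under the lock, to be resubmitted next time (step d) */
	if (!list_empty(&rx_queue)) {
		spin_lock_irqsave(&tp->rx_lock, flags);
		list_splice_tail(&rx_queue, &tp->rx_done);
		spin_unlock_irqrestore(&tp->rx_lock, flags);
	}
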
Scott Feldman Jan. 26, 2015, 9:14 a.m. UTC | #8
On Mon, Jan 19, 2015 at 6:48 PM, Hayes Wang <hayeswang@realtek.com> wrote:
>  David Miller [mailto:davem@davemloft.net]
>> Sent: Tuesday, January 20, 2015 5:14 AM
> [...]
>> >> -               r8152_submit_rx(tp, agg, GFP_ATOMIC);
>> >> +               if (!ret) {
>> >> +                       ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
>> >> +               } else {
>> >> +                       urb->actual_length = 0;
>> >> +                       list_add_tail(&agg->list, next);
>> >
>> > Do you need a spin_lock_irqsave(&tp->rx_lock, flags) around this?
>>
>> Indeed, and rtl_start_rx() seems to also access agg->list without
>> proper locking.
>
> It is unnecessary because I deal with them in a local list_head. My steps are
>    1. Move the whole list from tp->rx_done to the local rx_queue. (with spin lock)
>    2. Dequeue/enqueue the entries in rx_queue.
>    3. Move the entries in rx_queue back to tp->rx_done if necessary. (with spin lock)
> For step 2, there wouldn't be a race, because the list_head is local and no other
> function would change it. Therefore, I don't think it needs the spin lock.

Sorry guys, I think I made a mistake in my review and caused some
confusion/grief.

My mistake was getting the params to list_add_tail() backwards.  It's
list_add_tail(entry, head).  I saw list_add_tail(&agg->list, next) and
was fooled into thinking agg->list was the list getting appended with
the entry 'next'.  It's the opposite.  Duh.  So locking isn't needed
because the list is indeed local.

-scott
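
For reference, the prototype is list_add_tail(new, head): the first argument
is the entry being inserted, the second is the list head (or the position it
is inserted in front of). A toy sketch with a hypothetical node type:

#include <linux/list.h>

struct item {
	struct list_head list;
	int val;
};

static void toy_list_add_tail_order(void)
{
	LIST_HEAD(queue);
	struct item a = { .val = 1 }, b = { .val = 2 };

	list_add_tail(&a.list, &queue);		/* queue: a      */
	list_add_tail(&b.list, &queue);		/* queue: a -> b */

	/* so list_add_tail(&agg->list, next) in the patch re-inserts the
	 * agg just before the position 'next' inside the local rx_queue,
	 * not the other way around */
}
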
Hayes Wang Feb. 2, 2015, 2:38 a.m. UTC | #9
>  David Miller [mailto:davem@davemloft.net] 
> > Sent: Sunday, January 25, 2015 2:44 PM
> > What keeps rtl_start_rx() from running in parallel with
> > r8152_submit_rx(), or any other accessor of the RX agg->list?
> 
> Forgive my poor English. I will try to describe it clearly.
> The steps for the rx agg->list are
> 1. Carrier on or autoresume occurs.
> 2. rtl_start_rx() is called.
> 3. The rx agg->list flows between the device and tp->rx_done.
> 4. Carrier off or autosuspend occurs.
> 5. rtl_stop_rx() is called.
> 
> rtl_start_rx() is only called when the link status changes from
> down to up or autoresume occurs, and it reinitializes tp->rx_done
> and all of the rx agg->list entries. After step 2, each rx agg->list
> flows between the usb host controller and the driver. If
> r8152_submit_rx() succeeds, the driver doesn't own the rx agg->list
> until it is returned by the usb host controller. If
> r8152_submit_rx() fails, the driver still owns the rx agg->list and
> queues it to tp->rx_done, with the spin lock held, for the next try.
> 
> While things stay in step 3, only rx_bottom() submits the rx agg.
> rtl_start_rx() cannot be called suddenly, unless link down or
> autosuspend occurs first and link up or autoresume occurs again. If
> link down or autosuspend occurs, rtl_stop_rx() is called (step 5).
> After that, rx_bottom() doesn't submit rx, and all rx agg->list
> entries stop flowing. That is, tp->rx_done and the rx agg->list
> entries aren't changed until the next rtl_start_rx() is called.
> 
> Therefore, the flow for each rx agg->list is
> a. submitted by rtl_start_rx().
> b. goto step c if successful, otherwise goto step d.
> c. completed by the usb host controller.
> d. queued to tp->rx_done with the spin lock held.
> e. dequeued from tp->rx_done, with the spin lock held, by rx_bottom().
> f. goto step i if the link is down, otherwise goto step g.
> g. submitted by rx_bottom().
> h. goto step b.
> i. goto step a when the link comes up.
> 
> And the patch changes step g to g1.
> g1. submitted by rx_bottom() if (!ret), otherwise goto step d.

Excuse me. Are there any other questions or suggestions about this patch?
 
Best Regards,
Hayes

Patch

diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index 2e22442..78a8917 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -1655,7 +1655,7 @@  static int rx_bottom(struct r8152 *tp, int budget)
 {
 	unsigned long flags;
 	struct list_head *cursor, *next, rx_queue;
-	int work_done = 0;
+	int ret = 0, work_done = 0;
 
 	if (!skb_queue_empty(&tp->rx_queue)) {
 		while (work_done < budget) {
@@ -1746,7 +1746,18 @@  find_next_rx:
 		}
 
 submit:
-		r8152_submit_rx(tp, agg, GFP_ATOMIC);
+		if (!ret) {
+			ret = r8152_submit_rx(tp, agg, GFP_ATOMIC);
+		} else {
+			urb->actual_length = 0;
+			list_add_tail(&agg->list, next);
+		}
+	}
+
+	if (!list_empty(&rx_queue)) {
+		spin_lock_irqsave(&tp->rx_lock, flags);
+		list_splice_tail(&rx_queue, &tp->rx_done);
+		spin_unlock_irqrestore(&tp->rx_lock, flags);
 	}
 
 out1: