
[RFC,0/3] hw/net/tulip: Fix LP#1874539

Message ID 20200423231644.15786-1-f4bug@amsat.org
Series hw/net/tulip: Fix LP#1874539

Message

Philippe Mathieu-Daudé April 23, 2020, 11:16 p.m. UTC
Attempt to fix the Launchpad bug filed by Helge:

  In a qemu-system-hppa system, qemu release v5.0.0-rc,
  the tulip nic driver is broken.  The tulip nic is detected,
  even getting DHCP info does work.  But when trying to
  download bigger files via network, the tulip driver gets
  stuck.

Philippe Mathieu-Daudé (3):
  hw/net/tulip: Fix 'Descriptor Error' definition
  hw/net/tulip: Log descriptor overflows
  hw/net/tulip: Set descriptor error bit when length is incorrect

 hw/net/tulip.h |  2 +-
 hw/net/tulip.c | 32 ++++++++++++++++++++++++++++----
 2 files changed, 29 insertions(+), 5 deletions(-)

Comments

Helge Deller April 24, 2020, 3:27 p.m. UTC | #1
* Philippe Mathieu-Daudé <f4bug@amsat.org>:
> Attempt to fix the Launchpad bug filed by Helge:
>
>   In a qemu-system-hppa system, qemu release v5.0.0-rc,
>   the tulip nic driver is broken.  The tulip nic is detected,
>   even getting DHCP info does work.  But when trying to
>   download bigger files via network, the tulip driver gets
>   stuck.
>
> Philippe Mathieu-Daudé (3):
>   hw/net/tulip: Fix 'Descriptor Error' definition
>   hw/net/tulip: Log descriptor overflows
>   hw/net/tulip: Set descriptor error bit when length is incorrect
>
>  hw/net/tulip.h |  2 +-
>  hw/net/tulip.c | 32 ++++++++++++++++++++++++++++----
>  2 files changed, 29 insertions(+), 5 deletions(-)

Philippe, thanks for your efforts. Sadly, your patch did not fix the
bug itself, but it contains some nice cleanups which should be included
at some point.

Regarding the tulip hang I reported, the patch below does fix the
issue.

[PATCH] Fix tulip rx hang
Cc: Prasad J Pandit <pjp@fedoraproject.org>
Fixes: 8ffb7265af ("check frame size and r/w data length")
Buglink: https://bugs.launchpad.net/bugs/1874539
Signed-off-by: Helge Deller <deller@gmx.de>

Commit 8ffb7265af ("check frame size and r/w data length") introduced
checks to prevent accesses outside of the rx/tx buffers. But the new
checks were plain wrong. rx_frame_len does count backwards, and the
surrounding code ensures that rx_frame_len will not be bigger than
rx_frame_size. Remove those checks again.
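
To make the invariant concrete, here is a minimal standalone sketch of
the copy loop (copy_rx_bytes, RX_FRAME_SIZE and the parameter values
are illustrative stand-ins, not the real device model's names or
constants):

#include <assert.h>
#include <stdio.h>

/* Stand-in for sizeof(s->rx_frame); the real constant differs. */
#define RX_FRAME_SIZE 2048

static void copy_rx_bytes(unsigned rx_frame_size, unsigned desc_buf_len)
{
    unsigned rx_frame_len = rx_frame_size; /* counts DOWN to 0 */

    while (rx_frame_len > 0) {
        unsigned len = desc_buf_len;
        if (len > rx_frame_len) {
            len = rx_frame_len;
        }
        /* The write offset is the number of bytes already copied. */
        unsigned offset = rx_frame_size - rx_frame_len;
        /* Always in bounds: len <= rx_frame_len implies
         * offset + len <= rx_frame_size <= RX_FRAME_SIZE. */
        assert(offset + len <= RX_FRAME_SIZE);
        printf("copy %u bytes at offset %u (rx_frame_len + len = %u)\n",
               len, offset, rx_frame_len + len);
        rx_frame_len -= len;
    }
}

int main(void)
{
    copy_rx_bytes(1514, 1024); /* full-size frame, two descriptors */
    return 0;
}

With a 1514-byte frame split across 1024-byte descriptor buffers,
rx_frame_len + len is 2538 on the first descriptor, so the removed
check returned early even though the copy itself was in bounds, and
the rest of the frame was never delivered -- which matches the hang
seen on larger transfers.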

diff --git a/hw/net/tulip.c b/hw/net/tulip.c
index 1295f51d07..59d21defcc 100644
--- a/hw/net/tulip.c
+++ b/hw/net/tulip.c
@@ -171,9 +171,6 @@ static void tulip_copy_rx_bytes(TULIPState *s, struct tulip_descriptor *desc)
             len = s->rx_frame_len;
         }

-        if (s->rx_frame_len + len > sizeof(s->rx_frame)) {
-            return;
-        }
         pci_dma_write(&s->dev, desc->buf_addr1, s->rx_frame +
             (s->rx_frame_size - s->rx_frame_len), len);
         s->rx_frame_len -= len;
@@ -186,9 +183,6 @@ static void tulip_copy_rx_bytes(TULIPState *s, struct tulip_descriptor *desc)
             len = s->rx_frame_len;
         }

-        if (s->rx_frame_len + len > sizeof(s->rx_frame)) {
-            return;
-        }
         pci_dma_write(&s->dev, desc->buf_addr2, s->rx_frame +
             (s->rx_frame_size - s->rx_frame_len), len);
         s->rx_frame_len -= len;
Jason Wang April 26, 2020, 2:49 a.m. UTC | #2
On 2020/4/24 11:27 PM, Helge Deller wrote:
> * Philippe Mathieu-Daudé <f4bug@amsat.org>:
>> Attempt to fix the Launchpad bug filed by Helge:
>>
>>    In a qemu-system-hppa system, qemu release v5.0.0-rc,
>>    the tulip nic driver is broken.  The tulip nic is detected,
>>    even getting DHCP info does work.  But when trying to
>>    download bigger files via network, the tulip driver gets
>>    stuck.
>>
>> Philippe Mathieu-Daudé (3):
>>    hw/net/tulip: Fix 'Descriptor Error' definition
>>    hw/net/tulip: Log descriptor overflows
>>    hw/net/tulip: Set descriptor error bit when length is incorrect
>>
>>   hw/net/tulip.h |  2 +-
>>   hw/net/tulip.c | 32 ++++++++++++++++++++++++++++----
>>   2 files changed, 29 insertions(+), 5 deletions(-)
> Philippe, thanks for your efforts. Sadly, your patch did not fix the
> bug itself, but it contains some nice cleanups which should be included
> at some point.
>
> Regarding the tulip hang I reported, the patch below does fix the
> issue.
>
> [PATCH] Fix tulip rx hang
> Cc: Prasad J Pandit <pjp@fedoraproject.org>
> Fixes: 8ffb7265af ("check frame size and r/w data length")
> Buglink: https://bugs.launchpad.net/bugs/1874539
> Signed-off-by: Helge Deller <deller@gmx.de>
>
> Commit 8ffb7265af ("check frame size and r/w data length") introduced
> checks to prevent accesses outside of the rx/tx buffers. But the new
> checks were plain wrong. rx_frame_len does count backwards, and the
> surrounding code ensures that rx_frame_len will not be bigger than
> rx_frame_size. Remove those checks again.
>
> diff --git a/hw/net/tulip.c b/hw/net/tulip.c
> index 1295f51d07..59d21defcc 100644
> --- a/hw/net/tulip.c
> +++ b/hw/net/tulip.c
> @@ -171,9 +171,6 @@ static void tulip_copy_rx_bytes(TULIPState *s, struct tulip_descriptor *desc)
>               len = s->rx_frame_len;
>           }
>
> -        if (s->rx_frame_len + len > sizeof(s->rx_frame)) {
> -            return;
> -        }
>           pci_dma_write(&s->dev, desc->buf_addr1, s->rx_frame +
>               (s->rx_frame_size - s->rx_frame_len), len);
>           s->rx_frame_len -= len;
> @@ -186,9 +183,6 @@ static void tulip_copy_rx_bytes(TULIPState *s, struct tulip_descriptor *desc)
>               len = s->rx_frame_len;
>           }
>
> -        if (s->rx_frame_len + len > sizeof(s->rx_frame)) {
> -            return;
> -        }
>           pci_dma_write(&s->dev, desc->buf_addr2, s->rx_frame +
>               (s->rx_frame_size - s->rx_frame_len), len);
>           s->rx_frame_len -= len;
>

Looks good to me.

Would you please send a formal patch and Cc Peter?

Considering we are about to release 5.0, it would be better for him to
apply the patch directly.

Thanks
Peter Maydell April 26, 2020, 7:57 a.m. UTC | #3
On Sun, 26 Apr 2020 at 03:50, Jason Wang <jasowang@redhat.com> wrote:

> Looks good to me.
>
> Would you please send a formal patch and Cc Peter?
>
> Considering we are about to release 5.0, it would be better for him to
> apply the patch directly.

I am not applying any further patches for 5.0 unless they come
with an attached rock-solid justification for why we should
delay the release again for them.

thanks
-- PMM
Jason Wang April 27, 2020, 3:32 a.m. UTC | #4
On 2020/4/26 3:57 PM, Peter Maydell wrote:
> On Sun, 26 Apr 2020 at 03:50, Jason Wang <jasowang@redhat.com> wrote:
>
>> Looks good to me.
>>
>> Would you please send a formal patch and Cc Peter?
>>
>> Considering we are about to release 5.0, it would be better for him to
>> apply the patch directly.
> I am not applying any further patches for 5.0 unless they come
> with an attached rock-solid justification for why we should
> delay the release again for them.
>
> thanks
> -- PMM


Ok.

I will queue that patch for 5.1.

Thanks


Philippe Mathieu-Daudé May 12, 2020, 7:13 a.m. UTC | #5
Hi Jason,

On 4/27/20 5:32 AM, Jason Wang wrote:
> On 2020/4/26 3:57 PM, Peter Maydell wrote:
>> On Sun, 26 Apr 2020 at 03:50, Jason Wang <jasowang@redhat.com> wrote:
>>
>>> Looks good to me.
>>>
>>> Would you please send a formal patch and Cc Peter?
>>>
>>> Considering we are about to release 5.0, it would be better for him to
>>> apply the patch directly.
>> I am not applying any further patches for 5.0 unless they come
>> with an attached rock-solid justification for why we should
>> delay the release again for them.
>>
>> thanks
>> -- PMM
> 
> 
> Ok.
> 
> I will queue that patch for 5.1.

Can you queue patches #1 and #2?

Jason Wang May 13, 2020, 2:18 a.m. UTC | #6
On 2020/5/12 3:13 PM, Philippe Mathieu-Daudé wrote:
> Hi Jason,
>
> On 4/27/20 5:32 AM, Jason Wang wrote:
>> On 2020/4/26 3:57 PM, Peter Maydell wrote:
>>> On Sun, 26 Apr 2020 at 03:50, Jason Wang <jasowang@redhat.com> wrote:
>>>
>>>> Looks good to me.
>>>>
>>>> Would you please send a formal patch and Cc Peter?
>>>>
>>>> Considering we are about to release 5.0, it would be better for him to
>>>> apply the patch directly.
>>> I am not applying any further patches for 5.0 unless they come
>>> with an attached rock-solid justification for why we should
>>> delay the release again for them.
>>>
>>> thanks
>>> -- PMM
>>
>>
>> Ok.
>>
>> I will queue that patch for 5.1.
>
> Can you queue patches #1 and #2?


Queued.

Thanks

