Patchwork QEMU -netdev vhost=on + -device virtio-net-pci bug

Submitter Alexey Kardashevskiy
Date March 5, 2013, 10:57 p.m.
Message ID <51367864.6080502@ozlabs.ru>
Permalink /patch/225195/
State New

Comments

Alexey Kardashevskiy - March 5, 2013, 10:57 p.m.
On 06/03/13 01:23, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
>> On 05/03/13 23:56, Michael S. Tsirkin wrote:
>>>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>>>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>>>
>>>> The problem command line is:
>>>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>>>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>>>
>>> I think the issue is that irqfd is not supported on kvm ppc.
>>
>> How can I make sure this is the case? Some work has been done there
>> recently but midnight is quite late to figure this out :)
>
> Look in virtio_pci_set_guest_notifiers, what is the
> value of with_irqfd?
>    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
>          kvm_msi_via_irqfd_enabled();
>
> Also check what each of the values in the expression above is.

Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
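
For readers following along, this is roughly what the suggested check looks like
in practice. The with_irqfd expression is the one quoted above from
virtio_pci_set_guest_notifiers() in hw/virtio-pci.c; the fprintf is purely
illustrative instrumentation (not part of any patch in this thread) that dumps
both halves of the expression:

    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
                      kvm_msi_via_irqfd_enabled();
    /* illustrative debug output only */
    fprintf(stderr, "%s: msix_enabled=%d kvm_msi_via_irqfd_enabled=%d with_irqfd=%d\n",
            __func__, msix_enabled(&proxy->pci_dev),
            kvm_msi_via_irqfd_enabled(), with_irqfd);

On a host without KVM irqfd support (as confirmed above for PPC at the time),
the second operand is false, so with_irqfd ends up false and the non-irqfd
notifier path is taken.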

>>> Could you please check this:
>>>
>>> +        /* If guest supports masking, set up irqfd now.
>>> +         * Otherwise, delay until unmasked in the frontend.
>>> +         */
>>> +        if (proxy->vdev->guest_notifier_mask) {
>>> +            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
>>> +            if (ret < 0) {
>>> +                kvm_virtio_pci_vq_vector_release(proxy, vector);
>>> +                goto undo;
>>> +            }
>>> +        }
>>>
>>>
>>> Could you please add a printf before "undo" and check whether the
>>> error path above is triggered?
>>
>>
>> Checked, it is not triggered.
>>
>>
>> --
>> Alexey
>
> I think I get it.
> Does the following help (probably not the right thing to do, but just
> for testing):


It did not compile (no "queue_no") :) I changed it a bit and now vhost=on 
works fine:

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a869f53..df1e443 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
         if (r < 0) {
             goto assign_error;
         }
+
+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
+        }
     }
 
     /* Must set vector notifier after guest notifier has been assigned */

> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
> ---
>
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index ba56ab2..c2a0c5a 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -800,6 +800,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>           }
>       }
>
> +    if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> +        proxy->vdev->guest_notifier_mask(proxy->vdev, queue_no, !assign);
> +    }
> +
>       /* Must set vector notifier after guest notifier has been assigned */
>       if (with_irqfd && assign) {
>           proxy->vector_irqfd =
>
Michael S. Tsirkin - March 6, 2013, 10:31 a.m.
On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>>>
> >>>>The problem command line is:
> >>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>
> >>>I think the issue is that irqfd is not supported on kvm ppc.
> >>
> >>How can I make sure this is the case? Some work has been done there
> >>recently but midnight is quite late to figure this out :)
> >
> >Look in virtio_pci_set_guest_notifiers, what is the
> >value of with_irqfd?
> >   bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >         kvm_msi_via_irqfd_enabled();
> >
> >Also check what each of the values in the expression above is.
> 
> Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
> 
> >>>Could you please check this:
> >>>
> >>>+        /* If guest supports masking, set up irqfd now.
> >>>+         * Otherwise, delay until unmasked in the frontend.
> >>>+         */
> >>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>+            if (ret < 0) {
> >>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>+                goto undo;
> >>>+            }
> >>>+        }
> >>>
> >>>
> >>>Could you please add a printf before "undo" and check whether the
> >>>error path above is triggered?
> >>
> >>
> >>Checked, it is not triggered.
> >>
> >>
> >>--
> >>Alexey
> >
> >I think I get it.
> >Does the following help (probably not the right thing to do, but just
> >for testing):
> 
> 
> It did not compile (no "queue_no") :) I changed it a bit and now
> vhost=on works fine:
> 
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index a869f53..df1e443 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>          if (r < 0) {
>              goto assign_error;
>          }
> +
> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> +        }
>      }
> 
>      /* Must set vector notifier after guest notifier has been assigned */
> 
> 

I see. OK, the issue is that vhost now starts in a masked state
and no one unmasks it. While I think the patch will work, it does
not benefit from backend masking; the right thing to do is to add
mask notifiers, as the irqfd path does.

Will look into this, thanks.
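
To make the mask-notifier idea above concrete, here is a small self-contained C
sketch of the intended control flow. It is not QEMU code and every name in it
(ToyDevice, toy_vector_mask_changed, and so on) is invented for illustration:
the point is that the transport forwards every guest mask/unmask of a vector to
the backend's guest_notifier_mask-style callback, instead of assigning the
backend once in a permanently masked state.

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct ToyDevice ToyDevice;

    struct ToyDevice {
        /* backend hook: mask=true means the backend must route interrupts
         * through the transport; mask=false lets it inject them directly */
        void (*guest_notifier_mask)(ToyDevice *dev, int queue, bool mask);
        bool backend_masked[2];
    };

    static void toy_backend_mask(ToyDevice *dev, int queue, bool mask)
    {
        dev->backend_masked[queue] = mask;
        printf("queue %d backend %s\n", queue, mask ? "masked" : "unmasked");
    }

    /* Called by the (toy) transport whenever the guest flips the MSI-X mask
     * bit of the vector this queue uses -- the "mask notifier". */
    static void toy_vector_mask_changed(ToyDevice *dev, int queue, bool masked)
    {
        if (dev->guest_notifier_mask) {
            dev->guest_notifier_mask(dev, queue, masked);
        }
    }

    int main(void)
    {
        ToyDevice dev = {
            .guest_notifier_mask = toy_backend_mask,
            .backend_masked = { true, true },   /* starts masked, as in the bug */
        };

        /* Guest unmasks the vector for queue 0: the backend hears about it and
         * can start signalling interrupts itself (what vhost needs here). */
        toy_vector_mask_changed(&dev, 0, false);
        /* Guest masks it again: the backend falls back to the masked path. */
        toy_vector_mask_changed(&dev, 0, true);
        return 0;
    }

In QEMU terms, the irqfd path already gets this behaviour via the MSI-X vector
notifiers; the suggestion above is to give the non-irqfd path equivalent
mask/unmask notifications rather than a single unmask at assign time.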
Alexey Kardashevskiy - March 8, 2013, 4:48 a.m.
Michael,

Thanks for the fix.

There was another question which was lost in the thread.

I am testing virtio-net in two ways:

Old -net interface:
-net tap,ifname=tap0,script=qemu-ifup.sh \
-net nic,model=virtio,addr=0:0:0

(qemu) info network
hub 0
  \ virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
  \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown

New -netdev interface:
-netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
-device virtio-net-pci,netdev=tapnet,addr=0.0

(qemu) info network
virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
  \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown


I get very different virtio0 device features and speed (70MB/s vs. 
700MB/s). I guess somehow the "hub 0" is responsible but there is no way to 
avoid it.

Is there any way to speed up the virtio-net using the old -net interface?
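
For context on the numbers above: most of the gap is likely explained by the hub
itself. With the -net syntax the NIC's peer is a hub port, so virtio-net cannot
see the tap's vnet header/offload support (hence the different guest-visible
feature bits) and every frame takes an extra hop through the hub; vhost can only
be wired up when the tap is the device's direct peer. A sketch of the direct
form, reusing the options from earlier in the thread (the binary name is an
assumption for this PPC64 setup):

    qemu-system-ppc64 ... \
        -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
        -device virtio-net-pci,netdev=tapnet,addr=0.0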



On 06/03/13 21:31, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
>> On 06/03/13 01:23, Michael S. Tsirkin wrote:
>>> On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
>>>> On 05/03/13 23:56, Michael S. Tsirkin wrote:
>>>>>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>>>>>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>>>>>
>>>>>> The problem command line is:
>>>>>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>>>>>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>>>>>
>>>>> I think the issue is that irqfd is not supported on kvm ppc.
>>>>
>>>> How can I make sure this is the case? Some work has been done there
>>>> recently but midnight is quite late to figure this out :)
>>>
>>> Look in virtio_pci_set_guest_notifiers, what is the
>>> value of with_irqfd?
>>>    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
>>>          kvm_msi_via_irqfd_enabled();
>>>
>>> Also check what each of the values in the expression above is.
>>
>> Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
>>
>>>>> Could you please check this:
>>>>>
>>>>> +        /* If guest supports masking, set up irqfd now.
>>>>> +         * Otherwise, delay until unmasked in the frontend.
>>>>> +         */
>>>>> +        if (proxy->vdev->guest_notifier_mask) {
>>>>> +            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
>>>>> +            if (ret < 0) {
>>>>> +                kvm_virtio_pci_vq_vector_release(proxy, vector);
>>>>> +                goto undo;
>>>>> +            }
>>>>> +        }
>>>>>
>>>>>
>>>>> Could you please add a printf before "undo" and check whether the
>>>>> error path above is triggered?
>>>>
>>>>
>>>> Checked, it is not triggered.
>>>>
>>>>
>>>> --
>>>> Alexey
>>>
>>> I think I get it.
>>> Does the following help (probably not the right thing to do, but just
>>> for testing):
>>
>>
>> It did not compile (no "queue_no") :) I changed it a bit and now
>> vhost=on works fine:
>>
>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
>> index a869f53..df1e443 100644
>> --- a/hw/virtio-pci.c
>> +++ b/hw/virtio-pci.c
>> @@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>>           if (r < 0) {
>>               goto assign_error;
>>           }
>> +
>> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
>> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
>> +        }
>>       }
>>
>>       /* Must set vector notifier after guest notifier has been assigned */
>>
>>
>
> I see. OK, the issue is that vhost now starts in a masked state
> and no one unmasks it. While I think the patch will work, it does
> not benefit from backend masking; the right thing to do is to add
> mask notifiers, as the irqfd path does.
>
> Will look into this, thanks.
>
Michael S. Tsirkin - March 10, 2013, 9:24 a.m.
On Fri, Mar 08, 2013 at 03:48:04PM +1100, Alexey Kardashevskiy wrote:
> Michael,
> 
> Thanks for the fix.
> 
> There was another question which was lost in the thread.
> 
> I am testing virtio-net in two ways:
> 
> Old -net interface:
> -net tap,ifname=tap0,script=qemu-ifup.sh \
> -net nic,model=virtio,addr=0:0:0
> 
> (qemu) info network
> hub 0
>  \ virtio-net-pci.0:
> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
> 
> New -netdev interface:
> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
> -device virtio-net-pci,netdev=tapnet,addr=0.0
> 
> (qemu) info network
> virtio-net-pci.0:
> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
> 
> 
> I get very different virtio0 device features and speed (70MB/s vs.
> 700MB/s). I guess somehow the "hub 0" is responsible but there is no
> way to avoid it.
> 
> Is there any way to speed up the virtio-net using the old -net interface?

Not at the moment. Why do you want to use it?

> 
> On 06/03/13 21:31, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> >>On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >>>On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>>>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>>>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>>>>>
> >>>>>>The problem command line is:
> >>>>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>>>
> >>>>>I think the issue is that irqfd is not supported on kvm ppc.
> >>>>
> >>>>How can I make sure this is the case? Some work has been done there
> >>>>recently but midnight is quite late to figure this out :)
> >>>
> >>>Look in virtio_pci_set_guest_notifiers, what is the
> >>>value of with_irqfd?
> >>>   bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >>>         kvm_msi_via_irqfd_enabled();
> >>>
> >>>Also check what each of the values in the expression above is.
> >>
> >>Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
> >>
> >>>>>Could you please check this:
> >>>>>
> >>>>>+        /* If guest supports masking, set up irqfd now.
> >>>>>+         * Otherwise, delay until unmasked in the frontend.
> >>>>>+         */
> >>>>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>>>+            if (ret < 0) {
> >>>>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>>>+                goto undo;
> >>>>>+            }
> >>>>>+        }
> >>>>>
> >>>>>
> >>>>>Could you please add a printf before "undo" and check whether the
> >>>>>error path above is triggered?
> >>>>
> >>>>
> >>>>Checked, it is not triggered.
> >>>>
> >>>>
> >>>>--
> >>>>Alexey
> >>>
> >>>I think I get it.
> >>>Does the following help (probably not the right thing to do, but just
> >>>for testing):
> >>
> >>
> >>It did not compile (no "queue_no") :) I changed it a bit and now
> >>vhost=on works fine:
> >>
> >>diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> >>index a869f53..df1e443 100644
> >>--- a/hw/virtio-pci.c
> >>+++ b/hw/virtio-pci.c
> >>@@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
> >>          if (r < 0) {
> >>              goto assign_error;
> >>          }
> >>+
> >>+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> >>+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> >>+        }
> >>      }
> >>
> >>      /* Must set vector notifier after guest notifier has been assigned */
> >>
> >>
> >
> >I see. OK, the issue is that vhost now starts in a masked state
> >and no one unmasks it. While I think the patch will work, it does
> >not benefit from backend masking; the right thing to do is to add
> >mask notifiers, as the irqfd path does.
> >
> >Will look into this, thanks.
> >
> 
> 
> -- 
> Alexey
Alexey Kardashevskiy - March 10, 2013, 11:25 a.m.
On 10/03/13 20:24, Michael S. Tsirkin wrote:
> On Fri, Mar 08, 2013 at 03:48:04PM +1100, Alexey Kardashevskiy wrote:
>> Michael,
>>
>> Thanks for the fix.
>>
>> There was another question which was lost in the thread.
>>
>> I am testing virtio-net in two ways:
>>
>> Old -net interface:
>> -net tap,ifname=tap0,script=qemu-ifup.sh \
>> -net nic,model=virtio,addr=0:0:0
>>
>> (qemu) info network
>> hub 0
>>   \ virtio-net-pci.0:
>> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>>   \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>>
>> New -netdev interface:
>> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
>> -device virtio-net-pci,netdev=tapnet,addr=0.0
>>
>> (qemu) info network
>> virtio-net-pci.0:
>> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>>   \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>>
>>
>> I get very different virtio0 device features and speed (70MB/s vs.
>> 700MB/s). I guess somehow the "hub 0" is responsible but there is no
>> way to avoid it.
>>
>> Is there any way to speed up the virtio-net using the old -net interface?
>
> Not at the moment. Why do you want to use it?


It is not that I really want to use it; I was just trying to understand
(it is more or less clear now) exactly why not, since the documentation
presents -net and -netdev as synonyms while they are not.

Patch

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a869f53..df1e443 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
          if (r < 0) {
              goto assign_error;
          }
+
+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
+        }
      }