
[2/2] Allow VIRTIO_F_IN_ORDER to be negotiated for vdpa devices

Message ID 20220215072231.2498-3-gdawar@xilinx.com
State New
Series Allow VIRTIO_F_IN_ORDER negotiation with vhost-vdpa

Commit Message

Gautam Dawar Feb. 15, 2022, 7:22 a.m. UTC
This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
for vhost-vdpa backend when the underlying device supports this
feature.
This would aid in reaping performance benefits with HW devices
that implement this feature. At the same time, it shouldn't have
any negative impact as vhost-vdpa backend doesn't involve any
userspace virtqueue operations.

Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
---
 hw/net/virtio-net.c | 10 ++++++++++
 net/vhost-vdpa.c    |  1 +
 2 files changed, 11 insertions(+)

Comments

Eugenio Perez Martin Feb. 15, 2022, 3:22 p.m. UTC | #1
On Tue, Feb 15, 2022 at 8:23 AM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>
> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> for vhost-vdpa backend when the underlying device supports this
> feature.
> This would aid in reaping performance benefits with HW devices
> that implement this feature. At the same time, it shouldn't have
> any negative impact as vhost-vdpa backend doesn't involve any
> userspace virtqueue operations.
>
> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
> ---
>  hw/net/virtio-net.c | 10 ++++++++++
>  net/vhost-vdpa.c    |  1 +
>  2 files changed, 11 insertions(+)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index cf8ab0f8af..a1089d06f6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      nc->rxfilter_notify_enabled = 1;
>
>     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);

If more users arise, we could declare in virtio.h/.c a set of features
that can be bypassed as long as the backend doesn't use QEMU's userspace
networking. Out_of_qemu_tree_valid_features?

But I think it is better to restrict the changes here at the moment.
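
A rough sketch of that idea, with made-up names (nothing below exists in
QEMU today; it only illustrates the shape such a helper could take):

    /* Hypothetical: feature bits that QEMU's userspace virtqueue code does
     * not implement, but that could still be offered when the backend
     * (e.g. vhost-vdpa) handles the datapath entirely outside QEMU.
     */
    #define OUT_OF_QEMU_TREE_VALID_FEATURES  (BIT_ULL(VIRTIO_F_IN_ORDER))

    static uint64_t virtio_backend_only_features(NetClientState *peer,
                                                 uint64_t requested)
    {
        if (!peer || peer->info->type != NET_CLIENT_DRIVER_VHOST_VDPA) {
            return 0;
        }
        /* Keep only the bits the vDPA device actually offers. */
        return vhost_net_get_features(get_vhost_net(peer),
                                      requested & OUT_OF_QEMU_TREE_VALID_FEATURES);
    }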

>          struct virtio_net_config netcfg = {};
> +
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> +
> +       /*
> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> +         * make it available for negotiation.
> +         */
> +       features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> +       n->host_features |= features;
>      }
> +
>      QTAILQ_INIT(&n->rsc_chains);
>      n->qdev = dev;
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 25dd6dd975..2886cba5ec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>      VIRTIO_NET_F_CTRL_VQ,
>      VIRTIO_F_IOMMU_PLATFORM,
>      VIRTIO_F_RING_PACKED,
> +    VIRTIO_F_IN_ORDER,
>      VIRTIO_NET_F_RSS,
>      VIRTIO_NET_F_HASH_REPORT,
>      VIRTIO_NET_F_GUEST_ANNOUNCE,
> --
> 2.30.1
>

Acked-by: Eugenio Pérez <eperezma@redhat.com>
Jason Wang Feb. 17, 2022, 7:16 a.m. UTC | #2
On Tue, Feb 15, 2022 at 3:23 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>
> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> for vhost-vdpa backend when the underlying device supports this
> feature.
> This would aid in reaping performance benefits with HW devices
> that implement this feature. At the same time, it shouldn't have
> any negative impact as vhost-vdpa backend doesn't involve any
> userspace virtqueue operations.
>
> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
> ---
>  hw/net/virtio-net.c | 10 ++++++++++
>  net/vhost-vdpa.c    |  1 +
>  2 files changed, 11 insertions(+)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index cf8ab0f8af..a1089d06f6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      nc->rxfilter_notify_enabled = 1;
>
>     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
>          struct virtio_net_config netcfg = {};
> +
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> +
> +       /*
> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> +         * make it available for negotiation.
> +         */
> +       features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> +       n->host_features |= features;

This looks like a hack, considering we will finally support in_order.
I wonder if it's better to

1) introduce command line parameters "in_order"
2) fail without vhost-vdpa

?

Thanks
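
(A minimal sketch of that suggestion, purely hypothetical and not part of
this patch: an explicit "in_order" device property that defaults to off,
plus a realize-time check that rejects it without a vhost-vdpa peer.)

    /* Hypothetical entry in the virtio-net property list: */
    DEFINE_PROP_BIT64("in_order", VirtIONet, host_features,
                      VIRTIO_F_IN_ORDER, false),

    /* ...and in virtio_net_device_realize(): */
    if (virtio_has_feature(n->host_features, VIRTIO_F_IN_ORDER) &&
        (!nc->peer || nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_VDPA)) {
        error_setg(errp, "in_order is only supported with a vhost-vdpa backend");
        return;
    }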

>      }
> +
>      QTAILQ_INIT(&n->rsc_chains);
>      n->qdev = dev;
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 25dd6dd975..2886cba5ec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>      VIRTIO_NET_F_CTRL_VQ,
>      VIRTIO_F_IOMMU_PLATFORM,
>      VIRTIO_F_RING_PACKED,
> +    VIRTIO_F_IN_ORDER,
>      VIRTIO_NET_F_RSS,
>      VIRTIO_NET_F_HASH_REPORT,
>      VIRTIO_NET_F_GUEST_ANNOUNCE,
> --
> 2.30.1
>
Michael S. Tsirkin Feb. 17, 2022, 7:32 a.m. UTC | #3
On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> for vhost-vdpa backend when the underlying device supports this
> feature.
> This would aid in reaping performance benefits with HW devices
> that implement this feature. At the same time, it shouldn't have
> any negative impact as vhost-vdpa backend doesn't involve any
> userspace virtqueue operations.
> 
> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>

Having features that hardware implements but qemu does not
means we can't migrate between them.
So I'd rather see a userspace implementation.

> ---
>  hw/net/virtio-net.c | 10 ++++++++++
>  net/vhost-vdpa.c    |  1 +
>  2 files changed, 11 insertions(+)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index cf8ab0f8af..a1089d06f6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      nc->rxfilter_notify_enabled = 1;
>  
>     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
>          struct virtio_net_config netcfg = {};
> +
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> +
> +	/*
> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> +         * make it available for negotiation.
> +         */
> +	features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> +	n->host_features |= features;
>      }
> +
>      QTAILQ_INIT(&n->rsc_chains);
>      n->qdev = dev;
>  
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 25dd6dd975..2886cba5ec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>      VIRTIO_NET_F_CTRL_VQ,
>      VIRTIO_F_IOMMU_PLATFORM,
>      VIRTIO_F_RING_PACKED,
> +    VIRTIO_F_IN_ORDER,
>      VIRTIO_NET_F_RSS,
>      VIRTIO_NET_F_HASH_REPORT,
>      VIRTIO_NET_F_GUEST_ANNOUNCE,
> -- 
> 2.30.1
Gautam Dawar Feb. 17, 2022, 8:27 a.m. UTC | #4
-----Original Message-----
From: Jason Wang <jasowang@redhat.com> 
Sent: Thursday, February 17, 2022 12:46 PM
To: Gautam Dawar <gdawar@xilinx.com>
Cc: mst <mst@redhat.com>; qemu-devel <qemu-devel@nongnu.org>; eperezma <eperezma@redhat.com>; Martin Petrus Hubertus Habets <martinh@xilinx.com>; Harpreet Singh Anand <hanand@xilinx.com>; Gautam Dawar <gdawar@xilinx.com>; Tanuj Murlidhar Kamde <tanujk@xilinx.com>; Pablo Cascon <pabloc@xilinx.com>
Subject: Re: [PATCH 2/2] Allow VIRTIO_F_IN_ORDER to be negotiated for vdpa devices

On Tue, Feb 15, 2022 at 3:23 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
>
> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit for 
> vhost-vdpa backend when the underlying device supports this feature.
> This would aid in reaping performance benefits with HW devices that 
> implement this feature. At the same time, it shouldn't have any 
> negative impact as vhost-vdpa backend doesn't involve any userspace 
> virtqueue operations.
>
> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
> ---
>  hw/net/virtio-net.c | 10 ++++++++++
>  net/vhost-vdpa.c    |  1 +
>  2 files changed, 11 insertions(+)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c index 
> cf8ab0f8af..a1089d06f6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      nc->rxfilter_notify_enabled = 1;
>
>     if (nc->peer && nc->peer->info->type == 
> NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
>          struct virtio_net_config netcfg = {};
> +
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, 
> VHOST_SET_CONFIG_TYPE_MASTER);
> +
> +       /*
> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> +         * make it available for negotiation.
> +         */
> +       features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> +       n->host_features |= features;

This looks like a hack, considering we will finally support in_order.
[GD>>] Yes, I agree that a complete solution which supports the emulated virtio device with in_order rx/tx virtqueue functions would definitely be better, but at the same time it will take a considerable amount of time and effort. I also noticed that something similar (https://patchew.org/QEMU/1533833677-27512-1-git-send-email-i.maximets@samsung.com/) was proposed years ago, got dropped for similar reasons, and it has been the status quo since then.
So, unless we know that this work is already in progress and will be upstreamed soon, and if you don't see any side effects of this patch, we could integrate it first and revert it when the complete solution is available. This would help in benchmarking the performance boost achieved with the IN_ORDER feature.

I wonder if it's better to

1) introduce command line parameters "in_order"
2) fail without vhost-vdpa

?
[GD>>] I think this patch takes care of both conditions above with the if (nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) check. With this, the VIRTIO_F_IN_ORDER feature bit will be exposed by QEMU only when the underlying device supports it. However, it will be negotiated and acked only when the virtio_net driver also supports it. Accordingly, in my testing, the Linux kernel's virtio_net driver doesn't send it back to the supporting device while the DPDK virtio_net driver does.
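
(Simplified sketch of the negotiation flow described above; the variable
names are illustrative only:)

    /* QEMU offers VIRTIO_F_IN_ORDER only if the vDPA device supports it: */
    n->host_features |= vhost_net_get_features(get_vhost_net(nc->peer),
                                               BIT_ULL(VIRTIO_F_IN_ORDER));

    /* The bit is then actually negotiated only if the guest driver also
     * acks it during feature negotiation (handled by the virtio core):
     */
    uint64_t negotiated = offered_features & guest_driver_features;
    bool in_order = negotiated & BIT_ULL(VIRTIO_F_IN_ORDER);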

Thanks

>      }
> +
>      QTAILQ_INIT(&n->rsc_chains);
>      n->qdev = dev;
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c index 
> 25dd6dd975..2886cba5ec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>      VIRTIO_NET_F_CTRL_VQ,
>      VIRTIO_F_IOMMU_PLATFORM,
>      VIRTIO_F_RING_PACKED,
> +    VIRTIO_F_IN_ORDER,
>      VIRTIO_NET_F_RSS,
>      VIRTIO_NET_F_HASH_REPORT,
>      VIRTIO_NET_F_GUEST_ANNOUNCE,
> --
> 2.30.1
>
Michael S. Tsirkin Feb. 17, 2022, 8:32 a.m. UTC | #5
On Thu, Feb 17, 2022 at 08:27:14AM +0000, Gautam Dawar wrote:
> [GD>>] Yes , I agree a complete solution that will support the
> emulated virtio device with in_order rx/tx virtqueue functions will
> definitely be better but at the same time it will take considerable
> amount of time and effort. I also noticed that something similar
> (https://patchew.org/QEMU/1533833677-27512-1-git-send-email-i.maximets@samsung.com/)
> was proposed years ago which got dropped for similar reasons and it
> has been status quo since then.


Not applying a patch until it's complete is really the only
leverage maintainers have to push for complete patches.
Otherwise people keep adding half-baked code and hoping
that someone else does the rest of the work.

>  So, unless we know that this work is
> already in progress & will be up-streamed soon and if you don’t see
> any side-effects of this patch, we can get it integrated first and
> then this can be reverted when the complete solution is available.
> This would help in benchmarking performance boosts achieved with
> IN_ORDER feature.

You can just carry the patch in private for benchmarking I guess?
Eugenio Perez Martin Feb. 17, 2022, 8:54 a.m. UTC | #6
On Thu, Feb 17, 2022 at 8:16 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Feb 15, 2022 at 3:23 PM Gautam Dawar <gautam.dawar@xilinx.com> wrote:
> >
> > This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> > for vhost-vdpa backend when the underlying device supports this
> > feature.
> > This would aid in reaping performance benefits with HW devices
> > that implement this feature. At the same time, it shouldn't have
> > any negative impact as vhost-vdpa backend doesn't involve any
> > userspace virtqueue operations.
> >
> > Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
> > ---
> >  hw/net/virtio-net.c | 10 ++++++++++
> >  net/vhost-vdpa.c    |  1 +
> >  2 files changed, 11 insertions(+)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index cf8ab0f8af..a1089d06f6 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >      nc->rxfilter_notify_enabled = 1;
> >
> >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
> >          struct virtio_net_config netcfg = {};
> > +
> >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> >          vhost_net_set_config(get_vhost_net(nc->peer),
> >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > +
> > +       /*
> > +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> > +         * make it available for negotiation.
> > +         */
> > +       features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> > +       n->host_features |= features;
>
> This looks like a hack, considering we will finally support in_order.
> I wonder if it's better to
>
> 1) introduce command line parameters "in_order"
> 2) fail without vhost-vdpa
>
> ?
>

Do you mean this steps?:
- Add the cmdline parameter, defaulting it to false.
- Even if it's set to true on the cmdline, force it to false as long as
the backend is not vDPA. Since we're only scoping net devices, this
means adding this feature bit check at this same place in
virtio_net_device_realize, and only if the device property has been set
to true.

Have I understood correctly?

Thanks!


> Thanks
>
> >      }
> > +
> >      QTAILQ_INIT(&n->rsc_chains);
> >      n->qdev = dev;
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 25dd6dd975..2886cba5ec 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
> >      VIRTIO_NET_F_CTRL_VQ,
> >      VIRTIO_F_IOMMU_PLATFORM,
> >      VIRTIO_F_RING_PACKED,
> > +    VIRTIO_F_IN_ORDER,
> >      VIRTIO_NET_F_RSS,
> >      VIRTIO_NET_F_HASH_REPORT,
> >      VIRTIO_NET_F_GUEST_ANNOUNCE,
> > --
> > 2.30.1
> >
>
Stefano Garzarella Feb. 17, 2022, 2:29 p.m. UTC | #7
On Thu, Feb 17, 2022 at 02:32:48AM -0500, Michael S. Tsirkin wrote:
>On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
>> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
>> for vhost-vdpa backend when the underlying device supports this
>> feature.
>> This would aid in reaping performance benefits with HW devices
>> that implement this feature. At the same time, it shouldn't have
>> any negative impact as vhost-vdpa backend doesn't involve any
>> userspace virtqueue operations.
>>
>> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>
>Having features that hardware implements but qemu does not
>means we can't migrate between them.
>So I'd rather see a userspace implementation.

FYI Jason proposed supporting VIRTIO_F_IN_ORDER in QEMU/Linux as an idea 
for the next GSoC/Outreachy. I offered to mentor and prepared a project 
description [1]; if anyone else is interested, it would be great to have 
a co-mentor :-)

Thanks,
Stefano

[1] https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg04032.html
Eugenio Perez Martin Feb. 18, 2022, 10:22 a.m. UTC | #8
On Thu, Feb 17, 2022 at 8:32 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
> > This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> > for vhost-vdpa backend when the underlying device supports this
> > feature.
> > This would aid in reaping performance benefits with HW devices
> > that implement this feature. At the same time, it shouldn't have
> > any negative impact as vhost-vdpa backend doesn't involve any
> > userspace virtqueue operations.
> >
> > Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>
> Having features that hardware implements but qemu does not
> means we can't migrate between them.
> So I'd rather see a userspace implementation.
>

While I totally agree the userspace implementation is a better option,
would it be a problem if we implement it as a cmdline option as Jason
proposed?

I see other backends have similar issues with migration. For example,
it's possible to run qemu with
-device virtio-net-pci,...,indirect_desc=on and use a vhost-kernel
backend without indirect support in its features. I also understand the
qemu emulated backend as "the base" somehow, but it should work
similarly to my example if the cmdline parameter is off by default.

On the other hand, it may be worth waiting for GSoC, so we avoid this
problem entirely for the moment. But I feel this is going to come back
with a different feature set, with the advent of more out-of-qemu
devices and the fast pace at which VirtIO adds features.

Thoughts?

Thanks!

> > ---
> >  hw/net/virtio-net.c | 10 ++++++++++
> >  net/vhost-vdpa.c    |  1 +
> >  2 files changed, 11 insertions(+)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index cf8ab0f8af..a1089d06f6 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
> >      nc->rxfilter_notify_enabled = 1;
> >
> >     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
> >          struct virtio_net_config netcfg = {};
> > +
> >          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
> >          vhost_net_set_config(get_vhost_net(nc->peer),
> >              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> > +
> > +     /*
> > +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> > +         * make it available for negotiation.
> > +         */
> > +     features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> > +     n->host_features |= features;
> >      }
> > +
> >      QTAILQ_INIT(&n->rsc_chains);
> >      n->qdev = dev;
> >
> > diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > index 25dd6dd975..2886cba5ec 100644
> > --- a/net/vhost-vdpa.c
> > +++ b/net/vhost-vdpa.c
> > @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
> >      VIRTIO_NET_F_CTRL_VQ,
> >      VIRTIO_F_IOMMU_PLATFORM,
> >      VIRTIO_F_RING_PACKED,
> > +    VIRTIO_F_IN_ORDER,
> >      VIRTIO_NET_F_RSS,
> >      VIRTIO_NET_F_HASH_REPORT,
> >      VIRTIO_NET_F_GUEST_ANNOUNCE,
> > --
> > 2.30.1
>
Eugenio Perez Martin Feb. 18, 2022, 10:24 a.m. UTC | #9
On Thu, Feb 17, 2022 at 3:29 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Thu, Feb 17, 2022 at 02:32:48AM -0500, Michael S. Tsirkin wrote:
> >On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
> >> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> >> for vhost-vdpa backend when the underlying device supports this
> >> feature.
> >> This would aid in reaping performance benefits with HW devices
> >> that implement this feature. At the same time, it shouldn't have
> >> any negative impact as vhost-vdpa backend doesn't involve any
> >> userspace virtqueue operations.
> >>
> >> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
> >
> >Having features that hardware implements but qemu does not
> >means we can't migrate between them.
> >So I'd rather see a userspace implementation.
>
> FYI Jason proposed to support VIRTIO_F_IN_ORDER in QEMU/Linux as an idea
> for the next GSoC/Outreachy. I offered to mentor and prepared a project
> description [1], if anyone else is interested, it would be great to have
> a co-mentor :-)
>

I'd be happy to co-mentor for sure, should I edit the wiki to reflect it?

Thanks!

> Thanks,
> Stefano
>
> [1] https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg04032.html
>
Michael S. Tsirkin Feb. 18, 2022, 11:09 a.m. UTC | #10
On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
> for vhost-vdpa backend when the underlying device supports this
> feature.
> This would aid in reaping performance benefits with HW devices
> that implement this feature. At the same time, it shouldn't have
> any negative impact as vhost-vdpa backend doesn't involve any
> userspace virtqueue operations.
> 
> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>

I have a question by the way. Would it be hard for the Xilinx device
to support VIRTIO_F_PARTIAL_ORDER instead of VIRTIO_F_IN_ORDER?

That proposal is on the virtio TC mailing list, but did not
get feedback from any hardware vendors. Feedback would be much
appreciated.

> ---
>  hw/net/virtio-net.c | 10 ++++++++++
>  net/vhost-vdpa.c    |  1 +
>  2 files changed, 11 insertions(+)
> 
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index cf8ab0f8af..a1089d06f6 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>      nc->rxfilter_notify_enabled = 1;
>  
>     if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
>          struct virtio_net_config netcfg = {};
> +
>          memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>          vhost_net_set_config(get_vhost_net(nc->peer),
>              (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
> +
> +	/*
> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
> +         * make it available for negotiation.
> +         */
> +	features = vhost_net_get_features(get_vhost_net(nc->peer), features);
> +	n->host_features |= features;
>      }
> +
>      QTAILQ_INIT(&n->rsc_chains);
>      n->qdev = dev;
>  
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 25dd6dd975..2886cba5ec 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>      VIRTIO_NET_F_CTRL_VQ,
>      VIRTIO_F_IOMMU_PLATFORM,
>      VIRTIO_F_RING_PACKED,
> +    VIRTIO_F_IN_ORDER,
>      VIRTIO_NET_F_RSS,
>      VIRTIO_NET_F_HASH_REPORT,
>      VIRTIO_NET_F_GUEST_ANNOUNCE,
> -- 
> 2.30.1
Stefano Garzarella Feb. 18, 2022, 4:57 p.m. UTC | #11
On Fri, Feb 18, 2022 at 11:24:23AM +0100, Eugenio Perez Martin wrote:
>On Thu, Feb 17, 2022 at 3:29 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>>
>> On Thu, Feb 17, 2022 at 02:32:48AM -0500, Michael S. Tsirkin wrote:
>> >On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
>> >> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
>> >> for vhost-vdpa backend when the underlying device supports this
>> >> feature.
>> >> This would aid in reaping performance benefits with HW devices
>> >> that implement this feature. At the same time, it shouldn't have
>> >> any negative impact as vhost-vdpa backend doesn't involve any
>> >> userspace virtqueue operations.
>> >>
>> >> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>> >
>> >Having features that hardware implements but qemu does not
>> >means we can't migrate between them.
>> >So I'd rather see a userspace implementation.
>>
>> FYI Jason proposed to support VIRTIO_F_IN_ORDER in QEMU/Linux as an idea
>> for the next GSoC/Outreachy. I offered to mentor and prepared a project
>> description [1], if anyone else is interested, it would be great to have
>> a co-mentor :-)
>>
>
>I'd be happy to co-mentor for sure, should I edit the wiki to reflect it?

Great :-)

Yes, please edit the wiki page here: 
https://wiki.qemu.org/Internships/ProjectIdeas/VIRTIO_F_IN_ORDER

Thanks,
Stefano
Jason Wang Feb. 21, 2022, 4:31 a.m. UTC | #12
On 2022/2/18 6:22 PM, Eugenio Perez Martin wrote:
> On Thu, Feb 17, 2022 at 8:32 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>> On Tue, Feb 15, 2022 at 12:52:31PM +0530, Gautam Dawar wrote:
>>> This patch adds the ability to negotiate VIRTIO_F_IN_ORDER bit
>>> for vhost-vdpa backend when the underlying device supports this
>>> feature.
>>> This would aid in reaping performance benefits with HW devices
>>> that implement this feature. At the same time, it shouldn't have
>>> any negative impact as vhost-vdpa backend doesn't involve any
>>> userspace virtqueue operations.
>>>
>>> Signed-off-by: Gautam Dawar <gdawar@xilinx.com>
>> Having features that hardware implements but qemu does not
>> means we can't migrate between them.
>> So I'd rather see a userspace implementation.
>>
> While I totally agree the userspace implementation is a better option,
> would it be a problem if we implement it as a cmdline option as Jason
> proposed?
>
> I see other backends have similar issues with migration. For example
> it's possible to run qemu with
> -device=virtio-net-pci,...,indirect_desc=on and use a vhost-kernel
> backend without indirect support in their features. I also understand
> qemu emulated backend as "the base" somehow, but it should work
> similarly to my example if cmdline parameter is off by default.


We had a lot of issues with command line compatibility like this 
(e.g. qemu may silently clear a host feature), which is probably worth 
fixing in the future.


>
> On the other hand, It may be worth thinking if it's worth waiting for
> GSoC though, so we avoid this problem entirely at the moment. But I
> feel that is going to come back with a different feature set with the
> advent of more out of qemu devices and the fast adding of features of
> VirtIO.
>
> Thoughts?


Not sure we had sufficient resources for the QEMU/kernel implementation 
of in-order. But if we decide not to wait, we can change the GSoC to 
implement other stuff, e.g. NOTIFICATION_DATA.

Thanks


>
> Thanks!
>
>>> ---
>>>   hw/net/virtio-net.c | 10 ++++++++++
>>>   net/vhost-vdpa.c    |  1 +
>>>   2 files changed, 11 insertions(+)
>>>
>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>> index cf8ab0f8af..a1089d06f6 100644
>>> --- a/hw/net/virtio-net.c
>>> +++ b/hw/net/virtio-net.c
>>> @@ -3507,11 +3507,21 @@ static void virtio_net_device_realize(DeviceState *dev, Error **errp)
>>>       nc->rxfilter_notify_enabled = 1;
>>>
>>>      if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
>>> +        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
>>>           struct virtio_net_config netcfg = {};
>>> +
>>>           memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
>>>           vhost_net_set_config(get_vhost_net(nc->peer),
>>>               (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
>>> +
>>> +     /*
>>> +         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
>>> +         * make it available for negotiation.
>>> +         */
>>> +     features = vhost_net_get_features(get_vhost_net(nc->peer), features);
>>> +     n->host_features |= features;
>>>       }
>>> +
>>>       QTAILQ_INIT(&n->rsc_chains);
>>>       n->qdev = dev;
>>>
>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
>>> index 25dd6dd975..2886cba5ec 100644
>>> --- a/net/vhost-vdpa.c
>>> +++ b/net/vhost-vdpa.c
>>> @@ -62,6 +62,7 @@ const int vdpa_feature_bits[] = {
>>>       VIRTIO_NET_F_CTRL_VQ,
>>>       VIRTIO_F_IOMMU_PLATFORM,
>>>       VIRTIO_F_RING_PACKED,
>>> +    VIRTIO_F_IN_ORDER,
>>>       VIRTIO_NET_F_RSS,
>>>       VIRTIO_NET_F_HASH_REPORT,
>>>       VIRTIO_NET_F_GUEST_ANNOUNCE,
>>> --
>>> 2.30.1

Patch

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index cf8ab0f8af..a1089d06f6 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -3507,11 +3507,21 @@  static void virtio_net_device_realize(DeviceState *dev, Error **errp)
     nc->rxfilter_notify_enabled = 1;
 
    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
+        uint64_t features = BIT_ULL(VIRTIO_F_IN_ORDER);
         struct virtio_net_config netcfg = {};
+
         memcpy(&netcfg.mac, &n->nic_conf.macaddr, ETH_ALEN);
         vhost_net_set_config(get_vhost_net(nc->peer),
             (uint8_t *)&netcfg, 0, ETH_ALEN, VHOST_SET_CONFIG_TYPE_MASTER);
+
+	/*
+         * For vhost-vdpa, if underlying device supports IN_ORDER feature,
+         * make it available for negotiation.
+         */
+	features = vhost_net_get_features(get_vhost_net(nc->peer), features);
+	n->host_features |= features;
     }
+
     QTAILQ_INIT(&n->rsc_chains);
     n->qdev = dev;
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 25dd6dd975..2886cba5ec 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -62,6 +62,7 @@  const int vdpa_feature_bits[] = {
     VIRTIO_NET_F_CTRL_VQ,
     VIRTIO_F_IOMMU_PLATFORM,
     VIRTIO_F_RING_PACKED,
+    VIRTIO_F_IN_ORDER,
     VIRTIO_NET_F_RSS,
     VIRTIO_NET_F_HASH_REPORT,
     VIRTIO_NET_F_GUEST_ANNOUNCE,