
[v10,7/7] vhost-user: add a new message to disable/enable a specific virt queue.

Message ID 1442588324-11365-8-git-send-email-yuanhan.liu@linux.intel.com
State New

Commit Message

Yuanhan Liu Sept. 18, 2015, 2:58 p.m. UTC
From: Changchun Ouyang <changchun.ouyang@intel.com>

Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable
a specific virt queue, similar to attaching/detaching a queue for a
tap device.

The virtio driver in the guest doesn't have to use the maximum number of
virt queue pairs; it can enable any number of virt queues, ranging from
1 to the maximum number of virt queue pairs.

Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
---
 docs/specs/vhost-user.txt         | 12 +++++++++++-
 hw/net/vhost_net.c                | 18 ++++++++++++++++++
 hw/net/virtio-net.c               |  8 ++++++++
 hw/virtio/vhost-user.c            | 19 +++++++++++++++++++
 include/hw/virtio/vhost-backend.h |  2 ++
 include/net/vhost_net.h           |  2 ++
 6 files changed, 60 insertions(+), 1 deletion(-)

Comments

Eric Blake Sept. 22, 2015, 2:47 p.m. UTC | #1
On 09/18/2015 08:58 AM, Yuanhan Liu wrote:
> From: Changchun Ouyang <changchun.ouyang@intel.com>
> 
> Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable
> a specific virt queue, which is similar to attach/detach queue for
> tap device.
> 
> virtio driver on guest doesn't have to use max virt queue pair, it
> could enable any number of virt queue ranging from 1 to max virt
> queue pair.
> 
> Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
>  docs/specs/vhost-user.txt         | 12 +++++++++++-
>  hw/net/vhost_net.c                | 18 ++++++++++++++++++
>  hw/net/virtio-net.c               |  8 ++++++++
>  hw/virtio/vhost-user.c            | 19 +++++++++++++++++++
>  include/hw/virtio/vhost-backend.h |  2 ++
>  include/net/vhost_net.h           |  2 ++
>  6 files changed, 60 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> index cfc9d41..a5f1c31 100644
> --- a/docs/specs/vhost-user.txt
> +++ b/docs/specs/vhost-user.txt
> @@ -148,7 +148,9 @@ VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
>  requested queues is bigger than that.
>  
>  As all queues share one connection, the master uses a unique index for each
> -queue in the sent message to identify a specified queue.
> +queue in the sent message to identify a specified queue. One queue pairs

s/pairs/pair/

> +is enabled initially. More queues are enabled dynamically, by sending
> +message VHOST_USER_SET_VRING_ENABLE.
>  
>  Message types
>  -------------
> @@ -327,3 +329,11 @@ Message types
>        Query how many queues the backend supports. This request should be
>        sent only when VHOST_USER_PROTOCOL_F_MQ is set in quried protocol
>        features by VHOST_USER_GET_PROTOCOL_FEATURES.
> +
> + * VHOST_USER_SET_VRING_ENABLE
> +
> +      Id: 18
> +      Equivalent ioctl: N/A
> +      Master payload: vring state description
> +
> +      Signal slave to enable or disable corresponding vring.

Does there need to be any QMP control to manually change a given queue,
or is it all used under the hood with no need for management apps to
care other than their initial request of max queues?
Yuanhan Liu Sept. 23, 2015, 2:05 a.m. UTC | #2
On Tue, Sep 22, 2015 at 08:47:04AM -0600, Eric Blake wrote:
> On 09/18/2015 08:58 AM, Yuanhan Liu wrote:
> > From: Changchun Ouyang <changchun.ouyang@intel.com>
> > 
> > Add a new message, VHOST_USER_SET_VRING_ENABLE, to enable or disable
> > a specific virt queue, which is similar to attach/detach queue for
> > tap device.
> > 
> > virtio driver on guest doesn't have to use max virt queue pair, it
> > could enable any number of virt queue ranging from 1 to max virt
> > queue pair.
> > 
> > Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> > Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> > ---
> >  docs/specs/vhost-user.txt         | 12 +++++++++++-
> >  hw/net/vhost_net.c                | 18 ++++++++++++++++++
> >  hw/net/virtio-net.c               |  8 ++++++++
> >  hw/virtio/vhost-user.c            | 19 +++++++++++++++++++
> >  include/hw/virtio/vhost-backend.h |  2 ++
> >  include/net/vhost_net.h           |  2 ++
> >  6 files changed, 60 insertions(+), 1 deletion(-)
> > 
> > diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
> > index cfc9d41..a5f1c31 100644
> > --- a/docs/specs/vhost-user.txt
> > +++ b/docs/specs/vhost-user.txt
> > @@ -148,7 +148,9 @@ VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
> >  requested queues is bigger than that.
> >  
> >  As all queues share one connection, the master uses a unique index for each
> > -queue in the sent message to identify a specified queue.
> > +queue in the sent message to identify a specified queue. One queue pairs
> 
> s/pairs/pair/

Thanks.

> 
> > +is enabled initially. More queues are enabled dynamically, by sending
> > +message VHOST_USER_SET_VRING_ENABLE.
> >  
> >  Message types
> >  -------------
> > @@ -327,3 +329,11 @@ Message types
> >        Query how many queues the backend supports. This request should be
> >        sent only when VHOST_USER_PROTOCOL_F_MQ is set in quried protocol
> >        features by VHOST_USER_GET_PROTOCOL_FEATURES.
> > +
> > + * VHOST_USER_SET_VRING_ENABLE
> > +
> > +      Id: 18
> > +      Equivalent ioctl: N/A
> > +      Master payload: vring state description
> > +
> > +      Signal slave to enable or disable corresponding vring.
> 
> Does there need to be any QMP control to manually change a given queue,
> or is it all used under the hood with no need for management apps to
> care other than their initial request of max queues?

TBH, I don't know. As far as I know, only one queue pair will be
enabled by default, and it's the user's job to enable (or disable) more
queue pairs, say, by ethtool:

    # ethtool -L eth0 combined <queue pair number>

which ends up sending VHOST_USER_SET_VRING_ENABLE to actually enable
(or disable) a specific queue pair.

Does that answer your question?

	--yliu
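The fan-out described above can be sketched in a few lines of C. This is
an illustration of the control flow only, not QEMU source; the helper
names (enable_pair, set_combined_channels) are invented for the example,
with enable_pair standing in for the patch's vhost_set_vring_enable() on
the peer:

```c
#include <assert.h>

/* Illustrative sketch (not QEMU source) of the fan-out described above:
 * a guest "ethtool -L eth0 combined N" request becomes one
 * enable/disable notification per queue pair. */
#define MAX_PAIRS 8
static int pair_enabled[MAX_PAIRS];

/* Stand-in for sending VHOST_USER_SET_VRING_ENABLE for one pair. */
static void enable_pair(int index, int on)
{
    pair_enabled[index] = on;
}

/* Enable pairs [0, n) and disable the rest, mirroring how virtio-net's
 * peer_attach()/peer_detach() walk the queue pairs in the patch. */
static void set_combined_channels(int n)
{
    for (int i = 0; i < MAX_PAIRS; i++) {
        enable_pair(i, i < n);
    }
}
```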
Eric Blake Sept. 23, 2015, 2:06 a.m. UTC | #3
On 09/22/2015 08:05 PM, Yuanhan Liu wrote:

>>> + * VHOST_USER_SET_VRING_ENABLE
>>> +
>>> +      Id: 18
>>> +      Equivalent ioctl: N/A
>>> +      Master payload: vring state description
>>> +
>>> +      Signal slave to enable or disable corresponding vring.
>>
>> Does there need to be any QMP control to manually change a given queue,
>> or is it all used under the hood with no need for management apps to
>> care other than their initial request of max queues?
> 
> TBH, I don't know. As far as I know, there is only one queue pair will
> be enabled by default, and it's user's job to enable (or disable) more
> queue pairs, say, by ethtool:
> 
>     # ethtool -L eth0 combined <queue pair number>
> 
> Which ends up sending the VHOST_USER_SET_VRING_ENABLE to actually enable
> (or disable) a specific queue pairs.

So if I'm understanding, it is ethtool, not qemu, that is used to turn
on additional queues, and therefore we don't need a QMP command in qemu
to control things.

> 
> Does that answer your question?

I think so, but I'll let other reviewers more familiar with this area of
code give a final say.
Yuanhan Liu Sept. 23, 2015, 2:12 a.m. UTC | #4
On Tue, Sep 22, 2015 at 08:06:58PM -0600, Eric Blake wrote:
> On 09/22/2015 08:05 PM, Yuanhan Liu wrote:
> 
> >>> + * VHOST_USER_SET_VRING_ENABLE
> >>> +
> >>> +      Id: 18
> >>> +      Equivalent ioctl: N/A
> >>> +      Master payload: vring state description
> >>> +
> >>> +      Signal slave to enable or disable corresponding vring.
> >>
> >> Does there need to be any QMP control to manually change a given queue,
> >> or is it all used under the hood with no need for management apps to
> >> care other than their initial request of max queues?
> > 
> > TBH, I don't know. As far as I know, there is only one queue pair will
> > be enabled by default, and it's user's job to enable (or disable) more
> > queue pairs, say, by ethtool:
> > 
> >     # ethtool -L eth0 combined <queue pair number>
> > 
> > Which ends up sending the VHOST_USER_SET_VRING_ENABLE to actually enable
> > (or disable) a specific queue pairs.
> 
> So if I'm understanding, it is ethtool, not qemu, that is used to turn
> on additional queues, and therefore we don't need a QMP command in qemu
> to control things.

I guess so, and that's what Michael told me before.

> > Does that answer your question?
> 
> I think so, but I'll let other reviewers more familiar with this area of
> code give a final say.

Michael?

	--yliu
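For reference, the wire payload of the new message is the ordinary vring
state description, with num reused as a boolean enable flag rather than
a ring size. The sketch below mirrors the kernel-style struct
vhost_vring_state layout and the way the patch fills it from
dev->vq_index; the helper function is hypothetical:

```c
#include <assert.h>

/* Mirror of the payload carried by VHOST_USER_SET_VRING_ENABLE: the
 * standard vring state description, with .num reused as a 0/1 enable
 * flag. Field names follow the kernel's struct vhost_vring_state; this
 * is an illustration, not QEMU source. */
struct vhost_vring_state {
    unsigned int index;  /* first vring of the targeted queue pair */
    unsigned int num;    /* 1 = enable, 0 = disable */
};

/* Hypothetical helper building the payload the way the patch's
 * vhost_user_set_vring_enable() does with dev->vq_index. */
static struct vhost_vring_state
make_vring_enable(unsigned int vq_index, int enable)
{
    struct vhost_vring_state state = {
        .index = vq_index,
        .num   = (unsigned int)enable,
    };
    return state;
}
```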
diff mbox

Patch

diff --git a/docs/specs/vhost-user.txt b/docs/specs/vhost-user.txt
index cfc9d41..a5f1c31 100644
--- a/docs/specs/vhost-user.txt
+++ b/docs/specs/vhost-user.txt
@@ -148,7 +148,9 @@  VHOST_USER_GET_PROTOCOL_FEATURES. Master should stop when the number of
 requested queues is bigger than that.
 
 As all queues share one connection, the master uses a unique index for each
-queue in the sent message to identify a specified queue.
+queue in the sent message to identify a specified queue. One queue pairs
+is enabled initially. More queues are enabled dynamically, by sending
+message VHOST_USER_SET_VRING_ENABLE.
 
 Message types
 -------------
@@ -327,3 +329,11 @@  Message types
       Query how many queues the backend supports. This request should be
       sent only when VHOST_USER_PROTOCOL_F_MQ is set in quried protocol
       features by VHOST_USER_GET_PROTOCOL_FEATURES.
+
+ * VHOST_USER_SET_VRING_ENABLE
+
+      Id: 18
+      Equivalent ioctl: N/A
+      Master payload: vring state description
+
+      Signal slave to enable or disable corresponding vring.
diff --git a/hw/net/vhost_net.c b/hw/net/vhost_net.c
index 786191a..a5d045f 100644
--- a/hw/net/vhost_net.c
+++ b/hw/net/vhost_net.c
@@ -422,6 +422,19 @@  VHostNetState *get_vhost_net(NetClientState *nc)
 
     return vhost_net;
 }
+
+int vhost_set_vring_enable(NetClientState *nc, int enable)
+{
+    VHostNetState *net = get_vhost_net(nc);
+    const VhostOps *vhost_ops = net->dev.vhost_ops;
+
+    if (vhost_ops->vhost_backend_set_vring_enable) {
+        return vhost_ops->vhost_backend_set_vring_enable(&net->dev, enable);
+    }
+
+    return 0;
+}
+
 #else
 struct vhost_net *vhost_net_init(VhostNetOptions *options)
 {
@@ -467,4 +480,9 @@  VHostNetState *get_vhost_net(NetClientState *nc)
 {
     return 0;
 }
+
+int vhost_set_vring_enable(NetClientState *nc, int enable)
+{
+    return 0;
+}
 #endif
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f72eebf..916222f 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -406,6 +406,10 @@  static int peer_attach(VirtIONet *n, int index)
         return 0;
     }
 
+    if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
+        vhost_set_vring_enable(nc->peer, 1);
+    }
+
     if (nc->peer->info->type != NET_CLIENT_OPTIONS_KIND_TAP) {
         return 0;
     }
@@ -421,6 +425,10 @@  static int peer_detach(VirtIONet *n, int index)
         return 0;
     }
 
+    if (nc->peer->info->type == NET_CLIENT_OPTIONS_KIND_VHOST_USER) {
+        vhost_set_vring_enable(nc->peer, 0);
+    }
+
     if (nc->peer->info->type !=  NET_CLIENT_OPTIONS_KIND_TAP) {
         return 0;
     }
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index e42fde6..b11c0d2 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -48,6 +48,7 @@  typedef enum VhostUserRequest {
     VHOST_USER_GET_PROTOCOL_FEATURES = 15,
     VHOST_USER_SET_PROTOCOL_FEATURES = 16,
     VHOST_USER_GET_QUEUE_NUM = 17,
+    VHOST_USER_SET_VRING_ENABLE = 18,
     VHOST_USER_MAX
 } VhostUserRequest;
 
@@ -290,6 +291,7 @@  static int vhost_user_call(struct vhost_dev *dev, unsigned long int request,
 
     case VHOST_USER_SET_VRING_NUM:
     case VHOST_USER_SET_VRING_BASE:
+    case VHOST_USER_SET_VRING_ENABLE:
         memcpy(&msg.state, arg, sizeof(struct vhost_vring_state));
         msg.size = sizeof(m.state);
         break;
@@ -406,6 +408,22 @@  static int vhost_user_init(struct vhost_dev *dev, void *opaque)
     return 0;
 }
 
+static int vhost_user_set_vring_enable(struct vhost_dev *dev, int enable)
+{
+    struct vhost_vring_state state = {
+        .index = dev->vq_index,
+        .num   = enable,
+    };
+
+    assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
+
+    if (!(dev->protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
+        return -1;
+    }
+
+    return vhost_user_call(dev, VHOST_USER_SET_VRING_ENABLE, &state);
+}
+
 static int vhost_user_cleanup(struct vhost_dev *dev)
 {
     assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_USER);
@@ -428,4 +446,5 @@  const VhostOps user_ops = {
         .vhost_backend_init = vhost_user_init,
         .vhost_backend_cleanup = vhost_user_cleanup,
         .vhost_backend_get_vq_index = vhost_user_get_vq_index,
+        .vhost_backend_set_vring_enable = vhost_user_set_vring_enable,
 };
diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
index e1dfc6d..3a0f6e2 100644
--- a/include/hw/virtio/vhost-backend.h
+++ b/include/hw/virtio/vhost-backend.h
@@ -25,6 +25,7 @@  typedef int (*vhost_call)(struct vhost_dev *dev, unsigned long int request,
 typedef int (*vhost_backend_init)(struct vhost_dev *dev, void *opaque);
 typedef int (*vhost_backend_cleanup)(struct vhost_dev *dev);
 typedef int (*vhost_backend_get_vq_index)(struct vhost_dev *dev, int idx);
+typedef int (*vhost_backend_set_vring_enable)(struct vhost_dev *dev, int enable);
 
 typedef struct VhostOps {
     VhostBackendType backend_type;
@@ -32,6 +33,7 @@  typedef struct VhostOps {
     vhost_backend_init vhost_backend_init;
     vhost_backend_cleanup vhost_backend_cleanup;
     vhost_backend_get_vq_index vhost_backend_get_vq_index;
+    vhost_backend_set_vring_enable vhost_backend_set_vring_enable;
 } VhostOps;
 
 extern const VhostOps user_ops;
diff --git a/include/net/vhost_net.h b/include/net/vhost_net.h
index 8db723e..0188c4d 100644
--- a/include/net/vhost_net.h
+++ b/include/net/vhost_net.h
@@ -28,4 +28,6 @@  bool vhost_net_virtqueue_pending(VHostNetState *net, int n);
 void vhost_net_virtqueue_mask(VHostNetState *net, VirtIODevice *dev,
                               int idx, bool mask);
 VHostNetState *get_vhost_net(NetClientState *nc);
+
+int vhost_set_vring_enable(NetClientState *nc, int enable);
 #endif
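On the slave side, handling the new request could look roughly like the
sketch below. This is a guess at a minimal backend dispatch, not code
from any real vhost-user slave; handle_set_vring_enable and the
vring_enabled table are invented for the example (the request id 18
matches the spec hunk above):

```c
#include <assert.h>
#include <stdbool.h>

/* Per the spec addition above. */
enum { VHOST_USER_SET_VRING_ENABLE = 18 };

/* The message payload: a vring state description, with num used as
 * the enable flag. */
struct vring_state_msg {
    unsigned int index;
    unsigned int num;
};

#define MAX_VRINGS 16
static bool vring_enabled[MAX_VRINGS];

/* Hypothetical slave-side handler: record the new enable state for the
 * addressed vring. Returns 0 on success, -1 on an out-of-range index. */
static int handle_set_vring_enable(const struct vring_state_msg *m)
{
    if (m->index >= MAX_VRINGS) {
        return -1;
    }
    vring_enabled[m->index] = (m->num != 0);
    return 0;
}
```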