Message ID: 20200218050711.8133-1-coiby.xu@gmail.com
Series: vhost-user block device backend implementation
On Tue, Feb 18, 2020 at 01:07:06PM +0800, Coiby Xu wrote: > v4: > * add object properties in class_init > * relocate vhost-user-blk-test > * other changes including using SocketAddress, coding style, etc. Thanks! I think the vhost-user server code can be simplified if libvhost-user uses the event loop for asynchronous socket I/O. Then it's no longer necessary to duplicate vu_message_read() and vu_kick_cb(). I've replied to Patch 1 and we can discuss on IRC if you want to chat about it. I've also CCed Marc-André to see what he thinks about removing the blocking recv from libvhost-user and instead using the event loop (just like for eventfds). Stefan
Hi Stefan, Thank you for reviewing my code! I tried to reach you on IRC, but somehow either you missed my message or I missed your reply, so I will reply by email instead. If we use qio_channel_set_aio_fd_handler to monitor the G_IO_IN event, i.e. use vu_dispatch as the read handler, then we can re-use vu_message_read. And "removing the blocking recv from libvhost-user" isn't necessary, because "the operation of poll() and ppoll() is not affected by the O_NONBLOCK flag", even though we call qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler to make recv non-blocking. Previously I needed to run a customized vu_kick_cb in a coroutine to call blk_co_readv/blk_co_writev directly. After taking Kevin's feedback on v4 into consideration, I now use aio_set_fd_handler to set a read handler for kick_fd, and this read handler then calls vu_kick_cb. On Thu, Feb 20, 2020 at 1:58 AM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > On Tue, Feb 18, 2020 at 01:07:06PM +0800, Coiby Xu wrote: > > v4: > > * add object properties in class_init > > * relocate vhost-user-blk-test > > * other changes including using SocketAddress, coding style, etc. > > Thanks! I think the vhost-user server code can be simplified if > libvhost-user uses the event loop for asynchronous socket I/O. Then > it's no longer necessary to duplicate vu_message_read() and > vu_kick_cb(). I've replied to Patch 1 and we can discuss on IRC if you > want to chat about it. > > I've also CCed Marc-André to see what he thinks about removing the > blocking recv from libvhost-user and instead using the event loop (just > like for eventfds). > > Stefan
On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > Hi Stefan, > > Thank you for reviewing my code! > > I tried to reach you on IRC. But somehow either you missed my message > or I missed your reply. So I will reply by email instead. > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > i.e. use vu_dispatch as the read handler, then we can re-use > vu_message_read. And "removing the blocking recv from libvhost-user" > isn't necessary because "the operation of poll() and ppoll() is not > affected by the O_NONBLOCK flag" despite that we use > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > to make recv non-blocking. I'm not sure I understand. poll() just says whether the file descriptor is readable. It does not say whether enough bytes are readable :). So our callback will be invoked if there is 1 byte ready, but when we try to read 20 bytes either it will block (without O_NONBLOCK) or return only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that code changes will be necessary. But please go ahead and send the next revision and I'll take a look. Stefan
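Stefan's point, that poll() readiness does not mean a whole message is available, can be demonstrated in a few lines. The following standalone sketch (illustrative POSIX code, not QEMU code) sends a single byte of a nominal 20-byte message over a Unix stream socketpair: poll() reports the descriptor readable, yet a non-blocking 20-byte read() returns only 1 byte.

```c
#include <fcntl.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send 1 byte of a nominal 20-byte message, then report how many bytes
 * a 20-byte non-blocking read() actually returns once poll() says the
 * descriptor is readable. Returns that byte count, or -1 on error. */
static ssize_t demo_short_read(void)
{
    int sv[2];
    char buf[20];
    ssize_t n;

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        return -1;
    }
    if (write(sv[1], "V", 1) != 1) {      /* first byte of the "message" */
        return -1;
    }

    struct pollfd pfd = { .fd = sv[0], .events = POLLIN };
    if (poll(&pfd, 1, 1000) != 1 || !(pfd.revents & POLLIN)) {
        return -1;                        /* poll does report readable */
    }

    fcntl(sv[0], F_SETFL, O_NONBLOCK);
    n = read(sv[0], buf, sizeof(buf));    /* ...but the read comes up short */
    close(sv[0]);
    close(sv[1]);
    return n;                             /* 1, not 20 */
}
```

With O_NONBLOCK cleared, the same read() would instead block until the remaining 19 bytes arrive, which is the other half of Stefan's objection.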
Thank you for reminding me of this socket short read issue! It seems we still need a customized vu_message_read, because libvhost-user assumes we will always get a full-size VhostUserMsg and hasn't taken care of the short read case. I will improve libvhost-user's vu_message_read by making it keep reading from the socket until it gets enough bytes. I assume short reads are a rare case, so the introduced performance penalty should be negligible. On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > > Hi Stefan, > > > > Thank you for reviewing my code! > > > > I tried to reach you on IRC. But somehow either you missed my message > > or I missed your reply. So I will reply by email instead. > > > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > > i.e. use vu_dispatch as the read handler, then we can re-use > > vu_message_read. And "removing the blocking recv from libvhost-user" > > isn't necessary because "the operation of poll() and ppoll() is not > > affected by the O_NONBLOCK flag" despite that we use > > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > > to make recv non-blocking. > > I'm not sure I understand. poll() just says whether the file descriptor > is readable. It does not say whether enough bytes are readable :). So > our callback will be invoked if there is 1 byte ready, but when we try > to read 20 bytes either it will block (without O_NONBLOCK) or return > only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that > code changes will be necessary. > > But please go ahead and send the next revision and I'll take a look. > > Stefan -- Best regards, Coiby
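The "keep reading until enough bytes" fix described here is essentially a read-full loop. A minimal standalone sketch follows; it uses plain POSIX read() and is not the actual vu_message_read, which also has to receive ancillary file descriptors via recvmsg().

```c
#include <errno.h>
#include <unistd.h>

/* Read exactly len bytes from fd, accumulating partial reads and
 * retrying on EINTR. Returns len on success, the number of bytes read
 * before EOF if the peer closed mid-message, or -1 on error. */
static ssize_t read_full(int fd, void *buf, size_t len)
{
    size_t done = 0;

    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n < 0) {
            if (errno == EINTR) {
                continue;       /* interrupted by a signal: retry */
            }
            return -1;          /* real error (incl. EAGAIN on non-blocking fd) */
        }
        if (n == 0) {
            break;              /* EOF: peer closed before the message ended */
        }
        done += n;
    }
    return done;
}
```

Note that on a blocking socket this loop can stall the calling thread until the peer sends the rest of the message, which is exactly what matters once it runs inside QEMU's event loop.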
Am 27.02.2020 um 10:53 hat Coiby Xu geschrieben: > Thank you for reminding me of this socket short read issue! It seems > we still need customized vu_message_read because libvhost-user assumes > we will always get a full-size VhostUserMsg and hasn't taken care of > this short read case. I will improve libvhost-user's vu_message_read > by making it keep reading from socket util getting enough bytes. I > assume short read is a rare case thus introduced performance penalty > would be negligible. In any case, please make sure that we use the QIOChannel functions called from a coroutine in QEMU so that it will never block, but the coroutine can just yield while it's waiting for more bytes. Kevin > On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > > > Hi Stefan, > > > > > > Thank you for reviewing my code! > > > > > > I tried to reach you on IRC. But somehow either you missed my message > > > or I missed your reply. So I will reply by email instead. > > > > > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > > > i.e. use vu_dispatch as the read handler, then we can re-use > > > vu_message_read. And "removing the blocking recv from libvhost-user" > > > isn't necessary because "the operation of poll() and ppoll() is not > > > affected by the O_NONBLOCK flag" despite that we use > > > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > > > to make recv non-blocking. > > > > I'm not sure I understand. poll() just says whether the file descriptor > > is readable. It does not say whether enough bytes are readable :). So > > our callback will be invoked if there is 1 byte ready, but when we try > > to read 20 bytes either it will block (without O_NONBLOCK) or return > > only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that > > code changes will be necessary. 
> > > > But please go ahead and send the next revision and I'll take a look. > > > > Stefan > > > > -- > Best regards, > Coiby >
> > we still need customized vu_message_read because libvhost-user assumes > > we will always get a full-size VhostUserMsg and hasn't taken care of > > this short read case. I will improve libvhost-user's vu_message_read > > by making it keep reading from socket util getting enough bytes. I > > assume short read is a rare case thus introduced performance penalty > > would be negligible. > In any case, please make sure that we use the QIOChannel functions > called from a coroutine in QEMU so that it will never block, but the > coroutine can just yield while it's waiting for more bytes. But if I am not wrong, libvhost-user is supposed to be independent of the main QEMU code. So it can't use the QIOChannel functions if we simply modify the existing vu_message_read to address the short read issue. In v3 & v4, I extended libvhost-user to allow vu_message_read to be replaced by one which will depend on the main QEMU code. I'm not sure which way is better. On Thu, Feb 27, 2020 at 6:02 PM Kevin Wolf <kwolf@redhat.com> wrote: > > Am 27.02.2020 um 10:53 hat Coiby Xu geschrieben: > > Thank you for reminding me of this socket short read issue! It seems > > we still need customized vu_message_read because libvhost-user assumes > > we will always get a full-size VhostUserMsg and hasn't taken care of > > this short read case. I will improve libvhost-user's vu_message_read > > by making it keep reading from socket util getting enough bytes. I > > assume short read is a rare case thus introduced performance penalty > > would be negligible. > > In any case, please make sure that we use the QIOChannel functions > called from a coroutine in QEMU so that it will never block, but the > coroutine can just yield while it's waiting for more bytes. > > Kevin > > > On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > > > > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > > > > Hi Stefan, > > > > > > > > Thank you for reviewing my code! 
> > > > > > > > I tried to reach you on IRC. But somehow either you missed my message > > > > or I missed your reply. So I will reply by email instead. > > > > > > > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > > > > i.e. use vu_dispatch as the read handler, then we can re-use > > > > vu_message_read. And "removing the blocking recv from libvhost-user" > > > > isn't necessary because "the operation of poll() and ppoll() is not > > > > affected by the O_NONBLOCK flag" despite that we use > > > > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > > > > to make recv non-blocking. > > > > > > I'm not sure I understand. poll() just says whether the file descriptor > > > is readable. It does not say whether enough bytes are readable :). So > > > our callback will be invoked if there is 1 byte ready, but when we try > > > to read 20 bytes either it will block (without O_NONBLOCK) or return > > > only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that > > > code changes will be necessary. > > > > > > But please go ahead and send the next revision and I'll take a look. > > > > > > Stefan > > > > > > > > -- > > Best regards, > > Coiby > > >
Am 27.02.2020 um 11:28 hat Coiby Xu geschrieben: > > > we still need customized vu_message_read because libvhost-user assumes > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > this short read case. I will improve libvhost-user's vu_message_read > > > by making it keep reading from socket util getting enough bytes. I > > > assume short read is a rare case thus introduced performance penalty > > > would be negligible. > > > In any case, please make sure that we use the QIOChannel functions > > called from a coroutine in QEMU so that it will never block, but the > > coroutine can just yield while it's waiting for more bytes. > > But if I am not wrong, libvhost-user is supposed to be indepdent from > the main QEMU code. So it can't use the QIOChannel functions if we > simply modify exiting vu_message_read to address the short read issue. > In v3 & v4, I extended libvhost-user to allow vu_message_read to be > replaced by one which will depend on the main QEMU code. I'm not sure > which way is better. The way your latest patches have it, with a separate read function, works for me. You could probably change libvhost-user to reimplement the same functionality, and it might be an improvement for other users of the library, but it's also code duplication and doesn't provide more value in the context of the vhost-user export in QEMU. The point that's really important to me is just that we never block when we run inside QEMU because that would actually stall the guest. This means busy waiting in a tight loop until read() returns enough bytes is not acceptable in QEMU. Kevin > On Thu, Feb 27, 2020 at 6:02 PM Kevin Wolf <kwolf@redhat.com> wrote: > > > > Am 27.02.2020 um 10:53 hat Coiby Xu geschrieben: > > > Thank you for reminding me of this socket short read issue! 
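The "separate read function" approach Kevin accepts here can be sketched as a callback supplied at init time. All names below (VuSrv, read_msg_cb, vu_srv_init, vu_srv_dispatch) are hypothetical stand-ins, not the actual libvhost-user API; the point is only the pattern of letting QEMU substitute a coroutine-friendly QIOChannel reader while other library users keep a plain socket reader as the default.

```c
#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

typedef struct VuMsg {
    unsigned char data[12];     /* stand-in for a fixed-size VhostUserMsg */
} VuMsg;

/* The pluggable hook: return true once a full message has been read. */
typedef bool (*read_msg_cb)(int fd, VuMsg *msg);

typedef struct VuSrv {
    int fd;
    read_msg_cb read_msg;
} VuSrv;

/* Default reader: loop until the fixed-size message is complete. */
static bool default_read_msg(int fd, VuMsg *msg)
{
    size_t done = 0;

    while (done < sizeof(msg->data)) {
        ssize_t n = read(fd, msg->data + done, sizeof(msg->data) - done);
        if (n <= 0) {
            return false;       /* error or EOF mid-message */
        }
        done += n;
    }
    return true;
}

/* Init takes the callback; QEMU would pass one that reads through a
 * QIOChannel and yields in a coroutine instead of blocking. NULL keeps
 * the default blocking reader. */
static void vu_srv_init(VuSrv *srv, int fd, read_msg_cb cb)
{
    srv->fd = fd;
    srv->read_msg = cb ? cb : default_read_msg;
}

static bool vu_srv_dispatch(VuSrv *srv, VuMsg *msg)
{
    return srv->read_msg(srv->fd, msg);
}
```

Compared with reimplementing the loop inside the library, this keeps libvhost-user free of any dependency on QEMU's event loop while still letting QEMU meet Kevin's never-block requirement.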
It seems > > > we still need customized vu_message_read because libvhost-user assumes > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > this short read case. I will improve libvhost-user's vu_message_read > > > by making it keep reading from socket util getting enough bytes. I > > > assume short read is a rare case thus introduced performance penalty > > > would be negligible. > > > > In any case, please make sure that we use the QIOChannel functions > > called from a coroutine in QEMU so that it will never block, but the > > coroutine can just yield while it's waiting for more bytes. > > > > Kevin > > > > > On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > > > > > > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > > > > > Hi Stefan, > > > > > > > > > > Thank you for reviewing my code! > > > > > > > > > > I tried to reach you on IRC. But somehow either you missed my message > > > > > or I missed your reply. So I will reply by email instead. > > > > > > > > > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > > > > > i.e. use vu_dispatch as the read handler, then we can re-use > > > > > vu_message_read. And "removing the blocking recv from libvhost-user" > > > > > isn't necessary because "the operation of poll() and ppoll() is not > > > > > affected by the O_NONBLOCK flag" despite that we use > > > > > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > > > > > to make recv non-blocking. > > > > > > > > I'm not sure I understand. poll() just says whether the file descriptor > > > > is readable. It does not say whether enough bytes are readable :). So > > > > our callback will be invoked if there is 1 byte ready, but when we try > > > > to read 20 bytes either it will block (without O_NONBLOCK) or return > > > > only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that > > > > code changes will be necessary. 
> > > > > > > > But please go ahead and send the next revision and I'll take a look. > > > > > > > > Stefan > > > > > > > > > > > > -- > > > Best regards, > > > Coiby > > > > > > > > -- > Best regards, > Coiby >
Hi On Thu, Feb 27, 2020 at 11:55 AM Kevin Wolf <kwolf@redhat.com> wrote: > > Am 27.02.2020 um 11:28 hat Coiby Xu geschrieben: > > > > we still need customized vu_message_read because libvhost-user assumes > > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > > this short read case. I will improve libvhost-user's vu_message_read > > > > by making it keep reading from socket util getting enough bytes. I > > > > assume short read is a rare case thus introduced performance penalty > > > > would be negligible. > > > > > In any case, please make sure that we use the QIOChannel functions > > > called from a coroutine in QEMU so that it will never block, but the > > > coroutine can just yield while it's waiting for more bytes. > > > > But if I am not wrong, libvhost-user is supposed to be indepdent from > > the main QEMU code. So it can't use the QIOChannel functions if we > > simply modify exiting vu_message_read to address the short read issue. > > In v3 & v4, I extended libvhost-user to allow vu_message_read to be > > replaced by one which will depend on the main QEMU code. I'm not sure > > which way is better. > > The way your latest patches have it, with a separate read function, > works for me. Done right, I am not against it, fwiw > You could probably change libvhost-user to reimplement the same > functionality, and it might be an improvement for other users of the > library, but it's also code duplication and doesn't provide more value > in the context of the vhost-user export in QEMU. > > The point that's really important to me is just that we never block when > we run inside QEMU because that would actually stall the guest. This > means busy waiting in a tight loop until read() returns enough bytes is > not acceptable in QEMU. In the context of vhost-user, local unix sockets with short messages (do we have >1k messages?), I am not sure if this is really a problem. 
And isn't it possible to run libvhost-user in its own thread for this series? > > Kevin > > > On Thu, Feb 27, 2020 at 6:02 PM Kevin Wolf <kwolf@redhat.com> wrote: > > > > > > Am 27.02.2020 um 10:53 hat Coiby Xu geschrieben: > > > > Thank you for reminding me of this socket short read issue! It seems > > > > we still need customized vu_message_read because libvhost-user assumes > > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > > this short read case. I will improve libvhost-user's vu_message_read > > > > by making it keep reading from socket util getting enough bytes. I > > > > assume short read is a rare case thus introduced performance penalty > > > > would be negligible. > > > > > > In any case, please make sure that we use the QIOChannel functions > > > called from a coroutine in QEMU so that it will never block, but the > > > coroutine can just yield while it's waiting for more bytes. > > > > > > Kevin > > > > > > > On Thu, Feb 27, 2020 at 3:41 PM Stefan Hajnoczi <stefanha@redhat.com> wrote: > > > > > > > > > > On Wed, Feb 26, 2020 at 11:18:41PM +0800, Coiby Xu wrote: > > > > > > Hi Stefan, > > > > > > > > > > > > Thank you for reviewing my code! > > > > > > > > > > > > I tried to reach you on IRC. But somehow either you missed my message > > > > > > or I missed your reply. So I will reply by email instead. > > > > > > > > > > > > If we use qio_channel_set_aio_fd_handler to monitor G_IO_IN event, > > > > > > i.e. use vu_dispatch as the read handler, then we can re-use > > > > > > vu_message_read. And "removing the blocking recv from libvhost-user" > > > > > > isn't necessary because "the operation of poll() and ppoll() is not > > > > > > affected by the O_NONBLOCK flag" despite that we use > > > > > > qio_channel_set_blocking before calling qio_channel_set_aio_fd_handler > > > > > > to make recv non-blocking. > > > > > > > > > > I'm not sure I understand. poll() just says whether the file descriptor > > > > > is readable. 
It does not say whether enough bytes are readable :). So > > > > > our callback will be invoked if there is 1 byte ready, but when we try > > > > > to read 20 bytes either it will block (without O_NONBLOCK) or return > > > > > only 1 byte (with O_NONBLOCK). Neither case is okay, so I expect that > > > > > code changes will be necessary. > > > > > > > > > > But please go ahead and send the next revision and I'll take a look. > > > > > > > > > > Stefan > > > > > > > > > > > > > > > > -- > > > > Best regards, > > > > Coiby > > > > > > > > > > > > > -- > > Best regards, > > Coiby > > >
Am 27.02.2020 um 12:07 hat Marc-André Lureau geschrieben: > On Thu, Feb 27, 2020 at 11:55 AM Kevin Wolf <kwolf@redhat.com> wrote: > > Am 27.02.2020 um 11:28 hat Coiby Xu geschrieben: > > > > > we still need customized vu_message_read because libvhost-user assumes > > > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > > > this short read case. I will improve libvhost-user's vu_message_read > > > > > by making it keep reading from socket util getting enough bytes. I > > > > > assume short read is a rare case thus introduced performance penalty > > > > > would be negligible. > > > > > > > In any case, please make sure that we use the QIOChannel functions > > > > called from a coroutine in QEMU so that it will never block, but the > > > > coroutine can just yield while it's waiting for more bytes. > > > > > > But if I am not wrong, libvhost-user is supposed to be indepdent from > > > the main QEMU code. So it can't use the QIOChannel functions if we > > > simply modify exiting vu_message_read to address the short read issue. > > > In v3 & v4, I extended libvhost-user to allow vu_message_read to be > > > replaced by one which will depend on the main QEMU code. I'm not sure > > > which way is better. > > > > The way your latest patches have it, with a separate read function, > > works for me. > > Done right, I am not against it, fwiw > > > You could probably change libvhost-user to reimplement the same > > functionality, and it might be an improvement for other users of the > > library, but it's also code duplication and doesn't provide more value > > in the context of the vhost-user export in QEMU. > > > > The point that's really important to me is just that we never block when > > we run inside QEMU because that would actually stall the guest. This > > means busy waiting in a tight loop until read() returns enough bytes is > > not acceptable in QEMU. 
> > In the context of vhost-user, local unix sockets with short messages > (do we have >1k messages?), I am not sure if this is really a problem. I'm not sure how much of a problem it is in practice, and whether we can consider the vhost-user client trusted. But using QIOChannel from within a coroutine just avoids the problem completely, so it feels like a natural choice to just do that. > And isn't it possible to run libvhost-user in its own thread for this > series? You need to run the actual block I/O requests in the iothread where the block device happens to run. So if you move the message processing to a separate thread, you would have to communicate between threads for the actual request processing. Possible, but probably slower than necessary and certainly more complex. Kevin
On Thu, Feb 27, 2020 at 12:19:58PM +0100, Kevin Wolf wrote: > Am 27.02.2020 um 12:07 hat Marc-André Lureau geschrieben: > > On Thu, Feb 27, 2020 at 11:55 AM Kevin Wolf <kwolf@redhat.com> wrote: > > > Am 27.02.2020 um 11:28 hat Coiby Xu geschrieben: > > > > > > we still need customized vu_message_read because libvhost-user assumes > > > > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > > > > this short read case. I will improve libvhost-user's vu_message_read > > > > > > by making it keep reading from socket util getting enough bytes. I > > > > > > assume short read is a rare case thus introduced performance penalty > > > > > > would be negligible. > > > > > > > > > In any case, please make sure that we use the QIOChannel functions > > > > > called from a coroutine in QEMU so that it will never block, but the > > > > > coroutine can just yield while it's waiting for more bytes. > > > > > > > > But if I am not wrong, libvhost-user is supposed to be indepdent from > > > > the main QEMU code. So it can't use the QIOChannel functions if we > > > > simply modify exiting vu_message_read to address the short read issue. > > > > In v3 & v4, I extended libvhost-user to allow vu_message_read to be > > > > replaced by one which will depend on the main QEMU code. I'm not sure > > > > which way is better. > > > > > > The way your latest patches have it, with a separate read function, > > > works for me. > > > > Done right, I am not against it, fwiw > > > > > You could probably change libvhost-user to reimplement the same > > > functionality, and it might be an improvement for other users of the > > > library, but it's also code duplication and doesn't provide more value > > > in the context of the vhost-user export in QEMU. > > > > > > The point that's really important to me is just that we never block when > > > we run inside QEMU because that would actually stall the guest. 
This > > > means busy waiting in a tight loop until read() returns enough bytes is > > > not acceptable in QEMU. > > > > In the context of vhost-user, local unix sockets with short messages > > (do we have >1k messages?), I am not sure if this is really a problem. > > I'm not sure how much of a problem it is in practice, and whether we > can consider the vhost-user client trusted. But using QIOChannel from > within a coroutine just avoids the problem completely, so it feels like > a natural choice to just do that. It isn't clear to me that we have a consistent plan for how we intend libvhost-user to develop & what it is permitted to use. The information I see in the source code and in this thread is contradictory. For example, in the text quoted above: "libvhost-user is supposed to be indepdent from the main QEMU code." which did match my overall understanding too. At the top of libvhost-user.c there is a comment /* this code avoids GLib dependency */ but a few lines later it does #include "qemu/atomic.h" #include "qemu/osdep.h" #include "qemu/memfd.h" and in the Makefile we link it to much of QEMU util code: libvhost-user.a: $(libvhost-user-obj-y) $(util-obj-y) $(stub-obj-y) this in turn pulls in GLib code, and looking at symbols we can see over 100 GLib functions used: $ nm ./libvhost-user.a | grep 'U g_' | sort | uniq | wc -l 128 And over 200 QEMU source object files included: $ nm ./libvhost-user.a | grep '.o:' | sort | wc -l 224 So unless I'm missing something, we have lost any independence from both QEMU and GLib code that we might have had in the past. Note this also has licensing implications, as I expect this means that via the QEMU source objects it pulls in, libvhost-user.a has become a GPLv2-only combined work, not a GPLv2-or-later combined work. Regards, Daniel
Hi On Thu, Feb 27, 2020 at 12:39 PM Daniel P. Berrangé <berrange@redhat.com> wrote: > > On Thu, Feb 27, 2020 at 12:19:58PM +0100, Kevin Wolf wrote: > > Am 27.02.2020 um 12:07 hat Marc-André Lureau geschrieben: > > > On Thu, Feb 27, 2020 at 11:55 AM Kevin Wolf <kwolf@redhat.com> wrote: > > > > Am 27.02.2020 um 11:28 hat Coiby Xu geschrieben: > > > > > > > we still need customized vu_message_read because libvhost-user assumes > > > > > > > we will always get a full-size VhostUserMsg and hasn't taken care of > > > > > > > this short read case. I will improve libvhost-user's vu_message_read > > > > > > > by making it keep reading from socket util getting enough bytes. I > > > > > > > assume short read is a rare case thus introduced performance penalty > > > > > > > would be negligible. > > > > > > > > > > > In any case, please make sure that we use the QIOChannel functions > > > > > > called from a coroutine in QEMU so that it will never block, but the > > > > > > coroutine can just yield while it's waiting for more bytes. > > > > > > > > > > But if I am not wrong, libvhost-user is supposed to be indepdent from > > > > > the main QEMU code. So it can't use the QIOChannel functions if we > > > > > simply modify exiting vu_message_read to address the short read issue. > > > > > In v3 & v4, I extended libvhost-user to allow vu_message_read to be > > > > > replaced by one which will depend on the main QEMU code. I'm not sure > > > > > which way is better. > > > > > > > > The way your latest patches have it, with a separate read function, > > > > works for me. > > > > > > Done right, I am not against it, fwiw > > > > > > > You could probably change libvhost-user to reimplement the same > > > > functionality, and it might be an improvement for other users of the > > > > library, but it's also code duplication and doesn't provide more value > > > > in the context of the vhost-user export in QEMU. 
> > > > > > > > The point that's really important to me is just that we never block when > > > > we run inside QEMU because that would actually stall the guest. This > > > > means busy waiting in a tight loop until read() returns enough bytes is > > > > not acceptable in QEMU. > > > > > > In the context of vhost-user, local unix sockets with short messages > > > (do we have >1k messages?), I am not sure if this is really a problem. > > > > I'm not sure how much of a problem it is in practice, and whether we > > can consider the vhost-user client trusted. But using QIOChannel from > > within a coroutine just avoids the problem completely, so it feels like > > a natural choice to just do that. > > It isn't clear to me that we have a consitent plan for how we intend > libvhost-user to develop & what it is permitted to use. What information > I see in the source code and in this thread are contradictory. > > For example, in the text quoted above: > > "libvhost-user is supposed to be indepdent from the main QEMU code." > > which did match my overall understanding too. At the top of libvhost-user.c > there is a comment > > /* this code avoids GLib dependency */ > > but a few lines later it does > > #include "qemu/atomic.h" > #include "qemu/osdep.h" > #include "qemu/memfd.h" > > and in the Makefile we link it to much of QEMU util code: > > libvhost-user.a: $(libvhost-user-obj-y) $(util-obj-y) $(stub-obj-y) > > this in turn pulls in GLib code, and looking at symbols we can see > over 100 GLib functions used: > > $ nm ./libvhost-user.a | grep 'U g_' | sort | uniq | wc -l > 128 > > And over 200 QEMU source object files included: > > $ nm ./libvhost-user.a | grep '.o:' | sort | wc -l > 224 > > So unless I'm missing something, we have lost any independance from > both QEMU and GLib code that we might have had in the past. 
Yep, I think this is mostly due to commit 5f9ff1eff38 ("libvhost-user: Support tracking inflight I/O in shared memory"). It may not be that hard to bring back glib independence. Is it worth it though? > > Note this also has licensing implications, as I expect this means that > via the QEMU source objects it pulls in, libvhost-user.a has become > a GPLv2-only combined work, not a GPLv2-or-later combined work. > libvhost-user.c is GPLv2-or-later because tests/vhost-user-bridge.c was, and still is, as well. Do we need to change that? > Regards, > Daniel > -- > |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| > |: https://libvirt.org -o- https://fstop138.berrange.com :| > |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :| > >