[Bug 1066055] Re: Network performance regression with vde_switch

Message ID: 20121022111824.GA6916@amit.redhat.com
State: New

Commit Message

Amit Shah Oct. 22, 2012, 11:18 a.m. UTC
On (Tue) 16 Oct 2012 [09:48:09], Stefan Hajnoczi wrote:
> On Mon, Oct 15, 2012 at 09:46:06PM -0000, Edivaldo de Araujo Pereira wrote:
> > Hi Stefan,
> > 
> > Thank you very much for taking the time to help me, and excuse me for
> > not seeing your answer earlier...
> > 
> > I've run the procedure you pointed out to me, and the result is:
> > 
> > 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f is the first bad commit
> > commit 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> > Author: Amit Shah <amit.shah@redhat.com>
> > Date:   Tue Sep 25 00:05:15 2012 +0530
> > 
> >     virtio: Introduce virtqueue_get_avail_bytes()
> > 
> >     The current virtqueue_avail_bytes() is oddly named, and checks if a
> >     particular number of bytes are available in a vq.  A better API is to
> >     fetch the number of bytes available in the vq, and let the caller do
> >     what's interesting with the numbers.
> > 
> >     Introduce virtqueue_get_avail_bytes(), which returns the number of bytes
> >     for buffers marked for both, in as well as out.  virtqueue_avail_bytes()
> >     is made a wrapper over this new function.
> > 
> >     Signed-off-by: Amit Shah <amit.shah@redhat.com>
> >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > 
> > :040000 040000 1a58b06a228651cf844621d9ee2f49b525e36c93
> > e09ea66ce7f6874921670b6aeab5bea921a5227d M      hw
> > 
> > I tried to revert that patch in the latest version, but it obviously
> > didn't work; I'm trying to figure out the problem, but I don't know the
> > source code very well, so I think it's going to take some time. For now,
> > it's all I could do.
> 
> After git-bisect(1) completes it is good to sanity-check the result by
> manually testing 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f^ (the commit
> just before the bad commit) and 0d8d7690850eb0cf2b2b60933cf47669a6b6f18f
> (the bad commit).
> 
> This will verify that the commit indeed introduces the regression.  I
> suggest doing this just to be sure that you've found the bad commit.
> 
> Regarding this commit, I notice two things:
> 
> 1. We will now loop over all vring descriptors because we calculate the
>    total in/out length instead of returning early as soon as we see
>    there is enough space.  Maybe this makes a difference, although I'm a
>    little surprised you see such a huge regression.
> 
> > 2. The comparison semantics have changed from:
> 
>      (in_total += vring_desc_len(desc_pa, i)) >= in_bytes
> 
>    to:
> 
>      (in_bytes && in_bytes < in_total)
> 
>    Notice that virtqueue_avail_bytes() now returns 0 when in_bytes ==
>    in_total.  Previously, it would return 1.  Perhaps we are starving or
>    delaying I/O due to this comparison change.  You can easily change
>    '<' to '<=' to see if it fixes the issue.

Hi Edivaldo,

Can you try the following patch? It will confirm whether it's the
descriptor walk or the botched compare that's causing the regression.

Thanks,



		Amit

Comments

Edivaldo de Araujo Pereira Oct. 22, 2012, 1:50 p.m. UTC | #1
Dear Amit,

At Stefan's suggestion, I had already tested the modification in your patch, and it didn't work; but for confirmation I tested it once again, on the latest snapshot, with the same result: it didn't work, and the problem is still there.

I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.

Thank you very much.

Edivaldo

Stefan Hajnoczi Oct. 23, 2012, 12:55 p.m. UTC | #2
On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.

I've thought a little about how to approach this.  Amit, here's a brain
dump:

The simplest solution is to make virtqueue_avail_bytes() use the old
behavior of stopping early.

However, I wonder if we can actually *improve* performance of existing
code by changing virtio-net.c:virtio_net_receive().  The intuition is
that calling virtio_net_has_buffers() (internally calls
virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
because we're repeatedly traversing the descriptor chain.

We can get rid of this repetition.  A side-effect of this is that we no
longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
how:

The common case in virtio_net_receive() is that we have buffers and they
are large enough for the received packet.  So to optimize for this case:

1. Take the VirtQueueElement off the vring but don't increment
   last_avail_idx yet.  (This is essentially a "peek" operation.)

2. If there is an error or we drop the packet because the
   VirtQueueElement is too small, just bail out and we'll grab the same
   VirtQueueElement again next time.

3. When we've committed filling in this VirtQueueElement, increment
   last_avail_idx.  This is the point of no return.

Essentially we're splitting pop() into peek() and consume().  Peek()
grabs the VirtQueueElement but does not increment last_avail_idx.
Consume() simply increments last_avail_idx and maybe the EVENT_IDX
optimization stuff.
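
In code, the split would look roughly like this.  This is only a toy
sketch of the idea -- the types, names, and ring layout below are
invented for illustration and are not the real VirtQueue implementation:

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 8

typedef struct {
    int buf_len[RING_SIZE];   /* stand-in for the guest's receive buffers */
    unsigned avail_idx;       /* produced by the "guest" */
    unsigned last_avail_idx;  /* consumed by the "device" */
} ToyQueue;

/* peek(): look at the next element without consuming it */
static bool toyqueue_peek(ToyQueue *q, int *len)
{
    if (q->last_avail_idx == q->avail_idx) {
        return false;                              /* ring is empty */
    }
    *len = q->buf_len[q->last_avail_idx % RING_SIZE];
    return true;
}

/* consume(): commit to the element returned by the last peek() */
static void toyqueue_consume(ToyQueue *q)
{
    q->last_avail_idx++;                           /* point of no return */
}

static void toy_receive(ToyQueue *q, int pkt_len)
{
    int len;

    if (!toyqueue_peek(q, &len) || len < pkt_len) {
        /* error or dropped packet: the same element is seen again next time */
        return;
    }
    /* ... fill the buffer with the packet here ... */
    toyqueue_consume(q);
}

int main(void)
{
    ToyQueue q = { .buf_len = { 2048, 2048 }, .avail_idx = 2 };

    toy_receive(&q, 1500);   /* fits: element 0 is consumed */
    toy_receive(&q, 4000);   /* too big: element 1 stays on the ring */
    printf("last_avail_idx = %u\n", q.last_avail_idx);   /* prints 1 */
    return 0;
}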

Whether this will improve performance, I'm not sure.  Perhaps
virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
following up with virtqueue_pop() is very cheap already.  But the idea
here is to avoid the virtio_net_has_buffers() because we'll find out
soon enough when we try to pop :).

Another approach would be to drop virtio_net_has_buffers() but continue
to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
VirtIONet across virtio_net_receive() calls in the case where we drop
the packet.  I don't like this approach very much though because it gets
tricky when the guest modifies the vring memory, resets the virtio
device, etc across calls.

Stefan
Amit Shah Nov. 1, 2012, 9:19 a.m. UTC | #3
On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.
> 
> I've thought a little about how to approach this.  Amit, here's a brain
> dump:
> 
> The simplest solution is to make virtqueue_avail_bytes() use the old
> behavior of stopping early.
> 
> However, I wonder if we can actually *improve* performance of existing
> code by changing virtio-net.c:virtio_net_receive().  The intuition is
> that calling virtio_net_has_buffers() (internally calls
> virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> because we're repeatedly traversing the descriptor chain.
> 
> We can get rid of this repetition.  A side-effect of this is that we no
> longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> how:
> 
> The common case in virtio_net_receive() is that we have buffers and they
> are large enough for the received packet.  So to optimize for this case:
> 
> 1. Take the VirtQueueElement off the vring but don't increment
>    last_avail_idx yet.  (This is essentially a "peek" operation.)
> 
> 2. If there is an error or we drop the packet because the
>    VirtQueueElement is too small, just bail out and we'll grab the same
>    VirtQueueElement again next time.
> 
> 3. When we've committed filling in this VirtQueueElement, increment
>    last_avail_idx.  This is the point of no return.
> 
> Essentially we're splitting pop() into peek() and consume().  Peek()
> grabs the VirtQueueElement but does not increment last_avail_idx.
> Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> optimization stuff.
> 
> Whether this will improve performance, I'm not sure.  Perhaps
> virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> following up with virtqueue_pop() is very cheap already.  But the idea
> here is to avoid the virtio_net_has_buffers() because we'll find out
> soon enough when we try to pop :).

This sounds doable -- adding mst for comments.

> Another approach would be to drop virtio_net_has_buffers() but continue
> to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> VirtIONet across virtio_net_receive() calls in the case where we drop
> the packet.  I don't like this approach very much though because it gets
> tricky when the guest modifies the vring memory, resets the virtio
> device, etc across calls.

Right.

Also, save/load will become slightly complicated in both these
cases, but it might be worth it.

Michael, can you comment pls?


		Amit
Michael S. Tsirkin Nov. 1, 2012, 12:04 p.m. UTC | #4
On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.
> > 
> > I've thought a little about how to approach this.  Amit, here's a brain
> > dump:
> > 
> > The simplest solution is to make virtqueue_avail_bytes() use the old
> > behavior of stopping early.
> > 
> > However, I wonder if we can actually *improve* performance of existing
> > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > that calling virtio_net_has_buffers() (internally calls
> > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > because we're repeatedly traversing the descriptor chain.
> > 
> > We can get rid of this repetition.  A side-effect of this is that we no
> > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > how:
> > 
> > The common case in virtio_net_receive() is that we have buffers and they
> > are large enough for the received packet.  So to optimize for this case:
> > 
> > 1. Take the VirtQueueElement off the vring but don't increment
> >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > 
> > 2. If there is an error or we drop the packet because the
> >    VirtQueueElement is too small, just bail out and we'll grab the same
> >    VirtQueueElement again next time.
> > 
> > 3. When we've committed filling in this VirtQueueElement, increment
> >    last_avail_idx.  This is the point of no return.
> > 
> > Essentially we're splitting pop() into peek() and consume().  Peek()
> > grabs the VirtQueueElement but does not increment last_avail_idx.
> > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > optimization stuff.
> > 
> > Whether this will improve performance, I'm not sure.  Perhaps
> > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > following up with virtqueue_pop() is very cheap already.  But the idea
> > here is to avoid the virtio_net_has_buffers() because we'll find out
> > soon enough when we try to pop :).
> 
> This sounds doable -- adding mst for comments.
> 
> > Another approach would be to drop virtio_net_has_buffers() but continue
> > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > VirtIONet across virtio_net_receive() calls in the case where we drop
> > the packet.  I don't like this approach very much though because it gets
> > tricky when the guest modifies the vring memory, resets the virtio
> > device, etc across calls.
> 
> Right.
> 
> Also, save/load will become slightly complicated in both these
> cases, but it might be worth it.
> 
> Michael, can you comment pls?
> 
> 
> 		Amit

It will also complicate switching to/from vhost-net.

If this patch helps serial but degrades speed for -net, I'm inclined
to simply make serial and net use different codepaths.
Amit Shah Nov. 1, 2012, 3:12 p.m. UTC | #5
On (Thu) 01 Nov 2012 [14:04:11], Michael S. Tsirkin wrote:
> On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> > On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.
> > > 
> > > I've thought a little about how to approach this.  Amit, here's a brain
> > > dump:
> > > 
> > > The simplest solution is to make virtqueue_avail_bytes() use the old
> > > behavior of stopping early.
> > > 
> > > However, I wonder if we can actually *improve* performance of existing
> > > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > > that calling virtio_net_has_buffers() (internally calls
> > > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > > because we're repeatedly traversing the descriptor chain.
> > > 
> > > We can get rid of this repetition.  A side-effect of this is that we no
> > > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > > how:
> > > 
> > > The common case in virtio_net_receive() is that we have buffers and they
> > > are large enough for the received packet.  So to optimize for this case:
> > > 
> > > 1. Take the VirtQueueElement off the vring but don't increment
> > >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > > 
> > > 2. If there is an error or we drop the packet because the
> > >    VirtQueueElement is too small, just bail out and we'll grab the same
> > >    VirtQueueElement again next time.
> > > 
> > > 3. When we've committed filling in this VirtQueueElement, increment
> > >    last_avail_idx.  This is the point of no return.
> > > 
> > > Essentially we're splitting pop() into peek() and consume().  Peek()
> > > grabs the VirtQueueElement but does not increment last_avail_idx.
> > > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > > optimization stuff.
> > > 
> > > Whether this will improve performance, I'm not sure.  Perhaps
> > > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > > following up with virtqueue_pop() is very cheap already.  But the idea
> > > here is to avoid the virtio_net_has_buffers() because we'll find out
> > > soon enough when we try to pop :).
> > 
> > This sounds doable -- adding mst for comments.
> > 
> > > Another approach would be to drop virtio_net_has_buffers() but continue
> > > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > > VirtIONet across virtio_net_receive() calls in the case where we drop
> > > the packet.  I don't like this approach very much though because it gets
> > > tricky when the guest modifies the vring memory, resets the virtio
> > > device, etc across calls.
> > 
> > Right.
> > 
> > Also, save/load will become slightly complicated in both these
> > cases, but it might be worth it.
> > 
> > Michael, can you comment pls?
> > 
> 
> It will also complicate switching to/from vhost-net.
> 
> If this patch helps serial but degrades speed for -net, I'm inclined
> to simply make serial and net use different codepaths.

There's an opportunity for optimisation, let's not discount it so
quickly.  The reporter also pointed out there was ~ +20% difference
with tun/tap, so eliminating the call to virtqueue_avail_bytes() can
help overall.

		Amit
Michael S. Tsirkin Nov. 1, 2012, 3:50 p.m. UTC | #6
On Thu, Nov 01, 2012 at 08:42:50PM +0530, Amit Shah wrote:
> On (Thu) 01 Nov 2012 [14:04:11], Michael S. Tsirkin wrote:
> > On Thu, Nov 01, 2012 at 02:49:18PM +0530, Amit Shah wrote:
> > > On (Tue) 23 Oct 2012 [14:55:03], Stefan Hajnoczi wrote:
> > > > On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> > > > > I didn't take enough time to understand the code, so unfortunately I fear there is not much I could do to solve the problem, apart from trying your suggestions. But I'll try to spend a little more time on it, until we find a solution.
> > > > 
> > > > I've thought a little about how to approach this.  Amit, here's a brain
> > > > dump:
> > > > 
> > > > The simplest solution is to make virtqueue_avail_bytes() use the old
> > > > behavior of stopping early.
> > > > 
> > > > However, I wonder if we can actually *improve* performance of existing
> > > > code by changing virtio-net.c:virtio_net_receive().  The intuition is
> > > > that calling virtio_net_has_buffers() (internally calls
> > > > virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal
> > > > because we're repeatedly traversing the descriptor chain.
> > > > 
> > > > We can get rid of this repetition.  A side-effect of this is that we no
> > > > longer need to call virtqueue_avail_bytes() from virtio-net.c.  Here's
> > > > how:
> > > > 
> > > > The common case in virtio_net_receive() is that we have buffers and they
> > > > are large enough for the received packet.  So to optimize for this case:
> > > > 
> > > > 1. Take the VirtQueueElement off the vring but don't increment
> > > >    last_avail_idx yet.  (This is essentially a "peek" operation.)
> > > > 
> > > > 2. If there is an error or we drop the packet because the
> > > >    VirtQueueElement is too small, just bail out and we'll grab the same
> > > >    VirtQueueElement again next time.
> > > > 
> > > > 3. When we've committed filling in this VirtQueueElement, increment
> > > >    last_avail_idx.  This is the point of no return.
> > > > 
> > > > Essentially we're splitting pop() into peek() and consume().  Peek()
> > > > grabs the VirtQueueElement but does not increment last_avail_idx.
> > > > Consume() simply increments last_avail_idx and maybe the EVENT_IDX
> > > > optimization stuff.
> > > > 
> > > > Whether this will improve performance, I'm not sure.  Perhaps
> > > > virtio_net_has_buffers() pulls most descriptors into the CPU's cache and
> > > > following up with virtqueue_pop() is very cheap already.  But the idea
> > > > here is to avoid the virtio_net_has_buffers() because we'll find out
> > > > soon enough when we try to pop :).
> > > 
> > > This sounds doable -- adding mst for comments.
> > > 
> > > > Another approach would be to drop virtio_net_has_buffers() but continue
> > > > to use virtqueue_pop().  We'd keep the same VirtQueueElem stashed in
> > > > VirtIONet across virtio_net_receive() calls in the case where we drop
> > > > the packet.  I don't like this approach very much though because it gets
> > > > tricky when the guest modifies the vring memory, resets the virtio
> > > > device, etc across calls.
> > > 
> > > Right.
> > > 
> > > Also, save/load will become slightly complicated in both these
> > > cases, but it might be worth it.
> > > 
> > > Michael, can you comment pls?
> > > 
> > 
> > It will also complicate switching to/from vhost-net.
> > 
> > If this patch helps serial but degrades speed for -net, I'm inclined
> > to simply make serial and net use different codepaths.
> 
> There's an opportunity for optimisation, let's not discount it so
> quickly.  The reporter also pointed out there was ~ +20% difference
> with tun/tap,

Seems to be related to host power management.

> so eliminating the call to virtqueue_avail_bytes() can
> help overall.
> 
> 		Amit

Patch

diff --git a/hw/virtio.c b/hw/virtio.c
index 6821092..bb08ed8 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -406,8 +406,8 @@  int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
     unsigned int in_total, out_total;
 
     virtqueue_get_avail_bytes(vq, &in_total, &out_total);
-    if ((in_bytes && in_bytes < in_total)
-        || (out_bytes && out_bytes < out_total)) {
+    if ((in_bytes && in_bytes <= in_total)
+        || (out_bytes && out_bytes <= out_total)) {
         return 1;
     }
     return 0;
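
To make the '<' vs '<=' point above concrete, here is a standalone toy
model of the old and new predicates.  This is not QEMU code; the function
names and descriptor lengths are invented purely for illustration.  With a
request exactly equal to the available space, the old check succeeded
while the new one fails:

#include <stdio.h>

/* Old behaviour: accumulate and return 1 as soon as enough space is seen. */
static int old_avail(const unsigned int *desc_len, int n, unsigned int in_bytes)
{
    unsigned int in_total = 0;
    int i;

    for (i = 0; i < n; i++) {
        if ((in_total += desc_len[i]) >= in_bytes) {
            return 1;
        }
    }
    return 0;
}

/* New behaviour: walk every descriptor, then compare with '<' (not '<='). */
static int new_avail(const unsigned int *desc_len, int n, unsigned int in_bytes)
{
    unsigned int in_total = 0;
    int i;

    for (i = 0; i < n; i++) {
        in_total += desc_len[i];
    }
    return (in_bytes && in_bytes < in_total) ? 1 : 0;
}

int main(void)
{
    unsigned int desc_len[] = { 1500, 1500 };
    unsigned int want = 3000;   /* exactly the space available in the chain */

    /* prints "old: 1, new: 0" -- the guest's buffers now look unavailable */
    printf("old: %d, new: %d\n",
           old_avail(desc_len, 2, want), new_avail(desc_len, 2, want));
    return 0;
}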