
virtio-rng: Bump up quota value only when guest requests entropy

Message ID 1436520840-28742-1-git-send-email-pagupta@redhat.com
State New

Commit Message

Pankaj Gupta July 10, 2015, 9:34 a.m. UTC
A timer was added in virtio-rng to rate limit the
entropy. It used to trigger at regular intervals to
bump up the quota value. The values of the quota and
timer slice are decided based on the entropy source
rate in the host. This resulted in the timer firing even
when the quota was not exhausted at all, causing extra processing.

This patch arms the timer only when the guest requests
entropy. As soon as the first request for entropy comes
from the guest, we set the timer. The timer bumps up the
quota value when it fires.

Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
---
 hw/virtio/virtio-rng.c         | 15 ++++++++-------
 include/hw/virtio/virtio-rng.h |  1 +
 2 files changed, 9 insertions(+), 7 deletions(-)

Comments

Amit Shah July 13, 2015, 6:09 a.m. UTC | #1
On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
>    Timer was added in virtio-rng to rate limit the 
> entropy. It used to trigger at regular intervals to 
> bump up the quota value. The value of quota and timer 
> slice is decided based on entropy source rate in host. 

It doesn't necessarily depend on the source rate in the host - all we
want the quota+timer to do is to limit the amount of data a guest can
take from the host - to ensure one (potentially rogue) guest does not
use up all the entropy from the host.

> This resulted in triggring of timer even when quota 
> is not exhausted at all and resulting in extra processing.
> 
> This patch triggers timer only when guest requests for 
> entropy. As soon as first request from guest for entropy 
> comes we set the timer. Timer bumps up the quota value 
> when it gets triggered.

Can you say how you tested this?

Mainly interested in seeing the results in these cases:

* No quota/timer specified on command line
* Quota+timer specified on command line, and guest keeps asking host
  for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
  in the guest.
* Ensure quota restrictions are maintained, and we're not giving more
  data than configured.

For these tests, it's helpful to use the host's /dev/urandom as the
source, since that can give data faster to the guest than the default
/dev/random.  (Otherwise, if the host itself blocks on /dev/random,
the guest may not get entropy due to that reason vs it not getting
entropy due to rate-limiting.)
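For reference, a typical way to set this scenario up (the `max-bytes` and
`period` device properties match the quota/timer knobs under discussion; the
trailing "..." stands for whatever other guest options you normally use):

```shell
# Host: back the device with /dev/urandom and cap it at 1024 bytes
# per 1000 ms period.
qemu-system-x86_64 \
    -object rng-random,filename=/dev/urandom,id=rng0 \
    -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000 \
    ...

# Guest: pull entropy as fast as the device allows; interrupting dd
# makes it report the achieved rate, which should stay at or below
# max-bytes/period.
dd if=/dev/hwrng of=/dev/null
```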

I tested one scenario using the trace events.  With some quota and a
timer value specified on the cmdline, before patch, I get tons of
trace events before the guest is even up.  After applying the patch, I
don't get any trace events.  So that's progress!

I have one question:

> Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> ---
>  hw/virtio/virtio-rng.c         | 15 ++++++++-------
>  include/hw/virtio/virtio-rng.h |  1 +
>  2 files changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> index 22b1d87..8774a0c 100644
> --- a/hw/virtio/virtio-rng.c
> +++ b/hw/virtio/virtio-rng.c
> @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
>          return;
>      }
>  
> +    if (vrng->activate_timer) {
> +        timer_mod(vrng->rate_limit_timer,
> +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> +        vrng->activate_timer = false;
> +    }
> +
>      if (vrng->quota_remaining < 0) {
>          quota = 0;
>      } else {
> @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
>  
>      vrng->quota_remaining = vrng->conf.max_bytes;
>      virtio_rng_process(vrng);
> -    timer_mod(vrng->rate_limit_timer,
> -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> +    vrng->activate_timer = true;
>  }

We're processing an older request first, and then firing the timer.
What's the use of doing it this way?  Why even do this?

I know this is how the code was written originally, but since you've
looked at it, do you know why this is the way it is?

		Amit
Pankaj Gupta July 13, 2015, 6:53 a.m. UTC | #2
Hi Amit,

Thanks for the review.

> 
> On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> >    Timer was added in virtio-rng to rate limit the
> > entropy. It used to trigger at regular intervals to
> > bump up the quota value. The value of quota and timer
> > slice is decided based on entropy source rate in host.
> 
> It doesn't necessarily depnd on the source rate in the host - all we
> want the quota+timer to do is to limit the amount of data a guest can
> take from the host - to ensure one (potentially rogue) guest does not
> use up all the entropy from the host.

Sorry for not being clear on this. By rate limit I meant the same;
I just used a broader term.

> 
> > This resulted in triggring of timer even when quota
> > is not exhausted at all and resulting in extra processing.
> > 
> > This patch triggers timer only when guest requests for
> > entropy. As soon as first request from guest for entropy
> > comes we set the timer. Timer bumps up the quota value
> > when it gets triggered.
> 
> Can you say how you tested this?
> 
> Mainly interested in seeing the results in these cases:
> 
> * No quota/timer specified on command line
    Tested this scenario. I am setting the timer when the first request comes,
    so the timer fires after (1 << 16) ms.

> * Quota+timer specified on command line, and guest keeps asking host
>   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
>   in the guest.

    I did not do 'dd if=/dev/hwrng of=/dev/null'.
    I did 'cat /dev/hwrng' and 'cat /dev/random' instead.

> * Ensure quota restrictions are maintained, and we're not giving more
>   data than configured.
    Ensured. We give at most the requested amount of data.
> 
> For these tests, it's helpful to use the host's /dev/urandom as the
> source, since that can give data faster to the guest than the default
> /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> the guest may not get entropy due to that reason vs it not getting
> entropy due to rate-limiting.)

  Agree.
  Will test this as well.

> 
> I tested one scenario using the trace events.  With some quota and a
> timer value specified on the cmdline, before patch, I get tons of
> trace events before the guest is even up.  After applying the patch, I
> don't get any trace events.  So that's progress!

Thanks.
> 
> I have one question:
> 
> > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > ---
> >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> >  include/hw/virtio/virtio-rng.h |  1 +
> >  2 files changed, 9 insertions(+), 7 deletions(-)
> > 
> > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > index 22b1d87..8774a0c 100644
> > --- a/hw/virtio/virtio-rng.c
> > +++ b/hw/virtio/virtio-rng.c
> > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> >          return;
> >      }
> >  
> > +    if (vrng->activate_timer) {
> > +        timer_mod(vrng->rate_limit_timer,
> > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > vrng->conf.period_ms);
> > +        vrng->activate_timer = false;
> > +    }
> > +
> >      if (vrng->quota_remaining < 0) {
> >          quota = 0;
> >      } else {
> > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> >  
> >      vrng->quota_remaining = vrng->conf.max_bytes;
> >      virtio_rng_process(vrng);
> > -    timer_mod(vrng->rate_limit_timer,
> > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > vrng->conf.period_ms);
> > +    vrng->activate_timer = true;
> >  }
> 
> We're processing an older request first, and then firing the timer.
> What's the use of doing it this way?  Why even do this?

I also had this query. If we don't call this after resetting 'vrng->quota_remaining',
further requests do not work. It looks to me like a limitation in the earlier code:
when 'vrng->quota_remaining' goes to <= 0, a self-request is needed to reset things.

I will try to find the answer.

> 
> I know this is how the code was written originally, but since you've
> looked at it, do you know why this is the way it is?
  No
> 
>                 Amit
> 
>
Amit Shah July 13, 2015, 7:34 a.m. UTC | #3
On (Mon) 13 Jul 2015 [02:53:36], Pankaj Gupta wrote:
> Hi Amit,
> 
> Thanks for the review.
> 
> > 
> > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > >    Timer was added in virtio-rng to rate limit the
> > > entropy. It used to trigger at regular intervals to
> > > bump up the quota value. The value of quota and timer
> > > slice is decided based on entropy source rate in host.
> > 
> > It doesn't necessarily depnd on the source rate in the host - all we
> > want the quota+timer to do is to limit the amount of data a guest can
> > take from the host - to ensure one (potentially rogue) guest does not
> > use up all the entropy from the host.
> 
> Sorry! for not being clear on this. By rate limit I meant same.
> I used a broader term.

My comment was about the 'value of quota and timer slice is decided based
on entropy source rate in host' - admins will usually not decide based
on what sources the host has - they will typically decide how much a
guest is supposed to consume.

> > > This resulted in triggring of timer even when quota
> > > is not exhausted at all and resulting in extra processing.
> > > 
> > > This patch triggers timer only when guest requests for
> > > entropy. As soon as first request from guest for entropy
> > > comes we set the timer. Timer bumps up the quota value
> > > when it gets triggered.
> > 
> > Can you say how you tested this?
> > 
> > Mainly interested in seeing the results in these cases:
> > 
> > * No quota/timer specified on command line
>     Tested this scenario. I am setting timer when first request comes.
>     So, timer gets fired after (1 << 16) ms time. 

But in case a quota or a timer isn't specified, the timer shouldn't be
armed in the first place.

> > * Quota+timer specified on command line, and guest keeps asking host
> >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> >   in the guest.
> 
>     I did not do  'dd if=/dev/hwrng of=/dev/null'.
>     Did cat '/dev/hwrng' && '/dev/random'

OK - that's similar.  I like dd because when dd is stopped, it gives
the rate at which data was received, so it helps in seeing if we're
getting a higher rate than what was specified on the cmdline.

> > * Ensure quota restrictions are maintained, and we're not giving more
> >   data than configured.
>     Ensured. We are either giving < = requested data
> > 
> > For these tests, it's helpful to use the host's /dev/urandom as the
> > source, since that can give data faster to the guest than the default
> > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > the guest may not get entropy due to that reason vs it not getting
> > entropy due to rate-limiting.)
> 
>   Agree.
>   Will test this as well.
> 
> > 
> > I tested one scenario using the trace events.  With some quota and a
> > timer value specified on the cmdline, before patch, I get tons of
> > trace events before the guest is even up.  After applying the patch, I
> > don't get any trace events.  So that's progress!
> 
> Thanks.
> > 
> > I have one question:
> > 
> > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > ---
> > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > >  include/hw/virtio/virtio-rng.h |  1 +
> > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > index 22b1d87..8774a0c 100644
> > > --- a/hw/virtio/virtio-rng.c
> > > +++ b/hw/virtio/virtio-rng.c
> > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > >          return;
> > >      }
> > >  
> > > +    if (vrng->activate_timer) {
> > > +        timer_mod(vrng->rate_limit_timer,
> > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > vrng->conf.period_ms);
> > > +        vrng->activate_timer = false;
> > > +    }
> > > +
> > >      if (vrng->quota_remaining < 0) {
> > >          quota = 0;
> > >      } else {
> > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > >  
> > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > >      virtio_rng_process(vrng);
> > > -    timer_mod(vrng->rate_limit_timer,
> > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > vrng->conf.period_ms);
> > > +    vrng->activate_timer = true;
> > >  }
> > 
> > We're processing an older request first, and then firing the timer.
> > What's the use of doing it this way?  Why even do this?
> 
> I also had this query. If we don't call this after resetting 'vrng->quota_remaining' 
> further requests does not work. It looks to me some limitation in earlier code when 
> 'vrng->quota_remaining' goes to < = 0. A self request is needed to reset things.
> 
> I will try to find the answer.

OK so I actually read through the thing, and this is useful for such a
scenario:

assume our rate-limit is at 4KB/s.

* guest queues up multiple requests, say 4KB, 8KB, 12KB.
* we will serve the first request in the queue, which is 4KB.
* then, check_rate_limit() is triggered, and we serve the 2nd request.
* since the 2nd request is for 8KB, but we can only give 4KB in 1
  second, we only give the guest 4KB.  We then re-arm the timer so
  that we can get to the next request in the list.  Without this
  re-arming, the already-queued request will not get attention (till
  the next time the guest writes something to the queue).
* we then serve the 3rd request for 12KB, again with 4KB of data.

One thing to observe here is that we just service the minimum data we
can without waiting to service the entire request (i.e. we give the
guest 4KB of data, and not 8 or 12 KB).  We could do that by waiting
for the timer to expire, and then servicing the entire request.
This current way is simpler, and better.  If the guest isn't happy to
receive just 4KB when it asked for 12, it can ask again.

That also saves us the trouble with live migration: if we hold onto a
buffer, that becomes state, and we'll have to migrate it as well.
This current model saves us from doing that.

And now that I type all this, I recall having thought about these
things initially.  Don't know if I wrote it up somewhere, or if there
are email conversations on this subject...

So now with your changes, here is what we can do: instead of just
activating the timer in check_rate_limit(), we can issue a
get_request_size() call.  If the return value is > 0, it means we have
a buffer queued up by the guest, and we can then call
virtio_rng_process() and set activated to true.  Else, no need to call
virtio_rng_process at all, and the job for the timer is done, since
there are no more outstanding requests from the guest.
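A minimal sketch of what that could look like (function names follow the
patch; the struct and the get_request_size() stub are illustrative stand-ins,
since outside QEMU there is no virtqueue to query):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the QEMU state the patch touches. */
struct vrng_state {
    size_t quota_remaining;
    size_t max_bytes;
    bool   activate_timer;
    size_t pending_request;    /* bytes queued by the guest; 0 if none */
};

/* Stub for the virtqueue query: how much is the guest asking for? */
static size_t get_request_size(struct vrng_state *vrng)
{
    return vrng->pending_request;
}

/* Simplified model of virtio_rng_process(): serve within the quota. */
static void virtio_rng_process(struct vrng_state *vrng)
{
    size_t quota = vrng->quota_remaining;
    size_t give = vrng->pending_request < quota ? vrng->pending_request
                                                : quota;
    vrng->pending_request -= give;
    vrng->quota_remaining -= give;
}

/* Suggested shape of check_rate_limit(): refill the quota, but only
 * process and request re-arming when a guest buffer is outstanding. */
static void check_rate_limit(struct vrng_state *vrng)
{
    vrng->quota_remaining = vrng->max_bytes;
    if (get_request_size(vrng) > 0) {
        virtio_rng_process(vrng);
        vrng->activate_timer = true;
    }
    /* else: nothing queued, so the timer stays idle until the guest asks. */
}
```

With no request pending, check_rate_limit() leaves activate_timer false, so
the device stays quiet until the guest's next request arms the timer again.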

Makes sense?

		Amit
Michael S. Tsirkin July 13, 2015, 7:55 a.m. UTC | #4
On Mon, Jul 13, 2015 at 01:04:45PM +0530, Amit Shah wrote:
> On (Mon) 13 Jul 2015 [02:53:36], Pankaj Gupta wrote:
> > Hi Amit,
> > 
> > Thanks for the review.
> > 
> > > 
> > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > >    Timer was added in virtio-rng to rate limit the
> > > > entropy. It used to trigger at regular intervals to
> > > > bump up the quota value. The value of quota and timer
> > > > slice is decided based on entropy source rate in host.
> > > 
> > > It doesn't necessarily depnd on the source rate in the host - all we
> > > want the quota+timer to do is to limit the amount of data a guest can
> > > take from the host - to ensure one (potentially rogue) guest does not
> > > use up all the entropy from the host.
> > 
> > Sorry! for not being clear on this. By rate limit I meant same.
> > I used a broader term.
> 
> My comment was to the 'value of quota and timer slice is decided based
> on entropy source rate in host' - admins will usually not decide based
> on what sources the host has - they will typically decide how much a
> guest is supposed to consume.
> 
> > > > This resulted in triggring of timer even when quota
> > > > is not exhausted at all and resulting in extra processing.
> > > > 
> > > > This patch triggers timer only when guest requests for
> > > > entropy. As soon as first request from guest for entropy
> > > > comes we set the timer. Timer bumps up the quota value
> > > > when it gets triggered.
> > > 
> > > Can you say how you tested this?
> > > 
> > > Mainly interested in seeing the results in these cases:
> > > 
> > > * No quota/timer specified on command line
> >     Tested this scenario. I am setting timer when first request comes.
> >     So, timer gets fired after (1 << 16) ms time. 
> 
> But in case a quota or a timer isn't specified, the timer shouldn't be
> armed in the first place.
> 
> > > * Quota+timer specified on command line, and guest keeps asking host
> > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > >   in the guest.
> > 
> >     I did not do  'dd if=/dev/hwrng of=/dev/null'.
> >     Did cat '/dev/hwrng' && '/dev/random'
> 
> OK - that's similar.  I like dd because when dd is stopped, it gives
> the rate at which data was received, so it helps in seeing if we're
> getting more rate than what was specified on the cmdline.
> 
> > > * Ensure quota restrictions are maintained, and we're not giving more
> > >   data than configured.
> >     Ensured. We are either giving < = requested data
> > > 
> > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > source, since that can give data faster to the guest than the default
> > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > the guest may not get entropy due to that reason vs it not getting
> > > entropy due to rate-limiting.)
> > 
> >   Agree.
> >   Will test this as well.
> > 
> > > 
> > > I tested one scenario using the trace events.  With some quota and a
> > > timer value specified on the cmdline, before patch, I get tons of
> > > trace events before the guest is even up.  After applying the patch, I
> > > don't get any trace events.  So that's progress!
> > 
> > Thanks.
> > > 
> > > I have one question:
> > > 
> > > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > > ---
> > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > index 22b1d87..8774a0c 100644
> > > > --- a/hw/virtio/virtio-rng.c
> > > > +++ b/hw/virtio/virtio-rng.c
> > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > >          return;
> > > >      }
> > > >  
> > > > +    if (vrng->activate_timer) {
> > > > +        timer_mod(vrng->rate_limit_timer,
> > > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > > vrng->conf.period_ms);
> > > > +        vrng->activate_timer = false;
> > > > +    }
> > > > +
> > > >      if (vrng->quota_remaining < 0) {
> > > >          quota = 0;
> > > >      } else {
> > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > >  
> > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > >      virtio_rng_process(vrng);
> > > > -    timer_mod(vrng->rate_limit_timer,
> > > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > > vrng->conf.period_ms);
> > > > +    vrng->activate_timer = true;
> > > >  }
> > > 
> > > We're processing an older request first, and then firing the timer.
> > > What's the use of doing it this way?  Why even do this?
> > 
> > I also had this query. If we don't call this after resetting 'vrng->quota_remaining' 
> > further requests does not work. It looks to me some limitation in earlier code when 
> > 'vrng->quota_remaining' goes to < = 0. A self request is needed to reset things.
> > 
> > I will try to find the answer.
> 
> OK so I actually read through the thing, and this is useful for such a
> scenario:
> 
> assume our rate-limit is at 4KB/s.
> 
> * guest queues up multiple requests, say 4KB, 8KB, 12KB.
> * we will serve the first request in the queue, which is 4KB.
> * then, check_rate_limit() is triggered, and we serve the 2nd request.
> * since the 2nd request is for 8KB, but we can only give 4KB in 1
>   second, we only give the guest 4KB.  We then re-arm the timer so
>   that we can get to the next request in the list.  Without this
>   re-arming, the already-queued request will not get attention (till
>   the next time the guest writes something to the queue.
> * we then serve the 3rd request for 12KB, again with 4KB of data.
> 
> One thing to observe here is that we just service the minimum data we
> can without waiting to service the entire request (i.e. we give the
> guest 4KB of data, and not 8 or 12 KB.  We could do that by waiting
> for the timer to expire, and then servicing the entire request.
> This current way is simpler, and better.  If the guest isn't happy to
> receive just 4KB when it asked for 12, it can ask again.
> 
> That also saves us the trouble with live migration: if we hold onto a
> buffer, that becomes state, and we'll have to migrate it as well.
> This current model saves us from doing that.

Timers hold state and have to be migrated; that seems to be missing
here.


> And now that I type all this, I recall having thought about these
> things initially.  Don't know if I wrote it up somewhere, or if there
> are email conversations on this subject...
> 
> So now with your changes, here is what we can do: instead of just
> activating the timer in check_rate_limit(), we can issue a
> get_request_size() call.  If the return value is > 0, it means we have
> a buffer queued up by the guest, and we can then call
> virtio_rng_process() and set activated to true.  Else, no need to call
> virtio_rng_process at all, and the job for the timer is done, since
> there are no more outstanding requests from the guest.
> 
> Makes sense?
> 
> 		Amit
Michael S. Tsirkin July 13, 2015, 7:57 a.m. UTC | #5
On Fri, Jul 10, 2015 at 03:04:00PM +0530, Pankaj Gupta wrote:
> @@ -196,13 +201,9 @@ static void virtio_rng_device_realize(DeviceState *dev, Error **errp)
>  
>      vrng->vq = virtio_add_queue(vdev, 8, handle_input);
>      vrng->quota_remaining = vrng->conf.max_bytes;
> -
>      vrng->rate_limit_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
>                                                 check_rate_limit, vrng);

BTW I'm not sure QEMU_CLOCK_VIRTUAL is right here.
We are trying to limit the rate at the host.
Does it matter that the guest is not running?
Pankaj Gupta July 13, 2015, 8:01 a.m. UTC | #6
> > Hi Amit,
> > 
> > Thanks for the review.
> > 
> > > 
> > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > >    Timer was added in virtio-rng to rate limit the
> > > > entropy. It used to trigger at regular intervals to
> > > > bump up the quota value. The value of quota and timer
> > > > slice is decided based on entropy source rate in host.
> > > 
> > > It doesn't necessarily depnd on the source rate in the host - all we
> > > want the quota+timer to do is to limit the amount of data a guest can
> > > take from the host - to ensure one (potentially rogue) guest does not
> > > use up all the entropy from the host.
> > 
> > Sorry! for not being clear on this. By rate limit I meant same.
> > I used a broader term.
> 
> My comment was to the 'value of quota and timer slice is decided based
> on entropy source rate in host' - admins will usually not decide based
> on what sources the host has - they will typically decide how much a
> guest is supposed to consume.

o.k.
> 
> > > > This resulted in triggring of timer even when quota
> > > > is not exhausted at all and resulting in extra processing.
> > > > 
> > > > This patch triggers timer only when guest requests for
> > > > entropy. As soon as first request from guest for entropy
> > > > comes we set the timer. Timer bumps up the quota value
> > > > when it gets triggered.
> > > 
> > > Can you say how you tested this?
> > > 
> > > Mainly interested in seeing the results in these cases:
> > > 
> > > * No quota/timer specified on command line
> >     Tested this scenario. I am setting timer when first request comes.
> >     So, timer gets fired after (1 << 16) ms time.
> 
> But in case a quota or a timer isn't specified, the timer shouldn't be
> armed in the first place.

With my current logic, that would prevent the quota from refilling once the
timeslice/quota expires. We already had default values for the timeslice/quota.

If we go ahead and remove the default values, we need some number to do the check,
separate from the check for a user-provided number, because the user can also use the
same number, and if it is negative it will fail the user value validation check.

If we have to think about all this, there will be some more changes. So keeping the
default timeslice timer was the simplest option and does not have a big impact; the timer fires every (1 << 16) ms.
> 
> > > * Quota+timer specified on command line, and guest keeps asking host
> > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > >   in the guest.
> > 
> >     I did not do  'dd if=/dev/hwrng of=/dev/null'.
> >     Did cat '/dev/hwrng' && '/dev/random'
> 
> OK - that's similar.  I like dd because when dd is stopped, it gives
> the rate at which data was received, so it helps in seeing if we're
> getting more rate than what was specified on the cmdline.

Sure, will do this.
> 
> > > * Ensure quota restrictions are maintained, and we're not giving more
> > >   data than configured.
> >     Ensured. We are either giving < = requested data
> > > 
> > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > source, since that can give data faster to the guest than the default
> > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > the guest may not get entropy due to that reason vs it not getting
> > > entropy due to rate-limiting.)
> > 
> >   Agree.
> >   Will test this as well.
> > 
> > > 
> > > I tested one scenario using the trace events.  With some quota and a
> > > timer value specified on the cmdline, before patch, I get tons of
> > > trace events before the guest is even up.  After applying the patch, I
> > > don't get any trace events.  So that's progress!
> > 
> > Thanks.
> > > 
> > > I have one question:
> > > 
> > > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > > ---
> > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > index 22b1d87..8774a0c 100644
> > > > --- a/hw/virtio/virtio-rng.c
> > > > +++ b/hw/virtio/virtio-rng.c
> > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > >          return;
> > > >      }
> > > >  
> > > > +    if (vrng->activate_timer) {
> > > > +        timer_mod(vrng->rate_limit_timer,
> > > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > > vrng->conf.period_ms);
> > > > +        vrng->activate_timer = false;
> > > > +    }
> > > > +
> > > >      if (vrng->quota_remaining < 0) {
> > > >          quota = 0;
> > > >      } else {
> > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > >  
> > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > >      virtio_rng_process(vrng);
> > > > -    timer_mod(vrng->rate_limit_timer,
> > > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) +
> > > > vrng->conf.period_ms);
> > > > +    vrng->activate_timer = true;
> > > >  }
> > > 
> > > We're processing an older request first, and then firing the timer.
> > > What's the use of doing it this way?  Why even do this?
> > 
> > I also had this query. If we don't call this after resetting
> > 'vrng->quota_remaining'
> > further requests does not work. It looks to me some limitation in earlier
> > code when
> > 'vrng->quota_remaining' goes to < = 0. A self request is needed to reset
> > things.
> > 
> > I will try to find the answer.
> 
> OK so I actually read through the thing, and this is useful for such a
> scenario:
> 
> assume our rate-limit is at 4KB/s.
> 
> * guest queues up multiple requests, say 4KB, 8KB, 12KB.
> * we will serve the first request in the queue, which is 4KB.
> * then, check_rate_limit() is triggered, and we serve the 2nd request.
> * since the 2nd request is for 8KB, but we can only give 4KB in 1
>   second, we only give the guest 4KB.  We then re-arm the timer so
>   that we can get to the next request in the list.  Without this
>   re-arming, the already-queued request will not get attention (till
>   the next time the guest writes something to the queue.
> * we then serve the 3rd request for 12KB, again with 4KB of data.
> 
> One thing to observe here is that we just service the minimum data we
> can without waiting to service the entire request (i.e. we give the
> guest 4KB of data, and not 8 or 12 KB.  We could do that by waiting
> for the timer to expire, and then servicing the entire request.
> This current way is simpler, and better.  If the guest isn't happy to
> receive just 4KB when it asked for 12, it can ask again.
> 
> That also saves us the trouble with live migration: if we hold onto a
> buffer, that becomes state, and we'll have to migrate it as well.
> This current model saves us from doing that.
> 
> And now that I type all this, I recall having thought about these
> things initially.  Don't know if I wrote it up somewhere, or if there
> are email conversations on this subject...
> 
> So now with your changes, here is what we can do: instead of just
> activating the timer in check_rate_limit(), we can issue a
> get_request_size() call.  If the return value is > 0, it means we have
> a buffer queued up by the guest, and we can then call 

> virtio_rng_process() and set activated to true.  Else, no need to call
> virtio_rng_process at all, and the job for the timer is done, since
> there are no more outstanding requests from the guest.

The only thing which stopped me was:

I did not want to call 'get_request_size()' twice (duplicate code),
and I did not want to change 'virtio_rng_process()'.

Even if I provide the size in virtio_rng_process(), this interface is used
in multiple places.

If you are o.k. with it, I can call 'get_request_size()' in 'check_rate_limit()' and avoid
re-arming the timer if there is no request. But I agree this will be more optimised.

> 
> Makes sense?
yes. :)
> 
>                 Amit
>
Pankaj Gupta July 13, 2015, 8:06 a.m. UTC | #7
> On Fri, Jul 10, 2015 at 03:04:00PM +0530, Pankaj Gupta wrote:
> > @@ -196,13 +201,9 @@ static void virtio_rng_device_realize(DeviceState
> > *dev, Error **errp)
> >  
> >      vrng->vq = virtio_add_queue(vdev, 8, handle_input);
> >      vrng->quota_remaining = vrng->conf.max_bytes;
> > -
> >      vrng->rate_limit_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
> >                                                 check_rate_limit, vrng);
> 
> BTW I'm not sure QEMU_CLOCK_VIRTUAL is right here.
> We are trying to limit the rate at host.
> Does it matter that guest is not running?

I think now we will be serving entropy requests only when the guest sends them.
In any case, the guest should be up and running.
> 
> --
> MST
> 
>
Amit Shah July 13, 2015, 8:47 a.m. UTC | #8
On (Mon) 13 Jul 2015 [04:01:01], Pankaj Gupta wrote:
> 
> > > Hi Amit,
> > > 
> > > Thanks for the review.
> > > 
> > > > 
> > > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > > >    Timer was added in virtio-rng to rate limit the
> > > > > entropy. It used to trigger at regular intervals to
> > > > > bump up the quota value. The value of quota and timer
> > > > > slice is decided based on entropy source rate in host.
> > > > 
> > > > It doesn't necessarily depnd on the source rate in the host - all we
> > > > want the quota+timer to do is to limit the amount of data a guest can
> > > > take from the host - to ensure one (potentially rogue) guest does not
> > > > use up all the entropy from the host.
> > > 
> > > Sorry for not being clear on this. By rate limit I meant the same thing;
> > > I just used a broader term.
> > 
> > My comment was to the 'value of quota and timer slice is decided based
> > on entropy source rate in host' - admins will usually not decide based
> > on what sources the host has - they will typically decide how much a
> > guest is supposed to consume.
> 
> o.k.
> > 
> > > > > This resulted in triggering of the timer even when quota
> > > > > is not exhausted at all and resulting in extra processing.
> > > > > 
> > > > > This patch triggers timer only when guest requests for
> > > > > entropy. As soon as first request from guest for entropy
> > > > > comes we set the timer. Timer bumps up the quota value
> > > > > when it gets triggered.
> > > > 
> > > > Can you say how you tested this?
> > > > 
> > > > Mainly interested in seeing the results in these cases:
> > > > 
> > > > * No quota/timer specified on command line
> > >     Tested this scenario. I am setting the timer when the first request comes.
> > >     So, the timer gets fired after (1 << 16) ms.
> > 
> > But in case a quota or a timer isn't specified, the timer shouldn't be
> > armed in the first place.
> 
> In my current logic. That would avoid quota to refill if timeslice/quota expires once.
> We already had default values of timeslice/quota. 
> 
> If we go ahead to remove default values we need some number to do the check. Separate
> that from check for user provided number because user can also use same number and if it is 
> -ve it will fail user value validation check.
> 
> If we have to think about all this, there will be some more changes. So, no time slice default timer
> was simpler of all and not have big impact. timer is firing in (1 << 16) ms.

Yes, other changes are fine too (in a different patch/series).

> > > > * Quota+timer specified on command line, and guest keeps asking host
> > > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > > >   in the guest.
> > > 
> > >     I did not do  'dd if=/dev/hwrng of=/dev/null'.
> > >     Did cat '/dev/hwrng' && '/dev/random'
> > 
> > OK - that's similar.  I like dd because when dd is stopped, it gives
> > the rate at which data was received, so it helps in seeing if we're
> > getting more rate than what was specified on the cmdline.
> 
> sure. Will do this.
> > 
> > > > * Ensure quota restrictions are maintained, and we're not giving more
> > > >   data than configured.
> > >     Ensured. We are giving at most the requested data.
> > > > 
> > > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > > source, since that can give data faster to the guest than the default
> > > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > > the guest may not get entropy due to that reason vs it not getting
> > > > entropy due to rate-limiting.)
> > > 
> > >   Agree.
> > >   Will test this as well.
> > > 
> > > > 
> > > > I tested one scenario using the trace events.  With some quota and a
> > > > timer value specified on the cmdline, before patch, I get tons of
> > > > trace events before the guest is even up.  After applying the patch, I
> > > > don't get any trace events.  So that's progress!
> > > 
> > > Thanks.
> > > > 
> > > > I have one question:
> > > > 
> > > > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > > > ---
> > > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > > > 
> > > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > > index 22b1d87..8774a0c 100644
> > > > > --- a/hw/virtio/virtio-rng.c
> > > > > +++ b/hw/virtio/virtio-rng.c
> > > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > > >          return;
> > > > >      }
> > > > >  
> > > > > +    if (vrng->activate_timer) {
> > > > > +        timer_mod(vrng->rate_limit_timer,
> > > > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > > +        vrng->activate_timer = false;
> > > > > +    }
> > > > > +
> > > > >      if (vrng->quota_remaining < 0) {
> > > > >          quota = 0;
> > > > >      } else {
> > > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > > >  
> > > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > > >      virtio_rng_process(vrng);
> > > > > -    timer_mod(vrng->rate_limit_timer,
> > > > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > > +    vrng->activate_timer = true;
> > > > >  }
> > > > 
> > > > We're processing an older request first, and then firing the timer.
> > > > What's the use of doing it this way?  Why even do this?
> > > 
> > > I also had this query. If we don't call this after resetting
> > > 'vrng->quota_remaining', further requests do not work. It looks to me like
> > > a limitation in the earlier code when 'vrng->quota_remaining' goes to <= 0.
> > > A self-request is needed to reset things.
> > > 
> > > I will try to find the answer.
> > 
> > OK so I actually read through the thing, and this is useful for such a
> > scenario:
> > 
> > assume our rate-limit is at 4KB/s.
> > 
> > * guest queues up multiple requests, say 4KB, 8KB, 12KB.
> > * we will serve the first request in the queue, which is 4KB.
> > * then, check_rate_limit() is triggered, and we serve the 2nd request.
> > * since the 2nd request is for 8KB, but we can only give 4KB in 1
> >   second, we only give the guest 4KB.  We then re-arm the timer so
> >   that we can get to the next request in the list.  Without this
> >   re-arming, the already-queued request will not get attention (till
> >   the next time the guest writes something to the queue).
> > * we then serve the 3rd request for 12KB, again with 4KB of data.
> > 
> > One thing to observe here is that we just service the minimum data we
> > can without waiting to service the entire request (i.e. we give the
> > guest 4KB of data, and not 8 or 12 KB).  We could do that by waiting
> > for the timer to expire, and then servicing the entire request.
> > This current way is simpler, and better.  If the guest isn't happy to
> > receive just 4KB when it asked for 12, it can ask again.
> > 
> > That also saves us the trouble with live migration: if we hold onto a
> > buffer, that becomes state, and we'll have to migrate it as well.
> > This current model saves us from doing that.
> > 
> > And now that I type all this, I recall having thought about these
> > things initially.  Don't know if I wrote it up somewhere, or if there
> > are email conversations on this subject...
> > 
> > So now with your changes, here is what we can do: instead of just
> > activating the timer in check_rate_limit(), we can issue a
> > get_request_size() call.  If the return value is > 0, it means we have
> > a buffer queued up by the guest, and we can then call 
> 
> > virtio_rng_process() and set activated to true.  Else, no need to call
> > virtio_rng_process at all, and the job for the timer is done, since
> > there are no more outstanding requests from the guest.
> 
> The only thing which stopped me was:
> 
> I did not want to use/call 'get_request_size()' twice (duplicate code).
> And I don't want to change 'virtio_rng_process()'.

The purpose for the two calls will be different, it's fine to do that.

> Even if I provide the size in virtio_rng_process(), this interface is used
> in multiple places.
> 
> If you are o.k., I can call 'get_request_size()' in 'check_rate_limit()' and avoid
> the timer reset if there is no request. But I agree this will be more optimised.

Yes, sure.

It can be a separate patch in this series.

		Amit
Amit Shah July 13, 2015, 8:52 a.m. UTC | #9
On (Mon) 13 Jul 2015 [10:55:55], Michael S. Tsirkin wrote:
> On Mon, Jul 13, 2015 at 01:04:45PM +0530, Amit Shah wrote:

> > That also saves us the trouble with live migration: if we hold onto a
> > buffer, that becomes state, and we'll have to migrate it as well.
> > This current model saves us from doing that.
> 
> Timers hold state and have to be migrated, this seems to be missing
> here.

We do invoke virtio_rng_process in the load path, so that outstanding
requests are serviced.

Also, after migration, the guest will fetch data from the new host, so
the new host will start doing the rate-limit.  We can afford to not
carry over the rate-limiting state we had from the src host.


		Amit
Pankaj Gupta July 13, 2015, 8:58 a.m. UTC | #10
> On (Mon) 13 Jul 2015 [04:01:01], Pankaj Gupta wrote:
> > 
> > > > Hi Amit,
> > > > 
> > > > Thanks for the review.
> > > > 
> > > > > 
> > > > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > > > >    Timer was added in virtio-rng to rate limit the
> > > > > > entropy. It used to trigger at regular intervals to
> > > > > > bump up the quota value. The value of quota and timer
> > > > > > slice is decided based on entropy source rate in host.
> > > > > 
> > > > > It doesn't necessarily depend on the source rate in the host - all we
> > > > > want the quota+timer to do is to limit the amount of data a guest can
> > > > > take from the host - to ensure one (potentially rogue) guest does not
> > > > > use up all the entropy from the host.
> > > > 
> > > > Sorry for not being clear on this. By rate limit I meant the same thing;
> > > > I just used a broader term.
> > > 
> > > My comment was to the 'value of quota and timer slice is decided based
> > > on entropy source rate in host' - admins will usually not decide based
> > > on what sources the host has - they will typically decide how much a
> > > guest is supposed to consume.
> > 
> > o.k.
> > > 
> > > > > > This resulted in triggering of the timer even when quota
> > > > > > is not exhausted at all and resulting in extra processing.
> > > > > > 
> > > > > > This patch triggers timer only when guest requests for
> > > > > > entropy. As soon as first request from guest for entropy
> > > > > > comes we set the timer. Timer bumps up the quota value
> > > > > > when it gets triggered.
> > > > > 
> > > > > Can you say how you tested this?
> > > > > 
> > > > > Mainly interested in seeing the results in these cases:
> > > > > 
> > > > > * No quota/timer specified on command line
> > > >     Tested this scenario. I am setting the timer when the first request comes.
> > > >     So, the timer gets fired after (1 << 16) ms.
> > > 
> > > But in case a quota or a timer isn't specified, the timer shouldn't be
> > > armed in the first place.
> > 
> > In my current logic, that would prevent the quota from refilling once the
> > timeslice/quota expires.
> > We already had default values for the timeslice/quota.
> > 
> > If we go ahead and remove the default values, we need some number to do the
> > check, and it must be separate from the check for a user-provided number,
> > because the user can also use the same number, and if it is negative it will
> > fail the user value validation check.
> > 
> > If we have to think about all this, there will be some more changes. So
> > keeping the default timeslice was the simplest option and does not have a
> > big impact; the timer fires every (1 << 16) ms.
> 
> Yes, other changes are fine too (in a different patch/series).
Sure. I will do this in a separate patch/series.
> 
> > > > > * Quota+timer specified on command line, and guest keeps asking host
> > > > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > > > >   in the guest.
> > > > 
> > > >     I did not do  'dd if=/dev/hwrng of=/dev/null'.
> > > >     Did cat '/dev/hwrng' && '/dev/random'
> > > 
> > > OK - that's similar.  I like dd because when dd is stopped, it gives
> > > the rate at which data was received, so it helps in seeing if we're
> > > getting more rate than what was specified on the cmdline.
> > 
> > sure. Will do this.
> > > 
> > > > > * Ensure quota restrictions are maintained, and we're not giving more
> > > > >   data than configured.
> > > >     Ensured. We are giving at most the requested data.
> > > > > 
> > > > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > > > source, since that can give data faster to the guest than the default
> > > > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > > > the guest may not get entropy due to that reason vs it not getting
> > > > > entropy due to rate-limiting.)
> > > > 
> > > >   Agree.
> > > >   Will test this as well.
> > > > 
> > > > > 
> > > > > I tested one scenario using the trace events.  With some quota and a
> > > > > timer value specified on the cmdline, before patch, I get tons of
> > > > > trace events before the guest is even up.  After applying the patch, I
> > > > > don't get any trace events.  So that's progress!
> > > > 
> > > > Thanks.
> > > > > 
> > > > > I have one question:
> > > > > 
> > > > > > Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
> > > > > > ---
> > > > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > > > > 
> > > > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > > > index 22b1d87..8774a0c 100644
> > > > > > --- a/hw/virtio/virtio-rng.c
> > > > > > +++ b/hw/virtio/virtio-rng.c
> > > > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > > > >          return;
> > > > > >      }
> > > > > >  
> > > > > > +    if (vrng->activate_timer) {
> > > > > > +        timer_mod(vrng->rate_limit_timer,
> > > > > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > > > +        vrng->activate_timer = false;
> > > > > > +    }
> > > > > > +
> > > > > >      if (vrng->quota_remaining < 0) {
> > > > > >          quota = 0;
> > > > > >      } else {
> > > > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > > > >  
> > > > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > > > >      virtio_rng_process(vrng);
> > > > > > -    timer_mod(vrng->rate_limit_timer,
> > > > > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > > > +    vrng->activate_timer = true;
> > > > > >  }
> > > > > 
> > > > > We're processing an older request first, and then firing the timer.
> > > > > What's the use of doing it this way?  Why even do this?
> > > > 
> > > > I also had this query. If we don't call this after resetting
> > > > 'vrng->quota_remaining', further requests do not work. It looks to me like
> > > > a limitation in the earlier code when 'vrng->quota_remaining' goes to <= 0.
> > > > A self-request is needed to reset things.
> > > > 
> > > > I will try to find the answer.
> > > 
> > > OK so I actually read through the thing, and this is useful for such a
> > > scenario:
> > > 
> > > assume our rate-limit is at 4KB/s.
> > > 
> > > * guest queues up multiple requests, say 4KB, 8KB, 12KB.
> > > * we will serve the first request in the queue, which is 4KB.
> > > * then, check_rate_limit() is triggered, and we serve the 2nd request.
> > > * since the 2nd request is for 8KB, but we can only give 4KB in 1
> > >   second, we only give the guest 4KB.  We then re-arm the timer so
> > >   that we can get to the next request in the list.  Without this
> > >   re-arming, the already-queued request will not get attention (till
> > >   the next time the guest writes something to the queue).
> > > * we then serve the 3rd request for 12KB, again with 4KB of data.
> > > 
> > > One thing to observe here is that we just service the minimum data we
> > > can without waiting to service the entire request (i.e. we give the
> > > guest 4KB of data, and not 8 or 12 KB).  We could do that by waiting
> > > for the timer to expire, and then servicing the entire request.
> > > This current way is simpler, and better.  If the guest isn't happy to
> > > receive just 4KB when it asked for 12, it can ask again.
> > > 
> > > That also saves us the trouble with live migration: if we hold onto a
> > > buffer, that becomes state, and we'll have to migrate it as well.
> > > This current model saves us from doing that.
> > > 
> > > And now that I type all this, I recall having thought about these
> > > things initially.  Don't know if I wrote it up somewhere, or if there
> > > are email conversations on this subject...
> > > 
> > > So now with your changes, here is what we can do: instead of just
> > > activating the timer in check_rate_limit(), we can issue a
> > > get_request_size() call.  If the return value is > 0, it means we have
> > > a buffer queued up by the guest, and we can then call
> > 
> > > virtio_rng_process() and set activated to true.  Else, no need to call
> > > virtio_rng_process at all, and the job for the timer is done, since
> > > there are no more outstanding requests from the guest.
> > 
> > The only thing which stopped me was:
> > 
> > I did not want to use/call 'get_request_size()' twice (duplicate code).
> > And I don't want to change 'virtio_rng_process()'.
> 
> The purpose for the two calls will be different, it's fine to do that.
> 
> > Even if I provide the size in virtio_rng_process(), this interface is used
> > in multiple places.
> > 
> > If you are o.k., I can call 'get_request_size()' in 'check_rate_limit()' and
> > avoid the timer reset if there is no request. But I agree this will be more
> > optimised.
> 
> Yes, sure.
> 
> It can be a separate patch in this series.

Sure, will do in a separate patch and resubmit the series.

Thanks,
Pankaj
> 
> 		Amit
>

Patch

diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
index 22b1d87..8774a0c 100644
--- a/hw/virtio/virtio-rng.c
+++ b/hw/virtio/virtio-rng.c
@@ -78,6 +78,12 @@  static void virtio_rng_process(VirtIORNG *vrng)
         return;
     }
 
+    if (vrng->activate_timer) {
+        timer_mod(vrng->rate_limit_timer,
+                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
+        vrng->activate_timer = false;
+    }
+
     if (vrng->quota_remaining < 0) {
         quota = 0;
     } else {
@@ -139,8 +145,7 @@  static void check_rate_limit(void *opaque)
 
     vrng->quota_remaining = vrng->conf.max_bytes;
     virtio_rng_process(vrng);
-    timer_mod(vrng->rate_limit_timer,
-                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
+    vrng->activate_timer = true;
 }
 
 static void virtio_rng_device_realize(DeviceState *dev, Error **errp)
@@ -196,13 +201,9 @@  static void virtio_rng_device_realize(DeviceState *dev, Error **errp)
 
     vrng->vq = virtio_add_queue(vdev, 8, handle_input);
     vrng->quota_remaining = vrng->conf.max_bytes;
-
     vrng->rate_limit_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL,
                                                check_rate_limit, vrng);
-
-    timer_mod(vrng->rate_limit_timer,
-                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
-
+    vrng->activate_timer = true;
     register_savevm(dev, "virtio-rng", -1, 1, virtio_rng_save,
                     virtio_rng_load, vrng);
 }
diff --git a/include/hw/virtio/virtio-rng.h b/include/hw/virtio/virtio-rng.h
index 0316488..3f07de7 100644
--- a/include/hw/virtio/virtio-rng.h
+++ b/include/hw/virtio/virtio-rng.h
@@ -44,6 +44,7 @@  typedef struct VirtIORNG {
      */
     QEMUTimer *rate_limit_timer;
     int64_t quota_remaining;
+    bool activate_timer;
 } VirtIORNG;
 
 #endif