
[2/4] migration: set dirty_pages_rate before autoconverge logic

Message ID 1495642203-12702-3-git-send-email-felipe@nutanix.com
State New

Commit Message

Felipe Franciosi May 24, 2017, 4:10 p.m. UTC
Currently, a "period" in the RAM migration logic is at least a second
long and accounts for what happened since the last period (or the
beginning of the migration). The dirty_pages_rate counter is calculated
at the end of this logic.

If the auto convergence capability is enabled from the start of the
migration, it won't be able to use this counter the first time around.
This calculates dirty_pages_rate as soon as a period is deemed over,
so that it can be used immediately.

Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
---
 migration/ram.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

Comments

Peter Xu May 25, 2017, 12:40 a.m. UTC | #1
On Wed, May 24, 2017 at 05:10:01PM +0100, Felipe Franciosi wrote:
> Currently, a "period" in the RAM migration logic is at least a second
> long and accounts for what happened since the last period (or the
> beginning of the migration). The dirty_pages_rate counter is calculated
> at the end of this logic.
> 
> If the auto convergence capability is enabled from the start of the
> migration, it won't be able to use this counter the first time around.
> This calculates dirty_pages_rate as soon as a period is deemed over,
> which allows for it to be used immediately.
> 
> Signed-off-by: Felipe Franciosi <felipe@nutanix.com>

You fixed the indents as well, but imho it's okay.

Reviewed-by: Peter Xu <peterx@redhat.com>

> ---
>  migration/ram.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)
> 
> diff --git a/migration/ram.c b/migration/ram.c
> index 36bf720..495ecbe 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -694,6 +694,10 @@ static void migration_bitmap_sync(RAMState *rs)
>  
>      /* more than 1 second = 1000 millisecons */
>      if (end_time > rs->time_last_bitmap_sync + 1000) {
> +        /* calculate period counters */
> +        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> +            / (end_time - rs->time_last_bitmap_sync);
> +
>          if (migrate_auto_converge()) {
>              /* The following detection logic can be refined later. For now:
>                 Check to see if the dirtied bytes is 50% more than the approx.
> @@ -702,15 +706,14 @@ static void migration_bitmap_sync(RAMState *rs)
>                 throttling */
>              bytes_xfer_now = ram_bytes_transferred();
>  
> -            if (rs->dirty_pages_rate &&
> -               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
> +            if ((rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>                     (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
> -               (rs->dirty_rate_high_cnt++ >= 2)) {
> +                (rs->dirty_rate_high_cnt++ >= 2)) {
>                      trace_migration_throttle();
>                      rs->dirty_rate_high_cnt = 0;
>                      mig_throttle_guest_down();
> -             }
> -             rs->bytes_xfer_prev = bytes_xfer_now;
> +            }
> +            rs->bytes_xfer_prev = bytes_xfer_now;
>          }
>  
>          if (migrate_use_xbzrle()) {
> @@ -723,8 +726,8 @@ static void migration_bitmap_sync(RAMState *rs)
>              rs->iterations_prev = rs->iterations;
>              rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
>          }
> -        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
> -            / (end_time - rs->time_last_bitmap_sync);
> +
> +        /* reset period counters */
>          rs->time_last_bitmap_sync = end_time;
>          rs->num_dirty_pages_period = 0;
>      }
> -- 
> 1.9.5
>
Felipe Franciosi May 25, 2017, 10:52 a.m. UTC | #2
> On 25 May 2017, at 01:40, Peter Xu <peterx@redhat.com> wrote:
> 
> On Wed, May 24, 2017 at 05:10:01PM +0100, Felipe Franciosi wrote:
>> Currently, a "period" in the RAM migration logic is at least a second
>> long and accounts for what happened since the last period (or the
>> beginning of the migration). The dirty_pages_rate counter is calculated
>> at the end of this logic.
>> 
>> If the auto convergence capability is enabled from the start of the
>> migration, it won't be able to use this counter the first time around.
>> This calculates dirty_pages_rate as soon as a period is deemed over,
>> which allows for it to be used immediately.
>> 
>> Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
> 
> You fixed the indents as well, but imho it's okay.

Yeah, a couple of lines were off by one space. I fixed them since I was touching the code anyway; hope that's OK with everyone else. Would you normally patch that separately, or just mention it in the commit message?

F.

> 
> Reviewed-by: Peter Xu <peterx@redhat.com>
> 
>> ---
>> migration/ram.c | 17 ++++++++++-------
>> 1 file changed, 10 insertions(+), 7 deletions(-)
>> 
>> diff --git a/migration/ram.c b/migration/ram.c
>> index 36bf720..495ecbe 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -694,6 +694,10 @@ static void migration_bitmap_sync(RAMState *rs)
>> 
>>     /* more than 1 second = 1000 millisecons */
>>     if (end_time > rs->time_last_bitmap_sync + 1000) {
>> +        /* calculate period counters */
>> +        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>> +            / (end_time - rs->time_last_bitmap_sync);
>> +
>>         if (migrate_auto_converge()) {
>>             /* The following detection logic can be refined later. For now:
>>                Check to see if the dirtied bytes is 50% more than the approx.
>> @@ -702,15 +706,14 @@ static void migration_bitmap_sync(RAMState *rs)
>>                throttling */
>>             bytes_xfer_now = ram_bytes_transferred();
>> 
>> -            if (rs->dirty_pages_rate &&
>> -               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>> +            if ((rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
>>                    (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
>> -               (rs->dirty_rate_high_cnt++ >= 2)) {
>> +                (rs->dirty_rate_high_cnt++ >= 2)) {
>>                     trace_migration_throttle();
>>                     rs->dirty_rate_high_cnt = 0;
>>                     mig_throttle_guest_down();
>> -             }
>> -             rs->bytes_xfer_prev = bytes_xfer_now;
>> +            }
>> +            rs->bytes_xfer_prev = bytes_xfer_now;
>>         }
>> 
>>         if (migrate_use_xbzrle()) {
>> @@ -723,8 +726,8 @@ static void migration_bitmap_sync(RAMState *rs)
>>             rs->iterations_prev = rs->iterations;
>>             rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
>>         }
>> -        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
>> -            / (end_time - rs->time_last_bitmap_sync);
>> +
>> +        /* reset period counters */
>>         rs->time_last_bitmap_sync = end_time;
>>         rs->num_dirty_pages_period = 0;
>>     }
>> -- 
>> 1.9.5
>> 
> 
> -- 
> Peter Xu
Peter Xu May 25, 2017, 11:10 a.m. UTC | #3
On Thu, May 25, 2017 at 10:52:32AM +0000, Felipe Franciosi wrote:
> 
> > On 25 May 2017, at 01:40, Peter Xu <peterx@redhat.com> wrote:
> > 
> > On Wed, May 24, 2017 at 05:10:01PM +0100, Felipe Franciosi wrote:
> >> Currently, a "period" in the RAM migration logic is at least a second
> >> long and accounts for what happened since the last period (or the
> >> beginning of the migration). The dirty_pages_rate counter is calculated
> >> at the end of this logic.
> >> 
> >> If the auto convergence capability is enabled from the start of the
> >> migration, it won't be able to use this counter the first time around.
> >> This calculates dirty_pages_rate as soon as a period is deemed over,
> >> which allows for it to be used immediately.
> >> 
> >> Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
> > 
> > You fixed the indents as well, but imho it's okay.
> 
> Yeah a couple of lines were off-by-one space. Fixed it given I was touching the code anyway, hope it's ok with everyone else. Would you normally patch that separately or just mention it in the commit message?

Normally I don't intentionally touch code just for indentation, so that the commit log for those lines isn't affected. However, I'm also okay with fixing some of them, either separately or squashed into a patch like this.

IMHO, in the end it really depends on the maintainers' preference. :-)

Thanks,
Juan Quintela May 30, 2017, 4:14 p.m. UTC | #4
Felipe Franciosi <felipe@nutanix.com> wrote:
> Currently, a "period" in the RAM migration logic is at least a second
> long and accounts for what happened since the last period (or the
> beginning of the migration). The dirty_pages_rate counter is calculated
> at the end of this logic.
>
> If the auto convergence capability is enabled from the start of the
> migration, it won't be able to use this counter the first time around.
> This calculates dirty_pages_rate as soon as a period is deemed over,
> which allows for it to be used immediately.
>
> Signed-off-by: Felipe Franciosi <felipe@nutanix.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>

Patch

diff --git a/migration/ram.c b/migration/ram.c
index 36bf720..495ecbe 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -694,6 +694,10 @@  static void migration_bitmap_sync(RAMState *rs)
 
     /* more than 1 second = 1000 millisecons */
     if (end_time > rs->time_last_bitmap_sync + 1000) {
+        /* calculate period counters */
+        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
+            / (end_time - rs->time_last_bitmap_sync);
+
         if (migrate_auto_converge()) {
             /* The following detection logic can be refined later. For now:
                Check to see if the dirtied bytes is 50% more than the approx.
@@ -702,15 +706,14 @@  static void migration_bitmap_sync(RAMState *rs)
                throttling */
             bytes_xfer_now = ram_bytes_transferred();
 
-            if (rs->dirty_pages_rate &&
-               (rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
+            if ((rs->num_dirty_pages_period * TARGET_PAGE_SIZE >
                    (bytes_xfer_now - rs->bytes_xfer_prev) / 2) &&
-               (rs->dirty_rate_high_cnt++ >= 2)) {
+                (rs->dirty_rate_high_cnt++ >= 2)) {
                     trace_migration_throttle();
                     rs->dirty_rate_high_cnt = 0;
                     mig_throttle_guest_down();
-             }
-             rs->bytes_xfer_prev = bytes_xfer_now;
+            }
+            rs->bytes_xfer_prev = bytes_xfer_now;
         }
 
         if (migrate_use_xbzrle()) {
@@ -723,8 +726,8 @@  static void migration_bitmap_sync(RAMState *rs)
             rs->iterations_prev = rs->iterations;
             rs->xbzrle_cache_miss_prev = rs->xbzrle_cache_miss;
         }
-        rs->dirty_pages_rate = rs->num_dirty_pages_period * 1000
-            / (end_time - rs->time_last_bitmap_sync);
+
+        /* reset period counters */
         rs->time_last_bitmap_sync = end_time;
         rs->num_dirty_pages_period = 0;
     }