Patchwork [v4,resend] rdma: rename 'x-rdma' => 'rdma'

Submitter: mrhines@linux.vnet.ibm.com
Date: Dec. 18, 2013, 5:43 a.m.
Message ID: <1387345437-19430-1-git-send-email-mrhines@linux.vnet.ibm.com>
Permalink: /patch/302645/
State: New

Comments

mrhines@linux.vnet.ibm.com - Dec. 18, 2013, 5:43 a.m.
From: "Michael R. Hines" <mrhines@us.ibm.com>

As far as we can tell, all known bugs have been fixed:

1. Parallel migrations are working
2. IPv6 migration is working
3. virt-test is working

I'm not comfortable sending the revised libvirt patch
until this is accepted or the review suggestions are addressed
(including pin-all support: it does not make sense to drop the
experimental prefix for one feature but not the other, and
handling them separately would mean too many trips through the
libvirt community).

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
---
 docs/rdma.txt    |   24 ++++++++++--------------
 migration-rdma.c |    2 +-
 migration.c      |    6 +++---
 qapi-schema.json |    7 +++----
 4 files changed, 17 insertions(+), 22 deletions(-)
Eric Blake - Dec. 18, 2013, 1:51 p.m.
On 12/17/2013 10:43 PM, mrhines@linux.vnet.ibm.com wrote:
> [...]

> +++ b/qapi-schema.json
> @@ -671,10 +671,9 @@
>  #          This feature allows us to minimize migration traffic for certain work
>  #          loads, by sending compressed difference of the pages
>  #
> -# @x-rdma-pin-all: Controls whether or not the entire VM memory footprint is
> +# @rdma-pin-all: Controls whether or not the entire VM memory footprint is
>  #          mlock()'d on demand or all at once. Refer to docs/rdma.txt for usage.
> -#          Disabled by default. Experimental: may (or may not) be renamed after
> -#          further testing is complete. (since 1.6)
> +#          Disabled by default. (since 1.7)

Alas, we missed 1.7.  This should now read 'since 2.0'.
mrhines@linux.vnet.ibm.com - Dec. 18, 2013, 8:52 p.m.
On 12/18/2013 09:51 PM, Eric Blake wrote:
> [...]
> Alas, we missed 1.7.  This should now read 'since 2.0'.
>
Thanks, Eric.

- Michael
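
For readers who want to try the renamed interface, here is a minimal QMP
sketch of the post-rename usage (the destination host name "dest-host" and
port 4444 are illustrative; the 'rdma-pin-all' capability and 'rdma:' URI
prefix are the names this patch introduces). On the source QMP monitor,
optionally enable pin-all, then start the migration:

{ "execute": "migrate-set-capabilities",
  "arguments": { "capabilities": [
      { "capability": "rdma-pin-all", "state": true } ] } }

{ "execute": "migrate",
  "arguments": { "uri": "rdma:dest-host:4444" } }

The destination side would be started with '-incoming rdma:dest-host:4444',
as shown in the docs/rdma.txt hunk below. Note that because this is a rename
rather than an alias, a QEMU with this patch applied would reject the old
'x-rdma-pin-all' and 'x-rdma:' spellings, which is why the libvirt patch
mentioned in the cover letter has to move in lockstep.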

Patch

diff --git a/docs/rdma.txt b/docs/rdma.txt
index 2aca63b..1f5d9e9 100644
--- a/docs/rdma.txt
+++ b/docs/rdma.txt
@@ -66,7 +66,7 @@  bulk-phase round of the migration and can be enabled for extremely
 high-performance RDMA hardware using the following command:
 
 QEMU Monitor Command:
-$ migrate_set_capability x-rdma-pin-all on # disabled by default
+$ migrate_set_capability rdma-pin-all on # disabled by default
 
 Performing this action will cause all 8GB to be pinned, so if that's
 not what you want, then please ignore this step altogether.
@@ -93,12 +93,12 @@  $ migrate_set_speed 40g # or whatever is the MAX of your RDMA device
 
 Next, on the destination machine, add the following to the QEMU command line:
 
-qemu ..... -incoming x-rdma:host:port
+qemu ..... -incoming rdma:host:port
 
 Finally, perform the actual migration on the source machine:
 
 QEMU Monitor Command:
-$ migrate -d x-rdma:host:port
+$ migrate -d rdma:host:port
 
 PERFORMANCE
 ===========
@@ -120,8 +120,8 @@  For example, in the same 8GB RAM example with all 8GB of memory in
 active use and the VM itself is completely idle using the same 40 gbps
 infiniband link:
 
-1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
-2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
+1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
+2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
 
 These numbers would of course scale up to whatever size virtual machine
 you have to migrate using RDMA.
@@ -407,18 +407,14 @@  socket is broken during a non-RDMA based migration.
 
 TODO:
 =====
-1. 'migrate x-rdma:host:port' and '-incoming x-rdma' options will be
-   renamed to 'rdma' after the experimental phase of this work has
-   completed upstream.
-2. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
+1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
    are not compatible with infinband memory pinning and will result in
    an aborted migration (but with the source VM left unaffected).
-3. Use of the recent /proc/<pid>/pagemap would likely speed up
+2. Use of the recent /proc/<pid>/pagemap would likely speed up
    the use of KSM and ballooning while using RDMA.
-4. Also, some form of balloon-device usage tracking would also
+3. Also, some form of balloon-device usage tracking would also
    help alleviate some issues.
-5. Move UNREGISTER requests to a separate thread.
-6. Use LRU to provide more fine-grained direction of UNREGISTER
+4. Use LRU to provide more fine-grained direction of UNREGISTER
    requests for unpinning memory in an overcommitted environment.
-7. Expose UNREGISTER support to the user by way of workload-specific
+5. Expose UNREGISTER support to the user by way of workload-specific
    hints about application behavior.
diff --git a/migration-rdma.c b/migration-rdma.c
index f94f3b4..eeb4302 100644
--- a/migration-rdma.c
+++ b/migration-rdma.c
@@ -3412,7 +3412,7 @@  void rdma_start_outgoing_migration(void *opaque,
     }
 
     ret = qemu_rdma_source_init(rdma, &local_err,
-        s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL]);
+        s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL]);
 
     if (ret) {
         goto err;
diff --git a/migration.c b/migration.c
index 2b1ab20..71356f8 100644
--- a/migration.c
+++ b/migration.c
@@ -81,7 +81,7 @@  void qemu_start_incoming_migration(const char *uri, Error **errp)
     if (strstart(uri, "tcp:", &p))
         tcp_start_incoming_migration(p, errp);
 #ifdef CONFIG_RDMA
-    else if (strstart(uri, "x-rdma:", &p))
+    else if (strstart(uri, "rdma:", &p))
         rdma_start_incoming_migration(p, errp);
 #endif
 #if !defined(WIN32)
@@ -424,7 +424,7 @@  void qmp_migrate(const char *uri, bool has_blk, bool blk,
     if (strstart(uri, "tcp:", &p)) {
         tcp_start_outgoing_migration(s, p, &local_err);
 #ifdef CONFIG_RDMA
-    } else if (strstart(uri, "x-rdma:", &p)) {
+    } else if (strstart(uri, "rdma:", &p)) {
         rdma_start_outgoing_migration(s, p, &local_err);
 #endif
 #if !defined(WIN32)
@@ -502,7 +502,7 @@  bool migrate_rdma_pin_all(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL];
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL];
 }
 
 bool migrate_auto_converge(void)
diff --git a/qapi-schema.json b/qapi-schema.json
index c3c939c..0ad465a 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -671,10 +671,9 @@ 
 #          This feature allows us to minimize migration traffic for certain work
 #          loads, by sending compressed difference of the pages
 #
-# @x-rdma-pin-all: Controls whether or not the entire VM memory footprint is
+# @rdma-pin-all: Controls whether or not the entire VM memory footprint is
 #          mlock()'d on demand or all at once. Refer to docs/rdma.txt for usage.
-#          Disabled by default. Experimental: may (or may not) be renamed after
-#          further testing is complete. (since 1.6)
+#          Disabled by default. (since 1.7)
 #
 # @zero-blocks: During storage migration encode blocks of zeroes efficiently. This
 #          essentially saves 1MB of zeroes per block on the wire. Enabling requires
@@ -688,7 +687,7 @@ 
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
-  'data': ['xbzrle', 'x-rdma-pin-all', 'auto-converge', 'zero-blocks'] }
+  'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks'] }
 
 ##
 # @MigrationCapabilityStatus