Patchwork [v3,for-1.7,resend] rdma: rename 'x-rdma' => 'rdma'

Submitter mrhines@linux.vnet.ibm.com
Date Nov. 22, 2013, 4:29 p.m.
Message ID <1385137791-8457-1-git-send-email-mrhines@linux.vnet.ibm.com>
Permalink /patch/293527/
State New

Comments

mrhines@linux.vnet.ibm.com - Nov. 22, 2013, 4:29 p.m.
From: "Michael R. Hines" <mrhines@us.ibm.com>

As far as we can tell, all known bugs have been fixed:

1. Parallel RDMA migrations are working
2. IPv6 migration is working
3. Libvirt patches are ready
4. virt-test is working

Objections?
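
For reference, a minimal sketch of the renamed invocation once this patch
is applied (taken from the updated docs/rdma.txt below; 'host' and 'port'
are placeholders):

  # Destination machine, QEMU command line:
  qemu ..... -incoming rdma:host:port

  # Source machine, QEMU monitor:
  $ migrate_set_capability rdma-pin-all on   # optional, disabled by default
  $ migrate -d rdma:host:port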

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
---
 docs/rdma.txt    |   24 ++++++++++--------------
 migration-rdma.c |    2 +-
 migration.c      |    6 +++---
 qapi-schema.json |    7 +++----
 4 files changed, 17 insertions(+), 22 deletions(-)
Daniel P. Berrange - Nov. 22, 2013, 4:37 p.m.
On Sat, Nov 23, 2013 at 12:29:51AM +0800, mrhines@linux.vnet.ibm.com wrote:
> From: "Michael R. Hines" <mrhines@us.ibm.com>
> 
> As far as we can tell, all known bugs have been fixed:

> 3. Libvirt patches are ready

Please stop claiming this. A proof of concept was posted and got some
review feedback. AFAIK, no followup has been posted to actually finish
the job, nor to deal with the new questions around cgroups ACL management
raised last week on qemu-devel.

Regards,
Daniel
mrhines@linux.vnet.ibm.com - Nov. 22, 2013, 5:14 p.m.
On 11/23/2013 12:37 AM, Daniel P. Berrange wrote:
> On Sat, Nov 23, 2013 at 12:29:51AM +0800, mrhines@linux.vnet.ibm.com wrote:
>> From: "Michael R. Hines" <mrhines@us.ibm.com>
>>
>> As far as we can tell, all known bugs have been fixed:
>> 3. Libvirt patches are ready
> Please stop claiming this. A proof of concept was posted and got some
> review feedback. AFAIK, no followup has been posted to actually finish
> the job, nor to deal with the new questions around cgroups ACL management
> raised last week on qemu-devel.
>
> Regards,
> Daniel

My apologies. I have them on my laptop, but haven't sent them out yet.

I'm currently in the process of moving my family from one country to
another and am very busy resettling, so I was trying to get the most
urgent patch out first before cleaning up the libvirt patch on my laptop.

- Michael

Patch

diff --git a/docs/rdma.txt b/docs/rdma.txt
index 8d1e003..c559f9a 100644
--- a/docs/rdma.txt
+++ b/docs/rdma.txt
@@ -66,7 +66,7 @@  bulk-phase round of the migration and can be enabled for extremely
 high-performance RDMA hardware using the following command:
 
 QEMU Monitor Command:
-$ migrate_set_capability x-rdma-pin-all on # disabled by default
+$ migrate_set_capability rdma-pin-all on # disabled by default
 
 Performing this action will cause all 8GB to be pinned, so if that's
 not what you want, then please ignore this step altogether.
@@ -93,12 +93,12 @@  $ migrate_set_speed 40g # or whatever is the MAX of your RDMA device
 
 Next, on the destination machine, add the following to the QEMU command line:
 
-qemu ..... -incoming x-rdma:host:port
+qemu ..... -incoming rdma:host:port
 
 Finally, perform the actual migration on the source machine:
 
 QEMU Monitor Command:
-$ migrate -d x-rdma:host:port
+$ migrate -d rdma:host:port
 
 PERFORMANCE
 ===========
@@ -120,8 +120,8 @@  For example, in the same 8GB RAM example with all 8GB of memory in
 active use and the VM itself is completely idle using the same 40 gbps
 infiniband link:
 
-1. x-rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
-2. x-rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
+1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
+2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
 
 These numbers would of course scale up to whatever size virtual machine
 you have to migrate using RDMA.
@@ -407,18 +407,14 @@  socket is broken during a non-RDMA based migration.
 
 TODO:
 =====
-1. 'migrate x-rdma:host:port' and '-incoming x-rdma' options will be
-   renamed to 'rdma' after the experimental phase of this work has
-   completed upstream.
-2. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
+1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
    are not compatible with infinband memory pinning and will result in
    an aborted migration (but with the source VM left unaffected).
-3. Use of the recent /proc/<pid>/pagemap would likely speed up
+2. Use of the recent /proc/<pid>/pagemap would likely speed up
    the use of KSM and ballooning while using RDMA.
-4. Also, some form of balloon-device usage tracking would also
+3. Also, some form of balloon-device usage tracking would also
    help alleviate some issues.
-5. Move UNREGISTER requests to a separate thread.
-6. Use LRU to provide more fine-grained direction of UNREGISTER
+4. Use LRU to provide more fine-grained direction of UNREGISTER
    requests for unpinning memory in an overcommitted environment.
-7. Expose UNREGISTER support to the user by way of workload-specific
+5. Expose UNREGISTER support to the user by way of workload-specific
    hints about application behavior.
diff --git a/migration-rdma.c b/migration-rdma.c
index f94f3b4..eeb4302 100644
--- a/migration-rdma.c
+++ b/migration-rdma.c
@@ -3412,7 +3412,7 @@  void rdma_start_outgoing_migration(void *opaque,
     }
 
     ret = qemu_rdma_source_init(rdma, &local_err,
-        s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL]);
+        s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL]);
 
     if (ret) {
         goto err;
diff --git a/migration.c b/migration.c
index b4f8462..d9c7a62 100644
--- a/migration.c
+++ b/migration.c
@@ -81,7 +81,7 @@  void qemu_start_incoming_migration(const char *uri, Error **errp)
     if (strstart(uri, "tcp:", &p))
         tcp_start_incoming_migration(p, errp);
 #ifdef CONFIG_RDMA
-    else if (strstart(uri, "x-rdma:", &p))
+    else if (strstart(uri, "rdma:", &p))
         rdma_start_incoming_migration(p, errp);
 #endif
 #if !defined(WIN32)
@@ -423,7 +423,7 @@  void qmp_migrate(const char *uri, bool has_blk, bool blk,
     if (strstart(uri, "tcp:", &p)) {
         tcp_start_outgoing_migration(s, p, &local_err);
 #ifdef CONFIG_RDMA
-    } else if (strstart(uri, "x-rdma:", &p)) {
+    } else if (strstart(uri, "rdma:", &p)) {
         rdma_start_outgoing_migration(s, p, &local_err);
 #endif
 #if !defined(WIN32)
@@ -501,7 +501,7 @@  bool migrate_rdma_pin_all(void)
 
     s = migrate_get_current();
 
-    return s->enabled_capabilities[MIGRATION_CAPABILITY_X_RDMA_PIN_ALL];
+    return s->enabled_capabilities[MIGRATION_CAPABILITY_RDMA_PIN_ALL];
 }
 
 bool migrate_auto_converge(void)
diff --git a/qapi-schema.json b/qapi-schema.json
index 145eca8..92509f6 100644
--- a/qapi-schema.json
+++ b/qapi-schema.json
@@ -615,10 +615,9 @@ 
 #          This feature allows us to minimize migration traffic for certain work
 #          loads, by sending compressed difference of the pages
 #
-# @x-rdma-pin-all: Controls whether or not the entire VM memory footprint is
+# @rdma-pin-all: Controls whether or not the entire VM memory footprint is
 #          mlock()'d on demand or all at once. Refer to docs/rdma.txt for usage.
-#          Disabled by default. Experimental: may (or may not) be renamed after
-#          further testing is complete. (since 1.6)
+#          Disabled by default. (since 1.7)
 #
 # @zero-blocks: During storage migration encode blocks of zeroes efficiently. This
 #          essentially saves 1MB of zeroes per block on the wire. Enabling requires
@@ -632,7 +631,7 @@ 
 # Since: 1.2
 ##
 { 'enum': 'MigrationCapability',
-  'data': ['xbzrle', 'x-rdma-pin-all', 'auto-converge', 'zero-blocks'] }
+  'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks'] }
 
 ##
 # @MigrationCapabilityStatus
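
For completeness, a sketch of the same flow over QMP after this rename,
using the migrate-set-capabilities and migrate commands; 'host' and 'port'
are placeholders:

  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "rdma-pin-all", "state": true } ] } }

  { "execute": "migrate",
    "arguments": { "uri": "rdma:host:port" } }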