{"id":810530,"url":"http://patchwork.ozlabs.org/api/patches/810530/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20170906115143.27451-16-quintela@redhat.com/","project":{"id":14,"url":"http://patchwork.ozlabs.org/api/projects/14/?format=json","name":"QEMU Development","link_name":"qemu-devel","list_id":"qemu-devel.nongnu.org","list_email":"qemu-devel@nongnu.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20170906115143.27451-16-quintela@redhat.com>","list_archive_url":null,"date":"2017-09-06T11:51:36","name":"[v7,15/22] migration: Create ram_multifd_page","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"bd734e5d7fc5803b3f04c4a7037aeea1612c49dd","submitter":{"id":2643,"url":"http://patchwork.ozlabs.org/api/people/2643/?format=json","name":"Juan Quintela","email":"quintela@redhat.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20170906115143.27451-16-quintela@redhat.com/mbox/","series":[{"id":1773,"url":"http://patchwork.ozlabs.org/api/series/1773/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/list/?series=1773","date":"2017-09-06T11:51:21","name":"Multifd","version":7,"mbox":"http://patchwork.ozlabs.org/series/1773/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/810530/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/810530/checks/","tags":{},"related":[],"headers":{"Return-Path":"<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@bilbo.ozlabs.org","Authentication-Results":["ozlabs.org;\n\tspf=pass (mailfrom) smtp.mailfrom=nongnu.org\n\t(client-ip=2001:4830:134:3::11; 
helo=lists.gnu.org;\n\tenvelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n\treceiver=<UNKNOWN>)","ext-mx05.extmail.prod.ext.phx2.redhat.com;\n\tdmarc=none (p=none dis=none) header.from=redhat.com","ext-mx05.extmail.prod.ext.phx2.redhat.com;\n\tspf=fail smtp.mailfrom=quintela@redhat.com"],"Received":["from lists.gnu.org (lists.gnu.org [IPv6:2001:4830:134:3::11])\n\t(using TLSv1 with cipher AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3xnMRM6bgXz9sBZ\n\tfor <incoming@patchwork.ozlabs.org>;\n\tWed,  6 Sep 2017 21:53:19 +1000 (AEST)","from localhost ([::1]:35578 helo=lists.gnu.org)\n\tby lists.gnu.org with esmtp (Exim 4.71) (envelope-from\n\t<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>)\n\tid 1dpYtS-0007iV-0v\n\tfor incoming@patchwork.ozlabs.org; Wed, 06 Sep 2017 07:53:18 -0400","from eggs.gnu.org ([2001:4830:134:3::10]:60384)\n\tby lists.gnu.org with esmtp (Exim 4.71)\n\t(envelope-from <quintela@redhat.com>) id 1dpYse-0007fC-QQ\n\tfor qemu-devel@nongnu.org; Wed, 06 Sep 2017 07:52:32 -0400","from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)\n\t(envelope-from <quintela@redhat.com>) id 1dpYsd-00089O-FK\n\tfor qemu-devel@nongnu.org; Wed, 06 Sep 2017 07:52:28 -0400","from mx1.redhat.com ([209.132.183.28]:38546)\n\tby eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32)\n\t(Exim 4.71) (envelope-from <quintela@redhat.com>) id 1dpYsd-00088w-6O\n\tfor qemu-devel@nongnu.org; Wed, 06 Sep 2017 07:52:27 -0400","from smtp.corp.redhat.com\n\t(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1.redhat.com (Postfix) with ESMTPS id 1F165461FA\n\tfor <qemu-devel@nongnu.org>; Wed,  6 Sep 2017 11:52:26 +0000 (UTC)","from secure.mitica (ovpn-117-188.ams2.redhat.com [10.36.117.188])\n\tby smtp.corp.redhat.com (Postfix) with ESMTP id 7A6E118ABA;\n\tWed,  6 
Sep 2017 11:52:22 +0000 (UTC)"],"DMARC-Filter":"OpenDMARC Filter v1.3.2 mx1.redhat.com 1F165461FA","From":"Juan Quintela <quintela@redhat.com>","To":"qemu-devel@nongnu.org","Date":"Wed,  6 Sep 2017 13:51:36 +0200","Message-Id":"<20170906115143.27451-16-quintela@redhat.com>","In-Reply-To":"<20170906115143.27451-1-quintela@redhat.com>","References":"<20170906115143.27451-1-quintela@redhat.com>","X-Scanned-By":"MIMEDefang 2.79 on 10.5.11.11","X-Greylist":"Sender IP whitelisted, not delayed by milter-greylist-4.5.16\n\t(mx1.redhat.com [10.5.110.29]);\n\tWed, 06 Sep 2017 11:52:26 +0000 (UTC)","X-detected-operating-system":"by eggs.gnu.org: GNU/Linux 2.2.x-3.x [generic]\n\t[fuzzy]","X-Received-From":"209.132.183.28","Subject":"[Qemu-devel] [PATCH v7 15/22] migration: Create ram_multifd_page","X-BeenThere":"qemu-devel@nongnu.org","X-Mailman-Version":"2.1.21","Precedence":"list","List-Id":"<qemu-devel.nongnu.org>","List-Unsubscribe":"<https://lists.nongnu.org/mailman/options/qemu-devel>,\n\t<mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>","List-Archive":"<http://lists.nongnu.org/archive/html/qemu-devel/>","List-Post":"<mailto:qemu-devel@nongnu.org>","List-Help":"<mailto:qemu-devel-request@nongnu.org?subject=help>","List-Subscribe":"<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n\t<mailto:qemu-devel-request@nongnu.org?subject=subscribe>","Cc":"lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com","Errors-To":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org","Sender":"\"Qemu-devel\"\n\t<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>"},"content":"The function still doesn't use multifd, but we have simplified\nram_save_page: the xbzrle and RDMA code is gone.\n
We have added a new\ncounter and a new flag for this type of page.\n\nSigned-off-by: Juan Quintela <quintela@redhat.com>\n\n--\nAdd last_page parameter\nAdd comments for done and address\n---\n hmp.c                 |  2 ++\n migration/migration.c |  1 +\n migration/ram.c       | 94 ++++++++++++++++++++++++++++++++++++++++++++++++++-\n qapi/migration.json   |  5 ++-\n 4 files changed, 100 insertions(+), 2 deletions(-)","diff":"diff --git a/hmp.c b/hmp.c\nindex d9562103ee..7e865f5955 100644\n--- a/hmp.c\n+++ b/hmp.c\n@@ -233,6 +233,8 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)\n             monitor_printf(mon, \"postcopy request count: %\" PRIu64 \"\\n\",\n                            info->ram->postcopy_requests);\n         }\n+        monitor_printf(mon, \"multifd: %\" PRIu64 \" pages\\n\",\n+                       info->ram->multifd);\n     }\n \n     if (info->has_disk) {\ndiff --git a/migration/migration.c b/migration/migration.c\nindex b06de8b189..b5875c0b15 100644\n--- a/migration/migration.c\n+++ b/migration/migration.c\n@@ -556,6 +556,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)\n     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;\n     info->ram->postcopy_requests = ram_counters.postcopy_requests;\n     info->ram->page_size = qemu_target_page_size();\n+    info->ram->multifd = ram_counters.multifd;\n \n     if (migrate_use_xbzrle()) {\n         info->has_xbzrle_cache = true;\ndiff --git a/migration/ram.c b/migration/ram.c\nindex 9d45f4c7ca..2ee2699bb2 100644\n--- a/migration/ram.c\n+++ b/migration/ram.c\n@@ -68,6 +68,7 @@\n #define RAM_SAVE_FLAG_XBZRLE   0x40\n /* 0x80 is reserved in migration.h start with 0x100 next */\n #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100\n+#define RAM_SAVE_FLAG_MULTIFD_PAGE     0x200\n \n static inline bool is_zero_range(uint8_t *p, uint64_t size)\n {\n@@ -362,13 +363,23 @@ static void compress_threads_save_setup(void)\n /* Multiple fd's */\n \n struct MultiFDSendParams 
{\n+    /* not changed */\n     uint8_t id;\n     char *name;\n     QemuThread thread;\n     QIOChannel *c;\n     QemuSemaphore sem;\n     QemuMutex mutex;\n+    /* protected by param mutex */\n     bool quit;\n+    /* This is a temp field.  For now we use it to transmit\n+       the address of the page.  Later in the series, we\n+       change it to carry the real page.\n+    */\n+    uint8_t *address;\n+    /* protected by multifd mutex */\n+    /* has the thread finished the last submitted job? */\n+    bool done;\n };\n typedef struct MultiFDSendParams MultiFDSendParams;\n \n@@ -376,6 +387,8 @@ struct {\n     MultiFDSendParams *params;\n     /* number of created threads */\n     int count;\n+    QemuMutex mutex;\n+    QemuSemaphore sem;\n } *multifd_send_state;\n \n static void terminate_multifd_send_threads(Error *errp)\n@@ -450,6 +463,7 @@ static void *multifd_send_thread(void *opaque)\n         terminate_multifd_send_threads(local_err);\n         return NULL;\n     }\n+    qemu_sem_post(&multifd_send_state->sem);\n \n     while (true) {\n         qemu_mutex_lock(&p->mutex);\n@@ -457,6 +471,15 @@ static void *multifd_send_thread(void *opaque)\n             qemu_mutex_unlock(&p->mutex);\n             break;\n         }\n+        if (p->address) {\n+            p->address = 0;\n+            qemu_mutex_unlock(&p->mutex);\n+            qemu_mutex_lock(&multifd_send_state->mutex);\n+            p->done = true;\n+            qemu_mutex_unlock(&multifd_send_state->mutex);\n+            qemu_sem_post(&multifd_send_state->sem);\n+            continue;\n+        }\n         qemu_mutex_unlock(&p->mutex);\n         qemu_sem_wait(&p->sem);\n     }\n@@ -477,6 +500,8 @@ int multifd_save_setup(void)\n     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));\n     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);\n     multifd_send_state->count = 0;\n+    qemu_mutex_init(&multifd_send_state->mutex);\n+    qemu_sem_init(&multifd_send_state->sem, 
0);\n    for (i = 0; i < thread_count; i++) {\n        MultiFDSendParams *p = &multifd_send_state->params[i];\n \n@@ -484,6 +509,8 @@ int multifd_save_setup(void)\n         qemu_sem_init(&p->sem, 0);\n         p->quit = false;\n         p->id = i;\n+        p->done = true;\n+        p->address = 0;\n         p->c = socket_send_channel_create(&local_err);\n         if (!p->c) {\n             if (multifd_save_cleanup(&local_err) != 0) {\n@@ -500,6 +527,30 @@ int multifd_save_setup(void)\n     return 0;\n }\n \n+static uint16_t multifd_send_page(uint8_t *address, bool last_page)\n+{\n+    int i;\n+    MultiFDSendParams *p = NULL; /* make gcc happy */\n+\n+    qemu_sem_wait(&multifd_send_state->sem);\n+    qemu_mutex_lock(&multifd_send_state->mutex);\n+    for (i = 0; i < multifd_send_state->count; i++) {\n+        p = &multifd_send_state->params[i];\n+\n+        if (p->done) {\n+            p->done = false;\n+            break;\n+        }\n+    }\n+    qemu_mutex_unlock(&multifd_send_state->mutex);\n+    qemu_mutex_lock(&p->mutex);\n+    p->address = address;\n+    qemu_mutex_unlock(&p->mutex);\n+    qemu_sem_post(&p->sem);\n+\n+    return 0;\n+}\n+\n struct MultiFDRecvParams {\n     uint8_t id;\n     char *name;\n@@ -1082,6 +1133,32 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)\n     return pages;\n }\n \n+static int ram_multifd_page(RAMState *rs, PageSearchStatus *pss,\n+                            bool last_stage)\n+{\n+    int pages;\n+    uint8_t *p;\n+    RAMBlock *block = pss->block;\n+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;\n+\n+    p = block->host + offset;\n+\n+    pages = save_zero_page(rs, block, offset, p);\n+    if (pages == -1) {\n+        ram_counters.transferred +=\n+            save_page_header(rs, rs->f, block,\n+                             offset | RAM_SAVE_FLAG_MULTIFD_PAGE);\n+        qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);\n+        multifd_send_page(p, rs->migration_dirty_pages == 
1);\n+        ram_counters.transferred += TARGET_PAGE_SIZE;\n+        pages = 1;\n+        ram_counters.normal++;\n+        ram_counters.multifd++;\n+    }\n+\n+    return pages;\n+}\n+\n static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,\n                                 ram_addr_t offset)\n {\n@@ -1510,6 +1587,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,\n         if (migrate_use_compression() &&\n             (rs->ram_bulk_stage || !migrate_use_xbzrle())) {\n             res = ram_save_compressed_page(rs, pss, last_stage);\n+        } else if (migrate_use_multifd()) {\n+            res = ram_multifd_page(rs, pss, last_stage);\n         } else {\n             res = ram_save_page(rs, pss, last_stage);\n         }\n@@ -2802,6 +2881,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n     if (!migrate_use_compression()) {\n         invalid_flags |= RAM_SAVE_FLAG_COMPRESS_PAGE;\n     }\n+\n+    if (!migrate_use_multifd()) {\n+        invalid_flags |= RAM_SAVE_FLAG_MULTIFD_PAGE;\n+    }\n     /* This RCU critical section can be very long running.\n      * When RCU reclaims in the code start to become numerous,\n      * it will be necessary to reduce the granularity of this\n@@ -2826,13 +2909,17 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n             if (flags & invalid_flags & RAM_SAVE_FLAG_COMPRESS_PAGE) {\n                 error_report(\"Received an unexpected compressed page\");\n             }\n+            if (flags & invalid_flags  & RAM_SAVE_FLAG_MULTIFD_PAGE) {\n+                error_report(\"Received an unexpected multifd page\");\n+            }\n \n             ret = -EINVAL;\n             break;\n         }\n \n         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |\n-                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {\n+                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE |\n+                     
RAM_SAVE_FLAG_MULTIFD_PAGE)) {\n             RAMBlock *block = ram_block_from_stream(f, flags);\n \n             host = host_from_ram_block_offset(block, addr);\n@@ -2920,6 +3007,11 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n                 break;\n             }\n             break;\n+\n+        case RAM_SAVE_FLAG_MULTIFD_PAGE:\n+            qemu_get_buffer(f, host, TARGET_PAGE_SIZE);\n+            break;\n+\n         case RAM_SAVE_FLAG_EOS:\n             /* normal exit */\n             break;\ndiff --git a/qapi/migration.json b/qapi/migration.json\nindex 6a838369ef..f7efa703de 100644\n--- a/qapi/migration.json\n+++ b/qapi/migration.json\n@@ -39,6 +39,8 @@\n # @page-size: The number of bytes per page for the various page-based\n #        statistics (since 2.10)\n #\n+# @multifd: number of pages sent with multifd (since 2.10)\n+#\n # Since: 0.14.0\n ##\n { 'struct': 'MigrationStats',\n@@ -46,7 +48,8 @@\n            'duplicate': 'int', 'skipped': 'int', 'normal': 'int',\n            'normal-bytes': 'int', 'dirty-pages-rate' : 'int',\n            'mbps' : 'number', 'dirty-sync-count' : 'int',\n-           'postcopy-requests' : 'int', 'page-size' : 'int' } }\n+           'postcopy-requests' : 'int', 'page-size' : 'int',\n+           'multifd': 'int' } }\n \n ##\n # @XBZRLECacheStats:\n","prefixes":["v7","15/22"]}