{"id":813363,"url":"http://patchwork.ozlabs.org/api/patches/813363/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20170913105953.13760-14-quintela@redhat.com/","project":{"id":14,"url":"http://patchwork.ozlabs.org/api/projects/14/?format=json","name":"QEMU Development","link_name":"qemu-devel","list_id":"qemu-devel.nongnu.org","list_email":"qemu-devel@nongnu.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20170913105953.13760-14-quintela@redhat.com>","list_archive_url":null,"date":"2017-09-13T10:59:46","name":"[v8,13/20] migration: Create ram_multifd_page","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"2a10ad65d1e2ec29ac5755ae9c9c6125b74f318c","submitter":{"id":2643,"url":"http://patchwork.ozlabs.org/api/people/2643/?format=json","name":"Juan Quintela","email":"quintela@redhat.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/qemu-devel/patch/20170913105953.13760-14-quintela@redhat.com/mbox/","series":[{"id":2885,"url":"http://patchwork.ozlabs.org/api/series/2885/?format=json","web_url":"http://patchwork.ozlabs.org/project/qemu-devel/list/?series=2885","date":"2017-09-13T10:59:33","name":"Multifd","version":8,"mbox":"http://patchwork.ozlabs.org/series/2885/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/813363/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/813363/checks/","tags":{},"related":[],"headers":{"From":"Juan Quintela <quintela@redhat.com>","To":"qemu-devel@nongnu.org","Date":"Wed, 13 Sep 2017 12:59:46 +0200","Message-Id":"<20170913105953.13760-14-quintela@redhat.com>","In-Reply-To":"<20170913105953.13760-1-quintela@redhat.com>","References":"<20170913105953.13760-1-quintela@redhat.com>","Subject":"[Qemu-devel] [PATCH v8 13/20] migration: Create ram_multifd_page","List-Id":"<qemu-devel.nongnu.org>","Cc":"lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com"},"content":"The function still doesn't use multifd, but we have simplified\nram_save_page: the xbzrle and RDMA code is gone.  
We have added a new\ncounter and a new flag for this type of page.\n\nSigned-off-by: Juan Quintela <quintela@redhat.com>\n\n--\nAdd last_page parameter\nAdd comments for done and address\n---\n hmp.c                 |  2 ++\n migration/migration.c |  1 +\n migration/ram.c       | 94 ++++++++++++++++++++++++++++++++++++++++++++++++++-\n qapi/migration.json   |  5 ++-\n 4 files changed, 100 insertions(+), 2 deletions(-)","diff":"diff --git a/hmp.c b/hmp.c\nindex 203fe1d50e..2e9f9abf1b 100644\n--- a/hmp.c\n+++ b/hmp.c\n@@ -233,6 +233,8 @@ void hmp_info_migrate(Monitor *mon, const QDict *qdict)\n             monitor_printf(mon, \"postcopy request count: %\" PRIu64 \"\\n\",\n                            info->ram->postcopy_requests);\n         }\n+        monitor_printf(mon, \"multifd: %\" PRIu64 \" pages\\n\",\n+                       info->ram->multifd);\n     }\n \n     if (info->has_disk) {\ndiff --git a/migration/migration.c b/migration/migration.c\nindex 679be8e8d4..085177ca26 100644\n--- a/migration/migration.c\n+++ b/migration/migration.c\n@@ -555,6 +555,7 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)\n     info->ram->dirty_sync_count = ram_counters.dirty_sync_count;\n     info->ram->postcopy_requests = ram_counters.postcopy_requests;\n     info->ram->page_size = qemu_target_page_size();\n+    info->ram->multifd = ram_counters.multifd;\n \n     if (migrate_use_xbzrle()) {\n         info->has_xbzrle_cache = true;\ndiff --git a/migration/ram.c b/migration/ram.c\nindex 8577eeb032..1381bfaf8a 100644\n--- a/migration/ram.c\n+++ b/migration/ram.c\n@@ -68,6 +68,7 @@\n #define RAM_SAVE_FLAG_XBZRLE   0x40\n /* 0x80 is reserved in migration.h start with 0x100 next */\n #define RAM_SAVE_FLAG_COMPRESS_PAGE    0x100\n+#define RAM_SAVE_FLAG_MULTIFD_PAGE     0x200\n \n static inline bool is_zero_range(uint8_t *p, uint64_t size)\n {\n@@ -362,13 +363,23 @@ static void compress_threads_save_setup(void)\n /* Multiple fd's */\n \n struct MultiFDSendParams 
{\n+    /* not changed */\n     uint8_t id;\n     char *name;\n     QemuThread thread;\n     QIOChannel *c;\n     QemuSemaphore sem;\n     QemuMutex mutex;\n+    /* protected by param mutex */\n     bool quit;\n+    /* This is a temp field.  We are using it now to transmit\n+       the address of the page.  Later in the series, we will\n+       change it to the real page.\n+    */\n+    uint8_t *address;\n+    /* protected by multifd mutex */\n+    /* has the thread finished the last submitted job? */\n+    bool done;\n };\n typedef struct MultiFDSendParams MultiFDSendParams;\n \n@@ -376,6 +387,8 @@ struct {\n     MultiFDSendParams *params;\n     /* number of created threads */\n     int count;\n+    QemuMutex mutex;\n+    QemuSemaphore sem;\n } *multifd_send_state;\n \n static void terminate_multifd_send_threads(Error *errp)\n@@ -450,6 +463,7 @@ static void *multifd_send_thread(void *opaque)\n         terminate_multifd_send_threads(local_err);\n         return NULL;\n     }\n+    qemu_sem_post(&multifd_send_state->sem);\n \n     while (true) {\n         qemu_mutex_lock(&p->mutex);\n@@ -457,6 +471,15 @@ static void *multifd_send_thread(void *opaque)\n             qemu_mutex_unlock(&p->mutex);\n             break;\n         }\n+        if (p->address) {\n+            p->address = 0;\n+            qemu_mutex_unlock(&p->mutex);\n+            qemu_mutex_lock(&multifd_send_state->mutex);\n+            p->done = true;\n+            qemu_mutex_unlock(&multifd_send_state->mutex);\n+            qemu_sem_post(&multifd_send_state->sem);\n+            continue;\n+        }\n         qemu_mutex_unlock(&p->mutex);\n         qemu_sem_wait(&p->sem);\n     }\n@@ -497,6 +520,8 @@ int multifd_save_setup(void)\n     multifd_send_state = g_malloc0(sizeof(*multifd_send_state));\n     multifd_send_state->params = g_new0(MultiFDSendParams, thread_count);\n     multifd_send_state->count = 0;\n+    qemu_mutex_init(&multifd_send_state->mutex);\n+    qemu_sem_init(&multifd_send_state->sem, 
0);\n     for (i = 0; i < thread_count; i++) {\n         MultiFDSendParams *p = &multifd_send_state->params[i];\n \n@@ -504,12 +529,38 @@ int multifd_save_setup(void)\n         qemu_sem_init(&p->sem, 0);\n         p->quit = false;\n         p->id = i;\n+        p->done = true;\n+        p->address = 0;\n         p->name = g_strdup_printf(\"multifdsend_%d\", i);\n         socket_send_channel_create(multifd_new_channel_async, p);\n     }\n     return 0;\n }\n \n+static uint16_t multifd_send_page(uint8_t *address, bool last_page)\n+{\n+    int i;\n+    MultiFDSendParams *p = NULL; /* make gcc happy */\n+\n+    qemu_sem_wait(&multifd_send_state->sem);\n+    qemu_mutex_lock(&multifd_send_state->mutex);\n+    for (i = 0; i < multifd_send_state->count; i++) {\n+        p = &multifd_send_state->params[i];\n+\n+        if (p->done) {\n+            p->done = false;\n+            break;\n+        }\n+    }\n+    qemu_mutex_unlock(&multifd_send_state->mutex);\n+    qemu_mutex_lock(&p->mutex);\n+    p->address = address;\n+    qemu_mutex_unlock(&p->mutex);\n+    qemu_sem_post(&p->sem);\n+\n+    return 0;\n+}\n+\n struct MultiFDRecvParams {\n     uint8_t id;\n     char *name;\n@@ -1092,6 +1143,32 @@ static int ram_save_page(RAMState *rs, PageSearchStatus *pss, bool last_stage)\n     return pages;\n }\n \n+static int ram_multifd_page(RAMState *rs, PageSearchStatus *pss,\n+                            bool last_stage)\n+{\n+    int pages;\n+    uint8_t *p;\n+    RAMBlock *block = pss->block;\n+    ram_addr_t offset = pss->page << TARGET_PAGE_BITS;\n+\n+    p = block->host + offset;\n+\n+    pages = save_zero_page(rs, block, offset, p);\n+    if (pages == -1) {\n+        ram_counters.transferred +=\n+            save_page_header(rs, rs->f, block,\n+                             offset | RAM_SAVE_FLAG_MULTIFD_PAGE);\n+        qemu_put_buffer(rs->f, p, TARGET_PAGE_SIZE);\n+        multifd_send_page(p, rs->migration_dirty_pages == 1);\n+        ram_counters.transferred += 
TARGET_PAGE_SIZE;\n+        pages = 1;\n+        ram_counters.normal++;\n+        ram_counters.multifd++;\n+    }\n+\n+    return pages;\n+}\n+\n static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,\n                                 ram_addr_t offset)\n {\n@@ -1520,6 +1597,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,\n         if (migrate_use_compression() &&\n             (rs->ram_bulk_stage || !migrate_use_xbzrle())) {\n             res = ram_save_compressed_page(rs, pss, last_stage);\n+        } else if (migrate_use_multifd()) {\n+            res = ram_multifd_page(rs, pss, last_stage);\n         } else {\n             res = ram_save_page(rs, pss, last_stage);\n         }\n@@ -2812,6 +2891,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n     if (!migrate_use_compression()) {\n         invalid_flags |= RAM_SAVE_FLAG_COMPRESS_PAGE;\n     }\n+\n+    if (!migrate_use_multifd()) {\n+        invalid_flags |= RAM_SAVE_FLAG_MULTIFD_PAGE;\n+    }\n     /* This RCU critical section can be very long running.\n      * When RCU reclaims in the code start to become numerous,\n      * it will be necessary to reduce the granularity of this\n@@ -2836,13 +2919,17 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n             if (flags & invalid_flags & RAM_SAVE_FLAG_COMPRESS_PAGE) {\n                 error_report(\"Received an unexpected compressed page\");\n             }\n+            if (flags & invalid_flags  & RAM_SAVE_FLAG_MULTIFD_PAGE) {\n+                error_report(\"Received an unexpected multifd page\");\n+            }\n \n             ret = -EINVAL;\n             break;\n         }\n \n         if (flags & (RAM_SAVE_FLAG_ZERO | RAM_SAVE_FLAG_PAGE |\n-                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {\n+                     RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE |\n+                     RAM_SAVE_FLAG_MULTIFD_PAGE)) {\n             RAMBlock *block = 
ram_block_from_stream(f, flags);\n \n             host = host_from_ram_block_offset(block, addr);\n@@ -2930,6 +3017,11 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)\n                 break;\n             }\n             break;\n+\n+        case RAM_SAVE_FLAG_MULTIFD_PAGE:\n+            qemu_get_buffer(f, host, TARGET_PAGE_SIZE);\n+            break;\n+\n         case RAM_SAVE_FLAG_EOS:\n             /* normal exit */\n             break;\ndiff --git a/qapi/migration.json b/qapi/migration.json\nindex f8b365e3f5..9cb59373fa 100644\n--- a/qapi/migration.json\n+++ b/qapi/migration.json\n@@ -39,6 +39,8 @@\n # @page-size: The number of bytes per page for the various page-based\n #        statistics (since 2.10)\n #\n+# @multifd: number of pages sent with multifd (since 2.10)\n+#\n # Since: 0.14.0\n ##\n { 'struct': 'MigrationStats',\n@@ -46,7 +48,8 @@\n            'duplicate': 'int', 'skipped': 'int', 'normal': 'int',\n            'normal-bytes': 'int', 'dirty-pages-rate' : 'int',\n            'mbps' : 'number', 'dirty-sync-count' : 'int',\n-           'postcopy-requests' : 'int', 'page-size' : 'int' } }\n+           'postcopy-requests' : 'int', 'page-size' : 'int',\n+           'multifd': 'int' } }\n \n ##\n # @XBZRLECacheStats:\n","prefixes":["v8","13/20"]}