From patchwork Fri Jul 4 17:41:47 2014
X-Patchwork-Submitter: "Dr. David Alan Gilbert"
X-Patchwork-Id: 367172
From: "Dr. David Alan Gilbert (git)"
To: qemu-devel@nongnu.org
Cc: aarcange@redhat.com, yamahata@private.email.ne.jp,
    lilei@linux.vnet.ibm.com, quintela@redhat.com
Date: Fri, 4 Jul 2014 18:41:47 +0100
Message-Id: <1404495717-4239-37-git-send-email-dgilbert@redhat.com>
In-Reply-To: <1404495717-4239-1-git-send-email-dgilbert@redhat.com>
References: <1404495717-4239-1-git-send-email-dgilbert@redhat.com>
Subject: [Qemu-devel] [PATCH 36/46] Page request: Consume pages off the post-copy queue

From: "Dr. David Alan Gilbert"

When transmitting RAM pages, consume pages that have been queued by
MIG_RPCOMM_REQPAGE commands and send them ahead of normal page scanning.

Note:
  a) After a queued page, the linear walk carries on from just after the
     unqueued page; there is a reasonable chance that the destination was
     about to ask for other nearby pages anyway.
  b) We have to be careful about any assumptions the page-walking code
     makes; in particular, it takes some shortcuts on its first linear
     walk that break as soon as we send a queued page.

Signed-off-by: Dr. David Alan Gilbert
---
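As a reading aid, here is a minimal, self-contained model of the ordering
this patch introduces (illustrative only, not QEMU code: the toy queue,
bitmap, and the names unqueue_page()/find_next_dirty()/next_page_to_send()
are hypothetical stand-ins for ram_save_unqueue_page() and
migration_bitmap_find_and_reset_dirty()):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 16

static bool dirty[NPAGES];                 /* stand-in dirty bitmap */
static int queue[NPAGES], qhead, qtail;    /* postcopy request queue */
static int scan_pos;                       /* linear-walk cursor */

static int unqueue_page(void)              /* models ram_save_unqueue_page() */
{
    return (qhead == qtail) ? -1 : queue[qhead++];
}

static int find_next_dirty(void)           /* models the linear bitmap walk */
{
    for (int i = 0; i < NPAGES; i++) {
        int p = (scan_pos + i) % NPAGES;
        if (dirty[p]) {
            return p;
        }
    }
    return -1;
}

static int next_page_to_send(void)
{
    int p;
    /* Queued requests take priority; drop any that are no longer dirty,
     * just as the patch does when migration_bitmap_clear_dirty() fails. */
    while ((p = unqueue_page()) >= 0 && !dirty[p]) {
        ;
    }
    if (p < 0) {
        p = find_next_dirty();             /* queue empty: normal scan */
    }
    if (p >= 0) {
        dirty[p] = false;                  /* never send a page twice */
        scan_pos = (p + 1) % NPAGES;       /* note (a): walk resumes here */
    }
    return p;
}

int main(void)
{
    memset(dirty, true, sizeof dirty);     /* everything starts dirty */
    queue[qtail++] = 9;                    /* destination wants page 9 now */

    int p;
    while ((p = next_page_to_send()) >= 0) {
        printf("send page %d\n", p);       /* 9 first, then 10..15, 0..8 */
    }
    return 0;
}

(The real code must additionally kill ram_bulk_stage at this point, since
the bulk-stage shortcut assumes every page it walks is still dirty.)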
 arch_init.c | 130 ++++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 106 insertions(+), 24 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index cc4acea..c006d21 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -458,6 +458,19 @@ static inline bool migration_bitmap_set_dirty(ram_addr_t addr)
     return ret;
 }
 
+static inline bool migration_bitmap_clear_dirty(ram_addr_t addr)
+{
+    bool ret;
+    int nr = addr >> TARGET_PAGE_BITS;
+
+    ret = test_and_clear_bit(nr, migration_bitmap);
+
+    if (ret) {
+        migration_dirty_pages--;
+    }
+    return ret;
+}
+
 static void migration_bitmap_sync_range(ram_addr_t start, ram_addr_t length)
 {
     ram_addr_t addr;
@@ -658,6 +671,39 @@ static int ram_save_page(QEMUFile *f, RAMBlock* block, ram_addr_t offset,
 }
 
 /*
+ * Unqueue a page from the queue fed by postcopy page requests
+ *
+ * Returns:   The RAMBlock* to transmit from (or NULL if the queue is empty)
+ *        ms: MigrationState in
+ *    offset: the byte offset within the RAMBlock for the start of the page
+ * bitoffset: global offset in the dirty/sent bitmaps
+ */
+static RAMBlock *ram_save_unqueue_page(MigrationState *ms, ram_addr_t *offset,
+                                       unsigned long *bitoffset)
+{
+    RAMBlock *result = NULL;
+    qemu_mutex_lock(&ms->src_page_req_mutex);
+    if (!QSIMPLEQ_EMPTY(&ms->src_page_requests)) {
+        struct MigrationSrcPageRequest *entry =
+                                QSIMPLEQ_FIRST(&ms->src_page_requests);
+        result = entry->rb;
+        *offset = entry->offset;
+        *bitoffset = (entry->offset + entry->rb->offset) >> TARGET_PAGE_BITS;
+
+        if (entry->len > TARGET_PAGE_SIZE) {
+            entry->len -= TARGET_PAGE_SIZE;
+            entry->offset += TARGET_PAGE_SIZE;
+        } else {
+            QSIMPLEQ_REMOVE_HEAD(&ms->src_page_requests, next_req);
+            g_free(entry);
+        }
+    }
+    qemu_mutex_unlock(&ms->src_page_req_mutex);
+
+    return result;
+}
+
+/*
  * Queue the pages for transmission, e.g. a request from postcopy destination
  *    ms: MigrationStatus in which the queue is held
  *    rbname: The RAMBlock the request is for - may be NULL (to mean reuse last)
@@ -718,44 +764,80 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
 
 static int ram_find_and_save_block(QEMUFile *f, bool last_stage)
 {
+    MigrationState *ms = migrate_get_current();
     RAMBlock *block = last_seen_block;
+    RAMBlock *tmpblock;
     ram_addr_t offset = last_offset;
+    ram_addr_t tmpoffset;
     bool complete_round = false;
     int bytes_sent = 0;
-    MemoryRegion *mr;
     unsigned long bitoffset;
 
     if (!block)
         block = QTAILQ_FIRST(&ram_list.blocks);
 
-    while (true) {
-        mr = block->mr;
-        offset = migration_bitmap_find_and_reset_dirty(mr, offset, &bitoffset);
-        if (complete_round && block == last_seen_block &&
-            offset >= last_offset) {
-            break;
-        }
-        if (offset >= block->length) {
-            offset = 0;
-            block = QTAILQ_NEXT(block, next);
-            if (!block) {
-                block = QTAILQ_FIRST(&ram_list.blocks);
-                complete_round = true;
-                ram_bulk_stage = false;
+    while (true) { /* Until we send a block or run out of stuff to send */
+        tmpblock = ram_save_unqueue_page(ms, &tmpoffset, &bitoffset);
+        if (tmpblock) {
+            /* We've got a block from the postcopy queue */
+            DPRINTF("%s: Got postcopy item '%s' offset=%zx bitoffset=%zx",
+                    __func__, tmpblock->idstr, tmpoffset, bitoffset);
+            /* We're sending this page, and since it's postcopy nothing else
+             * will dirty it, and we must make sure it doesn't get sent again.
+             */
+            if (!migration_bitmap_clear_dirty(bitoffset << TARGET_PAGE_BITS)) {
+                DPRINTF("%s: Not dirty for postcopy %s/%zx bito=%zx (sent=%d)",
+                        __func__, tmpblock->idstr, tmpoffset, bitoffset,
+                        test_bit(bitoffset, ms->sentmap));
+                continue;
             }
+            /*
+             * As soon as we start servicing pages out of order, then we have
+             * to kill the bulk stage, since the bulk stage assumes
+             * in (migration_bitmap_find_and_reset_dirty) that every page is
+             * dirty, that's no longer true.
+             */
+            ram_bulk_stage = false;
+            /*
+             * We mustn't change block/offset unless it's to a valid one
+             * otherwise we can go down some of the exit cases in the normal
+             * path.
+             */
+            block = tmpblock;
+            offset = tmpoffset;
         } else {
-            bytes_sent = ram_save_page(f, block, offset, last_stage);
-
-            /* if page is unmodified, continue to the next */
-            if (bytes_sent > 0) {
-                MigrationState *s = migrate_get_current();
-                if (s->sentmap) {
-                    set_bit(bitoffset, s->sentmap);
+            MemoryRegion *mr;
+            /* priority queue empty, so just search for something dirty */
+            mr = block->mr;
+            offset = migration_bitmap_find_and_reset_dirty(mr, offset,
+                                                           &bitoffset);
+            if (complete_round && block == last_seen_block &&
+                offset >= last_offset) {
+                break;
+            }
+            if (offset >= block->length) {
+                offset = 0;
+                block = QTAILQ_NEXT(block, next);
+                if (!block) {
+                    block = QTAILQ_FIRST(&ram_list.blocks);
+                    complete_round = true;
+                    ram_bulk_stage = false;
                 }
+                continue; /* pick an offset in the new block */
+            }
+        }
 
-                last_sent_block = block;
-                break;
+        /* We have a page to send, so send it */
+        bytes_sent = ram_save_page(f, block, offset, last_stage);
+
+        /* if page is unmodified, continue to the next */
+        if (bytes_sent > 0) {
+            if (ms->sentmap) {
+                set_bit(bitoffset, ms->sentmap);
             }
+
+            last_sent_block = block;
+            break;
         }
     }
     last_seen_block = block;
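
For review convenience, the way ram_save_unqueue_page() above consumes a
multi-page request one page per call can be modelled standalone
(illustrative only, not QEMU code: struct req and consume_one_page() are
made-up stand-ins for MigrationSrcPageRequest and the queue-head handling,
and TARGET_PAGE_SIZE is assumed to be the usual 4 KiB):

#include <stdio.h>

#define TARGET_PAGE_SIZE 4096              /* assumed 4 KiB target page */

struct req {                               /* models MigrationSrcPageRequest */
    unsigned long offset;                  /* start of the remaining range */
    unsigned long len;                     /* bytes still to send */
};

/* Returns 1 and yields one page offset while the request has pages left;
 * mirrors the entry->len > TARGET_PAGE_SIZE branch in the patch. */
static int consume_one_page(struct req *r, unsigned long *page_offset)
{
    if (r->len == 0) {
        return 0;
    }
    *page_offset = r->offset;
    if (r->len > TARGET_PAGE_SIZE) {
        r->len -= TARGET_PAGE_SIZE;        /* request stays at the queue head */
        r->offset += TARGET_PAGE_SIZE;     /* now pointing at the next page */
    } else {
        r->len = 0;                        /* last page: entry would be freed */
    }
    return 1;
}

int main(void)
{
    struct req r = { 0x10000, 3 * TARGET_PAGE_SIZE };
    unsigned long off;

    while (consume_one_page(&r, &off)) {
        printf("send page at %#lx\n", off); /* 0x10000, 0x11000, 0x12000 */
    }
    return 0;
}

Each call hands back exactly one page and leaves a longer request at the
head of the queue with its offset advanced, which is why the real function
only removes and frees the entry on the final page.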