From patchwork Mon Jan 4 19:48:57 2010
X-Patchwork-Submitter: "Michael S. Tsirkin"
X-Patchwork-Id: 42213
Date: Mon, 4 Jan 2010 21:48:57 +0200
From: "Michael S. Tsirkin"
To: Anthony Liguori, qemu-devel@nongnu.org, avi@redhat.com, gleb@redhat.com
Message-ID: <20100104194856.GA21299@redhat.com>
Subject: [Qemu-devel] [PATCHv2 0/3] qemu: memory notifiers

This patch against qemu upstream adds a notifier hook which lets backends
get notified of memory changes, and converts kvm to use it.  It survived
light testing.

Avi, could you please take a look at this patch?

Thanks!
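For illustration, a backend hooks in by filling a CPUPhysMemoryClient and
registering it.  A minimal sketch follows; the dummy_* names are made up
for this example only, the real user is the kvm conversion later in the
series:

/* Illustrative only: a do-nothing client using the new interface. */
#include "cpu-common.h"

static void dummy_set_memory(CPUPhysMemoryClient *client,
                             target_phys_addr_t start_addr,
                             ram_addr_t size,
                             ram_addr_t phys_offset)
{
    /* React to a mapping change for [start_addr, start_addr + size). */
}

static int dummy_sync_dirty_bitmap(CPUPhysMemoryClient *client,
                                   target_phys_addr_t start_addr,
                                   target_phys_addr_t end_addr)
{
    /* Merge the backend's dirty info for the range; return < 0 on error. */
    return 0;
}

static int dummy_migration_log(CPUPhysMemoryClient *client, int enable)
{
    /* Start or stop dirty logging in the backend; return < 0 on error. */
    return 0;
}

static CPUPhysMemoryClient dummy_client = {
    .set_memory        = dummy_set_memory,
    .sync_dirty_bitmap = dummy_sync_dirty_bitmap,
    .migration_log     = dummy_migration_log,
};

static void dummy_client_init(void)
{
    cpu_register_phys_memory_client(&dummy_client);
}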
---
 cpu-common.h |   19 +++++++++++++++++
 exec.c       |   64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 80 insertions(+), 3 deletions(-)

diff --git a/cpu-common.h b/cpu-common.h
index 6302372..0ec9b72 100644
--- a/cpu-common.h
+++ b/cpu-common.h
@@ -8,6 +8,7 @@
 #endif
 
 #include "bswap.h"
+#include "qemu-queue.h"
 
 /* address in the RAM (different from a physical address) */
 typedef unsigned long ram_addr_t;
@@ -61,6 +62,24 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
 void *cpu_register_map_client(void *opaque, void (*callback)(void *opaque));
 void cpu_unregister_map_client(void *cookie);
 
+struct CPUPhysMemoryClient;
+typedef struct CPUPhysMemoryClient CPUPhysMemoryClient;
+struct CPUPhysMemoryClient {
+    void (*set_memory)(struct CPUPhysMemoryClient *client,
+                       target_phys_addr_t start_addr,
+                       ram_addr_t size,
+                       ram_addr_t phys_offset);
+    int (*sync_dirty_bitmap)(struct CPUPhysMemoryClient *client,
+                             target_phys_addr_t start_addr,
+                             target_phys_addr_t end_addr);
+    int (*migration_log)(struct CPUPhysMemoryClient *client,
+                         int enable);
+    QLIST_ENTRY(CPUPhysMemoryClient) list;
+};
+
+void cpu_register_phys_memory_client(CPUPhysMemoryClient *);
+void cpu_unregister_phys_memory_client(CPUPhysMemoryClient *);
+
 uint32_t ldub_phys(target_phys_addr_t addr);
 uint32_t lduw_phys(target_phys_addr_t addr);
 uint32_t ldl_phys(target_phys_addr_t addr);
diff --git a/exec.c b/exec.c
index 7b7fb5b..daebde5 100644
--- a/exec.c
+++ b/exec.c
@@ -1880,11 +1880,16 @@ void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
 
 int cpu_physical_memory_set_dirty_tracking(int enable)
 {
+    int ret = 0;
     in_migration = enable;
     if (kvm_enabled()) {
-        return kvm_set_migration_log(enable);
+        ret = kvm_set_migration_log(enable);
     }
-    return 0;
+    if (ret < 0) {
+        return ret;
+    }
+    ret = cpu_notify_migration_log(!!enable);
+    return ret;
 }
 
 int cpu_physical_memory_get_dirty_tracking(void)
@@ -1897,8 +1902,13 @@ int cpu_physical_sync_dirty_bitmap(target_phys_addr_t start_addr,
 {
     int ret = 0;
 
-    if (kvm_enabled())
+    if (kvm_enabled()) {
         ret = kvm_physical_sync_dirty_bitmap(start_addr, end_addr);
+    }
+    if (ret < 0) {
+        return ret;
+    }
+    ret = cpu_notify_sync_dirty_bitmap(start_addr, end_addr);
     return ret;
 }
 
@@ -2313,6 +2323,8 @@ void cpu_register_physical_memory_offset(target_phys_addr_t start_addr,
     if (kvm_enabled())
         kvm_set_phys_mem(start_addr, size, phys_offset);
 
+    cpu_notify_set_memory(start_addr, size, phys_offset);
+
     if (phys_offset == IO_MEM_UNASSIGNED) {
         region_offset = start_addr;
     }
@@ -3214,6 +3226,52 @@ static void cpu_notify_map_clients(void)
     }
 }
 
+static QLIST_HEAD(memory_client_list, CPUPhysMemoryClient) memory_client_list
+    = QLIST_HEAD_INITIALIZER(memory_client_list);
+
+void cpu_register_phys_memory_client(CPUPhysMemoryClient *client)
+{
+    QLIST_INSERT_HEAD(&memory_client_list, client, list);
+}
+
+void cpu_unregister_phys_memory_client(CPUPhysMemoryClient *client)
+{
+    QLIST_REMOVE(client, list);
+}
+
+static void cpu_notify_set_memory(target_phys_addr_t start_addr,
+                                  ram_addr_t size,
+                                  ram_addr_t phys_offset)
+{
+    CPUPhysMemoryClient *client;
+    QLIST_FOREACH(client, &memory_client_list, list) {
+        client->set_memory(client, start_addr, size, phys_offset);
+    }
+}
+
+static int cpu_notify_sync_dirty_bitmap(target_phys_addr_t start,
+                                        target_phys_addr_t end)
+{
+    CPUPhysMemoryClient *client;
+    QLIST_FOREACH(client, &memory_client_list, list) {
+        int r = client->sync_dirty_bitmap(client, start, end);
+        if (r < 0)
+            return r;
+    }
+    return 0;
+}
+
+static int cpu_notify_migration_log(int enable)
+{
+    CPUPhysMemoryClient *client;
+    QLIST_FOREACH(client, &memory_client_list, list) {
+        int r = client->migration_log(client, enable);
+        if (r < 0)
+            return r;
+    }
+    return 0;
+}
+
 /* Map a physical memory region into a host virtual address.
  * May map a subset of the requested range, given by and returned in *plen.
  * May return NULL if resources needed to perform the mapping are exhausted.