From patchwork Tue May 25 08:36:42 2010
From: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
To: kvm@vger.kernel.org, qemu-devel@nongnu.org
Cc: aliguori@us.ibm.com, mtosatti@redhat.com, avi@redhat.com,
    Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>, ohmura.kei@lab.ntt.co.jp
Date: Tue, 25 May 2010 17:36:42 +0900
Message-Id: <1274776624-16435-3-git-send-email-tamura.yoshiaki@lab.ntt.co.jp>
In-Reply-To: <1274776624-16435-1-git-send-email-tamura.yoshiaki@lab.ntt.co.jp>
References: <1274776624-16435-1-git-send-email-tamura.yoshiaki@lab.ntt.co.jp>
Subject: [Qemu-devel] [RFC PATCH 01/23] Modify DIRTY_FLAG value and introduce DIRTY_IDX to use as indexes of bit-based phys_ram_dirty.

Replaces the byte-based phys_ram_dirty bitmap with four bit-based
phys_ram_dirty bitmaps (MASTER, VGA, CODE, MIGRATION).  On allocation, all
bits in each bitmap are set.  ffs() is used to convert a DIRTY_FLAG into the
corresponding DIRTY_IDX.  The existing wrapper functions for the byte-based
phys_ram_dirty bitmap are adapted to the bit-based bitmaps.  MASTER works as
a buffer: on get_dirty() or get_dirty_flags(), cpu_physical_memory_sync_master()
is called to fold it into VGA and MIGRATION.  Direct phys_ram_dirty accesses
are replaced with these wrapper functions so that the bitmaps are never
touched directly.
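As an aside for reviewers (illustration only, not part of the patch): the
standalone sketch below shows how a guest physical address is split into a
word index and a bit offset for the bit-based bitmaps, and how ffs() maps a
DIRTY_FLAG to its DIRTY_IDX.  TARGET_PAGE_BITS and HOST_LONG_BITS are
hard-coded stand-ins here; in QEMU they come from the build configuration.

/* Illustration only -- not part of the patch.  Shows how a guest physical
 * address maps to a (word index, bit offset) pair in a bit-based dirty
 * bitmap, and how a DIRTY_FLAG is turned into a DIRTY_IDX with ffs().
 * TARGET_PAGE_BITS and HOST_LONG_BITS are assumed example values. */
#include <stdio.h>
#include <strings.h>            /* ffs() */

#define TARGET_PAGE_BITS 12     /* 4 KiB pages (assumed) */
#define HOST_LONG_BITS   64     /* 64-bit host (assumed) */

#define VGA_DIRTY_IDX  1
#define VGA_DIRTY_FLAG (1 << VGA_DIRTY_IDX)

int main(void)
{
    unsigned long addr = 0x12345678;               /* example guest address */
    unsigned long page = addr >> TARGET_PAGE_BITS; /* page number */
    unsigned long index = page / HOST_LONG_BITS;   /* which long in the bitmap */
    int offset = page & (HOST_LONG_BITS - 1);      /* which bit in that long */
    unsigned long mask = 1UL << offset;

    printf("page %lu -> word %lu, bit %d (mask %#lx)\n",
           page, index, offset, mask);
    printf("VGA_DIRTY_FLAG %#x -> idx %d\n",
           VGA_DIRTY_FLAG, ffs(VGA_DIRTY_FLAG) - 1);
    return 0;
}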
Signed-off-by: Yoshiaki Tamura <tamura.yoshiaki@lab.ntt.co.jp>
Signed-off-by: OHMURA Kei <ohmura.kei@lab.ntt.co.jp>
---
 cpu-all.h |  130 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 exec.c    |   60 ++++++++++++++--------------
 2 files changed, 152 insertions(+), 38 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index 51effc0..3f8762d 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -37,6 +37,9 @@

 #include "softfloat.h"

+/* to use ffs in flag_to_idx() */
+#include <strings.h>
+
 #if defined(HOST_WORDS_BIGENDIAN) != defined(TARGET_WORDS_BIGENDIAN)
 #define BSWAP_NEEDED
 #endif
@@ -846,7 +849,6 @@ int cpu_str_to_log_mask(const char *str);
 /* memory API */

 extern int phys_ram_fd;
-extern uint8_t *phys_ram_dirty;
 extern ram_addr_t ram_size;
 extern ram_addr_t last_ram_offset;
 extern uint8_t *bios_mem;
@@ -869,28 +871,140 @@ extern uint8_t *bios_mem;
 /* Set if TLB entry is an IO callback.  */
 #define TLB_MMIO        (1 << 5)

+/* Use DIRTY_IDX as indexes of bit-based phys_ram_dirty. */
+#define MASTER_DIRTY_IDX    0
+#define VGA_DIRTY_IDX       1
+#define CODE_DIRTY_IDX      2
+#define MIGRATION_DIRTY_IDX 3
+#define NUM_DIRTY_IDX       4
+
+#define MASTER_DIRTY_FLAG    (1 << MASTER_DIRTY_IDX)
+#define VGA_DIRTY_FLAG       (1 << VGA_DIRTY_IDX)
+#define CODE_DIRTY_FLAG      (1 << CODE_DIRTY_IDX)
+#define MIGRATION_DIRTY_FLAG (1 << MIGRATION_DIRTY_IDX)
+
+extern unsigned long *phys_ram_dirty[NUM_DIRTY_IDX];
+
+static inline int dirty_flag_to_idx(int flag)
+{
+    return ffs(flag) - 1;
+}
+
+static inline int dirty_idx_to_flag(int idx)
+{
+    return 1 << idx;
+}
+
 int cpu_memory_rw_debug(CPUState *env, target_ulong addr,
                         uint8_t *buf, int len, int is_write);

-#define VGA_DIRTY_FLAG       0x01
-#define CODE_DIRTY_FLAG      0x02
-#define MIGRATION_DIRTY_FLAG 0x08
-
 /* read dirty bit (return 0 or 1) */
 static inline int cpu_physical_memory_is_dirty(ram_addr_t addr)
 {
-    return phys_ram_dirty[addr >> TARGET_PAGE_BITS] == 0xff;
+    unsigned long mask;
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+    int offset = (addr >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+
+    mask = 1UL << offset;
+    return (phys_ram_dirty[MASTER_DIRTY_IDX][index] & mask) == mask;
+}
+
+static inline void cpu_physical_memory_sync_master(ram_addr_t index)
+{
+    if (phys_ram_dirty[MASTER_DIRTY_IDX][index]) {
+        phys_ram_dirty[VGA_DIRTY_IDX][index]
+            |= phys_ram_dirty[MASTER_DIRTY_IDX][index];
+        phys_ram_dirty[MIGRATION_DIRTY_IDX][index]
+            |= phys_ram_dirty[MASTER_DIRTY_IDX][index];
+        phys_ram_dirty[MASTER_DIRTY_IDX][index] = 0UL;
+    }
+}
+
+static inline int cpu_physical_memory_get_dirty_flags(ram_addr_t addr)
+{
+    unsigned long mask;
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+    int offset = (addr >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+    int ret = 0, i;
+
+    mask = 1UL << offset;
+    cpu_physical_memory_sync_master(index);
+
+    for (i = VGA_DIRTY_IDX; i <= MIGRATION_DIRTY_IDX; i++) {
+        if (phys_ram_dirty[i][index] & mask) {
+            ret |= dirty_idx_to_flag(i);
+        }
+    }
+
+    return ret;
+}
+
+static inline int cpu_physical_memory_get_dirty_idx(ram_addr_t addr,
+                                                    int dirty_idx)
+{
+    unsigned long mask;
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+    int offset = (addr >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+
+    mask = 1UL << offset;
+    cpu_physical_memory_sync_master(index);
+    return (phys_ram_dirty[dirty_idx][index] & mask) == mask;
 }

 static inline int cpu_physical_memory_get_dirty(ram_addr_t addr,
                                                 int dirty_flags)
 {
-    return phys_ram_dirty[addr >> TARGET_PAGE_BITS] & dirty_flags;
+    return cpu_physical_memory_get_dirty_idx(addr,
+                                             dirty_flag_to_idx(dirty_flags));
 }

 static inline void cpu_physical_memory_set_dirty(ram_addr_t addr)
 {
-    phys_ram_dirty[addr >> TARGET_PAGE_BITS] = 0xff;
+    unsigned long mask;
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+    int offset = (addr >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+
+    mask = 1UL << offset;
+    phys_ram_dirty[MASTER_DIRTY_IDX][index] |= mask;
+}
+
+static inline void cpu_physical_memory_set_dirty_range(ram_addr_t addr,
+                                                       unsigned long mask)
+{
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+
+    phys_ram_dirty[MASTER_DIRTY_IDX][index] |= mask;
+}
+
+static inline void cpu_physical_memory_set_dirty_flags(ram_addr_t addr,
+                                                       int dirty_flags)
+{
+    unsigned long mask;
+    ram_addr_t index = (addr >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+    int offset = (addr >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+
+    mask = 1UL << offset;
+    phys_ram_dirty[MASTER_DIRTY_IDX][index] |= mask;
+
+    if (dirty_flags & CODE_DIRTY_FLAG) {
+        phys_ram_dirty[CODE_DIRTY_IDX][index] |= mask;
+    }
+}
+
+static inline void cpu_physical_memory_mask_dirty_range(ram_addr_t start,
+                                                        unsigned long length,
+                                                        int dirty_flags)
+{
+    ram_addr_t addr = start, index;
+    unsigned long mask;
+    int offset, i;
+
+    for (i = 0; i < length; i += TARGET_PAGE_SIZE) {
+        index = ((addr + i) >> TARGET_PAGE_BITS) / HOST_LONG_BITS;
+        offset = ((addr + i) >> TARGET_PAGE_BITS) & (HOST_LONG_BITS - 1);
+        mask = ~(1UL << offset);
+        phys_ram_dirty[dirty_flag_to_idx(dirty_flags)][index] &= mask;
+    }
 }

 void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
diff --git a/exec.c b/exec.c
index b647512..bf8d703 100644
--- a/exec.c
+++ b/exec.c
@@ -119,7 +119,7 @@ uint8_t *code_gen_ptr;

 #if !defined(CONFIG_USER_ONLY)
 int phys_ram_fd;
-uint8_t *phys_ram_dirty;
+unsigned long *phys_ram_dirty[NUM_DIRTY_IDX];
 uint8_t *bios_mem;
 static int in_migration;

@@ -1947,7 +1947,7 @@ static void tlb_protect_code(ram_addr_t ram_addr)
 static void tlb_unprotect_code_phys(CPUState *env, ram_addr_t ram_addr,
                                     target_ulong vaddr)
 {
-    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] |= CODE_DIRTY_FLAG;
+    cpu_physical_memory_set_dirty_flags(ram_addr, CODE_DIRTY_FLAG);
 }

 static inline void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry,
@@ -1968,8 +1968,7 @@ void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
 {
     CPUState *env;
     unsigned long length, start1;
-    int i, mask, len;
-    uint8_t *p;
+    int i;

     start &= TARGET_PAGE_MASK;
     end = TARGET_PAGE_ALIGN(end);
@@ -1977,11 +1976,7 @@ void cpu_physical_memory_reset_dirty(ram_addr_t start, ram_addr_t end,
     length = end - start;
     if (length == 0)
         return;
-    len = length >> TARGET_PAGE_BITS;
-    mask = ~dirty_flags;
-    p = phys_ram_dirty + (start >> TARGET_PAGE_BITS);
-    for(i = 0; i < len; i++)
-        p[i] &= mask;
+    cpu_physical_memory_mask_dirty_range(start, length, dirty_flags);

     /* we modify the TLB cache so that the dirty bit will be set again
        when accessing the range */
@@ -2643,6 +2638,7 @@ extern const char *mem_path;
 ram_addr_t qemu_ram_alloc(ram_addr_t size)
 {
     RAMBlock *new_block;
+    int i;

     size = TARGET_PAGE_ALIGN(size);
     new_block = qemu_malloc(sizeof(*new_block));
@@ -2667,10 +2663,14 @@ ram_addr_t qemu_ram_alloc(ram_addr_t size)
     new_block->next = ram_blocks;
     ram_blocks = new_block;

-    phys_ram_dirty = qemu_realloc(phys_ram_dirty,
-                                  (last_ram_offset + size) >> TARGET_PAGE_BITS);
-    memset(phys_ram_dirty + (last_ram_offset >> TARGET_PAGE_BITS),
-           0xff, size >> TARGET_PAGE_BITS);
+    for (i = MASTER_DIRTY_IDX; i < NUM_DIRTY_IDX; i++) {
+        phys_ram_dirty[i]
+            = qemu_realloc(phys_ram_dirty[i],
+                           BITMAP_SIZE(last_ram_offset + size));
+        memset((uint8_t *)phys_ram_dirty[i] + BITMAP_SIZE(last_ram_offset),
+               0xff, BITMAP_SIZE(last_ram_offset + size)
+               - BITMAP_SIZE(last_ram_offset));
+    }

     last_ram_offset += size;

@@ -2833,16 +2833,16 @@ static void notdirty_mem_writeb(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
-    dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+    dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
         tb_invalidate_phys_page_fast(ram_addr, 1);
-        dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+        dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
     stb_p(qemu_get_ram_ptr(ram_addr), val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
-    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] = dirty_flags;
+    cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
@@ -2853,16 +2853,16 @@ static void notdirty_mem_writew(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
-    dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+    dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
         tb_invalidate_phys_page_fast(ram_addr, 2);
-        dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+        dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
     stw_p(qemu_get_ram_ptr(ram_addr), val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
-    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] = dirty_flags;
+    cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
@@ -2873,16 +2873,16 @@ static void notdirty_mem_writel(void *opaque, target_phys_addr_t ram_addr,
                                 uint32_t val)
 {
     int dirty_flags;
-    dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+    dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
     if (!(dirty_flags & CODE_DIRTY_FLAG)) {
 #if !defined(CONFIG_USER_ONLY)
         tb_invalidate_phys_page_fast(ram_addr, 4);
-        dirty_flags = phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS];
+        dirty_flags = cpu_physical_memory_get_dirty_flags(ram_addr);
 #endif
     }
     stl_p(qemu_get_ram_ptr(ram_addr), val);
     dirty_flags |= (0xff & ~CODE_DIRTY_FLAG);
-    phys_ram_dirty[ram_addr >> TARGET_PAGE_BITS] = dirty_flags;
+    cpu_physical_memory_set_dirty_flags(ram_addr, dirty_flags);
     /* we remove the notdirty callback only if the code has been
        flushed */
     if (dirty_flags == 0xff)
@@ -3334,8 +3334,8 @@ void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
                     /* invalidate code */
                     tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
                     /* set dirty bit */
-                    phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
-                        (0xff & ~CODE_DIRTY_FLAG);
+                    cpu_physical_memory_set_dirty_flags(
+                        addr1, (0xff & ~CODE_DIRTY_FLAG));
                 }
                 /* qemu doesn't execute guest code directly, but kvm does
                    therefore flush instruction caches */
@@ -3548,8 +3548,8 @@ void cpu_physical_memory_unmap(void *buffer, target_phys_addr_t len,
                 /* invalidate code */
                 tb_invalidate_phys_page_range(addr1, addr1 + l, 0);
                 /* set dirty bit */
-                phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
-                    (0xff & ~CODE_DIRTY_FLAG);
+                cpu_physical_memory_set_dirty_flags(
+                    addr1, (0xff & ~CODE_DIRTY_FLAG));
             }
             addr1 += l;
             access_len -= l;
@@ -3685,8 +3685,8 @@ void stl_phys_notdirty(target_phys_addr_t addr, uint32_t val)
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
             /* set dirty bit */
-            phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
-                (0xff & ~CODE_DIRTY_FLAG);
+            cpu_physical_memory_set_dirty_flags(
+                addr1, (0xff & ~CODE_DIRTY_FLAG));
         }
     }
 }
@@ -3754,8 +3754,8 @@ void stl_phys(target_phys_addr_t addr, uint32_t val)
             /* invalidate code */
             tb_invalidate_phys_page_range(addr1, addr1 + 4, 0);
             /* set dirty bit */
-            phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
-                (0xff & ~CODE_DIRTY_FLAG);
+            cpu_physical_memory_set_dirty_flags(addr1,
+                (0xff & ~CODE_DIRTY_FLAG));
         }
     }
 }
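Reviewer-side sketch of the MASTER-buffer semantics (illustration only, not
part of the patch): the toy program below uses one-word bitmaps to show that
a set_dirty()-style update only marks MASTER, and that a later
get_dirty_flags()-style query folds MASTER into VGA and MIGRATION before
reporting, mirroring what cpu_physical_memory_sync_master() does above.
The names mirror the patch but the program is standalone.

/* Standalone sketch -- not part of the patch.  One-word toy bitmaps,
 * so each bit stands for one page. */
#include <stdio.h>

enum { MASTER, VGA, CODE, MIGRATION, NUM };

static unsigned long bitmap[NUM];

static void set_dirty(int page)
{
    bitmap[MASTER] |= 1UL << page;              /* buffer the update */
}

static int get_dirty_flags(int page)
{
    unsigned long mask = 1UL << page;
    int flags = 0, i;

    if (bitmap[MASTER]) {                       /* sync_master() step */
        bitmap[VGA]       |= bitmap[MASTER];
        bitmap[MIGRATION] |= bitmap[MASTER];
        bitmap[MASTER]     = 0UL;
    }
    for (i = VGA; i <= MIGRATION; i++) {
        if (bitmap[i] & mask) {
            flags |= 1 << i;                    /* idx -> flag */
        }
    }
    return flags;
}

int main(void)
{
    set_dirty(3);
    printf("flags for page 3: %#x\n", get_dirty_flags(3));  /* VGA|MIGRATION = 0xa */
    printf("flags for page 4: %#x\n", get_dirty_flags(4));  /* 0 */
    return 0;
}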