From patchwork Tue Sep 23 08:11:25 2014
From: zhanghailiang
To: qemu-devel@nongnu.org
Cc: imammedo@redhat.com, zhanghailiang, luonengjun@huawei.com,
    peter.huangpeng@huawei.com, hutao@cn.fujitsu.com
Date: Tue, 23 Sep 2014 16:11:25 +0800
Message-ID: <1411459885-11916-1-git-send-email-zhang.zhanghailiang@huawei.com>
Subject: [Qemu-devel] [PATCH v3] pc-dimm/numa: Fix stat of memory size in node
 when hotplug memory

When hotplugging memory, if NUMA nodes are configured, the hotplugged
memory size should be added to the memory size of the corresponding
node. For now, this mainly affects the result of the HMP command
"info numa".

Signed-off-by: zhanghailiang
---
v3:
- cold-plugged memory should not be excluded when stating the memory
  size (Igor Mammedov)
v2:
- Don't modify numa_info.node_mem directly when handling hotplugged
  memory; fix the "info numa" output instead (suggested by Igor
  Mammedov)
---
 hw/mem/pc-dimm.c         | 30 ++++++++++++++++++++++++++++++
 include/hw/mem/pc-dimm.h |  2 ++
 include/sysemu/sysemu.h  |  1 +
 monitor.c                |  6 +++++-
 numa.c                   | 15 +++++++++++++++
 5 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c
index 5bfc5b7..8e80d74 100644
--- a/hw/mem/pc-dimm.c
+++ b/hw/mem/pc-dimm.c
@@ -195,6 +195,36 @@ out:
     return ret;
 }
 
+static int pc_dimm_stat_mem_size(Object *obj, void *opaque)
+{
+    uint64_t *node_mem = opaque;
+    int ret;
+
+    if (object_dynamic_cast(obj, TYPE_PC_DIMM)) {
+        DeviceState *dev = DEVICE(obj);
+
+        if (dev->realized) {
+            PCDIMMDevice *dimm = PC_DIMM(obj);
+            int64_t size;
+
+            size = object_property_get_int(OBJECT(dimm), PC_DIMM_SIZE_PROP,
+                                           NULL);
+            if (size < 0) {
+                return -1;
+            }
+            node_mem[dimm->node] += size;
+        }
+    }
+
+    ret = object_child_foreach(obj, pc_dimm_stat_mem_size, opaque);
+    return ret;
+}
+
+void pc_dimm_stat_node_mem(uint64_t *node_mem)
+{
+    object_child_foreach(qdev_get_machine(), pc_dimm_stat_mem_size, node_mem);
+}
+
 static Property pc_dimm_properties[] = {
     DEFINE_PROP_UINT64(PC_DIMM_ADDR_PROP, PCDIMMDevice, addr, 0),
     DEFINE_PROP_UINT32(PC_DIMM_NODE_PROP, PCDIMMDevice, node, 0),
diff --git a/include/hw/mem/pc-dimm.h b/include/hw/mem/pc-dimm.h
index 761eeef..0c9a8eb 100644
--- a/include/hw/mem/pc-dimm.h
+++ b/include/hw/mem/pc-dimm.h
@@ -78,4 +78,6 @@ uint64_t pc_dimm_get_free_addr(uint64_t address_space_start,
 int pc_dimm_get_free_slot(const int *hint, int max_slots, Error **errp);
 
 int qmp_pc_dimm_device_list(Object *obj, void *opaque);
+
+void pc_dimm_stat_node_mem(uint64_t *node_mem);
 #endif
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index d8539fd..cfc1592 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -160,6 +160,7 @@ typedef struct node_info {
 extern NodeInfo numa_info[MAX_NODES];
 void set_numa_nodes(void);
 void set_numa_modes(void);
+int query_numa_node_mem(uint64_t *node_mem);
 extern QemuOptsList qemu_numa_opts;
 int numa_init_func(QemuOpts *opts, void *opaque);
 
diff --git a/monitor.c b/monitor.c
index 7467521..c8c812f 100644
--- a/monitor.c
+++ b/monitor.c
@@ -1948,7 +1948,10 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
 {
     int i;
     CPUState *cpu;
+    uint64_t *node_mem;
 
+    node_mem = g_new0(uint64_t, nb_numa_nodes);
+    query_numa_node_mem(node_mem);
     monitor_printf(mon, "%d nodes\n", nb_numa_nodes);
     for (i = 0; i < nb_numa_nodes; i++) {
         monitor_printf(mon, "node %d cpus:", i);
@@ -1959,8 +1962,9 @@ static void do_info_numa(Monitor *mon, const QDict *qdict)
         }
         monitor_printf(mon, "\n");
         monitor_printf(mon, "node %d size: %" PRId64 " MB\n", i,
-                       numa_info[i].node_mem >> 20);
+                       node_mem[i] >> 20);
     }
+    g_free(node_mem);
 }
 
 #ifdef CONFIG_PROFILER
diff --git a/numa.c b/numa.c
index 3b98135..4e27dd8 100644
--- a/numa.c
+++ b/numa.c
@@ -35,6 +35,7 @@
 #include "hw/boards.h"
 #include "sysemu/hostmem.h"
 #include "qmp-commands.h"
+#include "hw/mem/pc-dimm.h"
 
 QemuOptsList qemu_numa_opts = {
     .name = "numa",
@@ -315,6 +316,20 @@ void memory_region_allocate_system_memory(MemoryRegion *mr, Object *owner,
     }
 }
 
+int query_numa_node_mem(uint64_t *node_mem)
+{
+    int i;
+
+    if (nb_numa_nodes <= 0) {
+        return 0;
+    }
+    pc_dimm_stat_node_mem(node_mem);
+    for (i = 0; i < nb_numa_nodes; i++) {
+        node_mem[i] += numa_info[i].node_mem;
+    }
+    return 0;
+}
+
 static int query_memdev(Object *obj, void *opaque)
 {
     MemdevList **list = opaque;