Message ID: 1414773522-7756-4-git-send-email-imammedo@redhat.com
State:      New
On 31/10/2014 17:38, Igor Mammedov wrote:
> check amount of available KVM memory slots after all
> devices were initialized and exit with error if
> there isn't enough free memory slots for DIMMs.
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
>  hw/i386/pc.c | 32 ++++++++++++++++++++++++++++++++
>  1 file changed, 32 insertions(+)
>
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index f6dfd9b..41d91fb 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1125,6 +1125,36 @@ PcGuestInfo *pc_guest_info_init(ram_addr_t below_4g_mem_size,
>      return guest_info;
>  }
>  
> +static int pc_dimm_count(Object *obj, void *opaque)
> +{
> +    int *count = opaque;
> +
> +    if (object_dynamic_cast(obj, TYPE_PC_DIMM)) {
> +        (*count)++;
> +    }
> +
> +    object_child_foreach(obj, pc_dimm_count, opaque);
> +    return 0;
> +}
> +
> +static void pc_kvm_slot_check(Notifier *notifier, void *data)
> +{
> +    MachineState *ms = MACHINE(qdev_get_machine());
> +    int free_slots = kvm_free_slot_count(ms);
> +    int used_ram_slots = 0;
> +
> +    pc_dimm_count(OBJECT(ms), &used_ram_slots);
> +    if ((ms->ram_slots - used_ram_slots) > free_slots) {
> +        error_report("KVM doesn't support more than %d memory slots",
> +                     kvm_free_slot_count(ms));
> +        exit(EXIT_FAILURE);
> +    }
> +}
> +
> +static Notifier kvm_slot_check_on_machine_done = {
> +    .notify = pc_kvm_slot_check
> +};
> +
>  /* setup pci memory address space mapping into system address space */
>  void pc_pci_as_mapping_init(Object *owner, MemoryRegion *system_memory,
>                              MemoryRegion *pci_address_space)
> @@ -1269,6 +1299,8 @@ FWCfgState *pc_memory_init(MachineState *machine,
>                               "hotplug-memory", hotplug_mem_size);
>          memory_region_add_subregion(system_memory, pcms->hotplug_memory_base,
>                                      &pcms->hotplug_memory);
> +
> +        qemu_add_machine_init_done_notifier(&kvm_slot_check_on_machine_done);
>      }
>  
>      /* Initialize PC system firmware */

This will always be off by 4 or so due to system RAM and ROM slots. I
think patch 1 is enough.

Paolo
On Mon, 03 Nov 2014 12:51:50 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 31/10/2014 17:38, Igor Mammedov wrote:
> > check amount of available KVM memory slots after all
> > devices were initialized and exit with error if
> > there isn't enough free memory slots for DIMMs.
> >
> > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > [...]
>
> This will always be off by 4 or so due to system RAM and ROM slots. I
> think patch 1 is enough.

The goal of this patch is to prevent starting a guest with an
unsupported number of slots requested on the CLI with -m slots=too_much;
patch 1 won't prevent the user from doing that.

As for "always be off by 4 or so": unfortunately the number of consumed
slots is not constant and depends on CLI options. If the guest is
started with 1Gb then 5 slots are consumed, and with 4Gb 5 slots are
consumed. Devices might also consume slots, for example vga does, and I
wouldn't bet that some other device won't consume one as well. Hence
the machine-done notifier is suggested.

I wouldn't try to assume or make the number of consumed slots a
constant default, since it's quite fragile and easy to break. A runtime
check looks like the more stable approach in general.
On 03/11/2014 18:00, Igor Mammedov wrote:
>> This will always be off by 4 or so due to system RAM and ROM slots. I
>> think patch 1 is enough.
> The goal of this patch is to prevent starting guest with not supported
> amount of lots requested on CLI with -m slots=too_much, patch 1 won't
> prevent user from it.

Understood. But I think -m slots=too_much is not an important problem.
What is problematic is filling those slots, which patch 1 will help
with.

Paolo
On Mon, 03 Nov 2014 18:32:51 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 03/11/2014 18:00, Igor Mammedov wrote:
> >> This will always be off by 4 or so due to system RAM and ROM
> >> slots. I think patch 1 is enough.
> > The goal of this patch is to prevent starting guest with not
> > supported amount of lots requested on CLI with -m slots=too_much,
> > patch 1 won't prevent user from it.
>
> Understood. But I think -m slots=too_much is not an important
> problem. What is problematic is filling those slots, which patch 1
> will help with.

From the user's point of view it is still an error: he asked on the CLI
for a certain number of hotpluggable slots and couldn't use all of them
at runtime.

I think in addition to this patch we need a QMP-accessible parameter
for libvirt that reports the limit, so that upper layers could tell the
user how many slots it's possible to use. Something like this:

  /machine/available_memory_slots
On 03/11/2014 20:11, Igor Mammedov wrote:
>> Understood. But I think -m slots=too_much is not an important
>> problem. What is problematic is filling those slots, which patch 1
>> will help with.
> From user's pov it's still error, i.e. he asked on CLI for one amount
> of hotpluggable slots and couldn't use all of them at runtime.

But he cannot even with your patch, if he was off by just a couple
items. So I don't think the improvement is worth the extra complication
of the code and the small layering violation.

Paolo
On Tue, 04 Nov 2014 09:45:15 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 03/11/2014 20:11, Igor Mammedov wrote:
> >> Understood. But I think -m slots=too_much is not an important
> >> problem. What is problematic is filling those slots, which patch 1
> >> will help with.
> > From user's pov it's still error, i.e. he asked on CLI for one
> > amount of hotpluggable slots and couldn't use all of them at
> > runtime.
>
> But he cannot even with your patch, if he was off by just a couple
> items. So I don't think the improvement is worth the extra
> complication of the code and the small layering violation.

With this patch the user can launch QEMU with the exact maximum number
of possible slots, with the exception that he must not hotplug devices
other than pc-dimm that consume KVM memory slots (not sure such
hotpluggable devices exist). But sure, we can drop this patch until
someone complains.

How about adding a QMP interface to query the number of possible slots,
so libvirt could get a hint about the maximum usable value?
On 04/11/2014 16:39, Igor Mammedov wrote:
>> But he cannot even with your patch, if he was off by just a couple
>> items. So I don't think the improvement is worth the extra
>> complication of the code and the small layering violation.
> With this patch user can launch QEMU with exact max amount of possible
> slots with exception that he won't hotplug other than pc-dimm devices
> that consume KVM's memory slots (not sure if such exists (hotpluggable)).
> but, sure we can drop this patch until someone complains.

Note that option ROM BARs (which are usually mapped only by firmware,
but also when accessed via sysfs) and device assignment both consume
memory slots. Slots occupied by pflash can also come and go, depending
on whether the flash is visible as ROM or in programming mode.

Paolo
On Tue, 04 Nov 2014 16:42:23 +0100
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 04/11/2014 16:39, Igor Mammedov wrote:
> > [...]
> > but, sure we can drop this patch until someone complains.
>
> Note that option ROM BARs (which are usually mapped only by firmware,
> but also when accessed via sysfs) and device assignment both consume
> memory slots. Slots occupied by pflash can also come and go, depending
> on whether the flash is visible as ROM or in programming mode.

Then there is a bigger problem: once KVM slots are saturated, QEMU will
crash if one of the above actions happens from the guest side, and we
can't even error out on them.
On 04/11/2014 17:14, Igor Mammedov wrote:
>> Note that option ROM BARs (which are usually mapped only by firmware,
>> but also when accessed via sysfs) and device assignment both consume
>> memory slots. Slots occupied by pflash can also come and go, depending
>> on whether the flash is visible as ROM or in programming mode.
> Then there is a bigger problem, once KVM slots are saturated QEMU will
> crash if one of above actions from guest happens, and we can't even
> error out on them.

Yes. I wish I had a better idea than checking for free memslots
manually in all of them.

Paolo
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index f6dfd9b..41d91fb 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1125,6 +1125,36 @@ PcGuestInfo *pc_guest_info_init(ram_addr_t below_4g_mem_size,
     return guest_info;
 }
 
+static int pc_dimm_count(Object *obj, void *opaque)
+{
+    int *count = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_PC_DIMM)) {
+        (*count)++;
+    }
+
+    object_child_foreach(obj, pc_dimm_count, opaque);
+    return 0;
+}
+
+static void pc_kvm_slot_check(Notifier *notifier, void *data)
+{
+    MachineState *ms = MACHINE(qdev_get_machine());
+    int free_slots = kvm_free_slot_count(ms);
+    int used_ram_slots = 0;
+
+    pc_dimm_count(OBJECT(ms), &used_ram_slots);
+    if ((ms->ram_slots - used_ram_slots) > free_slots) {
+        error_report("KVM doesn't support more than %d memory slots",
+                     kvm_free_slot_count(ms));
+        exit(EXIT_FAILURE);
+    }
+}
+
+static Notifier kvm_slot_check_on_machine_done = {
+    .notify = pc_kvm_slot_check
+};
+
 /* setup pci memory address space mapping into system address space */
 void pc_pci_as_mapping_init(Object *owner, MemoryRegion *system_memory,
                             MemoryRegion *pci_address_space)
@@ -1269,6 +1299,8 @@ FWCfgState *pc_memory_init(MachineState *machine,
                              "hotplug-memory", hotplug_mem_size);
         memory_region_add_subregion(system_memory, pcms->hotplug_memory_base,
                                     &pcms->hotplug_memory);
+
+        qemu_add_machine_init_done_notifier(&kvm_slot_check_on_machine_done);
     }
 
     /* Initialize PC system firmware */
check amount of available KVM memory slots after all
devices were initialized and exit with error if
there isn't enough free memory slots for DIMMs.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/i386/pc.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)