@@ -409,6 +409,9 @@ void numa_complete_configuration(MachineState *ms)
if (i == nb_numa_nodes) {
assert(mc->numa_auto_assign_ram);
mc->numa_auto_assign_ram(mc, numa_info, nb_numa_nodes, ram_size);
+ warn_report("Default splitting of RAM between nodes is deprecated."
+ " Use '-numa node,memdev' to explicitly define RAM"
+ " allocation per node");
}
numa_total = 0;
@@ -74,6 +74,13 @@ parameter @option{mem} to achieve the same fake NUMA effect or a properly
configured @var{memory-backend-file} backend to actually benefit from NUMA
configuration.
+@subsection -numa node (without memory specified) (since 4.0)
+
+Splitting RAM by default between NUMA nodes has the same issues as the
+@option{mem} parameter described above, with the difference that the role of
+the user is played by QEMU itself, using either a generic splitting rule or a
+board specific one. Use @option{memdev} with a @var{memory-backend-ram}
+backend to define the mapping explicitly instead.
+
@section QEMU Machine Protocol (QMP) commands
@subsection block-dirty-bitmap-add "autoload" parameter (since 2.12.0)
Implicit RAM distribution between nodes has exactly the same issues as
"numa: deprecate 'mem' parameter of '-numa node' option", only with QEMU
being the user that is 'adding' the 'mem' parameter. Deprecate it to get it
out of the way, so that we can later switch to consistent guest RAM
allocation using memory backends, and possibly memory devices on top of
that.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 numa.c               | 3 +++
 qemu-deprecated.texi | 7 +++++++
 2 files changed, 10 insertions(+)
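
For illustration (not part of the patch), the deprecated implicit split and
its explicit @option{memdev} replacement would look roughly like this; the
machine binary, node count, and sizes below are placeholders, not values
taken from the patch:

```shell
# Deprecated: QEMU splits the 4G of RAM between the two nodes implicitly
qemu-system-x86_64 -m 4G \
    -numa node,nodeid=0 \
    -numa node,nodeid=1

# Preferred: each node's RAM is defined explicitly via a memory backend
qemu-system-x86_64 -m 4G \
    -object memory-backend-ram,id=m0,size=2G \
    -object memory-backend-ram,id=m1,size=2G \
    -numa node,nodeid=0,memdev=m0 \
    -numa node,nodeid=1,memdev=m1
```

With the explicit form, the per-node backend sizes must add up to the total
given with @option{-m}.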