
[1/2] KVM: Add new -cpu best

Message ID: 1326066759-8210-1-git-send-email-agraf@suse.de
State: New

Commit Message

Alexander Graf Jan. 8, 2012, 11:52 p.m. UTC
During discussions on whether to make -cpu host the default in SLE, I found
myself disagreeing with the idea, because it potentially opens a big can of
worms for bugs. But if I am already so opposed to it for SLE, how can it
possibly be reasonable to default to -cpu host in upstream QEMU? And what
would a sane default look like?

So I had the idea of looping through all available CPU definitions. We can
tell fairly reliably whether our host is able to execute any of them by
checking the respective feature flags and verifying that the host provides
every feature the CPU definition requires. With that, we can create a -cpu
type that falls back to the "best known CPU definition" our host can
fulfill. On my Phenom II system, for example, that would be -cpu phenom.

With this approach we can be confident that CPU types work on any random
user setup, because we can always verify that all the -cpu types we ship
actually work; the default then merely uses a selection mechanism that
chooses one of these.
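
For illustration, a guest would then be started with something along the
lines of "qemu-system-x86_64 -enable-kvm -cpu best ..." (remaining options
elided; as implemented below, "best" is only recognized when KVM is enabled).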

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 target-i386/cpuid.c |   81 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 81 insertions(+), 0 deletions(-)

Comments

Peter Maydell Jan. 9, 2012, 12:02 a.m. UTC | #1
On 8 January 2012 23:52, Alexander Graf <agraf@suse.de> wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the idea, because it potentially opens a big can of
> worms for bugs. But if I am already so opposed to it for SLE, how can it
> possibly be reasonable to default to -cpu host in upstream QEMU? And what
> would a sane default look like?
>
> So I had the idea of looping through all available CPU definitions. We can
> tell fairly reliably whether our host is able to execute any of them by
> checking the respective feature flags and verifying that the host provides
> every feature the CPU definition requires. With that, we can create a -cpu
> type that falls back to the "best known CPU definition" our host can
> fulfill. On my Phenom II system, for example, that would be -cpu phenom.

...shouldn't this be supported on at least all hosts with KVM support,
not just x86?

Also I don't see any documentation updates in this patchset :-)

-- PMM
Alexander Graf Jan. 9, 2012, 12:06 a.m. UTC | #2
On 09.01.2012, at 01:02, Peter Maydell wrote:

> On 8 January 2012 23:52, Alexander Graf <agraf@suse.de> wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the idea, because it potentially opens a big can of
>> worms for bugs. But if I am already so opposed to it for SLE, how can it
>> possibly be reasonable to default to -cpu host in upstream QEMU? And what
>> would a sane default look like?
>> 
>> So I had the idea of looping through all available CPU definitions. We can
>> tell fairly reliably whether our host is able to execute any of them by
>> checking the respective feature flags and verifying that the host provides
>> every feature the CPU definition requires. With that, we can create a -cpu
>> type that falls back to the "best known CPU definition" our host can
>> fulfill. On my Phenom II system, for example, that would be -cpu phenom.
> 
> ...shouldn't this be supported on at least all hosts with KVM support,
> not just x86?

I don't think it makes sense on any other platform. For PPC, -cpu host is good enough, since it basically does the same thing as -cpu best: there we only have a single 32-bit number as the identifier for -cpu host.

> Also I don't see any documentation updates in this patchset :-)

Do we have documentation for -cpu host?


Alex
Anthony Liguori Jan. 16, 2012, 7:33 p.m. UTC | #3
On 01/08/2012 05:52 PM, Alexander Graf wrote:
> During discussions on whether to make -cpu host the default in SLE, I found
> myself disagreeing with the idea, because it potentially opens a big can of
> worms for bugs. But if I am already so opposed to it for SLE, how can it
> possibly be reasonable to default to -cpu host in upstream QEMU? And what
> would a sane default look like?


What are the arguments against -cpu host?

Regards,

Anthony Liguori
Alexander Graf Jan. 16, 2012, 7:42 p.m. UTC | #4
On 16.01.2012, at 20:33, Anthony Liguori wrote:

> On 01/08/2012 05:52 PM, Alexander Graf wrote:
>> During discussions on whether to make -cpu host the default in SLE, I found
>> myself disagreeing with the idea, because it potentially opens a big can of
>> worms for bugs. But if I am already so opposed to it for SLE, how can it
>> possibly be reasonable to default to -cpu host in upstream QEMU? And what
>> would a sane default look like?
> 
> 
> What are the arguments against -cpu host?

It's hard to test. New CPUs have new features, and we have a hard time catching up. With -cpu best we only select from a pool of known-good CPU types. If you want to check that everything works, you go to a box that has the maximum set of available features, go through all the -cpu options that users could run into, and you're good. With -cpu host you can't really test that (unless you own every possible CPU there is).

We expose CPUID information that doesn't exist that way in the real world.

A small example from today's code.

There are a bunch of CPUID leaves. On Nehalem, one of them lists the C-states the CPU can go into. With -cpu host we sync feature bits, CPU name, CPU family and some other bits of information, but not the C-state information. So we end up with a CPU inside the guest that looks and feels like a Nehalem CPU, but doesn't expose any C-states.

Linux then boots, detects that it's running on Nehalem, selects the powersave mechanism for that model and fills an internal callback table from the C-state information with a loop that ends up doing nothing, since we expose zero C-states. When the guest later calls the idle callback, it dereferences an entry in that table, which is still a NULL pointer: oops.
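
Roughly, the failure pattern is something like the following sketch (purely
illustrative; the names and the table layout are made up and this is not the
actual Linux code):

    #define MAX_CSTATES 8

    typedef void (*idle_fn)(void);

    /* Table of idle callbacks, one per C-state; starts out all NULL. */
    static idle_fn idle_table[MAX_CSTATES];

    static void enter_cstate(void)
    {
        /* enter some C-state */
    }

    /* Number of C-states advertised via the CPUID leaf; with -cpu host on
     * the setup described above this ends up being 0. */
    static unsigned int advertised_cstates(void)
    {
        return 0;
    }

    static void fill_idle_table(void)
    {
        unsigned int i, n = advertised_cstates();

        /* With n == 0 this loop never runs, so the table stays all NULL. */
        for (i = 0; i < n && i < MAX_CSTATES; i++) {
            idle_table[i] = enter_cstate;
        }
    }

    static void cpu_idle(void)
    {
        /* The Nehalem code path assumes the table was populated and calls
         * the callback unconditionally: NULL pointer dereference, oops. */
        idle_table[0]();
    }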

That is just one example from current Linux. Another one is my development AMD box: when it first arrived, that CPU wasn't on the market yet, so guests would simply refuse to boot at all, because they considered its CPUID unknown.

Overall, I used to be a big fan of -cpu host, but it's a maintainability nightmare. It can be great for testing stuff, so we should definitely keep it around. But after thinking about it again, I don't think it should be the default. The default should be something safe.


Alex

Patch

diff --git a/target-i386/cpuid.c b/target-i386/cpuid.c
index 91a104b..b2e3420 100644
--- a/target-i386/cpuid.c
+++ b/target-i386/cpuid.c
@@ -550,6 +550,85 @@  static int cpu_x86_fill_host(x86_def_t *x86_cpu_def)
     return 0;
 }
 
+/* Are all guest feature bits present on the host? */
+static bool cpu_x86_feature_subset(uint32_t host, uint32_t guest)
+{
+    int i;
+
+    for (i = 0; i < 32; i++) {
+        uint32_t mask = 1 << i;
+        if ((guest & mask) && !(host & mask)) {
+            return false;
+        }
+    }
+
+    return true;
+}
+
+/* Does the host support all the features of the CPU definition? */
+static bool cpu_x86_fits_host(x86_def_t *x86_cpu_def)
+{
+    uint32_t eax = 0, ebx = 0, ecx = 0, edx = 0;
+
+    host_cpuid(0x0, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->level > eax) {
+        return false;
+    }
+    if ((x86_cpu_def->vendor1 != ebx) ||
+        (x86_cpu_def->vendor2 != edx) ||
+        (x86_cpu_def->vendor3 != ecx)) {
+        return false;
+    }
+
+    host_cpuid(0x1, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(ecx, x86_cpu_def->ext_features) ||
+        !cpu_x86_feature_subset(edx, x86_cpu_def->features)) {
+        return false;
+    }
+
+    host_cpuid(0x80000000, 0, &eax, &ebx, &ecx, &edx);
+    if (x86_cpu_def->xlevel > eax) {
+        return false;
+    }
+
+    host_cpuid(0x80000001, 0, &eax, &ebx, &ecx, &edx);
+    if (!cpu_x86_feature_subset(edx, x86_cpu_def->ext2_features) ||
+        !cpu_x86_feature_subset(ecx, x86_cpu_def->ext3_features)) {
+        return false;
+    }
+
+    return true;
+}
+
+/* Returns true when new_def is higher versioned than old_def */
+static int cpu_x86_fits_higher(x86_def_t *new_def, x86_def_t *old_def)
+{
+    int old_fammod = (old_def->family << 24) | (old_def->model << 8)
+                   | (old_def->stepping);
+    int new_fammod = (new_def->family << 24) | (new_def->model << 8)
+                   | (new_def->stepping);
+
+    return new_fammod > old_fammod;
+}
+
+static void cpu_x86_fill_best(x86_def_t *x86_cpu_def)
+{
+    x86_def_t *def;
+
+    x86_cpu_def->family = 0;
+    x86_cpu_def->model = 0;
+    for (def = x86_defs; def; def = def->next) {
+        if (cpu_x86_fits_host(def) && cpu_x86_fits_higher(def, x86_cpu_def)) {
+            memcpy(x86_cpu_def, def, sizeof(*def));
+        }
+    }
+
+    if (!x86_cpu_def->family && !x86_cpu_def->model) {
+        fprintf(stderr, "No fitting CPU model found!\n");
+        exit(1);
+    }
+}
+
 static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
 {
     int i;
@@ -617,6 +696,8 @@  static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
             break;
     if (kvm_enabled() && name && strcmp(name, "host") == 0) {
         cpu_x86_fill_host(x86_cpu_def);
+    } else if (kvm_enabled() && name && strcmp(name, "best") == 0) {
+        cpu_x86_fill_best(x86_cpu_def);
     } else if (!def) {
         goto error;
     } else {