Message ID: jpg1u8vlv4h.fsf@redhat.com
State: New
Forwarding message by Eduardo. I had misspelled nongnu.org in my first attempt! The spaces/tabs comment by Eduardo has been fixed.

Eduardo Habkost <ehabkost@redhat.com> writes:

> By default, CPUID_EXT_MONITOR is enabled for some cpu models
> such as Opteron_G3. Disable it if kvm_enabled() is true, since
> monitor/mwait aren't supported by KVM yet.
>
> Signed-off-by: Bandan Das <bsd@redhat.com>

Interesting, I hadn't noticed that TCG supports CPUID_EXT_MONITOR.

I believe that's yet another reason to make the KVM CPU models separate
classes from the TCG CPU models: "-machine ...,accel=kvm -cpu Foo" and
"-machine ...,accel=tcg -cpu Foo" _already_ have different meanings today
and result in different CPUs. Making them separate classes would just make
the fact that they _are_ different CPU models explicit.

> ---
> There is no user-visible side effect to this behavior; the aim
> is to clean up the default flags that are not supported (yet).

There is one user-visible effect: "-cpu ...,enforce" will stop failing
because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
the point: there's no point in having CPU model definitions that would
never work as-is with either TCG or KVM. This patch is changing the
meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
was already happening in practice.

> target-i386/cpu.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/target-i386/cpu.c b/target-i386/cpu.c
> index 1a501d9..c83ba1c 100644
> --- a/target-i386/cpu.c
> +++ b/target-i386/cpu.c
> @@ -1749,6 +1749,7 @@ static void cpu_x86_register(X86CPU *cpu, const char *name, Error **errp)
>
>      if (kvm_enabled()) {
>          def->features[FEAT_KVM] |= kvm_default_features;
> +        def->features[FEAT_1_ECX] &= ~CPUID_EXT_MONITOR;

You are mixing spaces and tabs here.

>      }
>      def->features[FEAT_1_ECX] |= CPUID_EXT_HYPERVISOR;
>
> --
> 1.8.1.4
On 25/05/2013 03:21, Bandan Das wrote:
> There is one user-visible effect: "-cpu ...,enforce" will stop failing
> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
> the point: there's no point in having CPU model definitions that would
> never work as-is with either TCG or KVM. This patch is changing the
> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
> was already happening in practice.

But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
worth it?

Paolo
On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
> On 25/05/2013 03:21, Bandan Das wrote:
> > There is one user-visible effect: "-cpu ...,enforce" will stop failing
> > because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
> > the point: there's no point in having CPU model definitions that would
> > never work as-is with either TCG or KVM. This patch is changing the
> > meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
> > was already happening in practice.
>
> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
> worth it?

No models match a "real" CPU this way, because neither TCG nor KVM
supports all features of a real CPU. I ask the opposite
question: is it worth maintaining an "accurate" CPU model definition
that would never work without feature-bit tweaking on the command line?
On 27/05/2013 14:09, Eduardo Habkost wrote:
> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>> On 25/05/2013 03:21, Bandan Das wrote:
>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>>> the point: there's no point in having CPU model definitions that would
>>> never work as-is with either TCG or KVM. This patch is changing the
>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>>> was already happening in practice.
>>
>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
>> worth it?
>
> No models match a "real" CPU this way, because neither TCG nor KVM
> supports all features of a real CPU. I ask the opposite
> question: is it worth maintaining an "accurate" CPU model definition
> that would never work without feature-bit tweaking on the command line?

It would work with TCG. Changing TCG to KVM should not change hardware
if you use "-cpu ...,enforce", so it is right that it fails when
starting with KVM.

Paolo
On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
> On 27/05/2013 14:09, Eduardo Habkost wrote:
> > On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
> >> On 25/05/2013 03:21, Bandan Das wrote:
> >>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
> >>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
> >>> the point: there's no point in having CPU model definitions that would
> >>> never work as-is with either TCG or KVM. This patch is changing the
> >>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
> >>> was already happening in practice.
> >>
> >> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
> >> worth it?
> >
> > No models match a "real" CPU this way, because neither TCG nor KVM
> > supports all features of a real CPU. I ask the opposite
> > question: is it worth maintaining an "accurate" CPU model definition
> > that would never work without feature-bit tweaking on the command line?
>
> It would work with TCG. Changing TCG to KVM should not change hardware
> if you use "-cpu ...,enforce", so it is right that it fails when
> starting with KVM.
>

Changing between KVM and TCG _does_ change hardware today (with or
without check/enforce). All CPU models on TCG have features not
supported by TCG automatically removed. See the "if (!kvm_enabled())"
block at x86_cpu_realizefn().

(That's why I argue that we need separate classes/names for TCG and KVM
modes. Otherwise our predefined models get less useful, as they will
require low-level feature-bit fiddling on the libvirt side to make them
work as expected.)
On 27/05/2013 15:07, Eduardo Habkost wrote:
>> Changing TCG to KVM should not change hardware
>> if you use "-cpu ...,enforce", so it is right that it fails when
>> starting with KVM.
>
> Changing between KVM and TCG _does_ change hardware today (with or
> without check/enforce). All CPU models on TCG have features not
> supported by TCG automatically removed. See the "if (!kvm_enabled())"
> block at x86_cpu_realizefn().

Perhaps (for "-cpu ...,enforce" or check) that's the real bug we have
to fix?

Paolo

> (That's why I argue that we need separate classes/names for TCG and KVM
> modes. Otherwise our predefined models get less useful, as they will
> require low-level feature-bit fiddling on the libvirt side to make them
> work as expected.)
On Mon, May 27, 2013 at 03:14:25PM +0200, Paolo Bonzini wrote:
> On 27/05/2013 15:07, Eduardo Habkost wrote:
> >> Changing TCG to KVM should not change hardware
> >> if you use "-cpu ...,enforce", so it is right that it fails when
> >> starting with KVM.
> >
> > Changing between KVM and TCG _does_ change hardware today (with or
> > without check/enforce). All CPU models on TCG have features not
> > supported by TCG automatically removed. See the "if (!kvm_enabled())"
> > block at x86_cpu_realizefn().
>
> Perhaps (for "-cpu ...,enforce" or check) that's the real bug we have
> to fix?

It would be 100% accurate, but would it be useful? We would then have an
Opteron_G3 CPU model that can't be used reliably without low-level bit
fiddling, in either TCG or KVM mode.

Note that I am not completely against it. I agree that having a single
CPU model namespace with 100%-equivalent definitions for both TCG and
KVM modes would simplify the logic a lot. I am just not sure it is
worth it.

(To be honest, I am more worried about the amount of time I will waste
trying to change the behavior of the TCG code. I am pretty sure I will
hear something like "why are you wasting your time trying to make
check/enforce work for TCG? Please leave TCG alone!".)
Eduardo Habkost <ehabkost@redhat.com> writes:

> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
>> On 27/05/2013 14:09, Eduardo Habkost wrote:
>>> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>>>> On 25/05/2013 03:21, Bandan Das wrote:
>>>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>>>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>>>>> the point: there's no point in having CPU model definitions that would
>>>>> never work as-is with either TCG or KVM. This patch is changing the
>>>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>>>>> was already happening in practice.
>>>>
>>>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
>>>> worth it?
>>>
>>> No models match a "real" CPU this way, because neither TCG nor KVM
>>> supports all features of a real CPU. I ask the opposite
>>> question: is it worth maintaining an "accurate" CPU model definition
>>> that would never work without feature-bit tweaking on the command line?
>>
>> It would work with TCG. Changing TCG to KVM should not change hardware
>> if you use "-cpu ...,enforce", so it is right that it fails when
>> starting with KVM.
>>
>
> Changing between KVM and TCG _does_ change hardware today (with or
> without check/enforce). All CPU models on TCG have features not
> supported by TCG automatically removed. See the "if (!kvm_enabled())"
> block at x86_cpu_realizefn().

Yes, this is exactly why I was inclined to remove the monitor flag.
We already have uses of kvm_enabled() to set (or remove) KVM-specific
stuff, and this change is no different. I can see Paolo's point though;
having a common definition probably makes sense too.

> (That's why I argue that we need separate classes/names for TCG and KVM
> modes. Otherwise our predefined models get less useful, as they will
> require low-level feature-bit fiddling on the libvirt side to make them
> work as expected.)

Agreed. From a user's perspective, the more a CPU model "just works",
whether it's KVM or TCG, the better.

Bandan
On 28/05/2013 18:34, Bandan Das wrote:
> Eduardo Habkost <ehabkost@redhat.com> writes:
>
>> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
>>> On 27/05/2013 14:09, Eduardo Habkost wrote:
>>>> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>>>>> On 25/05/2013 03:21, Bandan Das wrote:
>>>>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>>>>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>>>>>> the point: there's no point in having CPU model definitions that would
>>>>>> never work as-is with either TCG or KVM. This patch is changing the
>>>>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>>>>>> was already happening in practice.
>>>>>
>>>>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
>>>>> worth it?
>>>>
>>>> No models match a "real" CPU this way, because neither TCG nor KVM
>>>> supports all features of a real CPU. I ask the opposite
>>>> question: is it worth maintaining an "accurate" CPU model definition
>>>> that would never work without feature-bit tweaking on the command line?
>>>
>>> It would work with TCG. Changing TCG to KVM should not change hardware
>>> if you use "-cpu ...,enforce", so it is right that it fails when
>>> starting with KVM.
>>>
>>
>> Changing between KVM and TCG _does_ change hardware today (with or
>> without check/enforce). All CPU models on TCG have features not
>> supported by TCG automatically removed. See the "if (!kvm_enabled())"
>> block at x86_cpu_realizefn().
>
> Yes, this is exactly why I was inclined to remove the monitor flag.
> We already have uses of kvm_enabled() to set (or remove) KVM-specific
> stuff, and this change is no different.

Do any of these affect something that is part of x86_def_t?

> I can see Paolo's point though; having
> a common definition probably makes sense too.

>> (That's why I argue that we need separate classes/names for TCG and KVM
>> modes. Otherwise our predefined models get less useful, as they will
>> require low-level feature-bit fiddling on the libvirt side to make them
>> work as expected.)
>
> Agreed. From a user's perspective, the more a CPU model "just works",
> whether it's KVM or TCG, the better.

Yes, that's right. But I think extending the same expectation to "-cpu
...,enforce" is not necessary, and perhaps even wrong for "-cpu
...,check", since it is only a warning rather than a fatal error.

Paolo
On 28.05.2013 18:46, Paolo Bonzini wrote:
> On 28/05/2013 18:34, Bandan Das wrote:
>> Eduardo Habkost <ehabkost@redhat.com> writes:
>>
>>> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
>>>> On 27/05/2013 14:09, Eduardo Habkost wrote:
>>>>> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>>>>>> On 25/05/2013 03:21, Bandan Das wrote:
>>>>>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>>>>>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>>>>>>> the point: there's no point in having CPU model definitions that would
>>>>>>> never work as-is with either TCG or KVM. This patch is changing the
>>>>>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>>>>>>> was already happening in practice.
>>>>>>
>>>>>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
>>>>>> worth it?
>>>>>
>>>>> No models match a "real" CPU this way, because neither TCG nor KVM
>>>>> supports all features of a real CPU. I ask the opposite
>>>>> question: is it worth maintaining an "accurate" CPU model definition
>>>>> that would never work without feature-bit tweaking on the command line?
>>>>
>>>> It would work with TCG. Changing TCG to KVM should not change hardware
>>>> if you use "-cpu ...,enforce", so it is right that it fails when
>>>> starting with KVM.
>>>>
>>>
>>> Changing between KVM and TCG _does_ change hardware today (with or
>>> without check/enforce). All CPU models on TCG have features not
>>> supported by TCG automatically removed. See the "if (!kvm_enabled())"
>>> block at x86_cpu_realizefn().
>>
>> Yes, this is exactly why I was inclined to remove the monitor flag.
>> We already have uses of kvm_enabled() to set (or remove) KVM-specific
>> stuff, and this change is no different.
>
> Do any of these affect something that is part of x86_def_t?

The vendor comes to mind.

Andreas

>> I can see Paolo's point though; having
>> a common definition probably makes sense too.
>
>>> (That's why I argue that we need separate classes/names for TCG and KVM
>>> modes. Otherwise our predefined models get less useful, as they will
>>> require low-level feature-bit fiddling on the libvirt side to make them
>>> work as expected.)
>>
>> Agreed. From a user's perspective, the more a CPU model "just works",
>> whether it's KVM or TCG, the better.
>
> Yes, that's right. But I think extending the same expectation to "-cpu
> ...,enforce" is not necessary, and perhaps even wrong for "-cpu
> ...,check", since it is only a warning rather than a fatal error.
>
> Paolo
(CCing libvirt people)

On Tue, May 28, 2013 at 06:48:52PM +0200, Andreas Färber wrote:
> On 28.05.2013 18:46, Paolo Bonzini wrote:
> > On 28/05/2013 18:34, Bandan Das wrote:
> >> Eduardo Habkost <ehabkost@redhat.com> writes:
> >>
> >>> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
> >>>> On 27/05/2013 14:09, Eduardo Habkost wrote:
> >>>>> On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
> >>>>>> On 25/05/2013 03:21, Bandan Das wrote:
> >>>>>>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
> >>>>>>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
> >>>>>>> the point: there's no point in having CPU model definitions that would
> >>>>>>> never work as-is with either TCG or KVM. This patch is changing the
> >>>>>>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
> >>>>>>> was already happening in practice.
> >>>>>>
> >>>>>> But then -cpu Opteron_G3 does not match a "real" Opteron G3. Is it
> >>>>>> worth it?
> >>>>>
> >>>>> No models match a "real" CPU this way, because neither TCG nor KVM
> >>>>> supports all features of a real CPU. I ask the opposite
> >>>>> question: is it worth maintaining an "accurate" CPU model definition
> >>>>> that would never work without feature-bit tweaking on the command line?
> >>>>
> >>>> It would work with TCG. Changing TCG to KVM should not change hardware
> >>>> if you use "-cpu ...,enforce", so it is right that it fails when
> >>>> starting with KVM.
> >>>>
> >>>
> >>> Changing between KVM and TCG _does_ change hardware today (with or
> >>> without check/enforce). All CPU models on TCG have features not
> >>> supported by TCG automatically removed. See the "if (!kvm_enabled())"
> >>> block at x86_cpu_realizefn().
> >>
> >> Yes, this is exactly why I was inclined to remove the monitor flag.
> >> We already have uses of kvm_enabled() to set (or remove) KVM-specific
> >> stuff, and this change is no different.
> >
> > Do any of these affect something that is part of x86_def_t?
>
> The vendor comes to mind.

I believe we can still consider the "vendor" field a special one: if
other components care about the TCG/KVM difference regarding the
"vendor" field, they can simply set "vendor" explicitly on the command
line.

> >> I can see Paolo's point though; having
> >> a common definition probably makes sense too.
> >

Paolo is convincing me that keeping the rest of the features exactly
the same in TCG and KVM modes (and making check/enforce work for TCG as
well) would simplify the logic a lot. This will add a little extra work
for libvirt, which will probably need to use "-cpu Opteron_G3,-monitor"
once it implements enforce mode (to make sure the results really match
existing libvirt assumptions about the Opteron_G* models), but it is
probably worth it. I will give it a try and send a proposal soon.

> >>> (That's why I argue that we need separate classes/names for TCG and KVM
> >>> modes. Otherwise our predefined models get less useful, as they will
> >>> require low-level feature-bit fiddling on the libvirt side to make them
> >>> work as expected.)
> >>
> >> Agreed. From a user's perspective, the more a CPU model "just works",
> >> whether it's KVM or TCG, the better.
> >
> > Yes, that's right. But I think extending the same expectation to "-cpu
> > ...,enforce" is not necessary, and perhaps even wrong for "-cpu
> > ...,check", since it is only a warning rather than a fatal error.
> >
> > Paolo
>
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 1a501d9..c83ba1c 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1749,6 +1749,7 @@ static void cpu_x86_register(X86CPU *cpu, const char *name, Error **errp)

     if (kvm_enabled()) {
         def->features[FEAT_KVM] |= kvm_default_features;
+        def->features[FEAT_1_ECX] &= ~CPUID_EXT_MONITOR;
     }
     def->features[FEAT_1_ECX] |= CPUID_EXT_HYPERVISOR;
By default, CPUID_EXT_MONITOR is enabled for some CPU models
such as Opteron_G3. Disable it if kvm_enabled() is true, since
monitor/mwait aren't supported by KVM yet.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
There is no user-visible side effect to this behavior; the aim
is to clean up the default flags that are not supported (yet).

 target-i386/cpu.c | 1 +
 1 file changed, 1 insertion(+)