
Add definitions for current cpu models.

Message ID 4B549016.6090501@redhat.com
State New

Commit Message

john cooper Jan. 18, 2010, 4:45 p.m. UTC
This is a rework of the prior version which adds definitions
for contemporary processors selected via -cpu <model>, as an
alternative to the existing use of "-cpu qemu64" augmented
with a series of feature flags.

The primary motivation was determination of a least common
denominator within a given processor class to simplify guest
migration.  It is still possible to modify an arbitrary model
via additional feature flags; however, the goal here was to
make doing so unnecessary in typical usage.  The other
consideration was providing model names reflective of
current processors.  Both AMD and Intel have reviewed the
models in terms of balancing generality of migration vs.
excessive feature downgrade relative to released silicon. 

Concerning the prior version of the patch, the proposed name
used for a given model drew a fair amount of debate, the
main concern being the use of names as mnemonic as possible to
the widest group of users.  Another suggestion was to use
the vendor name of released silicon corresponding to a least
common denominator CPU within the class, the rationale being
that doing so is more definitive of the intended functionality.
However something like:

     -cpu "Intel Core 2 Duo P9xxx"

probably isn't all that easy to remember or type when
selecting a Penryn-class cpu.  So I struck what I believe to
be a reasonable compromise where the original x86_def_t.name
was for the most part retained with the x86_def_t.model_id
capturing the marketing name of the cpu being used as the
least common denominator for the class.  To make it easier for
a user to associate a *.name with a *.model_id, "-cpu ?", when
invoked instead as "-cpu ??", will append *.model_id to the
generated table:

        :
    x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)   
    x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)    
    x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)       
    x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)           
    x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)          
    x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)
        :         

As before a cpu feature 'check' option is added which warns when
feature flags (either implicit in a cpu model or explicit on the
command line) would have otherwise been quietly unavailable to a
guest:

    # qemu-system-x86_64 ... -cpu Nehalem,check
    warning: host cpuid 0000_0001 lacks requested flag 'sse4.2' [0x00100000]
    warning: host cpuid 0000_0001 lacks requested flag 'popcnt' [0x00800000]

This patch was tested relative to qemu.git.

Signed-off-by: john cooper <john.cooper@redhat.com>
---

Comments

Anthony Liguori Jan. 19, 2010, 7:39 p.m. UTC | #1
On 01/18/2010 10:45 AM, john cooper wrote:
> This is a rework of the prior version which adds definitions
> for contemporary processors selected via -cpu<model>, as an
> alternative to the existing use of "-cpu qemu64" augmented
> with a series of feature flags.
>
> The primary motivation was determination of a least common
> denominator within a given processor class to simplify guest
> migration.  It is still possible to modify an arbitrary model
> via additional feature flags however the goal here was to
> make doing so unnecessary in typical usage.  The other
> consideration was providing models names reflective of
> current processors.  Both AMD and Intel have reviewed the
> models in terms of balancing generality of migration vs.
> excessive feature downgrade relative to released silicon.
>
> Concerning the prior version of the patch, the proposed name
> used for a given model drew a fair amount of debate, the
> main concern being use of names as mnemonic as possible to
> the wisest group of users.  Another suggestion was to use
> the vendor name of released silicon corresponding to a least
> common denominator CPU within the class, rational being doing
> so is more definitive of the intended functionality.  However
> something like:
>
>       -cpu "Intel Core 2 Duo P9xxx"
>    

Stick with Xeon naming, it's far less annoying.

> probably isn't all that easy to remember nor type when
> selecting a Penryn class cpu.  So I struck what I believe to
> be a reasonable compromise where the original x86_def_t.name
> was for the most part retained with the x86_def_t.model_id
> capturing the marketing name of the cpu being used as the
> least common denominator for the class.  To make it easier for
> a user to associate a *.name with *.model_id, "-cpu ?" invoked
> rather as "-cpu ??" will append *.model_id to the generated
> table:
>
>          :
>      x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)
>      x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)
>      x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)
>      x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)
>      x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)
>      x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)
>          :
>    

I'm very much against having -cpu Nehalem.  The whole point of this is 
to make things easier for a user and for most of the users I've 
encountered, -cpu Nehalem is just as obscure as -cpu qemu64,-sse3,+vmx,...

Regards,

Anthony Liguori
Chris Wright Jan. 19, 2010, 8:03 p.m. UTC | #2
* Anthony Liguori (anthony@codemonkey.ws) wrote:
> I'm very much against having -cpu Nehalem.  The whole point of this is  
> to make things easier for a user and for most of the users I've  
> encountered, -cpu Nehalem is just as obscure as -cpu 
> qemu64,-sse3,+vmx,...

What name will these users know?  FWIW, it makes sense to me as it is.

thanks,
-chris
Jamie Lokier Jan. 19, 2010, 10:11 p.m. UTC | #3
Anthony Liguori wrote:
> On 01/18/2010 10:45 AM, john cooper wrote:
> >     x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)
> >     x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)
> >     x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)
> >     x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)
> >     x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)
> >     x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)
> 
> I'm very much against having -cpu Nehalem.  The whole point of this is 
> to make things easier for a user and for most of the users I've 
> encountered, -cpu Nehalem is just as obscure as -cpu qemu64,-sse3,+vmx,...

When I saw that table just now, I had no idea whether Nehalem is newer
and more advanced than Penryn, or the other way around.  I also have
no idea if "Core i7" is newer than "Core 2 Duo" or not.

I'm not a typical user: I know quite a lot about x86 architecture;
I just haven't kept up to date enough to know the code/model names.
Typical users will know less about them.

It's only from seeing the G1/G2/G3 order that I guess they are listed
in ascending order of functionality.

Naturally, if I were choosing one, I'd want to choose the one with the
most capabilities that works on whatever my host hardware provides.

-- Jamie
Jamie Lokier Jan. 19, 2010, 10:12 p.m. UTC | #4
Chris Wright wrote:
> * Anthony Liguori (anthony@codemonkey.ws) wrote:
> > I'm very much against having -cpu Nehalem.  The whole point of this is  
> > to make things easier for a user and for most of the users I've  
> > encountered, -cpu Nehalem is just as obscure as -cpu 
> > qemu64,-sse3,+vmx,...
> 
> What name will these users know?  FWIW, it makes sense to me as it is.

2001, 2005, 2008, 2010 :-)

-- Jamie
Jamie Lokier Jan. 19, 2010, 10:15 p.m. UTC | #5
john cooper wrote:
> As before a cpu feature 'check' option is added which warns when
> feature flags (either implicit in a cpu model or explicit on the
> command line) would have otherwise been quietly unavailable to a
> guest:
> 
>     # qemu-system-x86_64 ... -cpu Nehalem,check
>     warning: host cpuid 0000_0001 lacks requested flag 'sse4.2' [0x00100000]
>     warning: host cpuid 0000_0001 lacks requested flag 'popcnt' [0x00800000]

That's a nice feature.  Can we have a 'checkfail' option which refuses
to run if a requested capability isn't available?  Thanks.

I foresee wanting to iterate over the models and pick the latest one
which a host supports - on the grounds that you have done the hard
work of ensuring it is a reasonably good performer, while "probably"
working on another host of similar capability when a new host is made
available.

-- Jamie
Chris Wright Jan. 19, 2010, 10:20 p.m. UTC | #6
* Jamie Lokier (jamie@shareable.org) wrote:
> Chris Wright wrote:
> > * Anthony Liguori (anthony@codemonkey.ws) wrote:
> > > I'm very much against having -cpu Nehalem.  The whole point of this is  
> > > to make things easier for a user and for most of the users I've  
> > > encountered, -cpu Nehalem is just as obscure as -cpu 
> > > qemu64,-sse3,+vmx,...
> > 
> > What name will these users know?  FWIW, it makes sense to me as it is.
> 
> 2001, 2005, 2008, 2010 :-)

Heh, sadly not far from the truth I bet ;-)  Flip side, if you deploy
the sekrit decoder ring at ark.intel.com, the Xeon® + number seems
equally obscure.  Seems we'll never make 'em all happy.

thanks,
-chris
Anthony Liguori Jan. 19, 2010, 10:25 p.m. UTC | #7
On 01/19/2010 02:03 PM, Chris Wright wrote:
> * Anthony Liguori (anthony@codemonkey.ws) wrote:
>    
>> I'm very much against having -cpu Nehalem.  The whole point of this is
>> to make things easier for a user and for most of the users I've
>> encountered, -cpu Nehalem is just as obscure as -cpu
>> qemu64,-sse3,+vmx,...
>>      
> What name will these users know?  FWIW, it makes sense to me as it is.
>    

Whatever is in /proc/cpuinfo.

There is no mention of "Nehalem" in /proc/cpuinfo.

Regards,

Anthony Liguori
> thanks,
> -chris
>
Chris Wright Jan. 20, 2010, 12:15 a.m. UTC | #8
* Anthony Liguori (anthony@codemonkey.ws) wrote:
> On 01/19/2010 02:03 PM, Chris Wright wrote:
>> * Anthony Liguori (anthony@codemonkey.ws) wrote:
>>    
>>> I'm very much against having -cpu Nehalem.  The whole point of this is
>>> to make things easier for a user and for most of the users I've
>>> encountered, -cpu Nehalem is just as obscure as -cpu
>>> qemu64,-sse3,+vmx,...
>>>      
>> What name will these users know?  FWIW, it makes sense to me as it is.
>
> Whatever is in /proc/cpuinfo.

That doesn't exactly generalize to families w/ similar cpuid features.

Intel(R) Xeon(R) {E,L,X}{74,55}**
Intel(R) Core(TM)2 {Duo,Quad,Extreme} ...

thanks,
-chris
Jamie Lokier Jan. 20, 2010, 1:38 a.m. UTC | #9
Anthony Liguori wrote:
> On 01/19/2010 02:03 PM, Chris Wright wrote:
> >* Anthony Liguori (anthony@codemonkey.ws) wrote:
> >   
> >>I'm very much against having -cpu Nehalem.  The whole point of this is
> >>to make things easier for a user and for most of the users I've
> >>encountered, -cpu Nehalem is just as obscure as -cpu
> >>qemu64,-sse3,+vmx,...
> >>     
> >What name will these users know?  FWIW, it makes sense to me as it is.
> >   
> 
> Whatever is in /proc/cpuinfo.
> 
> There is no mention of "Nehalem" in /proc/cpuinfo.

My 5 /proc/cpuinfos say:

    Genuine Intel(R) CPU T2500  @ 2.00GHz
    Intel(R) Xeon(TM) CPU 3.00GHz
    Intel(R) Xeon(R) CPU E5335  @ 2.00GHz
    Intel(R) Xeon(TM) CPU 2.80GHz
    Intel(R) Xeon(R) CPU X5482  @ 3.20GHz

I'm not sure if that's any more helpful :-)

Especially the first one.  I don't think of my laptop as having a
T2500.  I think of it as having a 32-bit Core Duo.  And I have no idea
what the different types of Xeon are.  But then, I couldn't tell you
whether they are Nehalems or Penryns either, and I'm quite sure the
owners couldn't either.

    $ grep name /proc/cpuinfo
    model name : QEMU Virtual CPU version 0.9.1

If only they were all so clear :-)

-- Jamie
Anthony Liguori Jan. 20, 2010, 2:21 p.m. UTC | #10
On 01/19/2010 06:15 PM, Chris Wright wrote:
> * Anthony Liguori (anthony@codemonkey.ws) wrote:
>    
>> On 01/19/2010 02:03 PM, Chris Wright wrote:
>>      
>>> * Anthony Liguori (anthony@codemonkey.ws) wrote:
>>>
>>>        
>>>> I'm very much against having -cpu Nehalem.  The whole point of this is
>>>> to make things easier for a user and for most of the users I've
>>>> encountered, -cpu Nehalem is just as obscure as -cpu
>>>> qemu64,-sse3,+vmx,...
>>>>
>>>>          
>>> What name will these users know?  FWIW, it makes sense to me as it is.
>>>        
>> Whatever is in /proc/cpuinfo.
>>      
> That doesn't exactly generalize to families w/ similar cpuid features.
>
> Intel(R) Xeon(R) {E,L,X}{74,55}**
> Intel(R) Core(TM)2 {Duo,Quad,Extreme} ...
>    

Then we should key off of family and model.

So -cpu AMD_Family_10h

or something like that.  At least that is discoverable by a user.

Regards,

Anthony Liguori
Gleb Natapov Jan. 20, 2010, 2:27 p.m. UTC | #11
On Wed, Jan 20, 2010 at 08:21:44AM -0600, Anthony Liguori wrote:
> On 01/19/2010 06:15 PM, Chris Wright wrote:
> >* Anthony Liguori (anthony@codemonkey.ws) wrote:
> >>On 01/19/2010 02:03 PM, Chris Wright wrote:
> >>>* Anthony Liguori (anthony@codemonkey.ws) wrote:
> >>>
> >>>>I'm very much against having -cpu Nehalem.  The whole point of this is
> >>>>to make things easier for a user and for most of the users I've
> >>>>encountered, -cpu Nehalem is just as obscure as -cpu
> >>>>qemu64,-sse3,+vmx,...
> >>>>
> >>>What name will these users know?  FWIW, it makes sense to me as it is.
> >>Whatever is in /proc/cpuinfo.
> >That doesn't exactly generalize to families w/ similar cpuid features.
> >
> >Intel(R) Xeon(R) {E,L,X}{74,55}**
> >Intel(R) Core(TM)2 {Duo,Quad,Extreme} ...
> 
> Then we should key off of family and model.
> 
> So -cpu AMD_Family_10h
> 
> or something like that.  At least that is discoverable by a user.
> 

Or use CPU price/year as a distinguisher. -cpu Intel_300$_2005 will give
you Intel cpus that cost 300$ in 2005.

--
			Gleb.
john cooper Jan. 20, 2010, 8:09 p.m. UTC | #12
Jamie Lokier wrote:
> Anthony Liguori wrote:
>> On 01/18/2010 10:45 AM, john cooper wrote:
>>>     x86           Conroe  Intel Celeron_4x0 (Conroe/Merom Class Core 2)
>>>     x86           Penryn  Intel Core 2 Duo P9xxx (Penryn Class Core 2)
>>>     x86          Nehalem  Intel Core i7 9xx (Nehalem Class Core i7)
>>>     x86       Opteron_G1  AMD Opteron 240 (Gen 1 Class Opteron)
>>>     x86       Opteron_G2  AMD Opteron 22xx (Gen 2 Class Opteron)
>>>     x86       Opteron_G3  AMD Opteron 23xx (Gen 3 Class Opteron)
>> I'm very much against having -cpu Nehalem.  The whole point of this is 
>> to make things easier for a user and for most of the users I've 
>> encountered, -cpu Nehalem is just as obscure as -cpu qemu64,-sse3,+vmx,...
> 
> When I saw that table just now, I had no idea whether Nehalem is newer
> and more advanced than Penryn, or the other way around.  I also have
> no idea if "Core i7" is newer than "Core 2 Duo" or not.

I can appreciate the argument above, however the goal was
choosing names with some basis in reality.  These were
recommended by our contacts within Intel, are used by VMware
to describe their similar cpu models, and arguably have fallen
into de facto usage as evidenced by such sources as:

    http://en.wikipedia.org/wiki/Conroe_(microprocessor)
    http://en.wikipedia.org/wiki/Penryn_(microprocessor)
    http://en.wikipedia.org/wiki/Nehalem_(microarchitecture)

I suspect that whatever we choose as a model tag of reasonable
length for "-cpu", some further detail is going to be required.
That was the motivation to augment the table above with an
instance of an LCD for the associated class.
 
> I'm not a typical user: I know quite a lot about x86 architecture;
> I just haven't kept up to date enough to know the code/model names.
> Typical users will know less about them.

Understood.  One thought I had to further clarify what is going
on under the hood was to dump the cpuid flags for each model as
part of (or in addition to) the above table.  But this seems a
bit extreme and kvm itself can modify flags exported from qemu
to a guest.

-john
john cooper Jan. 20, 2010, 8:09 p.m. UTC | #13
Anthony Liguori wrote:
> On 01/19/2010 02:03 PM, Chris Wright wrote:
>> * Anthony Liguori (anthony@codemonkey.ws) wrote:
>>   
>>> I'm very much against having -cpu Nehalem.  The whole point of this is
>>> to make things easier for a user and for most of the users I've
>>> encountered, -cpu Nehalem is just as obscure as -cpu
>>> qemu64,-sse3,+vmx,...
>>>      
>> What name will these users know?  FWIW, it makes sense to me as it is.
>>    
> 
> Whatever is in /proc/cpuinfo.

    $ grep name /proc/cpuinfo
    model name  : Intel(R) Core(TM)2 Duo CPU     E8400  @ 3.00GHz

Which is detailing that exact cpu vs. the class
of which it is a member.  So are you suggesting
to map all instances of processors called out
in /proc/cpuinfo into one of the three defined
models?  We can certainly do that; however, I was
looking for a more terse and simplified solution
at this level while deferring more ornate mapping
schemes to management tools.

Still, at the user-facing CLI this doesn't strike
me as the most friendly encoding of a -cpu <name>.

-john
john cooper Jan. 20, 2010, 8:11 p.m. UTC | #14
Jamie Lokier wrote:
> john cooper wrote:
>> As before a cpu feature 'check' option is added which warns when
>> feature flags (either implicit in a cpu model or explicit on the
>> command line) would have otherwise been quietly unavailable to a
>> guest:
>>
>>     # qemu-system-x86_64 ... -cpu Nehalem,check
>>     warning: host cpuid 0000_0001 lacks requested flag 'sse4.2' [0x00100000]
>>     warning: host cpuid 0000_0001 lacks requested flag 'popcnt' [0x00800000]
> 
> That's a nice feature.  Can we have a 'checkfail' option which refuses
> to run if a requested capability isn't available?  Thanks.

Certainly, others have requested the same.  Let's resolve
the issue at hand first.

> I foresee wanting to iterate over the models and pick the latest one
> which a host supports - on the grounds that you have done the hard
> work of ensuring it is a reasonably good performer, while "probably"
> working on another host of similar capability when a new host is made
> available.

That's a fairly close use case to that of safe migration,
which was one of the primary motivations to identify
the models being discussed, although presentation and
administration of such was considered the domain of
management tools.

-john
Daniel P. Berrangé Jan. 20, 2010, 8:26 p.m. UTC | #15
On Wed, Jan 20, 2010 at 03:09:53PM -0500, john cooper wrote:
> Anthony Liguori wrote:
> > On 01/19/2010 02:03 PM, Chris Wright wrote:
> >> * Anthony Liguori (anthony@codemonkey.ws) wrote:
> >>   
> >>> I'm very much against having -cpu Nehalem.  The whole point of this is
> >>> to make things easier for a user and for most of the users I've
> >>> encountered, -cpu Nehalem is just as obscure as -cpu
> >>> qemu64,-sse3,+vmx,...
> >>>      
> >> What name will these users know?  FWIW, it makes sense to me as it is.
> >>    
> > 
> > Whatever is in /proc/cpuinfo.
> 
>     $ grep name /proc/cpuinfo
>     model name  : Intel(R) Core(TM)2 Duo CPU     E8400  @ 3.00GHz
> 
> Which is detailing that exact cpu vs. the class
> of which it is a member.  So are you suggesting
> to map all instances of processors called out
> in /proc/cpuinfo into one of the three defined
> models?  We can certainly do that however I was
> looking for a more terse and simplified solution
> at this level while deferring more ornate mapping
> schemes to management tools.
> 
> Still at the user facing CLI this doesn't strike
> me as the most friendly encoding of a -cpu <name>.  

To be honest all possible naming schemes for '-cpu <name>' are just as
unfriendly as each other. The only user friendly option is '-cpu host'. 

IMHO, we should just pick a concise naming scheme & document it. Given
they are all equally unfriendly, the one that has consistency with vmware
naming seems like a mild winner.

Daniel
Anthony Liguori Jan. 20, 2010, 8:53 p.m. UTC | #16
On 01/20/2010 02:26 PM, Daniel P. Berrange wrote:
> To be honest all possible naming schemes for '-cpu<name>' are just as
> unfriendly as each other. The only user friendly option is '-cpu host'.
>
> IMHO, we should just pick a concise naming scheme&  document it. Given
> they are all equally unfriendly, the one that has consistency with vmware
> naming seems like a mild winner.
>    

IIUC, VMware uses "Group A, Group B", etc. which is pretty close to 
saying Family 10h IMHO.

Regards,

Anthony Liguori

> Daniel
>
Arnd Bergmann Jan. 20, 2010, 11:20 p.m. UTC | #17
On Monday 18 January 2010, john cooper wrote:
> +        .name = "Conroe",
> +        .level = 2,
> +        .vendor1 = CPUID_VENDOR_INTEL_1,
> +        .vendor2 = CPUID_VENDOR_INTEL_2,
> +        .vendor3 = CPUID_VENDOR_INTEL_3,
> +        .family = 6,   /* P6 */
> +        .model = 2,

                ^^^^^^^^ that looks wrong -- what is model 2 actually?

> +        .stepping = 3,
> +        .features = PPRO_FEATURES | 
> +            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
> +            CPUID_PSE36,                                /* note 2 */
> +        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_SSSE3,
> +        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
> +            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
> +        .ext3_features = CPUID_EXT3_LAHF_LM,
> +        .xlevel = 0x8000000A,
> +        .model_id = "Intel Celeron_4x0 (Conroe/Merom Class Core 2)",
> +    },

Celeron_4x0 is a rather bad example, because it is based on the 
single-core Conroe-L, which is family 6 / model 22 unlike all the dual-
and quad-core Merom/Conroe that are model 15.

> +    {
> +        .name = "Penryn",
> +        .level = 2,
> +        .vendor1 = CPUID_VENDOR_INTEL_1,
> +        .vendor2 = CPUID_VENDOR_INTEL_2,
> +        .vendor3 = CPUID_VENDOR_INTEL_3,
> +        .family = 6,   /* P6 */
> +        .model = 2,
> +        .stepping = 3,
> +        .features = PPRO_FEATURES | 
> +            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
> +            CPUID_PSE36,                                /* note 2 */
> +        .ext_features = CPUID_EXT_SSE3 |
> +            CPUID_EXT_CX16 | CPUID_EXT_SSSE3 | CPUID_EXT_SSE41,
> +        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
> +            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
> +        .ext3_features = CPUID_EXT3_LAHF_LM,
> +        .xlevel = 0x8000000A,
> +        .model_id = "Intel Core 2 Duo P9xxx (Penryn Class Core 2)",
> +    },

This would be model 23 for Penryn-class Xeon/Core/Pentium/Celeron processors
without L3 cache.

> +    {
> +        .name = "Nehalem",
> +        .level = 2,
> +        .vendor1 = CPUID_VENDOR_INTEL_1,
> +        .vendor2 = CPUID_VENDOR_INTEL_2,
> +        .vendor3 = CPUID_VENDOR_INTEL_3,
> +        .family = 6,   /* P6 */
> +        .model = 2,
> +        .stepping = 3,
> +        .features = PPRO_FEATURES | 
> +            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
> +            CPUID_PSE36,                                /* note 2 */
> +        .ext_features = CPUID_EXT_SSE3 |
> +            CPUID_EXT_CX16 | CPUID_EXT_SSSE3 | CPUID_EXT_SSE41 |
> +            CPUID_EXT_SSE42 | CPUID_EXT_POPCNT,
> +        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
> +            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
> +        .ext3_features = CPUID_EXT3_LAHF_LM,
> +        .xlevel = 0x8000000A,
> +        .model_id = "Intel Core i7 9xx (Nehalem Class Core i7)",
> +    },

Apparently, not all the i7-9xx CPUs are Nehalem, the i7-980X is supposed
to be Westmere, which has more features.

Because of the complexity, I'd recommend passing down the *model* number
of the emulated CPU, the interesting Intel ones (those supported by KVM) being:

15-6: CedarMill/Presler/Dempsey/Tulsa (Pentium 4/Pentium D/Xeon 50xx/Xeon 71xx)
6-14: Yonah/Sossaman (Celeron M4xx, Core Solo/Duo, Pentium Dual-Core T1000, Xeon ULV)
6-15: Merom/Conroe/Kentsfield/Woodcrest/Clovertown/Tigerton
      (Celeron M5xx/E1xxx/T1xxx, Pentium T2xxx/T3xxx/E2xxx,Core 2 Solo U2xxx,
       Core 2 Duo E4xxx/E6xxx/Q6xxx/T5xxx/T7xxx/L7xxx/U7xxx/SP7xxx,
       Xeon 30xx/32xx/51xx/52xx/72xx/73xx)
6-23: Penryn/Wolfdale/Yorkfield/Harpertown (Celeron 7xx/9xx/SU2xxx/T3xxx/E3xxx,
       Pentium T4xxx/SU2xxx/SU4xxx/E5xxx/E6xxx, Core 2 Solo SU3xxx,
       Core 2 Duo Pxxxx/SUxxxx/T6xxx/x8xxx/x9xxx,
       Xeon 31xx/33xx/52xx/54xx)
6-26: Gainestown/Bloomfield (Xeon 35xx/55xx, Core i7-9xx)
6-28: Atom
6-29: Dunnington (Xeon 74xx)
6-30: Lynnfield/Clarksfield/JasperForest (Xeon 34xx, Core i7-8xx, Core i7-xxxQM,
       Core i5-7xx)
6-37: Arrandale/Clarkdale (Dual-Core Core i3/i5/i7)
6-44: Gulftown (six-core)

	Arnd
Chris Wright Jan. 21, 2010, 12:25 a.m. UTC | #18
* Daniel P. Berrange (berrange@redhat.com) wrote:
> To be honest all possible naming schemes for '-cpu <name>' are just as
> unfriendly as each other. The only user friendly option is '-cpu host'. 
> 
> IMHO, we should just pick a concise naming scheme & document it. Given
> they are all equally unfriendly, the one that has consistency with vmware
> naming seems like a mild winner.

Heh, I completely agree, and was just saying the same thing to John
earlier today.  May as well be -cpu {foo,bar,baz} since the meaning for
those command line options must be well-documented in the man page.

This is from an EVC kb article[1]:

ESX/ESXi 4.0 supports the following EVC modes:

    * AMD Opteron™ Generation 1 (Rev. E)
    * AMD Opteron™ Generation 2 (Rev. F)
    * AMD Opteron™ Generation 3 (Greyhound)
    * Intel® Xeon® Core2 (Merom)
    * Intel® Xeon® 45nm Core2 (Penryn)
    * Intel® Xeon® Core i7 (Nehalem) 

Not that different from John's proposal.

thanks,
-chris

[1] http://kb.vmware.com/kb/1005764
john cooper Jan. 21, 2010, 1:18 a.m. UTC | #19
Chris Wright wrote:
> * Daniel P. Berrange (berrange@redhat.com) wrote:
>> To be honest all possible naming schemes for '-cpu <name>' are just as
>> unfriendly as each other. The only user friendly option is '-cpu host'. 
>>
>> IMHO, we should just pick a concise naming scheme & document it. Given
>> they are all equally unfriendly, the one that has consistency with vmware
>> naming seems like a mild winner.
> 
> Heh, I completely agree, and was just saying the same thing to John
> earlier today.  May as well be -cpu {foo,bar,baz} since the meaning for
> those command line options must be well-documented in the man page.

I can appreciate the concern of wanting to get this
as "correct" as possible.  But ultimately we just
need three unique tags which ideally have some relation
to their associated architectures.  The diatribes
available from /proc/cpuinfo while generally accurate
don't really offer any more of a clue to the model
group, and in their unmodified form are rather unwieldy
as command line flags.

> This is from an EVC kb article[1]:

Here is a pointer to a more detailed version:

   http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212


We probably should also add an option to dump out the
full set of qemu-side cpuid flags for the benefit of
users and upper level tools.

-john
Andre Przywara Jan. 21, 2010, 2:39 p.m. UTC | #20
john cooper wrote:
> Chris Wright wrote:
>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>> To be honest all possible naming schemes for '-cpu <name>' are just as
>>> unfriendly as each other. The only user friendly option is '-cpu host'. 
>>>
>>> IMHO, we should just pick a concise naming scheme & document it. Given
>>> they are all equally unfriendly, the one that has consistency with vmware
>>> naming seems like a mild winner.
>> Heh, I completely agree, and was just saying the same thing to John
>> earlier today.  May as well be -cpu {foo,bar,baz} since the meaning for
>> those command line options must be well-documented in the man page.
> 
> I can appreciate the concern of wanting to get this
> as "correct" as possible.  But ultimately we just
> need three unique tags which ideally have some relation
> to their associated architectures.  The diatribes
> available from /proc/cpuinfo while generally accurate
> don't really offer any more of a clue to the model
> group, and in their unmodified form are rather unwieldy
> as command line flags.
I agree. I'd underline that this patch is for migration purposes only, 
so you don't want to specify an exact CPU, but more like a class of 
CPUs. If you look into the available CPUID features in each CPU, you 
will find that there are only a few groups, with currently three for 
each vendor being a good guess.
/proc/cpuinfo just prints out marketing names, which have only a mild 
relationship to a feature-related technical CPU model. Maybe we can use 
a generation approach like the AMD Opteron ones for Intel, too.
These G1/G2/G3 names are just arbitrary and have no roots within AMD.

I think that an exact CPU model specification is out of scope for this 
patch and maybe even for QEMU. One could create a database with CPU 
names and associated CPUID flags and provide an external tool to 
generate a QEMU command line out of this. Keeping this database 
up-to-date (especially for desktop CPU models) is a burden that the QEMU 
project does not want to bear.

> 
>> This is from an EVC kb article[1]:
> 
> Here is a pointer to a more detailed version:
> 
>    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212
> 
> 
> We probably should also add an option to dump out the
> full set of qemu-side cpuid flags for the benefit of
> users and upper level tools.
You mean like this one?
http://lists.gnu.org/archive/html/qemu-devel/2009-09/msg01228.html
Resending this patch set is on my plan for next week. What is the state 
of this patch? Will it go in soon? Then I'd rebase my patch set on top 
of it.

Regards,
Andre.
Anthony Liguori Jan. 21, 2010, 3:05 p.m. UTC | #21
On 01/20/2010 07:18 PM, john cooper wrote:
> Chris Wright wrote:
>    
>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>      
>>> To be honest all possible naming schemes for '-cpu<name>' are just as
>>> unfriendly as each other. The only user friendly option is '-cpu host'.
>>>
>>> IMHO, we should just pick a concise naming scheme&  document it. Given
>>> they are all equally unfriendly, the one that has consistency with vmware
>>> naming seems like a mild winner.
>>>        
>> Heh, I completely agree, and was just saying the same thing to John
>> earlier today.  May as well be -cpu {foo,bar,baz} since the meaning for
>> those command line options must be well-documented in the man page.
>>      
> I can appreciate the concern of wanting to get this
> as "correct" as possible.
>    

This is the root of the trouble.  At the qemu layer, we try to focus on 
being correct.

Management tools are typically the layer that deals with being "correct".

A good compromise is making things user tunable which means that a 
downstream can make "correctness" decisions without forcing those 
decisions on upstream.

In this case, the idea would be to introduce a new option, say something 
like -cpu-def.  The syntax would be:

  -cpu-def name=coreduo,level=10,family=6,model=14,stepping=8,features=+vme+mtrr+clflush+mca+sse3+monitor,xlevel=0x80000008,model_id="Genuine Intel(R) CPU T2600 @ 2.16GHz"

Which is not that exciting since it just lets you do -cpu coreduo in a 
much more complex way.  However, if we take advantage of the current 
config support, you can have:

[cpu-def]
   name=coreduo
   level=10
   family=6
   model=14
   stepping=8
   features="+vme+mtrr+clflush+mca+sse3.."
   model_id="Genuine Intel..."

And that can be stored in a config file.  We should then parse 
/etc/qemu/target-<targetname>.conf by default.  We'll move the current 
x86_defs table into this config file and then downstreams/users can 
define whatever compatibility classes they want.

With this feature, I'd be inclined to take "correct" compatibility 
classes like Nehalem as part of the default qemurc that we install 
because it's easily overridden by a user.  It then becomes just a 
suggestion on our part versus a guarantee.

It should just be a matter of adding qemu_cpudefs_opts to 
qemu-config.[ch], taking a new command line that parses the argument via 
QemuOpts, then passing the parsed options to a target-specific function 
that then builds the table of supported cpus.

Regards,

Anthony Liguori
john cooper Jan. 21, 2010, 4:43 p.m. UTC | #22
Anthony Liguori wrote:
> On 01/20/2010 07:18 PM, john cooper wrote: 
>> I can appreciate the concern of wanting to get this
>> as "correct" as possible.
>>    
> 
> This is the root of the trouble.  At the qemu layer, we try to focus on
> being correct.
> 
> Management tools are typically the layer that deals with being "correct".
> 
> A good compromise is making things user tunable which means that a
> downstream can make "correctness" decisions without forcing those
> decisions on upstream.

Conceptually I agree with such a malleable approach -- actually
I prefer it.  I thought however it was too much infrastructure to
foist on the problem just to add a few more models into the mix.

The only reservation which comes to mind is that of logistics.
This may ruffle the code some and impact others such as Andre
who seem to have existing patches relative to the current structure.
Anyone have strong objections to this approach before I have a
look at an implementation?

Thanks,

-john
Blue Swirl Jan. 21, 2010, 5:06 p.m. UTC | #23
On Thu, Jan 21, 2010 at 2:39 PM, Andre Przywara <andre.przywara@amd.com> wrote:
> john cooper wrote:
>>
>> Chris Wright wrote:
>>>
>>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>>>
>>>> To be honest all possible naming schemes for '-cpu <name>' are just as
>>>> unfriendly as each other. The only user friendly option is '-cpu host'.
>>>> IMHO, we should just pick a concise naming scheme & document it. Given
>>>> they are all equally unfriendly, the one that has consistency with
>>>> vmware
>>>> naming seems like a mild winner.
>>>
>>> Heh, I completely agree, and was just saying the same thing to John
>>> earlier today.  May as well be -cpu {foo,bar,baz} since the meaning for
>>> those command line options must be well-documented in the man page.
>>
>> I can appreciate the concern of wanting to get this
>> as "correct" as possible.  But ultimately we just
>> need three unique tags which ideally have some relation
>> to their associated architectures.  The diatribes
>> available from /proc/cpuinfo while generally accurate
>> don't really offer any more of a clue to the model
>> group, and in their unmodified form are rather unwieldy
>> as command line flags.
>
> I agree. I'd underline that this patch is for migration purposes only, so
> you don't want to specify an exact CPU, but more like a class of CPUs. If
> you look into the available CPUID features in each CPU, you will find that
> there are only a few groups, with currently three for each vendor being a
> good guess.
> /proc/cpuinfo just prints out marketing names, which have only a mild
> relationship to a feature-related technical CPU model. Maybe we can use a
> generation approach like the AMD Opteron ones for Intel, too.
> These G1/G2/G3 names are just arbitrary and have no roots within AMD.
>
> I think that an exact CPU model specification is out of scope for this patch
> and maybe even for QEMU. One could create a database with CPU names and
> associated CPUID flags and provide an external tool to generate a QEMU
> command line out of this. Keeping this database up-to-date (especially for
> desktop CPU models) is a burden that the QEMU project does not want to bear.
>
>>
>>> This is from an EVC kb article[1]:
>>
>> Here is a pointer to a more detailed version:
>>
>>
>> http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212
>>
>>
>> We probably should also add an option to dump out the
>> full set of qemu-side cpuid flags for the benefit of
>> users and upper level tools.
>
> You mean like this one?
> http://lists.gnu.org/archive/html/qemu-devel/2009-09/msg01228.html
> Resending this patch set is on my plan for next week. What is the state of
> this patch? Will it go in soon? Then I'd rebase my patch set on top of it.

FYI, a similar CPU flag mechanism has been implemented for Sparc and
x86; unifying these would be cool.
Jamie Lokier Jan. 21, 2010, 5:50 p.m. UTC | #24
john cooper wrote:
> kvm itself can modify flags exported from qemu to a guest.

I would hope for an option to request that qemu doesn't run if the
guest won't get the cpuid flags requested on the command line.

-- Jamie
Jamie Lokier Jan. 21, 2010, 5:55 p.m. UTC | #25
john cooper wrote:
> > I foresee wanting to iterate over the models and pick the latest one
> > which a host supports - on the grounds that you have done the hard
> > work of ensuring it is a reasonably good performer, while "probably"
> > working on another host of similar capability when a new host is made
> > available.
> 
> That's a fairly close use case to that of safe migration
> which was one of the primary motivations to identify
> the models being discussed.  Although presentation and
> administration of such was considered the domain of management
> tools.

My hypothetical script which iterates over models in that way is a
"management tool", and would use qemu to help do its job.

Do you mean that more powerful management tools to support safe
migration will maintain _their own_ processor model tables, and
perform their calculations using their own tables instead of querying
qemu, and therefore not have any need of qemu's built in table?

If so, I favour more strongly Anthony's suggestion that the processor
model table lives in a config file (eventually), as that file could be
shared between management tools and qemu itself without duplication.

-- Jamie
Jamie Lokier Jan. 21, 2010, 6:13 p.m. UTC | #26
john cooper wrote:
> I can appreciate the argument above, however the goal was
> choosing names with some basis in reality.  These were
> recommended by our contacts within Intel, are used by VmWare
> to describe their similar cpu models, and arguably have fallen
> to defacto usage as evidenced by such sources as:
> 
>     http://en.wikipedia.org/wiki/Conroe_(microprocessor)
>     http://en.wikipedia.org/wiki/Penryn_(microprocessor)
>     http://en.wikipedia.org/wiki/Nehalem_(microarchitecture)

(Aside: I can confirm they haven't fallen into de facto usage anywhere
in my vicinity :-) I wonder if the contact within Intel are living in
a bit of a bubble where these names are more familiar than the outside
world.)

I think we can all agree that there is no point looking for a familiar
-cpu naming scheme because there aren't any familiar and meaningful names
these days.

> used by VmWare to describe their similar cpu models

If the same names are being used, I see some merit in qemu's list
matching VMware's cpu models *exactly* (in capabilities, not id
strings), to aid migration from VMware.  Is that feasible?  Do they
match already?

> I suspect whatever we choose of reasonable length as a model
> tag for "-cpu" some further detail is going to be required.
> That was the motivation to augment the table as above with
> an instance of a LCD for that associated class.
>  
> > I'm not a typical user: I know quite a lot about x86 architecture;
> > I just haven't kept up to date enough to know the code/model names.
> > Typical users will know less about them.
> 
> Understood.


> One thought I had to further clarify what is going on under the hood
> was to dump the cpuid flags for each model as part of (or in
> addition to) the above table.  But this seems a bit extreme and kvm
> itself can modify flags exported from qemu to a guest.

Here's another idea.

It would be nice if qemu could tell the user which of the built-in
-cpu choices is the most featureful subset of their own host.  With
-cpu host implemented, finding that is probably quite easy.

Users with multiple hosts will get a better feel for what the -cpu
names mean that way, probably better than any documentation would give
them, because they probably have not much idea what CPU families they
have anyway.  (cat /proc/cpuinfo doesn't clarify, as I found).

And it would give a simple, effective, quick indication of what they
must choose if they want an VM image that runs on more than one of
their hosts without a management tool.

-- Jamie
john cooper Jan. 21, 2010, 6:34 p.m. UTC | #27
Jamie Lokier wrote:

> Do you mean that more powerful management tools to support safe
> migration will maintain _their own_ processor model tables, and
> perform their calculations using their own tables instead of querying
> qemu, and therefore not have any need of qemu's built in table?

I would expect so.  IIRC that is what the libvirt folks have
in mind for example.  But we're also trying to simplify the use
case of the lonesome user at one with the qemu CLI.

-john
john cooper Jan. 21, 2010, 6:36 p.m. UTC | #28
Jamie Lokier wrote:

> I think we can all agree that there is no point looking for a familiar
> -cpu naming scheme because there aren't any familiar and meaningful names
> these days.

Even if we dismiss the Intel coined names as internal
code names, there is still VMW's use of them in this
space, which we can either align with or attempt to
displace.  All considered, I don't see any motivation
or gain in doing the latter.  Anyway it doesn't appear
likely we're going to resolve this to our collective
satisfaction with a hard-wired naming scheme.
 
> It would be nice if qemu could tell the user which of the built-in
> -cpu choices is the most featureful subset of their own host.  With
> -cpu host implemented, finding that is probably quite easy.

This should be doable although it may not be as simple
as traversing a hierarchy of features and picking one
with the most host flags present.  In any case this
should be fairly detachable from settling the immediate
issue.

-john
Anthony Liguori Jan. 21, 2010, 6:59 p.m. UTC | #29
On 01/21/2010 10:43 AM, john cooper wrote:
> Anthony Liguori wrote:
>    
>> On 01/20/2010 07:18 PM, john cooper wrote:
>>      
>>> I can appreciate the concern of wanting to get this
>>> as "correct" as possible.
>>>
>>>        
>> This is the root of the trouble.  At the qemu layer, we try to focus on
>> being correct.
>>
>> Management tools are typically the layer that deals with being "correct".
>>
>> A good compromise is making things user tunable which means that a
>> downstream can make "correctness" decisions without forcing those
>> decisions on upstream.
>>      
> Conceptually I agree with such a malleable approach -- actually
> I prefer it.  I thought however it was too much infrastructure to
> foist on the problem just to add a few more models into the mix.
>    

See list for patches.  I didn't do the cpu bits but it should be very 
obvious how to do that now.

Regards,

Anthony Liguori

> The only reservation which comes to mind is that of logistics.
> This may ruffle the code some and impact others such as Andre
> who seem to have existing patches relative to the current structure.
> Anyone have strong objections to this approach before I have a
> look at an implementation?
>
> Thanks,
>
> -john
>
>
>
Dor Laor Jan. 25, 2010, 9:08 a.m. UTC | #30
On 01/21/2010 05:05 PM, Anthony Liguori wrote:
> On 01/20/2010 07:18 PM, john cooper wrote:
>> Chris Wright wrote:
>>> * Daniel P. Berrange (berrange@redhat.com) wrote:
>>>> To be honest all possible naming schemes for '-cpu<name>' are just as
>>>> unfriendly as each other. The only user friendly option is '-cpu host'.
>>>>
>>>> IMHO, we should just pick a concise naming scheme& document it. Given
>>>> they are all equally unfriendly, the one that has consistency with
>>>> vmware
>>>> naming seems like a mild winner.
>>> Heh, I completely agree, and was just saying the same thing to John
>>> earlier today. May as well be -cpu {foo,bar,baz} since the meaning for
>>> those command line options must be well-documented in the man page.
>> I can appreciate the concern of wanting to get this
>> as "correct" as possible.
>
> This is the root of the trouble. At the qemu layer, we try to focus on
> being correct.
>
> Management tools are typically the layer that deals with being "correct".
>
> A good compromise is making things user tunable which means that a
> downstream can make "correctness" decisions without forcing those
> decisions on upstream.
>
> In this case, the idea would be to introduce a new option, say something
> like -cpu-def. The syntax would be:
>
> -cpu-def
> name=coreduo,level=10,family=6,model=14,stepping=8,features=+vme+mtrr+clflush+mca+sse3+monitor,xlevel=0x80000008,model_id="Genuine
> Intel(R) CPU T2600 @ 2.16GHz"
>
> Which is not that exciting since it just lets you do -cpu coreduo in a
> much more complex way. However, if we take advantage of the current
> config support, you can have:
>
> [cpu-def]
> name=coreduo
> level=10
> family=6
> model=14
> stepping=8
> features="+vme+mtrr+clflush+mca+sse3.."
> model_id="Genuine Intel..."
>
> And that can be stored in a config file. We should then parse
> /etc/qemu/target-<targetname>.conf by default. We'll move the current
> x86_defs table into this config file and then downstreams/users can
> define whatever compatibility classes they want.
>
> With this feature, I'd be inclined to take "correct" compatibility
> classes like Nehalem as part of the default qemurc that we install
> because it's easily overridden by a user. It then becomes just a
> suggestion on our part verses a guarantee.
>
> It should just be a matter of adding qemu_cpudefs_opts to
> qemu-config.[ch], taking a new command line that parses the argument via
> QemuOpts, then passing the parsed options to a target-specific function
> that then builds the table of supported cpus.

Won't the outcome of John's patches and these configs be exactly 
the same? Since these cpu models won't ever change, there is no reason 
not to hard-code them. Adding configs or command lines is a good 
idea, but it is friendlier to have basic support for the common cpus.
This is why qemu today offers: -cpu ?
x86           qemu64
x86           phenom
x86         core2duo
x86            kvm64
x86           qemu32
x86          coreduo
x86              486
x86          pentium
x86         pentium2
x86         pentium3
x86           athlon
x86             n270

So bottom line, my point is to have John's base + your configs. We need 
to keep also the check verb and the migration support for sending those.

btw: IMO we should deal with this complexity ourselves and save 99.9% of 
the users the need to define such models; don't ask this of a java 
programmer, he is running on a JVM :-)


>
> Regards,
>
> Anthony Liguori
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Jamie Lokier Jan. 25, 2010, 11:27 a.m. UTC | #31
Dor Laor wrote:
> x86           qemu64
> x86           phenom
> x86         core2duo
> x86            kvm64
> x86           qemu32
> x86          coreduo
> x86              486
> x86          pentium
> x86         pentium2
> x86         pentium3
> x86           athlon
> x86             n270

I wonder if kvm32 would be good, for symmetry if nothing else.

-- Jamie
Anthony Liguori Jan. 25, 2010, 2:21 p.m. UTC | #32
On 01/25/2010 03:08 AM, Dor Laor wrote:
>> It should just be a matter of adding qemu_cpudefs_opts to
>> qemu-config.[ch], taking a new command line that parses the argument via
>> QemuOpts, then passing the parsed options to a target-specific function
>> that then builds the table of supported cpus.
>
> Isn't the outcome of John's patches and these configs will be exactly 
> the same? Since these cpu models won't ever change, there is no reason 
> why not to hard code them. Adding configs or command lines is a good 
> idea but it is more friendlier to have basic support to the common cpus.
> This is why qemu today offers: -cpu ?
> x86           qemu64
> x86           phenom
> x86         core2duo
> x86            kvm64
> x86           qemu32
> x86          coreduo
> x86              486
> x86          pentium
> x86         pentium2
> x86         pentium3
> x86           athlon
> x86             n270
>
> So bottom line, my point is to have John's base + your configs. We 
> need to keep also the check verb and the migration support for sending 
> those.
>
> btw: IMO we should deal with this complexity ourselves and save 99.9% 
> of the users the need to define such models, don't ask this from a 
> java programmer, he is running on a JVM :-)

I'm suggesting John's base should be implemented as a default config 
that gets installed by default in QEMU.  The point is that a smart user 
(or a downstream) can modify this to suite their needs more appropriately.

Another way to look at this is that implementing a somewhat arbitrary 
policy within QEMU's .c files is something we should try to avoid.  
Implementing arbitrary policy in our default config file is a fine thing 
to do.  Default configs are suggested configurations that are modifiable 
by a user.  Something baked into QEMU is something that ought to work 
for everyone in all circumstances.

Regards,

Anthony Liguori
Dor Laor Jan. 25, 2010, 10:35 p.m. UTC | #33
On 01/25/2010 04:21 PM, Anthony Liguori wrote:
> On 01/25/2010 03:08 AM, Dor Laor wrote:
>>> It should just be a matter of adding qemu_cpudefs_opts to
>>> qemu-config.[ch], taking a new command line that parses the argument via
>>> QemuOpts, then passing the parsed options to a target-specific function
>>> that then builds the table of supported cpus.
>>
>> Isn't the outcome of John's patches and these configs will be exactly
>> the same? Since these cpu models won't ever change, there is no reason
>> why not to hard code them. Adding configs or command lines is a good
>> idea but it is more friendlier to have basic support to the common cpus.
>> This is why qemu today offers: -cpu ?
>> x86 qemu64
>> x86 phenom
>> x86 core2duo
>> x86 kvm64
>> x86 qemu32
>> x86 coreduo
>> x86 486
>> x86 pentium
>> x86 pentium2
>> x86 pentium3
>> x86 athlon
>> x86 n270
>>
>> So bottom line, my point is to have John's base + your configs. We
>> need to keep also the check verb and the migration support for sending
>> those.
>>
>> btw: IMO we should deal with this complexity ourselves and save 99.9%
>> of the users the need to define such models, don't ask this from a
>> java programmer, he is running on a JVM :-)
>
> I'm suggesting John's base should be implemented as a default config
> that gets installed by default in QEMU. The point is that a smart user
> (or a downstream) can modify this to suite their needs more appropriately.
>
> Another way to look at this is that implementing a somewhat arbitrary
> policy within QEMU's .c files is something we should try to avoid.
> Implementing arbitrary policy in our default config file is a fine thing
> to do. Default configs are suggested configurations that are modifiable
> by a user. Something baked into QEMU is something that ought to work for

If we get the models right, users and mgmt stacks won't need to define 
them. It seems like an almost impossible task for us; mgmt stacks/users 
won't do a better job, rather the opposite I'd guess. The configs are 
great, I have no argument against them; my case is that if we can pin 
down some definitions, they are better off living in the code, like the 
above models.
It might even help to get the same cpus across the various vendors; 
otherwise we might end up with IBM's core2duo, RH's core2duo, Suse's, ...

> everyone in all circumstances.
>
> Regards,
>
> Anthony Liguori
>
>
Gerd Hoffmann Jan. 26, 2010, 8:26 a.m. UTC | #34
On 01/25/10 23:35, Dor Laor wrote:
> On 01/25/2010 04:21 PM, Anthony Liguori wrote:
>> Another way to look at this is that implementing a somewhat arbitrary
>> policy within QEMU's .c files is something we should try to avoid.
>> Implementing arbitrary policy in our default config file is a fine thing
>> to do. Default configs are suggested configurations that are modifiable
>> by a user. Something baked into QEMU is something that ought to work for
>
> If we get the models right, users and mgmt stacks won't need to define
> them. It seems like almost impossible task for us, mgmt stack/users
> won't do a better job, the opposite I guess. The configs are great, I
> have no argument against them, my case is that if we can pin down some
> definitions, its better live in the code, like the above models.
> It might even help to get the same cpus across the various vendors,
> otherwise we might end up with IBM's core2duo, RH's core2duo, Suse's,..

I agree.  When looking at this thread and config file idea it feels a 
bit like "we have a hard time to agree on some sensible default cpu 
types, so lets make this configurable so we don't have to".  Which is a 
bad thing IMHO.

cheers,
   Gerd
Anthony Liguori Jan. 26, 2010, 12:54 p.m. UTC | #35
On 01/26/2010 02:26 AM, Gerd Hoffmann wrote:
> On 01/25/10 23:35, Dor Laor wrote:
>> On 01/25/2010 04:21 PM, Anthony Liguori wrote:
>>> Another way to look at this is that implementing a somewhat arbitrary
>>> policy within QEMU's .c files is something we should try to avoid.
>>> Implementing arbitrary policy in our default config file is a fine 
>>> thing
>>> to do. Default configs are suggested configurations that are modifiable
>>> by a user. Something baked into QEMU is something that ought to work 
>>> for
>>
>> If we get the models right, users and mgmt stacks won't need to define
>> them. It seems like almost impossible task for us, mgmt stack/users
>> won't do a better job, the opposite I guess. The configs are great, I
>> have no argument against them, my case is that if we can pin down some
>> definitions, its better live in the code, like the above models.
>> It might even help to get the same cpus across the various vendors,
>> otherwise we might end up with IBM's core2duo, RH's core2duo, Suse's,..
>
> I agree.  When looking at this thread and config file idea it feels a 
> bit like "we have a hard time to agree on some sensible default cpu 
> types, so lets make this configurable so we don't have to".  Which is 
> a bad thing IMHO.

There's no sensible default.  If a user only has Nehalem-EX class 
processors and Westmeres, why would they want to limit themselves to 
just Nehalem?  For an organization that already uses and understand the 
VMware grouping, is it wrong for them to want to just use VMware-style 
grouping?

This feature is purely data driven.  There is no code involved.  Any 
time a feature is purely data driven and there isn't a clear right and 
wrong solution, a configuration file is a natural solution IMHO.

I think the only real question is whether it belongs in the default 
config or a dedicated configuration file but honestly that's just a 
statement of convention.

Regards,

Anthony Liguori

> cheers,
>   Gerd
Arnd Bergmann Jan. 28, 2010, 8:19 a.m. UTC | #36
On Monday 25 January 2010, Dor Laor wrote:
> x86           qemu64
> x86           phenom
> x86         core2duo
> x86            kvm64
> x86           qemu32
> x86          coreduo
> x86              486
> x86          pentium
> x86         pentium2
> x86         pentium3
> x86           athlon
> x86             n270

I think a really nice addition would be an autodetect option for those
users (e.g. desktop) that know they do not want to migrate the guest
to a lower-spec machine. 

That option IMHO should just show up as identical to the host cpu, with
the exception of features that are not supported in the guest.

	Arnd
Alexander Graf Jan. 28, 2010, 8:43 a.m. UTC | #37
On 28.01.2010, at 09:19, Arnd Bergmann wrote:

> On Monday 25 January 2010, Dor Laor wrote:
>> x86           qemu64
>> x86           phenom
>> x86         core2duo
>> x86            kvm64
>> x86           qemu32
>> x86          coreduo
>> x86              486
>> x86          pentium
>> x86         pentium2
>> x86         pentium3
>> x86           athlon
>> x86             n270
> 
> I think a really nice addition would be an autodetect option for those
> users (e.g. desktop) that know they do not want to migrate the guest
> to a lower-spec machine. 
> 
> That option IMHO should just show up as identical to the host cpu, with
> the exception of features that are not supported in the guest.

That's exactly what -cpu host is. IIRC it's the default now.

Alex
Arnd Bergmann Jan. 28, 2010, 10:09 a.m. UTC | #38
On Thursday 28 January 2010, Alexander Graf wrote:
> > That option IMHO should just show up as identical to the host cpu, with
> > the exception of features that are not supported in the guest.
> 
> That's exactly what -cpu host is. IIRC it's the default now.

Ah, cool. Sorry for my ignorance here.

From what I can tell, neither the man page nor 'qemu -cpu \?' shows that
information, though.

	Arnd
Anthony Liguori Jan. 28, 2010, 2:10 p.m. UTC | #39
On 01/28/2010 02:43 AM, Alexander Graf wrote:
> On 28.01.2010, at 09:19, Arnd Bergmann wrote:
>
>    
>> On Monday 25 January 2010, Dor Laor wrote:
>>      
>>> x86           qemu64
>>> x86           phenom
>>> x86         core2duo
>>> x86            kvm64
>>> x86           qemu32
>>> x86          coreduo
>>> x86              486
>>> x86          pentium
>>> x86         pentium2
>>> x86         pentium3
>>> x86           athlon
>>> x86             n270
>>>        
>> I think a really nice addition would be an autodetect option for those
>> users (e.g. desktop) that know they do not want to migrate the guest
>> to a lower-spec machine.
>>
>> That option IMHO should just show up as identical to the host cpu, with
>> the exception of features that are not supported in the guest.
>>      
> That's exactly what -cpu host is. IIRC it's the default now.
>    

Not yet.  Someone has to send a patch.

We can't enforce this wrt migration until someone implements migration 
of cpuid state.

Regards,

Anthony Liguori

> Alex
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

Patch

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index f3834b3..58400ab 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -722,8 +722,8 @@  typedef struct CPUX86State {
 CPUX86State *cpu_x86_init(const char *cpu_model);
 int cpu_x86_exec(CPUX86State *s);
 void cpu_x86_close(CPUX86State *s);
-void x86_cpu_list (FILE *f, int (*cpu_fprintf)(FILE *f, const char *fmt,
-                                                 ...));
+void x86_cpu_list (FILE *f, int (*cpu_fprintf)(FILE *f, const char *fmt, ...),
+                                const char *optarg);
 int cpu_get_pic_interrupt(CPUX86State *s);
 /* MSDOS compatibility mode FPU exception support */
 void cpu_set_ferr(CPUX86State *s);
@@ -875,7 +875,7 @@  uint64_t cpu_get_tsc(CPUX86State *env);
 #define cpu_exec cpu_x86_exec
 #define cpu_gen_code cpu_x86_gen_code
 #define cpu_signal_handler cpu_x86_signal_handler
-#define cpu_list x86_cpu_list
+#define cpu_list_id x86_cpu_list
 
 #define CPU_SAVE_VERSION 11
 
diff --git a/target-i386/helper.c b/target-i386/helper.c
index 730e396..34f4936 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -42,7 +42,7 @@  static const char *feature_name[] = {
 static const char *ext_feature_name[] = {
     "pni" /* Intel,AMD sse3 */, NULL, NULL, "monitor", "ds_cpl", "vmx", NULL /* Linux smx */, "est",
     "tm2", "ssse3", "cid", NULL, NULL, "cx16", "xtpr", NULL,
-    NULL, NULL, "dca", NULL, NULL, NULL, NULL, "popcnt",
+    NULL, NULL, "dca", "sse4.1", "sse4.2", "x2apic", NULL, "popcnt",
     NULL, NULL, NULL, NULL, NULL, NULL, NULL, "hypervisor",
 };
 static const char *ext2_feature_name[] = {
@@ -58,6 +58,18 @@  static const char *ext3_feature_name[] = {
     NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
 };
 
+/* collects per-function cpuid data
+ */
+typedef struct model_features_t {
+    uint32_t *guest_feat;
+    uint32_t *host_feat;
+    uint32_t check_feat;
+    const char **flag_names;
+    uint32_t cpuid;
+    } model_features_t;
+
+int check_cpuid = 0;
+
 static void add_flagname_to_bitmaps(const char *flagname, uint32_t *features,
                                     uint32_t *ext_features,
                                     uint32_t *ext2_features,
@@ -111,10 +123,25 @@  typedef struct x86_def_t {
           CPUID_MTRR | CPUID_PGE | CPUID_MCA | CPUID_CMOV | CPUID_PAT | \
           CPUID_PSE36 | CPUID_FXSR)
 #define PENTIUM3_FEATURES (PENTIUM2_FEATURES | CPUID_SSE)
-#define PPRO_FEATURES (CPUID_FP87 | CPUID_DE | CPUID_PSE | CPUID_TSC | \
-          CPUID_MSR | CPUID_MCE | CPUID_CX8 | CPUID_PGE | CPUID_CMOV | \
-          CPUID_PAT | CPUID_FXSR | CPUID_MMX | CPUID_SSE | CPUID_SSE2 | \
-          CPUID_PAE | CPUID_SEP | CPUID_APIC)
+
+#define PPRO_FEATURES (\
+    0|CPUID_SSE2|CPUID_SSE|CPUID_FXSR|            /* 7 */ \
+    CPUID_MMX|0 << 22|0 << 21|0 << 20|            /* 8 */ \
+    0 << 19|0 << 18|0 << 17|CPUID_PAT|            /* 1 */ \
+    CPUID_CMOV|0 << 14|CPUID_PGE|0 << 12|         /* a */ \
+    CPUID_SEP|0 << 10|CPUID_APIC|CPUID_CX8|       /* b */ \
+    CPUID_MCE|CPUID_PAE|CPUID_MSR|CPUID_TSC|      /* f */ \
+    CPUID_PSE|CPUID_DE|0 << 1|CPUID_FP87)         /* d */
+
+#define CPUID_EXT2_MASK (\
+    0 << 27|0 << 26|0 << 25|CPUID_FXSR|           /* 1 */ \
+    CPUID_MMX|0 << 22|0 << 21|0 << 20|            /* 8 */ \
+    0 << 19|0 << 18|CPUID_PSE36|CPUID_PAT|        /* 3 */ \
+    CPUID_CMOV|CPUID_MCA|CPUID_PGE|CPUID_MTRR|    /* f */ \
+    0 << 11|0 << 10|CPUID_APIC|CPUID_CX8|         /* 3 */ \
+    CPUID_MCE|CPUID_PAE|CPUID_MSR|CPUID_TSC|      /* f */ \
+    CPUID_PSE|CPUID_DE|CPUID_VME|CPUID_FP87)      /* f */
+
 static x86_def_t x86_defs[] = {
 #ifdef TARGET_X86_64
     {
@@ -127,12 +154,10 @@  static x86_def_t x86_defs[] = {
         .model = 2,
         .stepping = 3,
         .features = PPRO_FEATURES | 
-        /* these features are needed for Win64 and aren't fully implemented */
-            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |
-        /* this feature is needed for Solaris and isn't fully implemented */
-            CPUID_PSE36,
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
         .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_CX16 | CPUID_EXT_POPCNT,
-        .ext2_features = (PPRO_FEATURES & 0x0183F3FF) | 
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
             CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
         .ext3_features = CPUID_EXT3_LAHF_LM | CPUID_EXT3_SVM |
             CPUID_EXT3_ABM | CPUID_EXT3_SSE4A,
@@ -155,7 +180,7 @@  static x86_def_t x86_defs[] = {
         .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_MONITOR | CPUID_EXT_CX16 |
             CPUID_EXT_POPCNT,
         /* Missing: CPUID_EXT2_PDPE1GB, CPUID_EXT2_RDTSCP */
-        .ext2_features = (PPRO_FEATURES & 0x0183F3FF) | 
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
             CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX |
             CPUID_EXT2_3DNOW | CPUID_EXT2_3DNOWEXT | CPUID_EXT2_MMXEXT |
             CPUID_EXT2_FFXSR,
@@ -169,6 +194,126 @@  static x86_def_t x86_defs[] = {
         .model_id = "AMD Phenom(tm) 9550 Quad-Core Processor"
     },
     {
+        .name = "Conroe",
+        .level = 2,
+        .vendor1 = CPUID_VENDOR_INTEL_1,
+        .vendor2 = CPUID_VENDOR_INTEL_2,
+        .vendor3 = CPUID_VENDOR_INTEL_3,
+        .family = 6,	/* P6 */
+        .model = 2,
+        .stepping = 3,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_SSSE3,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
+        .ext3_features = CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x8000000A,
+        .model_id = "Intel Celeron_4x0 (Conroe/Merom Class Core 2)",
+    },
+    {
+        .name = "Penryn",
+        .level = 2,
+        .vendor1 = CPUID_VENDOR_INTEL_1,
+        .vendor2 = CPUID_VENDOR_INTEL_2,
+        .vendor3 = CPUID_VENDOR_INTEL_3,
+        .family = 6,	/* P6 */
+        .model = 2,
+        .stepping = 3,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3 |
+            CPUID_EXT_CX16 | CPUID_EXT_SSSE3 | CPUID_EXT_SSE41,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
+        .ext3_features = CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x8000000A,
+        .model_id = "Intel Core 2 Duo P9xxx (Penryn Class Core 2)",
+    },
+    {
+        .name = "Nehalem",
+        .level = 2,
+        .vendor1 = CPUID_VENDOR_INTEL_1,
+        .vendor2 = CPUID_VENDOR_INTEL_2,
+        .vendor3 = CPUID_VENDOR_INTEL_3,
+        .family = 6,	/* P6 */
+        .model = 2,
+        .stepping = 3,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3 |
+            CPUID_EXT_CX16 | CPUID_EXT_SSSE3 | CPUID_EXT_SSE41 |
+            CPUID_EXT_SSE42 | CPUID_EXT_POPCNT,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
+        .ext3_features = CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x8000000A,
+        .model_id = "Intel Core i7 9xx (Nehalem Class Core i7)",
+    },
+    {
+        .name = "Opteron_G1",
+        .level = 5,
+        .vendor1 = CPUID_VENDOR_AMD_1,
+        .vendor2 = CPUID_VENDOR_AMD_2,
+        .vendor3 = CPUID_VENDOR_AMD_3,
+        .family = 15,
+        .model = 6,
+        .stepping = 1,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
+        .xlevel = 0x80000008,
+        .model_id = "AMD Opteron 240 (Gen 1 Class Opteron)",
+    },
+    {
+        .name = "Opteron_G2",
+        .level = 5,
+        .vendor1 = CPUID_VENDOR_AMD_1,
+        .vendor2 = CPUID_VENDOR_AMD_2,
+        .vendor3 = CPUID_VENDOR_AMD_3,
+        .family = 15,
+        .model = 6,
+        .stepping = 1,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_CX16,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX |
+            CPUID_EXT2_RDTSCP,
+        .ext3_features = CPUID_EXT3_SVM | CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x80000008,
+        .model_id = "AMD Opteron 22xx (Gen 2 Class Opteron)",
+    },
+    {
+        .name = "Opteron_G3",
+        .level = 5,
+        .vendor1 = CPUID_VENDOR_AMD_1,
+        .vendor2 = CPUID_VENDOR_AMD_2,
+        .vendor3 = CPUID_VENDOR_AMD_3,
+        .family = 15,
+        .model = 6,
+        .stepping = 1,
+        .features = PPRO_FEATURES | 
+            CPUID_MTRR | CPUID_CLFLUSH | CPUID_MCA |    /* note 1 */
+            CPUID_PSE36,                                /* note 2 */
+        .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_MONITOR |
+            CPUID_EXT_POPCNT | CPUID_EXT_CX16,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | 
+            CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX |
+            CPUID_EXT2_RDTSCP,
+        .ext3_features = CPUID_EXT3_SVM | CPUID_EXT3_SSE4A | CPUID_EXT3_ABM |
+            CPUID_EXT3_MISALIGNSSE | CPUID_EXT3_LAHF_LM,
+        .xlevel = 0x80000008,
+        .model_id = "AMD Opteron 23xx (Gen 3 Class Opteron)",
+    },
+    {
         .name = "core2duo",
         .level = 10,
         .family = 6,
@@ -205,7 +350,7 @@  static x86_def_t x86_defs[] = {
         /* Missing: CPUID_EXT_POPCNT, CPUID_EXT_MONITOR */
         .ext_features = CPUID_EXT_SSE3 | CPUID_EXT_CX16,
         /* Missing: CPUID_EXT2_PDPE1GB, CPUID_EXT2_RDTSCP */
-        .ext2_features = (PPRO_FEATURES & 0x0183F3FF) |
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) |
             CPUID_EXT2_LM | CPUID_EXT2_SYSCALL | CPUID_EXT2_NX,
         /* Missing: CPUID_EXT3_LAHF_LM, CPUID_EXT3_CMP_LEG, CPUID_EXT3_EXTAPIC,
                     CPUID_EXT3_CR8LEG, CPUID_EXT3_ABM, CPUID_EXT3_SSE4A,
@@ -292,7 +437,7 @@  static x86_def_t x86_defs[] = {
         .model = 2,
         .stepping = 3,
         .features = PPRO_FEATURES | CPUID_PSE36 | CPUID_VME | CPUID_MTRR | CPUID_MCA,
-        .ext2_features = (PPRO_FEATURES & 0x0183F3FF) | CPUID_EXT2_MMXEXT | CPUID_EXT2_3DNOW | CPUID_EXT2_3DNOWEXT,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | CPUID_EXT2_MMXEXT | CPUID_EXT2_3DNOW | CPUID_EXT2_3DNOWEXT,
         .xlevel = 0x80000008,
         /* XXX: put another string ? */
         .model_id = "QEMU Virtual CPU version " QEMU_VERSION,
@@ -313,12 +458,16 @@  static x86_def_t x86_defs[] = {
             CPUID_EXT_SSE3 /* PNI */ | CPUID_EXT_SSSE3,
             /* Missing: CPUID_EXT_DSCPL | CPUID_EXT_EST |
              * CPUID_EXT_TM2 | CPUID_EXT_XTPR */
-        .ext2_features = (PPRO_FEATURES & 0x0183F3FF) | CPUID_EXT2_NX,
+        .ext2_features = (PPRO_FEATURES & CPUID_EXT2_MASK) | CPUID_EXT2_NX,
         /* Missing: .ext3_features = CPUID_EXT3_LAHF_LM */
         .xlevel = 0x8000000A,
         .model_id = "Intel(R) Atom(TM) CPU N270   @ 1.60GHz",
-    },
+    }
 };
+/* notes for preceding cpu models:
+ *   1: these features are needed for Win64 and aren't fully implemented
+ *   2: this feature is needed for Solaris and isn't fully implemented
+ */
 
 static void host_cpuid(uint32_t function, uint32_t count, uint32_t *eax,
                                uint32_t *ebx, uint32_t *ecx, uint32_t *edx);
@@ -368,6 +517,51 @@  static int cpu_x86_fill_host(x86_def_t *x86_cpu_def)
     return 0;
 }
 
+static int unavailable_host_feature(struct model_features_t *f, uint32_t mask)
+{
+    int i;
+
+    for (i = 0; i < 32; ++i)
+        if (1 << i & mask) {
+            fprintf(stderr, "warning: host cpuid %04x_%04x lacks requested"
+                " flag '%s' [0x%08x]\n",
+                f->cpuid >> 16, f->cpuid & 0xffff,
+                f->flag_names[i] ? f->flag_names[i] : "[reserved]", mask);
+            break;
+        }
+    return 0;
+}
+
+/* Best-effort attempt to inform the user that requested cpu flags aren't
+ * making their way to the guest.  Note: ft[].check_feat ideally should be
+ * specified via a guest_def field to suppress reports of extraneous flags.
+ */
+static int check_features_against_host(x86_def_t *guest_def)
+{
+    x86_def_t host_def;
+    uint32_t mask;
+    int rv, i;
+    struct model_features_t ft[] = {
+        {&guest_def->features, &host_def.features,
+            ~0, feature_name, 0x00000000},
+        {&guest_def->ext_features, &host_def.ext_features,
+            ~CPUID_EXT_HYPERVISOR, ext_feature_name, 0x00000001},
+        {&guest_def->ext2_features, &host_def.ext2_features,
+            ~PPRO_FEATURES, ext2_feature_name, 0x80000000},
+        {&guest_def->ext3_features, &host_def.ext3_features,
+            ~CPUID_EXT3_SVM, ext3_feature_name, 0x80000001}};
+
+    cpu_x86_fill_host(&host_def);
+    for (rv = 0, i = 0; i < sizeof (ft) / sizeof (ft[0]); ++i)
+        for (mask = 1; mask; mask <<= 1)
+            if (ft[i].check_feat & mask && *ft[i].guest_feat & mask &&
+                !(*ft[i].host_feat & mask)) {
+                    unavailable_host_feature(&ft[i], mask);
+                    rv = 1;
+                }
+    return rv;
+}
+
 static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 {
     unsigned int i;
@@ -471,6 +665,8 @@  static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
                 fprintf(stderr, "unrecognized feature %s\n", featurestr);
                 goto error;
             }
+        } else if (!strcmp(featurestr, "check")) {
+            check_cpuid = 1;
         } else {
             fprintf(stderr, "feature string `%s' not in format (+feature|-feature|feature=xyz)\n", featurestr);
             goto error;
@@ -485,6 +681,9 @@  static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
     x86_cpu_def->ext_features &= ~minus_ext_features;
     x86_cpu_def->ext2_features &= ~minus_ext2_features;
     x86_cpu_def->ext3_features &= ~minus_ext3_features;
+    if (check_cpuid) {
+        check_features_against_host(x86_cpu_def);
+    }
     free(s);
     return 0;
 
@@ -493,12 +692,19 @@  error:
     return -1;
 }
 
-void x86_cpu_list (FILE *f, int (*cpu_fprintf)(FILE *f, const char *fmt, ...))
+void x86_cpu_list (FILE *f, int (*cpu_fprintf)(FILE *f, const char *fmt, ...),
+                  const char *optarg)
 {
     unsigned int i;
+    unsigned char id = !strcmp("??", optarg);
 
     for (i = 0; i < ARRAY_SIZE(x86_defs); i++)
-        (*cpu_fprintf)(f, "x86 %16s\n", x86_defs[i].name);
+        if (id) {
+            (*cpu_fprintf)(f, "x86 %16s  %-48s\n", x86_defs[i].name,
+                x86_defs[i].model_id);
+        } else {
+            (*cpu_fprintf)(f, "x86 %16s\n", x86_defs[i].name);
+        }
 }
 
 static int cpu_x86_register (CPUX86State *env, const char *cpu_model)
diff --git a/vl.c b/vl.c
index e606903..b1d8490 100644
--- a/vl.c
+++ b/vl.c
@@ -4982,8 +4982,12 @@  int main(int argc, char **argv, char **envp)
                 /* hw initialization will check this */
                 if (*optarg == '?') {
 /* XXX: implement xxx_cpu_list for targets that still miss it */
-#if defined(cpu_list)
+#if defined(cpu_list_id)
+                    cpu_list_id(stdout, &fprintf, optarg);
+#elif defined(cpu_list)	/* revert to previous func definition */
                     cpu_list(stdout, &fprintf);
+#else
+#error cpu_list_id() is undefined for this architecture.
 #endif
                     exit(0);
                 } else {