
[07/28] kvm tools: Move 'kvm__recommended_cpus' to arch-specific code

Message ID 1323246564.4009.2.camel@lappy
State New, archived

Commit Message

Sasha Levin Dec. 7, 2011, 8:29 a.m. UTC
On Wed, 2011-12-07 at 18:28 +1100, Matt Evans wrote:
> On 07/12/11 18:24, Alexander Graf wrote:
> > 
> > On 07.12.2011, at 08:19, Matt Evans <matt@ozlabs.org> wrote:
> > 
> >> On 07/12/11 17:34, Sasha Levin wrote:
> >>> On Wed, 2011-12-07 at 17:17 +1100, Matt Evans wrote:
> >>>> On 06/12/11 19:20, Sasha Levin wrote:
> >>>>> Why is it getting moved out of generic code?
> >>>>>
> >>>>> This is used to determine the maximum amount of vcpus supported by the
> >>>>> host for a single guest, and as far as I know KVM_CAP_NR_VCPUS and
> >>>>> KVM_CAP_MAX_VCPUS are not arch specific.
> >>>>
> >>>> I checked api.txt and you're right, it isn't arch-specific.  I assumed it was,
> >>>> because PPC KVM doesn't support it ;-) I've dropped this patch and in its place
> >>>> implemented the api.txt suggestion of "if KVM_CAP_NR_VCPUS fails, use 4" instead
> >>>> of die(); you'll see that when I repost.
> >>>>
> >>>> This will have the effect of PPC being limited to 4 CPUs until the kernel
> >>>> supports that CAP.  (I'll see about this part too.)
> >>>
> >>> I went to look at which limitation PPC places on amount of vcpus in
> >>> guest, and saw this in kvmppc_core_vcpu_create() in the book3s code:
> >>>
> >>>    vcpu = kvmppc_core_vcpu_create(kvm, id);
> >>>    vcpu->arch.wqp = &vcpu->wq;
> >>>    if (!IS_ERR(vcpu))
> >>>        kvmppc_create_vcpu_debugfs(vcpu, id);
> >>>
> >>> This is wrong, right? The VCPU is dereferenced before actually checking
> >>> that it's not an error.
> >>
> >> Yeah, that's b0rk.  Alex, a patch below. :)
> > 
> > Thanks :). Will apply asap but don't have a real keyboard today :).
> 
> Ha!  Voice control on your phone, what could go wrong?
> 
> > I suppose this is stable material?
> 
> Good idea, (and if we're formal,
> Signed-off-by: Matt Evans <matt@ozlabs.org>
> ).  I suppose no one's seen a vcpu fail to be created, yet.
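For reference, the api.txt fallback Matt describes ("if KVM_CAP_NR_VCPUS fails, use 4")
would look roughly like the following in the tool's userspace code. This is a sketch, not
Matt's actual repost; kvm->sys_fd (the /dev/kvm descriptor) and the exact function shape
are assumptions based on the existing kvm tool code:

	/* Sketch only: fall back to 4 vcpus when the cap is unsupported. */
	int kvm__recommended_cpus(struct kvm *kvm)
	{
		int ret = ioctl(kvm->sys_fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);

		return ret <= 0 ? 4 : ret;
	}

Matt's fix for the IS_ERR() ordering bug is likewise not included in this archive, but the
reordering being discussed is presumably along these lines (the quoted snippet is the caller
of kvmppc_core_vcpu_create(), i.e. kvm_arch_vcpu_create() in arch/powerpc/kvm/powerpc.c):

	/* Presumed shape of the fix: dereference vcpu only after IS_ERR(). */
	vcpu = kvmppc_core_vcpu_create(kvm, id);
	if (!IS_ERR(vcpu)) {
		vcpu->arch.wqp = &vcpu->wq;
		kvmppc_create_vcpu_debugfs(vcpu, id);
	}
	return vcpu;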

I also got another one, but it's **completely untested** (not even
compiled). Alex, Matt, any chance one of you can loan a temporary ppc
shell for the upcoming tests of KVM tool/ppc KVM?

---

From: Sasha Levin <levinsasha928@gmail.com>
Date: Wed, 7 Dec 2011 10:24:56 +0200
Subject: [PATCH] KVM: PPC: Use the vcpu kmem_cache when allocating new VCPUs

Currently the code kzalloc()s new VCPUs instead of using the kmem_cache
which is created when KVM is initialized.

Modify it to allocate VCPUs from that kmem_cache.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 arch/powerpc/kvm/book3s_hv.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

Comments

Avi Kivity Dec. 7, 2011, 2:11 p.m. UTC | #1
On 12/07/2011 10:29 AM, Sasha Levin wrote:
> I also got another one, but it's **completely untested** (not even
> compiled). Alex, Matt, any chance one of you can loan a temporary ppc
> shell for the upcoming tests of KVM tool/ppc KVM?
>

qemu offers free ppc shells
Sasha Levin Dec. 7, 2011, 2:22 p.m. UTC | #2
On Wed, 2011-12-07 at 16:11 +0200, Avi Kivity wrote:
> On 12/07/2011 10:29 AM, Sasha Levin wrote:
> > I also got another one, but it's **completely untested** (not even
> > compiled). Alex, Matt, any chance one of you can loan a temporary ppc
> > shell for the upcoming tests of KVM tool/ppc KVM?
> >
> 
> qemu offers free ppc shells

qemu would let me nest a guest inside a ppc guest on my x86?
Avi Kivity Dec. 7, 2011, 2:25 p.m. UTC | #3
On 12/07/2011 04:22 PM, Sasha Levin wrote:
> On Wed, 2011-12-07 at 16:11 +0200, Avi Kivity wrote:
> > On 12/07/2011 10:29 AM, Sasha Levin wrote:
> > > I also got another one, but it's **completely untested** (not even
> > > compiled). Alex, Matt, any chance one of you can loan a temporary ppc
> > > shell for the upcoming tests of KVM tool/ppc KVM?
> > >
> > 
> > qemu offers free ppc shells
>
> qemu would let me nest a guest inside a ppc guest on my x86?

On supervisor mode only (trap-and-emulate virtualization); I don't think
qemu emulates ppc hypervisor mode.
Alexander Graf Dec. 7, 2011, 3 p.m. UTC | #4
On 07.12.2011, at 15:25, Avi Kivity <avi@redhat.com> wrote:

> On 12/07/2011 04:22 PM, Sasha Levin wrote:
>> On Wed, 2011-12-07 at 16:11 +0200, Avi Kivity wrote:
>>> On 12/07/2011 10:29 AM, Sasha Levin wrote:
>>>> I also got another one, but it's **completely untested** (not even
>>>> compiled). Alex, Matt, any chance one of you can loan a temporary ppc
>>>> shell for the upcoming tests of KVM tool/ppc KVM?
>>>> 
>>> 
>>> qemu offers free ppc shells
>> 
>> qemu would let me nest a guest inside a ppc guest on my x86?
> 
> On supervisor mode only (trap-and-emulate virtualization); I don't think
> qemu emulates ppc hypervisor mode.

Yup, that should work for PR KVM. I'm not sure Matt's patches work in that mode though.

So you should be able to run qemu-system-ppc64 -M pseries -nographic to emulate a ppc box and then do the same in the guest with -enable-kvm.

If that works, you can try with kvm tool in the first guest :).

Alex
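Concretely, the nesting Alex suggests would look something like this; the kernel and initrd
file names are placeholders, not taken from the thread:

	# On the x86 host: plain TCG emulation of a pseries guest, no KVM involved.
	qemu-system-ppc64 -M pseries -nographic -kernel vmlinux -initrd initrd.img

	# Inside that ppc64 guest: the same invocation with -enable-kvm, which
	# uses PR KVM (trap-and-emulate), not HV KVM.
	qemu-system-ppc64 -M pseries -nographic -enable-kvm -kernel vmlinux -initrd initrd.img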

Sasha Levin Dec. 7, 2011, 3:25 p.m. UTC | #5
On Wed, 2011-12-07 at 16:00 +0100, Alexander Graf wrote:
> On 07.12.2011, at 15:25, Avi Kivity <avi@redhat.com> wrote:
> 
> > On 12/07/2011 04:22 PM, Sasha Levin wrote:
> >> On Wed, 2011-12-07 at 16:11 +0200, Avi Kivity wrote:
> >>> On 12/07/2011 10:29 AM, Sasha Levin wrote:
> >>>> I also got another one, but it's **completely untested** (not even
> >>>> compiled). Alex, Matt, any chance one of you can loan a temporary ppc
> >>>> shell for the upcoming tests of KVM tool/ppc KVM?
> >>>> 
> >>> 
> >>> qemu offers free ppc shells
> >> 
> >> qemu would let me nest a guest inside a ppc guest on my x86?
> > 
> > On supervisor mode only (trap-and-emulate virtualization); I don't think
> > qemu emulates ppc hypervisor mode.
> 
> Yup, that should work for PR KVM. I'm not sure Matt's patches work in that mode though.
> 
> So you should be able to run qemu-system-ppc64 -M pseries -nographic to emulate a ppc box and then do the same in the guest with -enable-kvm.
> 
> If that works, you can try with kvm tool in the first guest :).

Tried that, got the following (qemu-kvm-1.0 on qemu-kvm.git):

  LINK  ppc64-softmmu/qemu-system-ppc64
../libhw64/i8259.o: In function `kvm_i8259_set_irq':
/root/work/src/qemu-kvm/hw/i8259.c:707: undefined reference to `apic_set_irq_delivered'
../libhw64/i8259.o: In function `pic_read_irq':
/root/work/src/qemu-kvm/hw/i8259.c:240: undefined reference to `timer_acks'
/root/work/src/qemu-kvm/hw/i8259.c:241: undefined reference to `timer_ints_to_push'
collect2: ld returned 1 exit status
make[1]: *** [qemu-system-ppc64] Error 1
make: *** [subdir-ppc64-softmmu] Error 2

What am I doing wrong?
Alexander Graf Dec. 7, 2011, 3:58 p.m. UTC | #6
On 07.12.2011, at 16:25, Sasha Levin <levinsasha928@gmail.com> wrote:

> On Wed, 2011-12-07 at 16:00 +0100, Alexander Graf wrote:
>> On 07.12.2011, at 15:25, Avi Kivity <avi@redhat.com> wrote:
>> 
>>> On 12/07/2011 04:22 PM, Sasha Levin wrote:
>>>> On Wed, 2011-12-07 at 16:11 +0200, Avi Kivity wrote:
>>>>> On 12/07/2011 10:29 AM, Sasha Levin wrote:
>>>>>> I also got another one, but it's **completely untested** (not even
>>>>>> compiled). Alex, Matt, any chance one of you can loan a temporary ppc
>>>>>> shell for the upcoming tests of KVM tool/ppc KVM?
>>>>>> 
>>>>> 
>>>>> qemu offers free ppc shells
>>>> 
>>>> qemu would let me nest a guest inside a ppc guest on my x86?
>>> 
>>> On supervisor mode only (trap-and-emulate virtualization); I don't think
>>> qemu emulates ppc hypervisor mode.
>> 
>> Yup, that should work for PR KVM. I'm not sure Matt's patches work in that mode though.
>> 
>> So you should be able to run qemu-system-ppc64 -M pseries -nographic to emulate a ppc box and then do the same in the guest with -enable-kvm.
>> 
>> If that works, you can try with kvm tool in the first guest :).
> 
> Tried that, got the following (qemu-kvm-1.0 on qemu-kvm.git):
> 
>  LINK  ppc64-softmmu/qemu-system-ppc64
> ../libhw64/i8259.o: In function `kvm_i8259_set_irq':
> /root/work/src/qemu-kvm/hw/i8259.c:707: undefined reference to `apic_set_irq_delivered'
> ../libhw64/i8259.o: In function `pic_read_irq':
> /root/work/src/qemu-kvm/hw/i8259.c:240: undefined reference to `timer_acks'
> /root/work/src/qemu-kvm/hw/i8259.c:241: undefined reference to `timer_ints_to_push'
> collect2: ld returned 1 exit status
> make[1]: *** [qemu-system-ppc64] Error 1
> make: *** [subdir-ppc64-softmmu] Error 2
> 
> What am I doing wrong?

You're probably using the wrong tree :). The qemu-kvm fork used to work a while back, but I don't autotest it. Try again with

  git://git.qemu.org/qemu.git

and make sure to have libfdt installed :)

Alex
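Building the suggested tree for the pseries target would go roughly like this (the configure
invocation is illustrative; the libfdt development headers need to be installed first):

	git clone git://git.qemu.org/qemu.git
	cd qemu
	./configure --target-list=ppc64-softmmu
	make -j4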

Alexander Graf Dec. 20, 2011, 3:23 p.m. UTC | #7
On 07.12.2011, at 09:29, Sasha Levin wrote:

> On Wed, 2011-12-07 at 18:28 +1100, Matt Evans wrote:
>> On 07/12/11 18:24, Alexander Graf wrote:
>>> 
>>> On 07.12.2011, at 08:19, Matt Evans <matt@ozlabs.org> wrote:
>>> 
>>>> On 07/12/11 17:34, Sasha Levin wrote:
>>>>> On Wed, 2011-12-07 at 17:17 +1100, Matt Evans wrote:
>>>>>> On 06/12/11 19:20, Sasha Levin wrote:
>>>>>>> Why is it getting moved out of generic code?
>>>>>>> 
>>>>>>> This is used to determine the maximum amount of vcpus supported by the
>>>>>>> host for a single guest, and as far as I know KVM_CAP_NR_VCPUS and
>>>>>>> KVM_CAP_MAX_VCPUS are not arch specific.
>>>>>> 
>>>>>> I checked api.txt and you're right, it isn't arch-specific.  I assumed it was,
>>>>>> because PPC KVM doesn't support it ;-) I've dropped this patch and in its place
>>>>>> implemented the api.txt suggestion of "if KVM_CAP_NR_VCPUS fails, use 4" instead
>>>>>> of die(); you'll see that when I repost.
>>>>>> 
>>>>>> This will have the effect of PPC being limited to 4 CPUs until the kernel
>>>>>> supports that CAP.  (I'll see about this part too.)
>>>>> 
>>>>> I went to look at which limitation PPC places on amount of vcpus in
>>>>> guest, and saw this in kvmppc_core_vcpu_create() in the book3s code:
>>>>> 
>>>>>   vcpu = kvmppc_core_vcpu_create(kvm, id);
>>>>>   vcpu->arch.wqp = &vcpu->wq;
>>>>>   if (!IS_ERR(vcpu))
>>>>>       kvmppc_create_vcpu_debugfs(vcpu, id);
>>>>> 
>>>>> This is wrong, right? The VCPU is dereferenced before actually checking
>>>>> that it's not an error.
>>>> 
>>>> Yeah, that's b0rk.  Alex, a patch below. :)
>>> 
>>> Thanks :). Will apply asap but don't have a real keyboard today :).
>> 
>> Ha!  Voice control on your phone, what could go wrong?
>> 
>>> I suppose this is stable material?
>> 
>> Good idea, (and if we're formal,
>> Signed-off-by: Matt Evans <matt@ozlabs.org>
>> ).  I suppose no one's seen a vcpu fail to be created, yet.
> 
> I also got another one, but it's **completely untested** (not even
> compiled). Alex, Matt, any chance one of you can loan a temporary ppc
> shell for the upcoming tests of KVM tool/ppc KVM?

The problem with giving you a shell on a PPC box is really that the hardware Matt's work is focusing on is not available to the public yet. I could maybe try and get you access on a G5 box, but I'm not sure how useful that is to you really, as you'll only be able to run PR KVM, not HV KVM.

> 
> ---
> 
> From: Sasha Levin <levinsasha928@gmail.com>
> Date: Wed, 7 Dec 2011 10:24:56 +0200
> Subject: [PATCH] KVM: PPC: Use the vcpu kmem_cache when allocating new VCPUs
> 
> Currently the code kzalloc()s new VCPUs instead of using the kmem_cache
> which is created when KVM is initialized.
> 
> Modify it to allocate VCPUs from that kmem_cache.
> 
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
> arch/powerpc/kvm/book3s_hv.c |    6 +++---
> 1 files changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 0cb137a..e309099 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -411,7 +411,7 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
> 		goto out;
> 
> 	err = -ENOMEM;
> -	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
> +	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);

Paul, is there any rationale on why not to use the kmem cache? Are we bound by real mode magic again?


Alex

> 	if (!vcpu)
> 		goto out;
> 
> @@ -463,7 +463,7 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
> 	return vcpu;
> 
> free_vcpu:
> -	kfree(vcpu);
> +	kmem_cache_free(kvm_vcpu_cache, vcpu);
> out:
> 	return ERR_PTR(err);
> }
> @@ -471,7 +471,7 @@ out:
> void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
> {
> 	kvm_vcpu_uninit(vcpu);
> -	kfree(vcpu);
> +	kmem_cache_free(kvm_vcpu_cache, vcpu);
> }
> 
> static void kvmppc_set_timer(struct kvm_vcpu *vcpu)
> 
> -- 
> 
> Sasha.
> 

Paul Mackerras Dec. 21, 2011, 10:17 p.m. UTC | #8
On Tue, Dec 20, 2011 at 04:23:28PM +0100, Alexander Graf wrote:

> > -	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
> > +	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
> 
> Paul, is there any rationale on why not to use the kmem cache? Are
> we bound by real mode magic again?

No, I just hadn't realized it existed.  This looks fine to me.

Acked-by: Paul Mackerras <paulus@samba.org>

Paul.
Alexander Graf Dec. 23, 2011, 2:05 p.m. UTC | #9
On 21.12.2011, at 23:17, Paul Mackerras wrote:

> On Tue, Dec 20, 2011 at 04:23:28PM +0100, Alexander Graf wrote:
> 
>>> -	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
>>> +	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
>> 
>> Paul, is there any rationale on why not to use the kmem cache? Are
>> we bound by real mode magic again?
> 
> No, I just hadn't realized it existed.  This looks fine to me.
> 
> Acked-by: Paul Mackerras <paulus@samba.org>

Thanks, applied to kvm-ppc-next.


Alex


Patch

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 0cb137a..e309099 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -411,7 +411,7 @@  struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 		goto out;
 
 	err = -ENOMEM;
-	vcpu = kzalloc(sizeof(struct kvm_vcpu), GFP_KERNEL);
+	vcpu = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
 	if (!vcpu)
 		goto out;
 
@@ -463,7 +463,7 @@  struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 	return vcpu;
 
 free_vcpu:
-	kfree(vcpu);
+	kmem_cache_free(kvm_vcpu_cache, vcpu);
 out:
 	return ERR_PTR(err);
 }
@@ -471,7 +471,7 @@  out:
 void kvmppc_core_vcpu_free(struct kvm_vcpu *vcpu)
 {
 	kvm_vcpu_uninit(vcpu);
-	kfree(vcpu);
+	kmem_cache_free(kvm_vcpu_cache, vcpu);
 }
 
 static void kvmppc_set_timer(struct kvm_vcpu *vcpu)
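
For context, the kvm_vcpu_cache used above is created by the generic KVM code at module init
time. Roughly, paraphrasing virt/kvm/kvm_main.c of this era (a sketch, not the exact source):

	/* Each architecture passes its vcpu size and alignment to kvm_init();
	 * the cache is then created from those values. */
	int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
		     struct module *module)
	{
		...
		kvm_vcpu_cache = kmem_cache_create("kvm_vcpu", vcpu_size,
						   vcpu_align, 0, NULL);
		if (!kvm_vcpu_cache)
			return -ENOMEM;	/* error path simplified here */
		...
	}

Allocating vcpus with kmem_cache_zalloc() from this cache, rather than with a bare kzalloc(),
keeps them in the dedicated kvm_vcpu slab (visible in /proc/slabinfo) and honours whatever
alignment the architecture requested.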