
[10/12] kvm: enable smp > 1

Message ID 0af691d779965663abdd7bc708c2ad7bce2f6da0.1273699506.git.mtosatti@redhat.com
State New

Commit Message

Marcelo Tosatti May 12, 2010, 9:25 p.m. UTC
Process INIT/SIPI requests and enable -smp > 1.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
 kvm-all.c          |   10 +++++-----
 kvm.h              |    2 ++
 target-i386/kvm.c  |   16 ++++++++++++++++
 target-ppc/kvm.c   |    5 +++++
 target-s390x/kvm.c |    5 +++++
 5 files changed, 33 insertions(+), 5 deletions(-)

Comments

Alexander Graf May 14, 2010, 2:06 p.m. UTC | #1
On 12.05.2010, at 23:25, Marcelo Tosatti wrote:

> Process INIT/SIPI requests and enable -smp > 1.

Does this enable real SMP or does it still only allow one vcpu to run at a time?

Alex
Avi Kivity May 14, 2010, 3:48 p.m. UTC | #2
On 05/14/2010 05:06 PM, Alexander Graf wrote:
> On 12.05.2010, at 23:25, Marcelo Tosatti wrote:
>
>> Process INIT/SIPI requests and enable -smp > 1.
>
> Does this enable real SMP or does it still only allow one vcpu to run at a time?

The realest ever.  Still doesn't use in-kernel irqchip (qemu-kvm does 
"real" smp with -no-kvm-irqchip as well).
Alexander Graf May 14, 2010, 3:49 p.m. UTC | #3
On 14.05.2010, at 17:48, Avi Kivity wrote:

> On 05/14/2010 05:06 PM, Alexander Graf wrote:
>> On 12.05.2010, at 23:25, Marcelo Tosatti wrote:
>>
>>> Process INIT/SIPI requests and enable -smp > 1.
>>
>> Does this enable real SMP or does it still only allow one vcpu to run at a time?
> 
> The realest ever.  Still doesn't use in-kernel irqchip (qemu-kvm does "real" smp with -no-kvm-irqchip as well).

That's odd. On S390 I only get at most 100% cpu usage no matter how much -smp I pass.

Alex
Jan Kiszka May 14, 2010, 3:54 p.m. UTC | #4
Alexander Graf wrote:
> On 14.05.2010, at 17:48, Avi Kivity wrote:
> 
>> On 05/14/2010 05:06 PM, Alexander Graf wrote:
>>> On 12.05.2010, at 23:25, Marcelo Tosatti wrote:
>>>
>>>> Process INIT/SIPI requests and enable -smp > 1.
>>>
>>> Does this enable real SMP or does it still only allow one vcpu to run at a time?
>> The realest ever.  Still doesn't use in-kernel irqchip (qemu-kvm does "real" smp with -no-kvm-irqchip as well).
> 
> That's odd. On S390 I only get at most 100% cpu usage no matter how much -smp I pass.

--enable-io-thread?

If you had it disabled, it would also answer my question if -smp works
without problems without that feature.

Jan
Alexander Graf May 14, 2010, 3:58 p.m. UTC | #5
On 14.05.2010, at 17:54, Jan Kiszka wrote:

> Alexander Graf wrote:
>> On 14.05.2010, at 17:48, Avi Kivity wrote:
>> 
>>> On 05/14/2010 05:06 PM, Alexander Graf wrote:
>>>> On 12.05.2010, at 23:25, Marcelo Tosatti wrote:
>>>>
>>>>> Process INIT/SIPI requests and enable -smp > 1.
>>>>
>>>> Does this enable real SMP or does it still only allow one vcpu to run at a time?
>>> The realest ever.  Still doesn't use in-kernel irqchip (qemu-kvm does "real" smp with -no-kvm-irqchip as well).
>> 
>> That's odd. On S390 I only get at most 100% cpu usage no matter how much -smp I pass.
> 
> --enable-io-thread?
> 
> If you had it disabled, it would also answer my question if -smp works
> without problems without that feature.

Ah, yes, that's different.

And with --enable-io-thread I finally also get the virtio-console hang again! Yay!


Alex
Udo Lembke May 19, 2010, 9:57 a.m. UTC | #6
Jan Kiszka schrieb:
> ...
> --enable-io-thread?
>
> If you had it disabled, it would also answer my question if -smp works
> without problems without that feature.
>
> Jan
>
Hi,
I have a dumb question: what is "--enable-io-thread"? Is it a KVM switch?
My kvm 0.12.4 doesn't accept this switch; I only know "threads=n" as an -smp parameter and "aio=threads" as a -drive parameter.

I'm asking because I'm looking for better I/O performance in a Windows guest with more than one CPU...

best regards

Udo
Avi Kivity May 19, 2010, 4:21 p.m. UTC | #7
On 05/19/2010 12:57 PM, Udo Lembke wrote:
> Jan Kiszka schrieb:
>> ...
>> --enable-io-thread?
>>
>> If you had it disabled, it would also answer my question if -smp works
>> without problems without that feature.
>>
>> Jan
>>
> Hi,
> I have a dumb question: what is "--enable-io-thread"? Is it a KVM switch?

It's a ./configure switch for upstream qemu (don't use with qemu-kvm yet).

> My kvm 0.12.4 doesn't accept this switch; I only know "threads=n" as an -smp parameter and "aio=threads" as a -drive parameter.
>
> I'm asking because I'm looking for better I/O performance in a Windows guest with more than one CPU...

Unrelated, what are your smp issues?
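[Editor's note: as Avi says above, --enable-io-thread is a build-time ./configure option for upstream qemu, not a runtime -smp or -drive flag. A minimal build sketch follows; the version number and target list here are illustrative, not taken from the thread:]

```shell
# Build upstream qemu with the I/O thread compiled in.
# qemu-0.12.4 is just an example version; adjust to your tree.
tar xzf qemu-0.12.4.tar.gz
cd qemu-0.12.4
./configure --enable-io-thread --target-list=x86_64-softmmu
make
```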
Udo Lembke May 19, 2010, 8:02 p.m. UTC | #8
Avi Kivity schrieb:
> On 05/19/2010 12:57 PM, Udo Lembke wrote:
>> Jan Kiszka schrieb:
>>> ...
>>> --enable-io-thread?
>>>
>>> If you had it disabled, it would also answer my question if -smp works
>>> without problems without that feature.
>>>
>>> Jan
>>>
>> Hi,
>> I have a dumb question: what is "--enable-io-thread"? Is it a KVM switch?
>
> It's a ./configure switch for upstream qemu (don't use with qemu-kvm 
> yet).
>
>> My kvm 0.12.4 doesn't accept this switch; I only know "threads=n" as an -smp parameter and "aio=threads" as a -drive parameter.
>>
>> I'm asking because I'm looking for better I/O performance in a Windows guest with more than one CPU...
>
> Unrelated, what are your smp issues?
>
With one CPU I get good I/O performance:
e.g. over 500 MB/s with the "install" profile of the I/O benchmark h2benchw.exe.
(aio=threads | SAS RAID 0 | ftp://ftp.heise.de/pub/ct/ctsi/h2benchw.zip | h2benchw.exe -p -w iotest 0)
The same test with two CPUs gives results between 27 and 298 MB/s!

It's also noticeable in real life, not only in a benchmark. I use a Windows VM with two CPUs for PostScript ripping and see a performance drop due to the bad I/O.

Udo
Avi Kivity May 20, 2010, 6:12 a.m. UTC | #9
On 05/19/2010 11:02 PM, Udo Lembke wrote:
>> Unrelated, what are your smp issues?
>>
>
> With one CPU I get good I/O performance:
> e.g. over 500 MB/s with the "install" profile of the I/O benchmark h2benchw.exe.
> (aio=threads | SAS RAID 0 | ftp://ftp.heise.de/pub/ct/ctsi/h2benchw.zip | h2benchw.exe -p -w iotest 0)
> The same test with two CPUs gives results between 27 and 298 MB/s!
>
> It's also noticeable in real life, not only in a benchmark. I use a Windows VM with two CPUs for PostScript ripping and see a performance drop due to the bad I/O.

What's your block device model?  virtio or ide?

What does cpu usage look like on guest or host?
Udo Lembke May 20, 2010, 7:01 a.m. UTC | #10
Avi Kivity schrieb:
> On 05/19/2010 11:02 PM, Udo Lembke wrote:
>>> Unrelated, what are your smp issues?
>>>
>>
>> With one CPU I get good I/O performance:
>> e.g. over 500 MB/s with the "install" profile of the I/O benchmark h2benchw.exe.
>> (aio=threads | SAS RAID 0 | ftp://ftp.heise.de/pub/ct/ctsi/h2benchw.zip | h2benchw.exe -p -w iotest 0)
>> The same test with two CPUs gives results between 27 and 298 MB/s!
>>
>> It's also noticeable in real life, not only in a benchmark. I use a Windows VM with two CPUs for PostScript ripping and see a performance drop due to the bad I/O.
>
Hi,
> What's your block device model?  virtio or ide?
in the test described before I used virtio, but the same happens with 
ide (with slightly different values, of course).
>
> What does cpu usage look like on guest or host?
On the guest it looks like the I/O process flaps between the CPUs. 
Windows shows both CPUs together at around 65% (more or less), but 
when one CPU's usage rises, the other's drops.
On the host:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 5386 root      20   0 1160m 1.0g 1552 R  109 13.5   1:23.58 kvm

The guest is Windows XP, but the same happens in real life on Windows 2003.

Udo

Patch

diff --git a/kvm-all.c b/kvm-all.c
index e766202..d06980c 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -593,11 +593,6 @@  int kvm_init(int smp_cpus)
     int ret;
     int i;
 
-    if (smp_cpus > 1) {
-        fprintf(stderr, "No SMP KVM support, use '-smp 1'\n");
-        return -EINVAL;
-    }
-
     s = qemu_mallocz(sizeof(KVMState));
 
 #ifdef KVM_CAP_SET_GUEST_DEBUG
@@ -840,6 +835,11 @@  int kvm_cpu_exec(CPUState *env)
         }
 #endif
 
+        if (kvm_arch_process_irqchip_events(env)) {
+            ret = 0;
+            break;
+        }
+
         if (env->kvm_vcpu_dirty) {
             kvm_arch_put_registers(env, KVM_PUT_RUNTIME_STATE);
             env->kvm_vcpu_dirty = 0;
diff --git a/kvm.h b/kvm.h
index 70bfbf8..5071a31 100644
--- a/kvm.h
+++ b/kvm.h
@@ -90,6 +90,8 @@  int kvm_arch_handle_exit(CPUState *env, struct kvm_run *run);
 
 int kvm_arch_pre_run(CPUState *env, struct kvm_run *run);
 
+int kvm_arch_process_irqchip_events(CPUState *env);
+
 int kvm_arch_get_registers(CPUState *env);
 
 /* state subset only touched by the VCPU itself during runtime */
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index c9ec72e..bd7a190 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -1073,6 +1073,22 @@  int kvm_arch_post_run(CPUState *env, struct kvm_run *run)
     return 0;
 }
 
+int kvm_arch_process_irqchip_events(CPUState *env)
+{
+    if (env->interrupt_request & CPU_INTERRUPT_INIT) {
+        kvm_cpu_synchronize_state(env);
+        do_cpu_init(env);
+        env->exception_index = EXCP_HALTED;
+    }
+
+    if (env->interrupt_request & CPU_INTERRUPT_SIPI) {
+        kvm_cpu_synchronize_state(env);
+        do_cpu_sipi(env);
+    }
+
+    return env->halted;
+}
+
 static int kvm_handle_halt(CPUState *env)
 {
     if (!((env->interrupt_request & CPU_INTERRUPT_HARD) &&
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index aa3d432..91c0963 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -224,6 +224,11 @@  int kvm_arch_post_run(CPUState *env, struct kvm_run *run)
     return 0;
 }
 
+int kvm_arch_process_irqchip_events(CPUState *env)
+{
+    return 0;
+}
+
 static int kvmppc_handle_halt(CPUState *env)
 {
     if (!(env->interrupt_request & CPU_INTERRUPT_HARD) && (msr_ee)) {
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index 72e77b0..a2c00ac 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -175,6 +175,11 @@  int kvm_arch_post_run(CPUState *env, struct kvm_run *run)
     return 0;
 }
 
+int kvm_arch_process_irqchip_events(CPUState *env)
+{
+    return 0;
+}
+
 static void kvm_s390_interrupt_internal(CPUState *env, int type, uint32_t parm,
                                         uint64_t parm64, int vm)
 {