Patchwork performance counters from inside of Qemu

Submitter Vince Weaver
Date Nov. 4, 2010, 7:20 p.m.
Message ID <alpine.DEB.2.02.1011041513360.18917@venchi.csl.cornell.edu>
Permalink /patch/70159/
State New

Comments

Vince Weaver - Nov. 4, 2010, 7:20 p.m.
Hello

The following patch adds simulated perf event support inside of Qemu 
for x86_64 systems. It implements enough of the AMD performance MSRs to 
return values for the "retired_instructions" (both user and kernel) and 
"cpu_clk_unhalted" events. 

This is mostly a proof of concept; I'm not sure if anyone is interested in 
this.  It could in theory be useful for tracking down performance problems 
from inside of Qemu using existing tools.

The main missing component is support for interrupts on counter overflow.  
This is possible to implement; I just haven't yet.

The eventual goal of my group is to enable performance counter support 
inside of KVM, but I was experimenting using Qemu first.

Usage:
    * Apply the patch
    * Boot into Qemu, using a kernel version 2.6.32 or newer. 
      Use the "-cpu phenom" option to select an AMD machine.
    * Use the "perf" utility (or other method of accessing perf counters, 
      such as PAPI) as you normally would. 

Sample output using a benchmark that is exactly 1 million instructions
(it counts more due to kernel overhead):

vince@debian:~$ perf stat ./million

 Performance counter stats for './million':

       9.147038  task-clock-msecs         #      0.635 CPUs
              0  context-switches         #      0.000 M/sec
              0  CPU-migrations           #      0.000 M/sec
              2  page-faults              #      0.000 M/sec
       24990915  cycles                   #   2732.132 M/sec
        1070162  instructions             #      0.043 IPC
              0  cache-references         #      0.000 M/sec
              0  cache-misses             #      0.000 M/sec

    0.014401652  seconds time elapsed

vince@debian:~$ exit
Stefan Hajnoczi - Nov. 6, 2010, 10:58 a.m.
On Thu, Nov 4, 2010 at 7:20 PM, Vince Weaver <vince@csl.cornell.edu> wrote:
> This is mostly a proof of concept, I'm not sure if anyone is interested in
> this.  It could in theory be useful for tracking down performance problems
> from iside of Qemu using existing tools.

This patch handles uniprocessor guests only?

There was a patch series a few months back by Yanmin Zhang to
implement a paravirt perf interface.  The subject line was "para virt
interface of perf to support kvm guest os statistics collection in
guest os".  It might be interesting to follow that up if you haven't
seen it already.

Stefan
Vince Weaver - Nov. 9, 2010, 7:57 p.m.
On Sat, 6 Nov 2010, Stefan Hajnoczi wrote:

> On Thu, Nov 4, 2010 at 7:20 PM, Vince Weaver <vince@csl.cornell.edu> wrote:
> > This is mostly a proof of concept, I'm not sure if anyone is interested in
> > this.  It could in theory be useful for tracking down performance problems
> > from iside of Qemu using existing tools.
> 
> This patch handles uniprocessor guests only?

yes; I typically only run Qemu in uniprocessor mode, so I hadn't thought 
about CMP support.  In theory it shouldn't require too many changes to the 
actual MSR code because the kernel handles saving/restoring the counter 
values on context switch, but it would involve separate per-core simulated 
instruction counters.  Hmmm.

> There was a patch series a few months back by Yanmin Zhang to
> implement a paravirt perf interface.  The subject line was "para virt
> interface of perf to support kvm guest os statistics collection in
> guest os".  It might be interesting to follow that up if you haven't
> seen it already.

Thanks for pointing that out!  I had been looking for something like that 
and somehow missed it at the time.

That patch seems to export a para-virtualized counter interface.  While 
useful, it would require code changes for external tools, and it also 
wouldn't allow for tools that program raw events into the counters (as
opposed to the handful of predefined kernel perf-events ones).

Thanks for the feedback.

Vince
Lluís - Nov. 9, 2010, 9:02 p.m.
Vince Weaver writes:
[...]
> diff --git a/target-i386/helper.c b/target-i386/helper.c
> index 26ea1e5..f2aa2d7 100644
> --- a/target-i386/helper.c
> +++ b/target-i386/helper.c
> @@ -31,6 +31,20 @@
 
>  //#define DEBUG_MMU
 
> +long long global_ins_count[3] = {0,0,0};
> +
> +void helper_insn_count(unsigned int cpl);
> +
> +void helper_insn_count(unsigned int cpl) {
> +   if (cpl==0) {
> +      global_ins_count[1]++;
> +   }
> +   else if (cpl==3) {
> +      global_ins_count[0]++;  
> +   }
> +   /* FIXME -- handle overflow interrupts */
> +}
> +
>  /* NOTE: must be called outside the CPU execute loop */
>  void cpu_reset(CPUX86State *env)
>  {
> diff --git a/target-i386/translate.c b/target-i386/translate.c
> index 7b6e3c2..1d8f95e 100644
> --- a/target-i386/translate.c
> +++ b/target-i386/translate.c
> @@ -4215,6 +4215,15 @@ static target_ulong disas_insn(DisasContext *s, target_ulong pc_start)
>      if (prefixes & PREFIX_LOCK)
>          gen_helper_lock();
 
> +    {
> +        /* vmw */
> +	TCGv const1;
> +
> +	const1 = tcg_const_i32(s->cpl);
> +        gen_helper_insn_count(const1);
> +        tcg_temp_free(const1);     
> +   }
> +      
>      /* now check op code */
>   reswitch:
>      switch(b) {

Maybe you should use a per-vCPU "ins_count" array, so that these
counters would support CMP.

In addition, instead of using a helper, you could use the following to
speed up the execution (as this will be present on each instruction):

      tcg_gen_add_i64(cpu_T[0], cpu_env, offsetof(CPUState, ins_count[s->cpl]));
      tcg_gen_addi(cpu_T[0], cpu_T[0], 1);

Haven't checked if this TCG is correct, though (and still does not check
for overflows).

In any case, I think this kind of counting overlaps somewhat with the
"icount" infrastructure (cpu_get_icount), so maybe you could update the
per-vCPU "ins_count" lazily using the "icount" counters (e.g., add to
"ins_count" when "icount_decr" expires or a TB ends, plus fine-sync when
reading the MSRs). Of course, then you would need to force the usage of
icount.


Lluis

Patch

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 2440d65..aac0efd 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -341,6 +341,16 @@ 
 #define MSR_KERNELGSBASE                0xc0000102
 #define MSR_TSC_AUX                     0xc0000103
 
+#define MSR_PERFEVTSEL0			0xc0010000
+#define MSR_PERFEVTSEL1			0xc0010001
+#define MSR_PERFEVTSEL2			0xc0010002
+#define MSR_PERFEVTSEL3			0xc0010003
+
+#define MSR_PERFCTR0			0xc0010004
+#define MSR_PERFCTR1			0xc0010005
+#define MSR_PERFCTR2			0xc0010006
+#define MSR_PERFCTR3			0xc0010007
+
 #define MSR_VM_HSAVE_PA                 0xc0010117
 
 /* cpuid_features bits */
diff --git a/target-i386/helper.c b/target-i386/helper.c
index 26ea1e5..f2aa2d7 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -31,6 +31,20 @@ 
 
 //#define DEBUG_MMU
 
+long long global_ins_count[3] = {0,0,0};
+
+void helper_insn_count(unsigned int cpl);
+
+void helper_insn_count(unsigned int cpl) {
+   if (cpl==0) {
+      global_ins_count[1]++;
+   }
+   else if (cpl==3) {
+      global_ins_count[0]++;  
+   }
+   /* FIXME -- handle overflow interrupts */
+}
+
 /* NOTE: must be called outside the CPU execute loop */
 void cpu_reset(CPUX86State *env)
 {
diff --git a/target-i386/helper.h b/target-i386/helper.h
index 6b518ad..6739007 100644
--- a/target-i386/helper.h
+++ b/target-i386/helper.h
@@ -1,5 +1,7 @@ 
 #include "def-helper.h"
 
+DEF_HELPER_1(insn_count, void, i32)
+
 DEF_HELPER_FLAGS_1(cc_compute_all, TCG_CALL_PURE, i32, int)
 DEF_HELPER_FLAGS_1(cc_compute_c, TCG_CALL_PURE, i32, int)
 
diff --git a/target-i386/op_helper.c b/target-i386/op_helper.c
index 43fbd0c..a80d875 100644
--- a/target-i386/op_helper.c
+++ b/target-i386/op_helper.c
@@ -3005,6 +3005,24 @@  void helper_rdmsr(void)
 {
 }
 #else
+
+extern long long global_ins_count[3];
+
+static uint64_t perf_msrs[8];
+
+static struct counter_info_t {
+    int enabled;
+    long long last_value;
+    int umask;
+    int emask;
+    int os;
+    int usr;
+} counter_info[4];
+
+#define PERF_ENABLED (1ULL<<22)
+#define PERF_USR (1ULL<<16)
+#define PERF_OS  (1ULL<<17)
+
 void helper_wrmsr(void)
 {
     uint64_t val;
@@ -3126,6 +3144,88 @@  void helper_wrmsr(void)
     case MSR_TSC_AUX:
         env->tsc_aux = val;
         break;
+     case MSR_PERFEVTSEL0:
+     case MSR_PERFEVTSEL1:
+     case MSR_PERFEVTSEL2:
+     case MSR_PERFEVTSEL3:
+       	 {
+	    int enable;
+	    int umask,emask,usr,os;
+	    int counter=(int)ECX&0x7;
+	    
+	    emask=val&0xff;
+	    umask=(val>>8)&0xff;
+	    enable=!!(val&PERF_ENABLED);
+	    usr=!!(val&PERF_USR);
+	    os=!!(val&PERF_OS);
+	    
+	    if (enable) {
+	       counter_info[counter].emask=emask;
+	       counter_info[counter].umask=umask;
+	       counter_info[counter].enabled=1;
+	       counter_info[counter].last_value=0;
+	       
+	          /* retired instructions */
+	       if (emask==0xc0) {
+	          if (usr) {
+	             counter_info[counter].last_value+=global_ins_count[0];
+	          }
+	          counter_info[counter].usr=usr;
+
+	          if (os) {
+		     counter_info[counter].last_value+=global_ins_count[1];
+	          }
+	          counter_info[counter].os=os;
+	       } 
+	          /* cycles */
+	       else if (emask==0x76) {
+		  counter_info[counter].last_value=
+		    (cpu_get_tsc(env)+env->tsc_offset);
+	       }
+	       else {
+	       }
+		  
+	    }
+	    else {
+	       if (counter_info[counter].enabled) {
+		  long long new_count=0;
+		  
+		     /* retired instructions */
+		  if (counter_info[counter].emask==0xc0) {
+		     if (counter_info[counter].usr) {
+		        new_count+=global_ins_count[0];
+		     }
+		     if (counter_info[counter].os) {
+		        new_count+=global_ins_count[1];
+		     }
+		  
+		     perf_msrs[(ECX&0x7)+4]+=new_count-
+		              counter_info[counter].last_value;
+		  }
+		      /* cycles */
+		  else if (counter_info[counter].emask==0x76) {
+		     new_count=(cpu_get_tsc(env)+env->tsc_offset);
+		     perf_msrs[(ECX&0x7)+4]+=new_count-
+		              counter_info[counter].last_value;
+		  }
+		  else {
+		  }
+	       }
+	       
+	       counter_info[counter].enabled=0;
+
+	    }
+	 }
+       perf_msrs[ECX&0x7]=val;
+       break;
+       
+     case MSR_PERFCTR0:
+     case MSR_PERFCTR1:
+     case MSR_PERFCTR2:
+     case MSR_PERFCTR3:
+       perf_msrs[ECX&0x7]=val;
+       break;
+       
     default:
         if ((uint32_t)ECX >= MSR_MC0_CTL
             && (uint32_t)ECX < MSR_MC0_CTL + (4 * env->mcg_cap & 0xff)) {
@@ -3259,6 +3359,49 @@  void helper_rdmsr(void)
     case MSR_MCG_STATUS:
         val = env->mcg_status;
         break;
+     case MSR_PERFEVTSEL0:
+     case MSR_PERFEVTSEL1:
+     case MSR_PERFEVTSEL2:
+     case MSR_PERFEVTSEL3:       
+        val=perf_msrs[ECX&0x7];
+        break;
+       
+     case MSR_PERFCTR0:
+     case MSR_PERFCTR1:
+     case MSR_PERFCTR2:
+     case MSR_PERFCTR3:
+	 {
+	    int counter=(ECX&0x7)-4;
+	    long long value=0;
+       
+	    if ( counter_info[counter].enabled ) {
+	       
+	          /* retired instructions */
+	       if (counter_info[counter].emask==0xc0) {
+	          if (counter_info[counter].usr) {
+	             value+=global_ins_count[0];
+	          }
+	          if (counter_info[counter].os) {
+	             value+=global_ins_count[1];
+	          }
+                  val=perf_msrs[ECX&0x7]+
+	                 (value-counter_info[counter].last_value);
+	       }
+	            /* cycles */
+	       else if (counter_info[counter].emask==0x76) {
+		  val=perf_msrs[ECX&0x7]+
+		    ((cpu_get_tsc(env)+env->tsc_offset)-
+		  		  counter_info[counter].last_value);
+
+	       }
+	       else {
+		  val=perf_msrs[ECX&0x7];
+	       }
+           } else {  /* counter was disabled */
+	       val=perf_msrs[ECX&0x7];
+	   }
+	 }
+       break;       
     default:
         if ((uint32_t)ECX >= MSR_MC0_CTL
             && (uint32_t)ECX < MSR_MC0_CTL + (4 * env->mcg_cap & 0xff)) {
diff --git a/target-i386/translate.c b/target-i386/translate.c
index 7b6e3c2..1d8f95e 100644
--- a/target-i386/translate.c
+++ b/target-i386/translate.c
@@ -4215,6 +4215,15 @@  static target_ulong disas_insn(DisasContext *s, target_ulong pc_start)
     if (prefixes & PREFIX_LOCK)
         gen_helper_lock();
 
+    {
+        /* vmw */
+	TCGv const1;
+
+	const1 = tcg_const_i32(s->cpl);
+        gen_helper_insn_count(const1);
+        tcg_temp_free(const1);     
+   }
+      
     /* now check op code */
  reswitch:
     switch(b) {