Patchwork [PULL,3/4] xen: Fix vcpu initialization.

Submitter Stefano Stabellini
Date Sept. 25, 2013, 4:51 p.m.
Message ID <1380127883-6421-3-git-send-email-stefano.stabellini@eu.citrix.com>
Permalink /patch/277932/
State New

Comments

Stefano Stabellini - Sept. 25, 2013, 4:51 p.m.
From: Anthony PERARD <anthony.perard@citrix.com>

Each vcpu needs an event channel bound in QEMU, even those that are
offline at QEMU initialisation.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen-all.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

Patch

diff --git a/xen-all.c b/xen-all.c
index 10af44c..48e881b 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -614,13 +614,13 @@  static ioreq_t *cpu_get_ioreq(XenIOState *state)
     }
 
     if (port != -1) {
-        for (i = 0; i < smp_cpus; i++) {
+        for (i = 0; i < max_cpus; i++) {
             if (state->ioreq_local_port[i] == port) {
                 break;
             }
         }
 
-        if (i == smp_cpus) {
+        if (i == max_cpus) {
             hw_error("Fatal error while trying to get io event!\n");
         }
 
@@ -1115,10 +1115,10 @@  int xen_hvm_init(MemoryRegion **ram_memory)
         hw_error("map buffered IO page returned error %d", errno);
     }
 
-    state->ioreq_local_port = g_malloc0(smp_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
 
     /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < smp_cpus; i++) {
+    for (i = 0; i < max_cpus; i++) {
         rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
                                         xen_vcpu_eport(state->shared_page, i));
         if (rc == -1) {