
[4/9] main-loop: refactor qemu_system_shutdown_request()

Message ID 1361374369-19024-5-git-send-email-aliguori@us.ibm.com
State New

Commit Message

Anthony Liguori Feb. 20, 2013, 3:32 p.m. UTC
There are two open-coded versions of shutdown handling, depending on
whether the -no-shutdown flag is used.  Refactor them into one.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
---
 vl.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

Comments

Anthony Liguori Feb. 20, 2013, 6:51 p.m. UTC | #1
Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:

> On Wed, 20 Feb 2013, Anthony Liguori wrote:
>> Paolo Bonzini <pbonzini@redhat.com> writes:
>> 
>> > Il 20/02/2013 16:32, Anthony Liguori ha scritto:
>> >> +    if (no_shutdown) {
>> >> +        vm_stop(RUN_STATE_SHUTDOWN);
>> >> +    } else {
>> >> +        main_loop_quit();
>> >> +    }
>> >
>> > Would it make sense to call vm_stop() unconditionally?  Then Xen can
>> > just use the vmstate_change handler as a hook.
>> >
>> > Similarly, for reset it can just use qemu_register_reset.
>> 
>> Yeah, that's probably doable.
>> 
>> But can't we just factor out the destroy_hvm_domain(false); to be called
>> after the main loop exits?  We could even add a machine finalize hook
>> for this purpose.  Likewise, destroy_hvm_domain(true) could be called
>> through a machine-specific reset handler.
>> 
>> Stefano, does this make sense?  I can do a quick patch but I'm not
>> really set up to test it.
>
> Reading the code and the comment in cpu_handle_ioreq:
>
> /*
>  * We do this before we send the response so that the tools
>  * have the opportunity to pick up on the reset before the
>  * guest resumes and does a hlt with interrupts disabled which
>  * causes Xen to powerdown the domain.
>  */
>
> it seems to me that the difficult case is reset: if the guest requests a
> machine reboot (for example via hw/piix_pci.c:rcr_write), according to
> the comment QEMU needs to call destroy_hvm_domain(true) before notifying
> the hypervisor that it is done with the IO request. The notification is
> done by xc_evtchn_notify at the end of cpu_handle_ioreq.

When any of these functions is called, the current CPU is put into the
stopped state via cpu_stop_current().

Could you trigger off of that to avoid doing the xc_evtchn_notify() and
then trigger on the eventual resume_all_vcpus() to then call
xc_evtchn_notify()?

The good thing about this is that it should work for all of the
different types of events we have (suspend, etc.).

>
> One possibility would be to move the "IO request done" notification to
> the end of the main loop, after the other bottom halves.
> Of course we need to keep in mind that in the normal case (no shutdown,
> no reset), xc_evtchn_notify still needs to be called before the next
> cpu_handle_ioreq.
>
> Another possibility would be to introduce a reset and a shutdown
> notification chain, so that different QEMU sub-components can be
> notified immediately when the guest issues a reset or a shutdown
> request.

I think hooking stop/resume to deal with xc_evtchn_notify(), plus a second
vmstate change notifier to handle destroy_hvm_domain(), is the best
approach.

Regards,

Anthony Liguori
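
To make the approach described above concrete, here is a minimal sketch in
terms of the existing qemu_add_vm_change_state_handler() and
qemu_register_reset() APIs.  It assumes, as Paolo suggests, that shutdown
always goes through vm_stop(RUN_STATE_SHUTDOWN); the two notifiers Anthony
mentions are folded into one handler here.  The deferred_local_port field
and all handler names are invented for illustration; destroy_hvm_domain(),
xc_evtchn_notify() and the XenIOState fields are the ones already used in
xen-all.c.

/* Runs on every run-state transition once registered below. */
static void xen_vm_state_change(void *opaque, int running, RunState state)
{
    XenIOState *xen_state = opaque;

    if (running) {
        /* VCPUs are running again: complete the ioreq we held back. */
        if (xen_state->deferred_local_port != -1) {
            xc_evtchn_notify(xen_state->xce_handle,
                             xen_state->deferred_local_port);
            xen_state->deferred_local_port = -1;
        }
    } else if (state == RUN_STATE_SHUTDOWN) {
        /* Guest-initiated shutdown: tear the domain down before the guest
         * halts, as the comment in cpu_handle_ioreq requires. */
        destroy_hvm_domain(false);
    }
}

/* Reset goes through the generic reset handler list instead. */
static void xen_reset_handler(void *opaque)
{
    destroy_hvm_domain(true);
}

/* At the end of cpu_handle_ioreq(), instead of notifying unconditionally
 * ('port' stands for the port the existing xc_evtchn_notify() call uses): */
    if (qemu_shutdown_requested_get() || qemu_reset_requested_get()) {
        /* The request stopped the CPU; complete the ioreq only once the
         * VM resumes (see xen_vm_state_change above). */
        xen_state->deferred_local_port = port;
    } else {
        xc_evtchn_notify(xen_state->xce_handle, port);
    }

/* Registration, e.g. from xen_hvm_init(): */
    qemu_add_vm_change_state_handler(xen_vm_state_change, xen_state);
    qemu_register_reset(xen_reset_handler, NULL);
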
Stefano Stabellini Feb. 21, 2013, 1:18 p.m. UTC | #2
On Wed, 20 Feb 2013, Anthony Liguori wrote:
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> writes:
> > On Wed, 20 Feb 2013, Anthony Liguori wrote:
> >> Paolo Bonzini <pbonzini@redhat.com> writes:
> >> 
> >> > Il 20/02/2013 16:32, Anthony Liguori ha scritto:
> >> >> +    if (no_shutdown) {
> >> >> +        vm_stop(RUN_STATE_SHUTDOWN);
> >> >> +    } else {
> >> >> +        main_loop_quit();
> >> >> +    }
> >> >
> >> > Would it make sense to call vm_stop() unconditionally?  Then Xen can
> >> > just use the vmstate_change handler as a hook.
> >> >
> >> > Similarly, for reset it can just use qemu_register_reset.
> >> 
> >> Yeah, that's probably doable.
> >> 
> >> But can't we just factor out the destroy_hvm_domain(false); to be called
> >> after the main loop exits?  We could even add a machine finalize hook
> >> for this purpose.  Likewise, destroy_hvm_domain(true) could be called
> >> through a machine-specific reset handler.
> >> 
> >> Stefano, does this make sense?  I can do a quick patch but I'm not
> >> really set up to test it.
> >
> > Reading the code and the comment in cpu_handle_ioreq:
> >
> > /*
> >  * We do this before we send the response so that the tools
> >  * have the opportunity to pick up on the reset before the
> >  * guest resumes and does a hlt with interrupts disabled which
> >  * causes Xen to powerdown the domain.
> >  */
> >
> > it seems to me that the difficult case is reset: if the guest requests a
> > machine reboot (for example via hw/piix_pci.c:rcr_write), according to
> > the comment QEMU needs to call destroy_hvm_domain(true) before notifying
> > the hypervisor that it is done with the IO request. The notification is
> > done by xc_evtchn_notify at the end of cpu_handle_ioreq.
> 
> When any of these functions is called, the current CPU is put into the
> stopped state via cpu_stop_current().
> 
> Could you trigger off of that to avoid doing the xc_evtchn_notify() and
> then trigger on the eventual resume_all_vcpus() to then call
> xc_evtchn_notify()?

I don't like the fact that something strictly related to VM
destruction becomes tied to a generic cpu_stop event that may or
may not be called at VM destruction.

In fact, even today kvmvapic.c calls pause_all_vcpus() (which calls
cpu_stop_current()). Granted, we don't use kvmvapic.c on Xen, but I see
no reason why other devices won't start calling cpu_stop in the future.
It makes the whole thing more fragile.
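
For comparison, the notification-chain alternative that Stefano mentions
earlier in the thread could be built on QEMU's existing Notifier API
(include/qemu/notify.h).  This is only a sketch:
qemu_register_shutdown_notifier() and the Xen handler below do not exist
today and are named purely for illustration; qemu_system_shutdown() is the
helper introduced by this patch.

#include "qemu/notify.h"

/* vl.c: a shutdown notifier chain, alongside the existing exit_notifiers. */
static NotifierList shutdown_notifiers =
    NOTIFIER_LIST_INITIALIZER(shutdown_notifiers);

void qemu_register_shutdown_notifier(Notifier *notifier)
{
    notifier_list_add(&shutdown_notifiers, notifier);
}

static void qemu_system_shutdown(void)
{
    qemu_kill_report();
    monitor_protocol_event(QEVENT_SHUTDOWN, NULL);
    notifier_list_notify(&shutdown_notifiers, NULL); /* sub-components hook in here */
    if (no_shutdown) {
        vm_stop(RUN_STATE_SHUTDOWN);
    } else {
        main_loop_quit();
    }
}

/* xen-all.c: register the teardown hook once at init time. */
static void xen_shutdown_notify(Notifier *notifier, void *data)
{
    destroy_hvm_domain(false);
}

static Notifier xen_shutdown_notifier = { .notify = xen_shutdown_notify };
/* ... qemu_register_shutdown_notifier(&xen_shutdown_notifier); */

A reset chain would look the same, although for reset the existing
qemu_register_reset() list already provides an equivalent hook.
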

Patch

diff --git a/vl.c b/vl.c
index 705c7f1..4810178 100644
--- a/vl.c
+++ b/vl.c
@@ -1824,12 +1824,12 @@  void qemu_system_reset(bool report)
 void qemu_system_reset_request(void)
 {
     if (no_reboot) {
-        shutdown_requested = 1;
+        qemu_system_shutdown_request();
     } else {
         reset_requested = 1;
+        cpu_stop_current();
+        qemu_notify_event();
     }
-    cpu_stop_current();
-    qemu_notify_event();
 }
 
 static gboolean qemu_system_suspend(gpointer unused)
@@ -1891,9 +1891,21 @@  void qemu_system_killed(int signal, pid_t pid)
     qemu_system_shutdown_request();
 }
 
+static void qemu_system_shutdown(void)
+{
+    qemu_kill_report();
+    monitor_protocol_event(QEVENT_SHUTDOWN, NULL);
+    if (no_shutdown) {
+        vm_stop(RUN_STATE_SHUTDOWN);
+    } else {
+        main_loop_quit();
+    }
+}
+
 void qemu_system_shutdown_request(void)
 {
     shutdown_requested = 1;
+    cpu_stop_current();
     qemu_notify_event();
 }
 
@@ -1935,14 +1947,7 @@  static void main_loop_junk(void)
 {
     RunState r;
     if (qemu_shutdown_requested()) {
-        qemu_kill_report();
-        monitor_protocol_event(QEVENT_SHUTDOWN, NULL);
-        if (no_shutdown) {
-            vm_stop(RUN_STATE_SHUTDOWN);
-        } else {
-            main_loop_quit();
-            return;
-        }
+        qemu_system_shutdown();
     }
     if (qemu_reset_requested()) {
         pause_all_vcpus();