
[v11] kvm: notify host when the guest is panicked

Message ID 0a2274eccf1b1dd420f16359f7e1de74fa2f9fbe.1351131144.git.hutao@cn.fujitsu.com
State New

Commit Message

Hu Tao Oct. 25, 2012, 3:42 a.m. UTC
When the guest runs on Xen, we can know that the guest has panicked.
But we do not have such a feature on KVM.

Another purpose of this feature is: a management app (for example,
libvirt) can do an automatic dump when the guest panics. If the
management app does not do an automatic dump, the guest's user can
dump by hand if he sees that the guest has panicked.

We have three solutions to implement this feature:
1. use vmcall
2. use I/O port
3. use virtio-serial.

We have decided to avoid touching the hypervisor. The reasons why I
chose the I/O port are:
1. it is easier to implement
2. it does not depend on any virtual device
3. it works while the kernel is still booting

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Hu Tao <hutao@cn.fujitsu.com>
---

changes from v10:
 
  - add a kernel parameter to disable pv-event
  - detailed documentation to describe pv event interface
  - make kvm_pv_event_init() local

 Documentation/virtual/kvm/pv_event.txt |   38 +++++++++++++++++++++++++
 arch/ia64/include/asm/kvm_para.h       |   14 ++++++++++
 arch/powerpc/include/asm/kvm_para.h    |   14 ++++++++++
 arch/s390/include/asm/kvm_para.h       |   14 ++++++++++
 arch/x86/include/asm/kvm_para.h        |   21 ++++++++++++++
 arch/x86/kernel/kvm.c                  |   48 ++++++++++++++++++++++++++++++++
 include/linux/kvm_para.h               |   18 ++++++++++++
 7 files changed, 167 insertions(+)
 create mode 100644 Documentation/virtual/kvm/pv_event.txt

Comments

Marcelo Tosatti Oct. 31, 2012, 1:12 a.m. UTC | #1
On Thu, Oct 25, 2012 at 11:42:32AM +0800, Hu Tao wrote:
> We can know the guest is panicked when the guest runs on xen.
> But we do not have such feature on kvm.
> 
> Another purpose of this feature is: management app(for example:
> libvirt) can do auto dump when the guest is panicked. If management
> app does not do auto dump, the guest's user can do dump by hand if
> he sees the guest is panicked.
> 
> We have three solutions to implement this feature:
> 1. use vmcall
> 2. use I/O port
> 3. use virtio-serial.
> 
> We have decided to avoid touching hypervisor. The reason why I choose
> choose the I/O port is:
> 1. it is easier to implememt
> 2. it does not depend any virtual device
> 3. it can work when starting the kernel

It has been asked earlier why a simple virtio device is not usable
for this (with no response IIRC).

Also, there is no high level documentation: purpose of the interface,
how a management application should use it, etc.
Wen Congyang Oct. 31, 2012, 1:48 a.m. UTC | #2
At 10/31/2012 09:12 AM, Marcelo Tosatti Wrote:
> On Thu, Oct 25, 2012 at 11:42:32AM +0800, Hu Tao wrote:
>> We can know the guest is panicked when the guest runs on xen.
>> But we do not have such feature on kvm.
>>
>> Another purpose of this feature is: management app(for example:
>> libvirt) can do auto dump when the guest is panicked. If management
>> app does not do auto dump, the guest's user can do dump by hand if
>> he sees the guest is panicked.
>>
>> We have three solutions to implement this feature:
>> 1. use vmcall
>> 2. use I/O port
>> 3. use virtio-serial.
>>
>> We have decided to avoid touching hypervisor. The reason why I choose
>> choose the I/O port is:
>> 1. it is easier to implememt
>> 2. it does not depend any virtual device
>> 3. it can work when starting the kernel
> 
> It has been asked earlier why a simple virtio device is not usable
> for this (with no response IIRC).

1. We can't use a virtio device while the kernel is booting.
2. The virtio driver can be built as a module, and if it is not loaded
   when the kernel panics, there is no way to notify the host.
3. An I/O port is more reliable than a virtio device.
   If the virtio driver has a bug that causes a kernel panic, we can't
   use it. The I/O port is more reliable because it only depends on the
   notifier chain (a virtio device would depend on the notifier chain too).

Thanks
Wen Congyang

> 
> Also, there is no high level documentation: purpose of the interface,
> how a management application should use it, etc.
> 
>
Sasha Levin Oct. 31, 2012, 2:30 a.m. UTC | #3
On Tue, Oct 30, 2012 at 9:48 PM, Wen Congyang <wency@cn.fujitsu.com> wrote:
> At 10/31/2012 09:12 AM, Marcelo Tosatti Wrote:
>> It has been asked earlier why a simple virtio device is not usable
>> for this (with no response IIRC).
>
> 1. We can't use virtio device when the kernel is booting.

So the issue here is the small window between the point the guest
becomes "self aware" and to the point virtio drivers are loaded,
right?

I agree that if something happens during that interval, a
"virtio-notifier" driver won't catch that, but anything beyond that is
better done with a virtio driver, so how is the generic infrastructure
added in this patch useful to anything beyond detecting panics in that
initial interval?

> 2. The virtio's driver can be built as a module, and if it is not loaded
>    and the kernel is panicked, there is no way to notify the host.

Even if the suggested virtio-notifier driver is built as a module, it
would get auto-loaded when the guest is booting, so I'm not sure about
this point?

> 3. I/O port is more reliable than virtio device.
>    If virtio's driver has some bug, and it cause kernel panicked, we can't
>    use it. The I/O port is more reliable because it only depends on notifier
>    chain(If we use virtio device, it also depends on notifier chain).

This is like suggesting that we let KVM emulate virtio-blk on its
own, parallel to the virtio implementation, so that even if there's a
problem with virtio-blk, KVM can emulate virtio-blk on its own.

Furthermore, why stop at virtio? What if the KVM code has a bug and it
doesn't pass I/O properly? Or the x86 code? We still want panic
notifications if that happens...


Thanks,
Sasha
Marcelo Tosatti Oct. 31, 2012, 11:15 p.m. UTC | #4
On Tue, Oct 30, 2012 at 10:30:02PM -0400, Sasha Levin wrote:
> On Tue, Oct 30, 2012 at 9:48 PM, Wen Congyang <wency@cn.fujitsu.com> wrote:
> > At 10/31/2012 09:12 AM, Marcelo Tosatti Wrote:
> >> It has been asked earlier why a simple virtio device is not usable
> >> for this (with no response IIRC).
> >
> > 1. We can't use virtio device when the kernel is booting.
> 
> So the issue here is the small window between the point the guest
> becomes "self aware" and to the point virtio drivers are loaded,
> right?
> 
> I agree that if something happens during that interval, a
> "virtio-notifier" driver won't catch that, but anything beyond that is
> better done with a virtio driver, so how is the generic infrastructure
> added in this patch useful to anything beyond detecting panics in that
> initial interval?

I asked earlier about quantification of panics in that window (I doubt
early panics are that significant for this use case). netconsole has
the same issue:

"This module logs kernel printk messages over UDP allowing debugging of
problem where disk logging fails and serial consoles are impractical.

It can be used either built-in or as a module. As a built-in,
netconsole initializes immediately after NIC cards and will bring up
the specified interface as soon as possible. While this doesn't allow
capture of early kernel panics, it does capture most of the boot
process."

> > 2. The virtio's driver can be built as a module, and if it is not loaded
> >    and the kernel is panicked, there is no way to notify the host.
> 
> Even if the suggested virtio-notifier driver is built as a module, it
> would get auto-loaded when the guest is booting, so I'm not sure about
> this point?

> > 3. I/O port is more reliable than virtio device.
> >    If virtio's driver has some bug, and it cause kernel panicked, we can't
> >    use it. The I/O port is more reliable because it only depends on notifier
> >    chain(If we use virtio device, it also depends on notifier chain).
> 
> This is like suggesting that we let KVM emulate virtio-blk on it's
> own, parallel to the virtio implementation, so that even if there's a
> problem with virtio-blk, KVM can emulate a virtio-blk on it's own.
> 
> Furthermore, why stop at virtio? What if the KVM code has a bug and it
> doesn't pass IO properly? Or the x86 code? we still want panic
> notifications if that happens...
Hu Tao Nov. 6, 2012, 1:58 a.m. UTC | #5
On Tue, Oct 30, 2012 at 10:30:02PM -0400, Sasha Levin wrote:
> On Tue, Oct 30, 2012 at 9:48 PM, Wen Congyang <wency@cn.fujitsu.com> wrote:
> > At 10/31/2012 09:12 AM, Marcelo Tosatti Wrote:
> >> It has been asked earlier why a simple virtio device is not usable
> >> for this (with no response IIRC).
> >
> > 1. We can't use virtio device when the kernel is booting.
> 
> So the issue here is the small window between the point the guest
> becomes "self aware" and to the point virtio drivers are loaded,
> right?
> 
> I agree that if something happens during that interval, a
> "virtio-notifier" driver won't catch that, but anything beyond that is
> better done with a virtio driver, so how is the generic infrastructure
> added in this patch useful to anything beyond detecting panics in that
> initial interval?

Another point is dependency. To make panic notification more reliable,
we have to reduce its dependencies on other parts of the kernel as much
as possible.

> 
> > 2. The virtio's driver can be built as a module, and if it is not loaded
> >    and the kernel is panicked, there is no way to notify the host.
> 
> Even if the suggested virtio-notifier driver is built as a module, it
> would get auto-loaded when the guest is booting, so I'm not sure about
> this point?
> 
> > 3. I/O port is more reliable than virtio device.
> >    If virtio's driver has some bug, and it cause kernel panicked, we can't
> >    use it. The I/O port is more reliable because it only depends on notifier
> >    chain(If we use virtio device, it also depends on notifier chain).
> 
> This is like suggesting that we let KVM emulate virtio-blk on it's
> own, parallel to the virtio implementation, so that even if there's a
> problem with virtio-blk, KVM can emulate a virtio-blk on it's own.

Not the same. For virtio-blk, if we can make use of virtio, why not? If
there is a problem in virtio-blk that is caused by virtio itself, just
fix it in virtio.

But in the case of panic notification, more dependencies mean more
chances for the notification to fail. Say, if we use a virtio device
for panic notification, then we will fail if virtio itself has
problems, virtio for some reason can't be deployed (neither built-in
nor as a module), the guest doesn't support virtio, etc.

We chose port I/O because, compared to a virtio device, it is lighter
and less problematic.

> 
> Furthermore, why stop at virtio? What if the KVM code has a bug and it
> doesn't pass IO properly? Or the x86 code? we still want panic
> notifications if that happens...

Better ideas are welcome.
Sasha Levin Nov. 9, 2012, 8:17 p.m. UTC | #6
On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> But in the case of panic notification, more dependency means more
> chances of failure of panic notification. Say, if we use a virtio device
> to do panic notification, then we will fail if: virtio itself has
> problems, virtio for some reason can't be deployed(neither built-in or
> as a module), or guest doesn't support virtio, etc.

Add polling to your virtio device. If it didn't notify of a panic but
takes more than 20 seconds to answer your poll request, you can assume
it's dead.

Actually, just use virtio-serial and something in userspace on the guest.

> We choose IO because compared to virtio device, it is not that heavy and
> less problematic.

Less problematic? Heavy? Are there any known issues with virtio that
should be fixed? You make virtio sound like an old IDE drive or
something.


Thanks,
Sasha
Marcelo Tosatti Nov. 13, 2012, 2:19 a.m. UTC | #7
On Fri, Nov 09, 2012 at 03:17:39PM -0500, Sasha Levin wrote:
> On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> > But in the case of panic notification, more dependency means more
> > chances of failure of panic notification. Say, if we use a virtio device
> > to do panic notification, then we will fail if: virtio itself has
> > problems, virtio for some reason can't be deployed(neither built-in or
> > as a module), or guest doesn't support virtio, etc.
> 
> Add polling to your virtio device. If it didn't notify of a panic but
> taking more than 20 sec to answer your poll request you can assume
> it's dead.
> 
> Actually, just use virtio-serial and something in userspace on the guest.

They want the guest to stop, so a memory dump can be taken by the
management interface.

Hu Tao, let's assume port I/O is the preferred method for communication.
Now, the following comments have still not been addressed:

1) Lifecycle of the stopped guest and interaction with other stopped
states in QEMU.

2) Format of the interface for other architectures (you can choose
a different KVM supported architecture and write an example).

3) Clear/documented management interface for the feature.
Hu Tao Nov. 20, 2012, 10:09 a.m. UTC | #8
Hi Marcelo,

On Tue, Nov 13, 2012 at 12:19:08AM -0200, Marcelo Tosatti wrote:
> On Fri, Nov 09, 2012 at 03:17:39PM -0500, Sasha Levin wrote:
> > On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> > > But in the case of panic notification, more dependency means more
> > > chances of failure of panic notification. Say, if we use a virtio device
> > > to do panic notification, then we will fail if: virtio itself has
> > > problems, virtio for some reason can't be deployed(neither built-in or
> > > as a module), or guest doesn't support virtio, etc.
> > 
> > Add polling to your virtio device. If it didn't notify of a panic but
> > taking more than 20 sec to answer your poll request you can assume
> > it's dead.
> > 
> > Actually, just use virtio-serial and something in userspace on the guest.
> 
> They want the guest to stop, so a memory dump can be taken by management
> interface.
> 
> Hu Tao, lets assume port I/O is the preferred method for communication.

Okay.

> Now, the following comments have still not been addressed:
> 
> 1) Lifecycle of the stopped guest and interaction with other stopped
> states in QEMU.

Patch 3 already deals with run state transitions. But in case I'm
missing something, could you be more specific?

> 
> 2) Format of the interface for other architectures (you can choose
> a different KVM supported architecture and write an example).
> 
> 3) Clear/documented management interface for the feature.

It is documented in patch 0: Documentation/virtual/kvm/pv_event.txt.
Does it need to be improved?
Marcelo Tosatti Nov. 20, 2012, 9:33 p.m. UTC | #9
On Tue, Nov 20, 2012 at 06:09:48PM +0800, Hu Tao wrote:
> Hi Marcelo,
> 
> On Tue, Nov 13, 2012 at 12:19:08AM -0200, Marcelo Tosatti wrote:
> > On Fri, Nov 09, 2012 at 03:17:39PM -0500, Sasha Levin wrote:
> > > On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> > > > But in the case of panic notification, more dependency means more
> > > > chances of failure of panic notification. Say, if we use a virtio device
> > > > to do panic notification, then we will fail if: virtio itself has
> > > > problems, virtio for some reason can't be deployed(neither built-in or
> > > > as a module), or guest doesn't support virtio, etc.
> > > 
> > > Add polling to your virtio device. If it didn't notify of a panic but
> > > taking more than 20 sec to answer your poll request you can assume
> > > it's dead.
> > > 
> > > Actually, just use virtio-serial and something in userspace on the guest.
> > 
> > They want the guest to stop, so a memory dump can be taken by management
> > interface.
> > 
> > Hu Tao, lets assume port I/O is the preferred method for communication.
> 
> Okey.
> 
> > Now, the following comments have still not been addressed:
> > 
> > 1) Lifecycle of the stopped guest and interaction with other stopped
> > states in QEMU.
> 
> Patch 3 already deals with run state transitions. But in case I'm
> missing something, could you be more specific?

- What are the possibilities during migration? Say:
	- migration starts.
	- guest panics.
	- migration starts vm on other side?
- Guest stopped due to EIO.
	- guest vcpuN panics, VMEXIT but still outside QEMU.
	- QEMU EIO error, stop vm.
	- guest vcpuN completes, processes IO exit.
        - system_reset due to panic.
- Add all possibilities that should be verified (that is, interaction 
of this feature with other stopped states in QEMU).

---

- What happens if the guest has reboot-on-panic configured? Does it take
precedence over hypervisor notification?



Out of curiosity, does kexec support memory dumping?

> > 2) Format of the interface for other architectures (you can choose
> > a different KVM supported architecture and write an example).
> > 
> > 3) Clear/documented management interface for the feature.
> 
> It is documented in patch 0: Documentation/virtual/kvm/pv_event.txt.
> Does it need to be improved?

This is documentation for the host<->guest interface. There is no 
documentation on the interface for management.
Gleb Natapov Nov. 21, 2012, 9:39 a.m. UTC | #10
On Tue, Nov 20, 2012 at 07:33:49PM -0200, Marcelo Tosatti wrote:
> On Tue, Nov 20, 2012 at 06:09:48PM +0800, Hu Tao wrote:
> > Hi Marcelo,
> > 
> > On Tue, Nov 13, 2012 at 12:19:08AM -0200, Marcelo Tosatti wrote:
> > > On Fri, Nov 09, 2012 at 03:17:39PM -0500, Sasha Levin wrote:
> > > > On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> > > > > But in the case of panic notification, more dependency means more
> > > > > chances of failure of panic notification. Say, if we use a virtio device
> > > > > to do panic notification, then we will fail if: virtio itself has
> > > > > problems, virtio for some reason can't be deployed(neither built-in or
> > > > > as a module), or guest doesn't support virtio, etc.
> > > > 
> > > > Add polling to your virtio device. If it didn't notify of a panic but
> > > > taking more than 20 sec to answer your poll request you can assume
> > > > it's dead.
> > > > 
> > > > Actually, just use virtio-serial and something in userspace on the guest.
> > > 
> > > They want the guest to stop, so a memory dump can be taken by management
> > > interface.
> > > 
> > > Hu Tao, lets assume port I/O is the preferred method for communication.
> > 
> > Okey.
> > 
> > > Now, the following comments have still not been addressed:
> > > 
> > > 1) Lifecycle of the stopped guest and interaction with other stopped
> > > states in QEMU.
> > 
> > Patch 3 already deals with run state transitions. But in case I'm
> > missing something, could you be more specific?
> 
> - What are the possibilities during migration? Say:
> 	- migration starts.
> 	- guest panics.
> 	- migration starts vm on other side?
> - Guest stopped due to EIO.
> 	- guest vcpuN panics, VMEXIT but still outside QEMU.
> 	- QEMU EIO error, stop vm.
> 	- guest vcpuN completes, processes IO exit.
>         - system_reset due to panic.
> - Add all possibilities that should be verified (that is, interaction 
> of this feature with other stopped states in QEMU).
> 
BTW I do remember getting asserts while using breakpoints via gdbstub
and stop/cont from the monitor.

> ---
> 
> - What happens if the guest has reboot-on-panic configured? Does it take
> precedence over hypervisor notification?
> 
> 
> 
> Out of curiosity, does kexec support memory dumping?
> 
> > > 2) Format of the interface for other architectures (you can choose
> > > a different KVM supported architecture and write an example).
> > > 
> > > 3) Clear/documented management interface for the feature.
> > 
> > It is documented in patch 0: Documentation/virtual/kvm/pv_event.txt.
> > Does it need to be improved?
> 
> This is documentation for the host<->guest interface. There is no 
> documentation on the interface for management.

--
			Gleb.
Hu Tao Nov. 22, 2012, 7:21 a.m. UTC | #11
On Wed, Nov 21, 2012 at 11:39:28AM +0200, Gleb Natapov wrote:
> On Tue, Nov 20, 2012 at 07:33:49PM -0200, Marcelo Tosatti wrote:
> > On Tue, Nov 20, 2012 at 06:09:48PM +0800, Hu Tao wrote:
> > > Hi Marcelo,
> > > 
> > > On Tue, Nov 13, 2012 at 12:19:08AM -0200, Marcelo Tosatti wrote:
> > > > On Fri, Nov 09, 2012 at 03:17:39PM -0500, Sasha Levin wrote:
> > > > > On Mon, Nov 5, 2012 at 8:58 PM, Hu Tao <hutao@cn.fujitsu.com> wrote:
> > > > > > But in the case of panic notification, more dependency means more
> > > > > > chances of failure of panic notification. Say, if we use a virtio device
> > > > > > to do panic notification, then we will fail if: virtio itself has
> > > > > > problems, virtio for some reason can't be deployed(neither built-in or
> > > > > > as a module), or guest doesn't support virtio, etc.
> > > > > 
> > > > > Add polling to your virtio device. If it didn't notify of a panic but
> > > > > taking more than 20 sec to answer your poll request you can assume
> > > > > it's dead.
> > > > > 
> > > > > Actually, just use virtio-serial and something in userspace on the guest.
> > > > 
> > > > They want the guest to stop, so a memory dump can be taken by management
> > > > interface.
> > > > 
> > > > Hu Tao, lets assume port I/O is the preferred method for communication.
> > > 
> > > Okey.
> > > 
> > > > Now, the following comments have still not been addressed:
> > > > 
> > > > 1) Lifecycle of the stopped guest and interaction with other stopped
> > > > states in QEMU.
> > > 
> > > Patch 3 already deals with run state transitions. But in case I'm
> > > missing something, could you be more specific?
> > 
> > - What are the possibilities during migration? Say:
> > 	- migration starts.
> > 	- guest panics.
> > 	- migration starts vm on other side?
> > - Guest stopped due to EIO.
> > 	- guest vcpuN panics, VMEXIT but still outside QEMU.
> > 	- QEMU EIO error, stop vm.
> > 	- guest vcpuN completes, processes IO exit.
> >         - system_reset due to panic.
> > - Add all possibilities that should be verified (that is, interaction 
> > of this feature with other stopped states in QEMU).

Thank you for your explanation!

> > 
> BTW I do remember getting asserts while using breakpoints via gdbstub
> and stop/cont from the monitor.

Thanks, I'll consider this too.

> 
> > ---
> > 
> > - What happens if the guest has reboot-on-panic configured? Does it take
> > precedence over hypervisor notification?

Yes. But I don't think this is what we want if pv-event is on. Users may
want to do whatever they want when the guest panics, rather than an
automatic reboot-on-panic. What's your opinion?

> > 
> > 
> > 
> > Out of curiosity, does kexec support memory dumping?

Yes. Do we have to disable kexec if pv-event is on, too?

> > 
> > > > 2) Format of the interface for other architectures (you can choose
> > > > a different KVM supported architecture and write an example).
> > > > 
> > > > 3) Clear/documented management interface for the feature.
> > > 
> > > It is documented in patch 0: Documentation/virtual/kvm/pv_event.txt.
> > > Does it need to be improved?
> > 
> > This is documentation for the host<->guest interface. There is no 
> > documentation on the interface for management.

Oh yes, I'll add this.

Patch

diff --git a/Documentation/virtual/kvm/pv_event.txt b/Documentation/virtual/kvm/pv_event.txt
new file mode 100644
index 0000000..247379f
--- /dev/null
+++ b/Documentation/virtual/kvm/pv_event.txt
@@ -0,0 +1,38 @@ 
+The KVM Paravirtual Event Interface
+===================================
+
+The KVM Paravirtual Event Interface defines a simple interface
+by which the guest OS can inform the hypervisor that something happened.
+
+To inform the hypervisor of events, the guest writes a 32-bit integer
+to the Interface. Each bit of the integer represents an event; if a
+bit is set, the corresponding event has happened.
+
+To query the events supported by the hypervisor, the guest reads from
+the Interface. If a bit is set, the corresponding event is supported.
+
+The Interface supports up to 32 events. Currently there is 1 event
+defined, as follows:
+
+KVM_PV_FEATURE_PANICKED		0
+
+
+Querying whether the event can be ejected
+=========================================
+kvm_pv_has_feature()
+Arguments:
+	feature: The bit value of this paravirtual event to query
+
+Return Value:
+	 0: The guest kernel can't eject this paravirtual event.
+	 1: The guest kernel can eject this paravirtual event.
+
+
+Ejecting paravirtual event
+==========================
+kvm_pv_eject_event()
+Arguments:
+	event: The event to be ejected.
+
+Return Value:
+	None
diff --git a/arch/ia64/include/asm/kvm_para.h b/arch/ia64/include/asm/kvm_para.h
index 2019cb9..b5ec658 100644
--- a/arch/ia64/include/asm/kvm_para.h
+++ b/arch/ia64/include/asm/kvm_para.h
@@ -31,6 +31,20 @@  static inline bool kvm_check_and_clear_guest_paused(void)
 	return false;
 }
 
+static inline int kvm_arch_pv_event_init(void)
+{
+	return 0;
+}
+
+static inline unsigned int kvm_arch_pv_features(void)
+{
+	return 0;
+}
+
+static inline void kvm_arch_pv_eject_event(unsigned int event)
+{
+}
+
 #endif
 
 #endif
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index c18916b..01b98c7 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -211,6 +211,20 @@  static inline bool kvm_check_and_clear_guest_paused(void)
 	return false;
 }
 
+static inline int kvm_arch_pv_event_init(void)
+{
+	return 0;
+}
+
+static inline unsigned int kvm_arch_pv_features(void)
+{
+	return 0;
+}
+
+static inline void kvm_arch_pv_eject_event(unsigned int event)
+{
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* __POWERPC_KVM_PARA_H__ */
diff --git a/arch/s390/include/asm/kvm_para.h b/arch/s390/include/asm/kvm_para.h
index da44867..00ce058 100644
--- a/arch/s390/include/asm/kvm_para.h
+++ b/arch/s390/include/asm/kvm_para.h
@@ -154,6 +154,20 @@  static inline bool kvm_check_and_clear_guest_paused(void)
 	return false;
 }
 
+static inline int kvm_arch_pv_event_init(void)
+{
+	return 0;
+}
+
+static inline unsigned int kvm_arch_pv_features(void)
+{
+	return 0;
+}
+
+static inline void kvm_arch_pv_eject_event(unsigned int event)
+{
+}
+
 #endif
 
 #endif /* __S390_KVM_PARA_H */
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index eb3e9d8..4315af6 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -96,8 +96,11 @@  struct kvm_vcpu_pv_apf_data {
 #define KVM_PV_EOI_ENABLED KVM_PV_EOI_MASK
 #define KVM_PV_EOI_DISABLED 0x0
 
+#define KVM_PV_EVENT_PORT	(0x505UL)
+
 #ifdef __KERNEL__
 #include <asm/processor.h>
+#include <linux/ioport.h>
 
 extern void kvmclock_init(void);
 extern int kvm_register_clock(char *txt);
@@ -228,6 +231,24 @@  static inline void kvm_disable_steal_time(void)
 }
 #endif
 
+static inline int kvm_arch_pv_event_init(void)
+{
+	if (!request_region(KVM_PV_EVENT_PORT, 4, "KVM_PV_EVENT"))
+		return -1;
+
+	return 0;
+}
+
+static inline unsigned int kvm_arch_pv_features(void)
+{
+	return inl(KVM_PV_EVENT_PORT);
+}
+
+static inline void kvm_arch_pv_eject_event(unsigned int event)
+{
+	outl(event, KVM_PV_EVENT_PORT);
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _ASM_X86_KVM_PARA_H */
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 4180a87..c44e46f 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -62,6 +62,15 @@  static int parse_no_stealacc(char *arg)
 
 early_param("no-steal-acc", parse_no_stealacc);
 
+static int pv_event = 1;
+static int parse_no_pv_event(char *arg)
+{
+	pv_event = 0;
+	return 0;
+}
+
+early_param("no-pv-event", parse_no_pv_event);
+
 static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);
 static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
 static int has_steal_clock = 0;
@@ -372,6 +381,17 @@  static struct notifier_block kvm_pv_reboot_nb = {
 	.notifier_call = kvm_pv_reboot_notify,
 };
 
+static int
+kvm_pv_panic_notify(struct notifier_block *nb, unsigned long code, void *unused)
+{
+	kvm_pv_eject_event(KVM_PV_EVENT_PANICKED);
+	return NOTIFY_DONE;
+}
+
+static struct notifier_block kvm_pv_panic_nb = {
+	.notifier_call = kvm_pv_panic_notify,
+};
+
 static u64 kvm_steal_clock(int cpu)
 {
 	u64 steal;
@@ -449,6 +469,34 @@  static void __init kvm_apf_trap_init(void)
 	set_intr_gate(14, &async_page_fault);
 }
 
+static void __init kvm_pv_panicked_event_init(void)
+{
+	if (!kvm_para_available())
+		return;
+
+	if (kvm_pv_has_feature(KVM_PV_FEATURE_PANICKED))
+		atomic_notifier_chain_register(&panic_notifier_list,
+			&kvm_pv_panic_nb);
+}
+
+static inline int kvm_pv_event_init(void)
+{
+	return kvm_arch_pv_event_init();
+}
+
+static int __init enable_pv_event(void)
+{
+	if (pv_event) {
+		if (kvm_pv_event_init())
+			return 0;
+
+		kvm_pv_panicked_event_init();
+	}
+
+	return 0;
+}
+arch_initcall(enable_pv_event);
+
 void __init kvm_guest_init(void)
 {
 	int i;
diff --git a/include/linux/kvm_para.h b/include/linux/kvm_para.h
index ff476dd..495e411 100644
--- a/include/linux/kvm_para.h
+++ b/include/linux/kvm_para.h
@@ -20,6 +20,12 @@ 
 #define KVM_HC_FEATURES			3
 #define KVM_HC_PPC_MAP_MAGIC_PAGE	4
 
+/* Bit numbers of supported pv events */
+#define KVM_PV_FEATURE_PANICKED	0
+
+/* The pv event value */
+#define KVM_PV_EVENT_PANICKED	1
+
 /*
  * hypercalls use architecture specific
  */
@@ -33,5 +39,17 @@  static inline int kvm_para_has_feature(unsigned int feature)
 		return 1;
 	return 0;
 }
+
+static inline int kvm_pv_has_feature(unsigned int feature)
+{
+	if (kvm_arch_pv_features() & (1UL << feature))
+		return 1;
+	return 0;
+}
+
+static inline void kvm_pv_eject_event(unsigned int event)
+{
+	kvm_arch_pv_eject_event(event);
+}
 #endif /* __KERNEL__ */
 #endif /* __LINUX_KVM_PARA_H */