Patchwork [RFC,2/2] KVM, MCE, unpoison memory address across reboot

Submitter Huang Ying
Date Dec. 31, 2010, 5:22 a.m.
Message ID <1293772955.22308.251.camel@yhuang-dev>
Permalink /patch/77046/
State New
Headers show

Comments

Huang Ying - Dec. 31, 2010, 5:22 a.m.
In the Linux kernel's HWPoison handling, the virtual address that
maps the faulty physical memory page is marked as HWPoison in every
process mapping it, so any further access to that virtual address
kills the corresponding process with SIGBUS.

If the faulty physical memory page is used by a KVM guest, the SIGBUS
is delivered to QEMU, which simulates an MCE to report the memory
error to the guest OS.  If the guest OS cannot recover from the error
(for example, because the page is accessed by kernel code), it reboots
the system.  But because the underlying host virtual address backing
the guest physical memory is still poisoned, any guest access to the
corresponding guest physical memory even after rebooting raises SIGBUS
again, and another MCE is simulated.  That is, the guest system cannot
recover by rebooting.

In fact, the contents of the guest physical memory page need not be
preserved across a reboot, so we can allocate a new host physical page
to back the corresponding guest physical address.

This patch fixes the issue in QEMU-KVM by calling qemu_ram_remap() to
clear the corresponding page table entry, making it possible to
allocate a new page and recover.

Signed-off-by: Huang Ying <ying.huang@intel.com>
---
 kvm.h             |    2 ++
 qemu-kvm.c        |   37 +++++++++++++++++++++++++++++++++++++
 target-i386/kvm.c |    2 ++
 3 files changed, 41 insertions(+)
Jan Kiszka - Dec. 31, 2010, 9:10 a.m.
Am 31.12.2010 06:22, Huang Ying wrote:
> [...]
> ---
>  kvm.h             |    2 ++
>  qemu-kvm.c        |   37 +++++++++++++++++++++++++++++++++++++

What's missing in upstream to make this a uq/master patch? We are still
piling up features and fixes in qemu-kvm* that should better target
upstream directly. That's work needlessly done twice.

Is this infrastructure really arch-independent? Will there be other
users besides x86? If not, better keep it in target-i386/kvm.c.

Jan
Huang Ying - Jan. 5, 2011, 6:45 a.m.
On Fri, 2010-12-31 at 17:10 +0800, Jan Kiszka wrote:
> Am 31.12.2010 06:22, Huang Ying wrote:
> > [...]
> > ---
> >  kvm.h             |    2 ++
> >  qemu-kvm.c        |   37 +++++++++++++++++++++++++++++++++++++
> 
> What's missing in upstream to make this a uq/master patch? We are still
> piling up features and fixes in qemu-kvm* that should better target
> upstream directly. That's work needlessly done twice.

OK, I will do that.  Is basing it on uq/master sufficient to make it
an upstream patch?

> Is this infrastructure really arch-independent? Will there be other
> users besides x86? If not, better keep it in target-i386/kvm.c.

No, it is used only on x86.  I will move it into target-i386/kvm.c.

Best Regards,
Huang Ying
Jan Kiszka - Jan. 5, 2011, 8:14 a.m.
Am 05.01.2011 07:45, Huang Ying wrote:
> On Fri, 2010-12-31 at 17:10 +0800, Jan Kiszka wrote:
>> Am 31.12.2010 06:22, Huang Ying wrote:
>>> [...]
>>> ---
>>>  kvm.h             |    2 ++
>>>  qemu-kvm.c        |   37 +++++++++++++++++++++++++++++++++++++
>>
>> What's missing in upstream to make this a uq/master patch? We are still
>> piling up features and fixes in qemu-kvm* that should better target
>> upstream directly. That's work needlessly done twice.
> 
> OK. I will do that. Just based on uq/master is sufficient to make it an
> upstream patch?

This is how things work: you base your upstream changes on uq/master,
they get picked up and merged into qemu, qemu-kvm merges upstream
back, and then your bits are in both trees.  Sometimes some additional
tweaking of qemu-kvm is needed after the merge, but I hope we can
significantly reduce the need for that very soon.

> 
>> Is this infrastructure really arch-independent? Will there be other
>> users besides x86? If not, better keep it in target-i386/kvm.c.
> 
> No.  It is used only in x86.  I will move it into target-i386/kvm.c.
> 

Perfect. Then you just need to extend kvm_arch_init_vcpu with your reset
registration, and both upstream and qemu-kvm will gain the feature
automatically.

Jan

Patch

--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -1803,6 +1803,7 @@  int kvm_on_sigbus_vcpu(CPUState *env, in
                 hardware_memory_error();
             }
         }
+        kvm_hwpoison_page_add(ram_addr);
         mce.addr = paddr;
         r = kvm_set_mce(env, &mce);
         if (r < 0) {
@@ -1841,6 +1842,7 @@  int kvm_on_sigbus(int code, void *addr)
                     "QEMU itself instead of guest system!: %p\n", addr);
             return 0;
         }
+        kvm_hwpoison_page_add(ram_addr);
         status = MCI_STATUS_VAL | MCI_STATUS_UC | MCI_STATUS_EN
             | MCI_STATUS_MISCV | MCI_STATUS_ADDRV | MCI_STATUS_S
             | 0xc0;
--- a/qemu-kvm.c
+++ b/qemu-kvm.c
@@ -1619,6 +1619,42 @@  int kvm_arch_init_irq_routing(void)
 }
 #endif
 
+struct HWPoisonPage;
+typedef struct HWPoisonPage HWPoisonPage;
+struct HWPoisonPage
+{
+    ram_addr_t ram_addr;
+    QLIST_ENTRY(HWPoisonPage) list;
+};
+
+static QLIST_HEAD(hwpoison_page_list, HWPoisonPage) hwpoison_page_list =
+    QLIST_HEAD_INITIALIZER(hwpoison_page_list);
+
+static void kvm_unpoison_all(void *param)
+{
+    HWPoisonPage *page, *next_page;
+
+    QLIST_FOREACH_SAFE(page, &hwpoison_page_list, list, next_page) {
+        QLIST_REMOVE(page, list);
+        qemu_ram_remap(page->ram_addr, TARGET_PAGE_SIZE);
+        qemu_free(page);
+    }
+}
+
+void kvm_hwpoison_page_add(ram_addr_t ram_addr)
+{
+    HWPoisonPage *page;
+
+    QLIST_FOREACH(page, &hwpoison_page_list, list) {
+        if (page->ram_addr == ram_addr)
+            return;
+    }
+
+    page = qemu_malloc(sizeof(HWPoisonPage));
+    page->ram_addr = ram_addr;
+    QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
+}
+
 extern int no_hpet;
 
 static int kvm_create_context(void)
@@ -1703,6 +1739,7 @@  static int kvm_create_context(void)
         }
 #endif
     }
+    qemu_register_reset(kvm_unpoison_all, NULL);
 
     return 0;
 }
--- a/kvm.h
+++ b/kvm.h
@@ -221,4 +221,6 @@  int kvm_irqchip_in_kernel(void);
 
 int kvm_set_irq(int irq, int level, int *status);
 
+void kvm_hwpoison_page_add(ram_addr_t ram_addr);
+
 #endif