
Add option to mlock guest and qemu memory

Message ID 8631DC5930FA9E468F04F3FD3A5D00721394D3EE@USINDEM103.corp.hds.com
State: New

Commit Message

Satoru Moriya Sept. 27, 2012, 11:21 p.m. UTC
This is my first time posting a patch to qemu-devel.
If something is missing or wrong, please let me know.

We have plans to migrate old enterprise systems which require
low latency (on the order of milliseconds) to a kvm virtualized
environment. Usually, we use mlock to preallocate and pin down
process memory in order to avoid page allocation in latency-critical
paths. In a kvm environment, however, mlocking inside the guest is
not effective because it cannot prevent page reclaim on the host.
To avoid guest memory reclaim, qemu has the "mem-path" option, but
that is really intended for hugepage support; qemu's own memory is
not allocated from hugepages, so it may still be reclaimed. That
may cause a latency problem.

To avoid reclaim of both guest and qemu memory, this patch
introduces a new "mlock" option. With this option, we can
preallocate and pin down guest and qemu memory before booting
the guest OS.

Tested on Linux, x86_64 (fedora 17).

Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
---
 cpu-all.h       | 1 +
 exec.c          | 3 +++
 qemu-options.hx | 8 ++++++++
 vl.c            | 4 ++++
 4 files changed, 16 insertions(+)
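
As a usage example (the disk image name here is hypothetical), the
option is passed like any other top-level switch:

    qemu-system-x86_64 -m 1024 -mlock -hda guest.img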

Comments

Jan Kiszka Sept. 28, 2012, 8:05 a.m. UTC | #1
On 2012-09-28 01:21, Satoru Moriya wrote:
> This is my first time posting a patch to qemu-devel.
> If something is missing or wrong, please let me know.
> 
> We have plans to migrate old enterprise systems which require
> low latency (on the order of milliseconds) to a kvm virtualized
> environment. Usually, we use mlock to preallocate and pin down
> process memory in order to avoid page allocation in latency-critical
> paths. In a kvm environment, however, mlocking inside the guest is
> not effective because it cannot prevent page reclaim on the host.
> To avoid guest memory reclaim, qemu has the "mem-path" option, but
> that is really intended for hugepage support; qemu's own memory is
> not allocated from hugepages, so it may still be reclaimed. That
> may cause a latency problem.
> 
> To avoid reclaim of both guest and qemu memory, this patch
> introduces a new "mlock" option. With this option, we can
> preallocate and pin down guest and qemu memory before booting
> the guest OS.

I guess this reduces the likelihood of multi-millisecond latencies for
you but does not eliminate them. Of course, mlockall is part of our
local changes for real-time QEMU/KVM, but it is just one of the many
pieces required. I'm wondering how the situation is on your side.

I think mlockall should be enabled automatically as soon as you ask
for real-time support for QEMU guests. How that should be controlled is
another question. I'm currently carrying a top-level switch "-rt
maxprio=x[,policy=y]" here, likely not the final solution. I'm not
really convinced we need to control memory locking separately. And as we
are very reluctant to add new top-level switches, this is even more
important.

> 
> Tested on Linux, x86_64 (fedora 17).
> 
> Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
> ---
>  cpu-all.h       | 1 +
>  exec.c          | 3 +++
>  qemu-options.hx | 8 ++++++++
>  vl.c            | 4 ++++
>  4 files changed, 16 insertions(+)
> 
> diff --git a/cpu-all.h b/cpu-all.h
> index 74d3681..e12e5d5 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -503,6 +503,7 @@ extern RAMList ram_list;
>  
>  extern const char *mem_path;
>  extern int mem_prealloc;
> +extern int mem_lock;
>  
>  /* Flags stored in the low bits of the TLB virtual address.  These are
>     defined so that fast path ram access is all zeros.  */
> diff --git a/exec.c b/exec.c
> index bb6aa4a..de13bc9 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2572,6 +2572,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>              }
>              memory_try_enable_merging(new_block->host, size);
>          }
> +        if (mem_lock && mlockall(MCL_CURRENT | MCL_FUTURE)) {
> +            perror("mlockall");
> +        }

This belongs to the OS abstraction layer (it's POSIX). And you only need
to call it once per process lifetime.
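
For illustration, a minimal sketch of that approach, assuming a new
os_mlock() helper in the POSIX layer (the helper name and call site
are made up here, not existing code):

    /* os-posix.c -- sketch only */
    #include <stdio.h>
    #include <sys/mman.h>

    int os_mlock(void)
    {
        /* Lock all current and future pages of the process once,
           instead of calling mlockall() per RAM block. */
        int ret = mlockall(MCL_CURRENT | MCL_FUTURE);
        if (ret < 0) {
            perror("mlockall");
        }
        return ret;
    }

    /* vl.c -- called once from main() after option parsing */
    if (mem_lock) {
        os_mlock();
    }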

>      }
>      new_block->length = size;
>  
> [...]

Jan
Anthony Liguori Sept. 28, 2012, 12:33 p.m. UTC | #2
Jan Kiszka <jan.kiszka@siemens.com> writes:

> On 2012-09-28 01:21, Satoru Moriya wrote:
>> [...]
>
> I guess this reduces the likelihood of multi-millisecond latencies for
> you but does not eliminate them. Of course, mlockall is part of our
> local changes for real-time QEMU/KVM, but it is just one of the many
> pieces required. I'm wondering how the situation is on your side.
>
> I think mlockall should be enabled automatically as soon as you ask
> for real-time support for QEMU guests. How that should be controlled is
> another question. I'm currently carrying a top-level switch "-rt
> maxprio=x[,policy=y]" here, likely not the final solution. I'm not
> really convinced we need to control memory locking separately. And as we
> are very reluctant to add new top-level switches, this is even more
> important.

I think you're right here, although I'd suggest not abbreviating.

Regards,

Anthony Liguori

Jan Kiszka Sept. 28, 2012, 1:14 p.m. UTC | #3
On 2012-09-28 14:33, Anthony Liguori wrote:
> Jan Kiszka <jan.kiszka@siemens.com> writes:
> 
>> [...]
> 
> I think you're right here, although I'd suggest not abbreviating.

You mean in the sense of "-realtime" instead of "-rt"?

Jan
Anthony Liguori Sept. 28, 2012, 3:54 p.m. UTC | #4
Jan Kiszka <jan.kiszka@siemens.com> writes:

> On 2012-09-28 14:33, Anthony Liguori wrote:
>> Jan Kiszka <jan.kiszka@siemens.com> writes:
>>> [...]
>> 
>> I think you're right here, although I'd suggest not abbreviating.
>
> You mean in the sense of "-realtime" instead of "-rt"?

Yes.  Or any other word that makes sense.

Regards,

Anthony Liguori

Satoru Moriya Oct. 1, 2012, 9:24 p.m. UTC | #5
Hi Jan,

Thank you for reviewing.

On 09/28/2012 04:05 AM, Jan Kiszka wrote:
> On 2012-09-28 01:21, Satoru Moriya wrote:
>> [...]
>
> I guess this reduces the likelihood of multi-millisecond latencies for
> you but does not eliminate them. Of course, mlockall is part of our
> local changes for real-time QEMU/KVM, but it is just one of the many
> pieces required. I'm wondering how the situation is on your side.

You're right. I think this is a first step toward solving the latency
issues with qemu/kvm.

> I think mlockall should be enabled automatically as soon as you ask
> for real-time support for QEMU guests. How that should be controlled is
> another question. I'm currently carrying a top-level switch "-rt
> maxprio=x[,policy=y]" here, likely not the final solution. I'm not

Could you please tell me what that option actually does?
Do you have a public repository or something similar where I can look
at your real-time qemu/kvm changes?

> really convinced we need to control memory locking separately. And as we
> are very reluctant to add new top-level switches, this is even more
> important.

Regards,
Satoru
Marcelo Tosatti Jan. 21, 2013, 9:43 p.m. UTC | #6
On Fri, Sep 28, 2012 at 10:05:09AM +0200, Jan Kiszka wrote:
> On 2012-09-28 01:21, Satoru Moriya wrote:
> > [...]
> 
> I guess this reduces the likelihood of multi-millisecond latencies for
> you but does not eliminate them. Of course, mlockall is part of our
> local changes for real-time QEMU/KVM, but it is just one of the many
> pieces required. I'm wondering how the situation is on your side.
> 
> I think mlockall should be enabled automatically as soon as you ask
> for real-time support for QEMU guests. How that should be controlled is
> another question. I'm currently carrying a top-level switch "-rt
> maxprio=x[,policy=y]" here, likely not the final solution. I'm not
> really convinced we need to control memory locking separately. And as we
> are very reluctant to add new top-level switches, this is even more
> important.

In certain scenarios, latency induced by paging is significant
and memory locking is sufficient.

Moreover, in scenarios with untrusted guests where the latency
improvement from mlock is desired, realtime priority is problematic
(guests whose QEMU threads have realtime priority can abuse the host
system).
Satoru Moriya Jan. 22, 2013, 2:45 p.m. UTC | #7
On 01/21/2013 04:43 PM, Marcelo Tosatti wrote:
> On Fri, Sep 28, 2012 at 10:05:09AM +0200, Jan Kiszka wrote:
>> On 2012-09-28 01:21, Satoru Moriya wrote:
>>> [...]
>>
>> I guess this reduces the likelihood of multi-millisecond latencies
>> for you but does not eliminate them. Of course, mlockall is part of
>> our local changes for real-time QEMU/KVM, but it is just one of the
>> many pieces required. I'm wondering how the situation is on your side.
>>
>> I think mlockall should be enabled automatically as soon as you ask
>> for real-time support for QEMU guests. How that should be controlled
>> is another question. I'm currently carrying a top-level switch "-rt
>> maxprio=x[,policy=y]" here, likely not the final solution. I'm not
>> really convinced we need to control memory locking separately. And as
>> we are very reluctant to add new top-level switches, this is even more
>> important.
>
> In certain scenarios, latency induced by paging is significant and
> memory locking is sufficient.
>
> Moreover, in scenarios with untrusted guests where the latency
> improvement from mlock is desired, realtime priority is problematic
> (guests whose QEMU threads have realtime priority can abuse the host
> system).

Right, our use case is multiple untrusted guest VMs.

Regards,
Satoru
Jan Kiszka Jan. 22, 2013, 2:58 p.m. UTC | #8
On 2013-01-22 15:45, Satoru Moriya wrote:
> On 01/21/2013 04:43 PM, Marcelo Tosatti wrote:
>> On Fri, Sep 28, 2012 at 10:05:09AM +0200, Jan Kiszka wrote:
>>> [...]
>>
>> In certain scenarios, latency induced by paging is significant and
>> memory locking is sufficient.
>>
>> Moreover, in scenarios with untrusted guests where the latency
>> improvement from mlock is desired, realtime priority is problematic
>> (guests whose QEMU threads have realtime priority can abuse the host
>> system).
> 
> Right, our use case is multiple untrusted guest VMs.

If you cannot dedicate resources (CPU cores) to the guest, you can still
throttle its RT bandwidth.
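
As an illustrative example (the values are made up, not a
recommendation), on a Linux host the global realtime bandwidth can be
capped through the scheduler sysctls, which also bounds what
RT-prioritized QEMU threads can consume:

    # allow RT tasks at most 500ms of every 1s period, host-wide
    echo 1000000 > /proc/sys/kernel/sched_rt_period_us
    echo 500000  > /proc/sys/kernel/sched_rt_runtime_us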

Nevertheless, I'm also fine with making this property separately
controllable via -realtime. Enabling -realtime need not require setting
a priority > 0 and would keep all threads at SCHED_OTHER in that case,
but it would default to enabling mlockall. In addition, if you like,
-realtime mlock=true|false could be provided to make even this
configurable.
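
As a rough sketch of how such a switch might be wired up, assuming the
QemuOpts machinery and an os_mlock() helper like the one sketched
earlier (the option list and function names are illustrative, not
existing code):

    /* vl.c -- sketch only */
    static QemuOptsList qemu_realtime_opts = {
        .name = "realtime",
        .head = QTAILQ_HEAD_INITIALIZER(qemu_realtime_opts.head),
        .desc = {
            {
                .name = "mlock",
                .type = QEMU_OPT_BOOL,
            },
            { /* end of list */ }
        },
    };

    static void configure_realtime(QemuOpts *opts)
    {
        /* memory locking defaults to on whenever -realtime is given */
        if (qemu_opt_get_bool(opts, "mlock", true)) {
            if (os_mlock() < 0) {
                fprintf(stderr, "qemu: locking memory failed\n");
                exit(1);
            }
        }
    }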

Jan

Patch

diff --git a/cpu-all.h b/cpu-all.h
index 74d3681..e12e5d5 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -503,6 +503,7 @@ extern RAMList ram_list;
 
 extern const char *mem_path;
 extern int mem_prealloc;
+extern int mem_lock;
 
 /* Flags stored in the low bits of the TLB virtual address.  These are
    defined so that fast path ram access is all zeros.  */
diff --git a/exec.c b/exec.c
index bb6aa4a..de13bc9 100644
--- a/exec.c
+++ b/exec.c
@@ -2572,6 +2572,9 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
             }
             memory_try_enable_merging(new_block->host, size);
         }
+        if (mem_lock && mlockall(MCL_CURRENT | MCL_FUTURE)) {
+            perror("mlockall");
+        }
     }
     new_block->length = size;
 
diff --git a/qemu-options.hx b/qemu-options.hx
index 7d97f96..9d82f15 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -427,6 +427,14 @@ Preallocate memory when using -mem-path.
 ETEXI
 #endif
 
+DEF("mlock", 0, QEMU_OPTION_mlock,
+    "-mlock          mlock guest and qemu memory\n",
+    QEMU_ARCH_ALL)
+STEXI
+@item -mlock
+mlock guest and qemu memory.
+ETEXI
+
 DEF("k", HAS_ARG, QEMU_OPTION_k,
     "-k language     use keyboard layout (for example 'fr' for French)\n",
     QEMU_ARCH_ALL)
diff --git a/vl.c b/vl.c
index 8d305ca..c902084 100644
--- a/vl.c
+++ b/vl.c
@@ -187,6 +187,7 @@ const char *mem_path = NULL;
 #ifdef MAP_POPULATE
 int mem_prealloc = 0; /* force preallocation of physical target memory */
 #endif
+int mem_lock;
 int nb_nics;
 NICInfo nd_table[MAX_NICS];
 int autostart;
@@ -2770,6 +2771,9 @@ int main(int argc, char **argv, char **envp)
                 mem_prealloc = 1;
                 break;
 #endif
+            case QEMU_OPTION_mlock:
+                mem_lock = 1;
+                break;
             case QEMU_OPTION_d:
                 log_mask = optarg;
                 break;