
[v7,09/15] util/mmap-alloc: Support RAM_NORESERVE via MAP_NORESERVE under Linux

Message ID 20210428133754.10713-10-david@redhat.com
State New
Series RAM_NORESERVE, MAP_NORESERVE and hostmem "reserve" property

Commit Message

David Hildenbrand April 28, 2021, 1:37 p.m. UTC
Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
effect on most shared mappings - except for hugetlbfs and anonymous memory.

Linux man page:
  "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
  space is reserved, one has the guarantee that it is possible to modify
  the mapping. When swap space is not reserved one might get SIGSEGV
  upon a write if no physical memory is available. See also the discussion
  of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
  2.6, this flag had effect only for private writable mappings."

Note that the "guarantee" part is wrong with memory overcommit in Linux.

Also, in Linux hugetlbfs is treated differently - we configure reservation
of huge pages from the pool, not reservation of swap space (huge pages
cannot be swapped).

The rough behavior is [1]:
a) !Hugetlbfs:

  1) Without MAP_NORESERVE *or* with memory overcommit under Linux
     disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
     accounting/reservation happens:
      For a file backed map
       SHARED or READ-only - 0 cost (the file is the map not swap)
       PRIVATE WRITABLE - size of mapping per instance

      For an anonymous or /dev/zero map
       SHARED   - size of mapping
       PRIVATE READ-only - 0 cost (but of little use)
       PRIVATE WRITABLE - size of mapping per instance

  2) With MAP_NORESERVE, no accounting/reservation happens.

b) Hugetlbfs:

  1) Without MAP_NORESERVE, huge pages are reserved.

  2) With MAP_NORESERVE, no huge pages are reserved.

Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
to configure it for !hugetlbfs globally; this toggle now allows
configuring it more fine-grained, not for the whole system.

The target use case is virtio-mem, which dynamically exposes memory
inside a large, sparse memory area to the VM.

[1] https://www.kernel.org/doc/Documentation/vm/overcommit-accounting
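
For illustration only (not part of the patch itself), the anonymous-memory
case above boils down to something like the following minimal sketch,
assuming a 64-bit Linux host:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* 64 GiB of mostly-unpopulated address space, e.g., a sparse area. */
        const size_t size = 64ULL * 1024 * 1024 * 1024;

        /*
         * Without MAP_NORESERVE, a private writable anonymous mapping of
         * this size is accounted in full ("PRIVATE WRITABLE - size of
         * mapping per instance") and fails under overcommit_memory == 2 if
         * it exceeds the commit limit; with MAP_NORESERVE, nothing is
         * accounted.
         */
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (ptr == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* Writes may later SIGSEGV/OOM if no physical memory is available. */
        munmap(ptr, size);
        return 0;
    }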

Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/qemu/osdep.h |  3 ++
 softmmu/physmem.c    |  1 +
 util/mmap-alloc.c    | 69 ++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 71 insertions(+), 2 deletions(-)

Comments

Daniel P. Berrangé May 4, 2021, 10:09 a.m. UTC | #1
On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
> Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
> effect on most shared mappings - except for hugetlbfs and anonymous memory.
> 
> Linux man page:
>   "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
>   space is reserved, one has the guarantee that it is possible to modify
>   the mapping. When swap space is not reserved one might get SIGSEGV
>   upon a write if no physical memory is available. See also the discussion
>   of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
>   2.6, this flag had effect only for private writable mappings."
> 
> Note that the "guarantee" part is wrong with memory overcommit in Linux.
> 
> Also, in Linux hugetlbfs is treated differently - we configure reservation
> of huge pages from the pool, not reservation of swap space (huge pages
> cannot be swapped).
> 
> The rough behavior is [1]:
> a) !Hugetlbfs:
> 
>   1) Without MAP_NORESERVE *or* with memory overcommit under Linux
>      disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
>      accounting/reservation happens:
>       For a file backed map
>        SHARED or READ-only - 0 cost (the file is the map not swap)
>        PRIVATE WRITABLE - size of mapping per instance
> 
>       For an anonymous or /dev/zero map
>        SHARED   - size of mapping
>        PRIVATE READ-only - 0 cost (but of little use)
>        PRIVATE WRITABLE - size of mapping per instance
> 
>   2) With MAP_NORESERVE, no accounting/reservation happens.
> 
> b) Hugetlbfs:
> 
>   1) Without MAP_NORESERVE, huge pages are reserved.
> 
>   2) With MAP_NORESERVE, no huge pages are reserved.
> 
> Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
> to configure it for !hugetlbfs globally; this toggle now allows
> configuring it more fine-grained, not for the whole system.
> 
> The target use case is virtio-mem, which dynamically exposes memory
> inside a large, sparse memory area to the VM.

Can you explain this use case in more real world terms, as I'm not
understanding what a mgmt app would actually do with this in
practice ?




Regards,
Daniel
David Hildenbrand May 4, 2021, 10:21 a.m. UTC | #2
On 04.05.21 12:09, Daniel P. Berrangé wrote:
> On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
>> Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
>> effect on most shared mappings - except for hugetlbfs and anonymous memory.
>>
>> Linux man page:
>>    "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
>>    space is reserved, one has the guarantee that it is possible to modify
>>    the mapping. When swap space is not reserved one might get SIGSEGV
>>    upon a write if no physical memory is available. See also the discussion
>>    of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
>>    2.6, this flag had effect only for private writable mappings."
>>
>> Note that the "guarantee" part is wrong with memory overcommit in Linux.
>>
>> Also, in Linux hugetlbfs is treated differently - we configure reservation
>> of huge pages from the pool, not reservation of swap space (huge pages
>> cannot be swapped).
>>
>> The rough behavior is [1]:
>> a) !Hugetlbfs:
>>
>>    1) Without MAP_NORESERVE *or* with memory overcommit under Linux
>>       disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
>>       accounting/reservation happens:
>>        For a file backed map
>>         SHARED or READ-only - 0 cost (the file is the map not swap)
>>         PRIVATE WRITABLE - size of mapping per instance
>>
>>        For an anonymous or /dev/zero map
>>         SHARED   - size of mapping
>>         PRIVATE READ-only - 0 cost (but of little use)
>>         PRIVATE WRITABLE - size of mapping per instance
>>
>>    2) With MAP_NORESERVE, no accounting/reservation happens.
>>
>> b) Hugetlbfs:
>>
>>    1) Without MAP_NORESERVE, huge pages are reserved.
>>
>>    2) With MAP_NORESERVE, no huge pages are reserved.
>>
>> Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
>> to configure it for !hugetlbfs globally; this toggle now allows
>> configuring it more fine-grained, not for the whole system.
>>
>> The target use case is virtio-mem, which dynamically exposes memory
>> inside a large, sparse memory area to the VM.
> 
> Can you explain this use case in more real world terms, as I'm not
> understanding what a mgmt app would actually do with this in
> practice ?

Let's consider huge pages for simplicity. Assume you have 128 free huge 
pages in your hypervisor that you want to dynamically assign to VMs.

Further assume you have two VMs running. A workflow could look like

1. Assign all huge pages to VM 0
2. Reassign 64 huge pages to VM 1
3. Reassign another 32 huge pages to VM 1
4. Reassign 16 huge pages to VM 0
5. ...

Basically what we're used to doing with "ordinary" memory.

For that to work with virtio-mem, you'll have to disable reservation of 
huge pages for the virtio-mem managed memory region.

(preallocation of huge pages in virtio-mem to protect from user mistakes 
is a separate work item)

reserve=off will be the default for virtio-mem, and actual 
reservation/preallocation will be done within virtio-mem. There could be 
use for "reserve=off" for virtio-balloon use cases as well, but I'd like 
to exclude that from the discussion for now.

Hope that answers your question, thanks.
Daniel P. Berrangé May 4, 2021, 10:32 a.m. UTC | #3
On Tue, May 04, 2021 at 12:21:25PM +0200, David Hildenbrand wrote:
> On 04.05.21 12:09, Daniel P. Berrangé wrote:
> > On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
> > > Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
> > > effect on most shared mappings - except for hugetlbfs and anonymous memory.
> > > 
> > > Linux man page:
> > >    "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
> > >    space is reserved, one has the guarantee that it is possible to modify
> > >    the mapping. When swap space is not reserved one might get SIGSEGV
> > >    upon a write if no physical memory is available. See also the discussion
> > >    of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
> > >    2.6, this flag had effect only for private writable mappings."
> > > 
> > > Note that the "guarantee" part is wrong with memory overcommit in Linux.
> > > 
> > > Also, in Linux hugetlbfs is treated differently - we configure reservation
> > > of huge pages from the pool, not reservation of swap space (huge pages
> > > cannot be swapped).
> > > 
> > > The rough behavior is [1]:
> > > a) !Hugetlbfs:
> > > 
> > >    1) Without MAP_NORESERVE *or* with memory overcommit under Linux
> > >       disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
> > >       accounting/reservation happens:
> > >        For a file backed map
> > >         SHARED or READ-only - 0 cost (the file is the map not swap)
> > >         PRIVATE WRITABLE - size of mapping per instance
> > > 
> > >        For an anonymous or /dev/zero map
> > >         SHARED   - size of mapping
> > >         PRIVATE READ-only - 0 cost (but of little use)
> > >         PRIVATE WRITABLE - size of mapping per instance
> > > 
> > >    2) With MAP_NORESERVE, no accounting/reservation happens.
> > > 
> > > b) Hugetlbfs:
> > > 
> > >    1) Without MAP_NORESERVE, huge pages are reserved.
> > > 
> > >    2) With MAP_NORESERVE, no huge pages are reserved.
> > > 
> > > Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
> > > to configure it for !hugetlbfs globally; this toggle now allows
> > > configuring it more fine-grained, not for the whole system.
> > > 
> > > The target use case is virtio-mem, which dynamically exposes memory
> > > inside a large, sparse memory area to the VM.
> > 
> > Can you explain this use case in more real world terms, as I'm not
> > understanding what a mgmt app would actually do with this in
> > practice ?
> 
> Let's consider huge pages for simplicity. Assume you have 128 free huge
> pages in your hypervisor that you want to dynamically assign to VMs.
> 
> Further assume you have two VMs running. A workflow could look like
> 
> 1. Assign all huge pages to VM 0
> 2. Reassign 64 huge pages to VM 1
> 3. Reassign another 32 huge pages to VM 1
> 4. Reasssign 16 huge pages to VM 0
> 5. ...
> 
> Basically what we're used to doing with "ordinary" memory.

What does this look like in terms of the memory backend configuration
when you boot VM 0 and VM 1 ?

Are you saying that we boot both VMs with

   -object hostmem-memfd,size=128G,hugetlb=yes,hugetlbsize=1G,reserve=off

and then we have another property set on 'virtio-mem' to tell it
how much/little of that 128 G to actually give to the guest ?
How do we change that at runtime ?


> For that to work with virtio-mem, you'll have to disable reservation of huge
> pages for the virtio-mem managed memory region.
> 
> (prealloction of huge pages in virtio-mem to protect from user mistakes is a
> separate work item)
> 
> reserve=off will be the default for virtio-mem, and actual
> reservation/preallcoation will be done within virtio-mem. There could be use
> for "reserve=off" for virtio-balloon use cases as well, but I'd like to
> exclude that from the discussion for now.

The hostmem backend defaults are independent of frontend usage, so when you
say reserve=off is the default for virtio-mem, are you expecting the mgmt
app like libvirt to specify that?

Regards,
Daniel
David Hildenbrand May 4, 2021, 11:04 a.m. UTC | #4
On 04.05.21 12:32, Daniel P. Berrangé wrote:
> On Tue, May 04, 2021 at 12:21:25PM +0200, David Hildenbrand wrote:
>> On 04.05.21 12:09, Daniel P. Berrangé wrote:
>>> On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
>>>> Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
>>>> effect on most shared mappings - except for hugetlbfs and anonymous memory.
>>>>
>>>> Linux man page:
>>>>     "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
>>>>     space is reserved, one has the guarantee that it is possible to modify
>>>>     the mapping. When swap space is not reserved one might get SIGSEGV
>>>>     upon a write if no physical memory is available. See also the discussion
>>>>     of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
>>>>     2.6, this flag had effect only for private writable mappings."
>>>>
>>>> Note that the "guarantee" part is wrong with memory overcommit in Linux.
>>>>
>>>> Also, in Linux hugetlbfs is treated differently - we configure reservation
>>>> of huge pages from the pool, not reservation of swap space (huge pages
>>>> cannot be swapped).
>>>>
>>>> The rough behavior is [1]:
>>>> a) !Hugetlbfs:
>>>>
>>>>     1) Without MAP_NORESERVE *or* with memory overcommit under Linux
>>>>        disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
>>>>        accounting/reservation happens:
>>>>         For a file backed map
>>>>          SHARED or READ-only - 0 cost (the file is the map not swap)
>>>>          PRIVATE WRITABLE - size of mapping per instance
>>>>
>>>>         For an anonymous or /dev/zero map
>>>>          SHARED   - size of mapping
>>>>          PRIVATE READ-only - 0 cost (but of little use)
>>>>          PRIVATE WRITABLE - size of mapping per instance
>>>>
>>>>     2) With MAP_NORESERVE, no accounting/reservation happens.
>>>>
>>>> b) Hugetlbfs:
>>>>
>>>>     1) Without MAP_NORESERVE, huge pages are reserved.
>>>>
>>>>     2) With MAP_NORESERVE, no huge pages are reserved.
>>>>
>>>> Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
>>>> to configure it for !hugetlbfs globally; this toggle now allows
>>>> configuring it more fine-grained, not for the whole system.
>>>>
>>>> The target use case is virtio-mem, which dynamically exposes memory
>>>> inside a large, sparse memory area to the VM.
>>>
>>> Can you explain this use case in more real world terms, as I'm not
>>> understanding what a mgmt app would actually do with this in
>>> practice ?
>>
>> Let's consider huge pages for simplicity. Assume you have 128 free huge
>> pages in your hypervisor that you want to dynamically assign to VMs.
>>
>> Further assume you have two VMs running. A workflow could look like
>>
>> 1. Assign all huge pages to VM 0
>> 2. Reassign 64 huge pages to VM 1
>> 3. Reassign another 32 huge pages to VM 1
>> 4. Reasssign 16 huge pages to VM 0
>> 5. ...
>>
>> Basically what we're used to doing with "ordinary" memory.
> 
> What does this look like in terms of the memory backend configuration
> when you boot VM 0 and VM 1 ?
> 
> Are you saying that we boot both VMs with
> 
>     -object hostmem-memfd,size=128G,hugetlb=yes,hugetlbsize=1G,reserve=off
> 
> and then we have another property set on 'virtio-mem' to tell it
> how much/little of that 128 G, to actually give to the guest ?
> How do we change that at runtime ?

Roughly, yes. We only special-case memory backends managed by virtio-mem devices.

An advanced example for a single VM could look like this:

sudo build/qemu-system-x86_64 \
	... \
	-m 4G,maxmem=64G \
	-smp sockets=2,cores=2 \
	-object hostmem-memfd,id=bmem0,size=2G,hugetlb=yes,hugetlbsize=2M \
	-numa node,nodeid=0,cpus=0-1,memdev=bmem0 \
	-object hostmem-memfd,id=bmem1,size=2G,hugetlb=yes,hugetlbsize=2M \
	-numa node,nodeid=1,cpus=2-3,memdev=bmem1 \
	... \
	-object hostmem-memfd,id=mem0,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
	-device virtio-mem-pci,id=vmem0,memdev=mem0,node=0,requested-size=0G \
	-object hostmem-memfd,id=mem1,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
	-device virtio-mem-pci,id=vmem1,memdev=mem1,node=1,requested-size=0G \
	... \

We can request a size change by adjusting the "requested-size" property (e.g., via qom-set)
and observe the current size by reading the "size" property (e.g., qom-get). Think of
it as an advanced device-local memory balloon mixed with the concept of a memory hotplug.
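
For example (a sketch; the exact QOM path depends on how the device was
created, /machine/peripheral/vmem0 for the id used above), growing vmem0
to 16 GiB from the HMP monitor and then checking how much has actually
been plugged:

	(qemu) qom-set vmem0 requested-size 16G
	(qemu) qom-get vmem0 size

The same properties can be driven via the QMP qom-set/qom-get commands.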


I suggest taking a look at the libvirt virtio-mem implementation
-- I don't think it's upstream yet:

https://lkml.kernel.org/r/cover.1615982004.git.mprivozn@redhat.com

I'm CCing Michal -- I already gave him a note upfront about which additional
properties we might see for memory backends (e.g., reserve, managed-size)
and virtio-mem devices (e.g., iothread, prealloc, reserve, prot).

> 
> 
>> For that to work with virtio-mem, you'll have to disable reservation of huge
>> pages for the virtio-mem managed memory region.
>>
>> (prealloction of huge pages in virtio-mem to protect from user mistakes is a
>> separate work item)
>>
>> reserve=off will be the default for virtio-mem, and actual
>> reservation/preallcoation will be done within virtio-mem. There could be use
>> for "reserve=off" for virtio-balloon use cases as well, but I'd like to
>> exclude that from the discussion for now.
> 
> The hostmem backend defaults are indepdant of frontend usage, so when you
> say reserve=off is the default for virtio-mem, are you expecting the mgmt
> app like libvirt to specify that ?

Sorry, yes exactly; only for the memory backend managed by a virtio-mem device.
Daniel P. Berrangé May 4, 2021, 11:14 a.m. UTC | #5
On Tue, May 04, 2021 at 01:04:17PM +0200, David Hildenbrand wrote:
> On 04.05.21 12:32, Daniel P. Berrangé wrote:
> > On Tue, May 04, 2021 at 12:21:25PM +0200, David Hildenbrand wrote:
> > > On 04.05.21 12:09, Daniel P. Berrangé wrote:
> > > > On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
> > > > > Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
> > > > > effect on most shared mappings - except for hugetlbfs and anonymous memory.
> > > > > 
> > > > > Linux man page:
> > > > >     "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
> > > > >     space is reserved, one has the guarantee that it is possible to modify
> > > > >     the mapping. When swap space is not reserved one might get SIGSEGV
> > > > >     upon a write if no physical memory is available. See also the discussion
> > > > >     of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
> > > > >     2.6, this flag had effect only for private writable mappings."
> > > > > 
> > > > > Note that the "guarantee" part is wrong with memory overcommit in Linux.
> > > > > 
> > > > > Also, in Linux hugetlbfs is treated differently - we configure reservation
> > > > > of huge pages from the pool, not reservation of swap space (huge pages
> > > > > cannot be swapped).
> > > > > 
> > > > > The rough behavior is [1]:
> > > > > a) !Hugetlbfs:
> > > > > 
> > > > >     1) Without MAP_NORESERVE *or* with memory overcommit under Linux
> > > > >        disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
> > > > >        accounting/reservation happens:
> > > > >         For a file backed map
> > > > >          SHARED or READ-only - 0 cost (the file is the map not swap)
> > > > >          PRIVATE WRITABLE - size of mapping per instance
> > > > > 
> > > > >         For an anonymous or /dev/zero map
> > > > >          SHARED   - size of mapping
> > > > >          PRIVATE READ-only - 0 cost (but of little use)
> > > > >          PRIVATE WRITABLE - size of mapping per instance
> > > > > 
> > > > >     2) With MAP_NORESERVE, no accounting/reservation happens.
> > > > > 
> > > > > b) Hugetlbfs:
> > > > > 
> > > > >     1) Without MAP_NORESERVE, huge pages are reserved.
> > > > > 
> > > > >     2) With MAP_NORESERVE, no huge pages are reserved.
> > > > > 
> > > > > Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
> > > > > to configure it for !hugetlbfs globally; this toggle now allows
> > > > > configuring it more fine-grained, not for the whole system.
> > > > > 
> > > > > The target use case is virtio-mem, which dynamically exposes memory
> > > > > inside a large, sparse memory area to the VM.
> > > > 
> > > > Can you explain this use case in more real world terms, as I'm not
> > > > understanding what a mgmt app would actually do with this in
> > > > practice ?
> > > 
> > > Let's consider huge pages for simplicity. Assume you have 128 free huge
> > > pages in your hypervisor that you want to dynamically assign to VMs.
> > > 
> > > Further assume you have two VMs running. A workflow could look like
> > > 
> > > 1. Assign all huge pages to VM 0
> > > 2. Reassign 64 huge pages to VM 1
> > > 3. Reassign another 32 huge pages to VM 1
> > > 4. Reasssign 16 huge pages to VM 0
> > > 5. ...
> > > 
> > > Basically what we're used to doing with "ordinary" memory.
> > 
> > What does this look like in terms of the memory backend configuration
> > when you boot VM 0 and VM 1 ?
> > 
> > Are you saying that we boot both VMs with
> > 
> >     -object hostmem-memfd,size=128G,hugetlb=yes,hugetlbsize=1G,reserve=off
> > 
> > and then we have another property set on 'virtio-mem' to tell it
> > how much/little of that 128 G, to actually give to the guest ?
> > How do we change that at runtime ?
> 
> Roughly, yes. We only special-case memory backends managed by virtio-mem devices.
> 
> An advanced example for a single VM could look like this:
> 
> sudo build/qemu-system-x86_64 \
> 	... \
> 	-m 4G,maxmem=64G \
> 	-smp sockets=2,cores=2 \
> 	-object hostmem-memfd,id=bmem0,size=2G,hugetlb=yes,hugetlbsize=2M \
> 	-numa node,nodeid=0,cpus=0-1,memdev=bmem0 \
> 	-object hostmem-memfd,id=bmem1,size=2G,hugetlb=yes,hugetlbsize=2M \
> 	-numa node,nodeid=1,cpus=2-3,memdev=bmem1 \
> 	... \
> 	-object hostmem-memfd,id=mem0,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
> 	-device virtio-mem-pci,id=vmem0,memdev=mem0,node=0,requested-size=0G \
> 	-object hostmem-memfd,id=mem1,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
> 	-device virtio-mem-pci,id=vmem1,memdev=mem1,node=1,requested-size=0G \
> 	... \
> 
> We can request a size change by adjusting the "requested-size" property (e.g., via qom-set)
> and observe the current size by reading the "size" property (e.g., qom-get). Think of
> it as an advanced device-local memory balloon mixed with the concept of a memory hotplug.

Ok, so in this example, the initial RAM has normal reserve=on,
so if there are insufficient hugepages we'll see the startup failure IIUC.

What happens when we set qom-set requested-size=10GB at runtime, but there
are only 8 GB of hugepages left available ?

Regards,
Daniel
David Hildenbrand May 4, 2021, 11:28 a.m. UTC | #6
On 04.05.21 13:14, Daniel P. Berrangé wrote:
> On Tue, May 04, 2021 at 01:04:17PM +0200, David Hildenbrand wrote:
>> On 04.05.21 12:32, Daniel P. Berrangé wrote:
>>> On Tue, May 04, 2021 at 12:21:25PM +0200, David Hildenbrand wrote:
>>>> On 04.05.21 12:09, Daniel P. Berrangé wrote:
>>>>> On Wed, Apr 28, 2021 at 03:37:48PM +0200, David Hildenbrand wrote:
>>>>>> Let's support RAM_NORESERVE via MAP_NORESERVE on Linux. The flag has no
>>>>>> effect on most shared mappings - except for hugetlbfs and anonymous memory.
>>>>>>
>>>>>> Linux man page:
>>>>>>      "MAP_NORESERVE: Do not reserve swap space for this mapping. When swap
>>>>>>      space is reserved, one has the guarantee that it is possible to modify
>>>>>>      the mapping. When swap space is not reserved one might get SIGSEGV
>>>>>>      upon a write if no physical memory is available. See also the discussion
>>>>>>      of the file /proc/sys/vm/overcommit_memory in proc(5). In kernels before
>>>>>>      2.6, this flag had effect only for private writable mappings."
>>>>>>
>>>>>> Note that the "guarantee" part is wrong with memory overcommit in Linux.
>>>>>>
>>>>>> Also, in Linux hugetlbfs is treated differently - we configure reservation
>>>>>> of huge pages from the pool, not reservation of swap space (huge pages
>>>>>> cannot be swapped).
>>>>>>
>>>>>> The rough behavior is [1]:
>>>>>> a) !Hugetlbfs:
>>>>>>
>>>>>>      1) Without MAP_NORESERVE *or* with memory overcommit under Linux
>>>>>>         disabled ("/proc/sys/vm/overcommit_memory == 2"), the following
>>>>>>         accounting/reservation happens:
>>>>>>          For a file backed map
>>>>>>           SHARED or READ-only - 0 cost (the file is the map not swap)
>>>>>>           PRIVATE WRITABLE - size of mapping per instance
>>>>>>
>>>>>>          For an anonymous or /dev/zero map
>>>>>>           SHARED   - size of mapping
>>>>>>           PRIVATE READ-only - 0 cost (but of little use)
>>>>>>           PRIVATE WRITABLE - size of mapping per instance
>>>>>>
>>>>>>      2) With MAP_NORESERVE, no accounting/reservation happens.
>>>>>>
>>>>>> b) Hugetlbfs:
>>>>>>
>>>>>>      1) Without MAP_NORESERVE, huge pages are reserved.
>>>>>>
>>>>>>      2) With MAP_NORESERVE, no huge pages are reserved.
>>>>>>
>>>>>> Note: With "/proc/sys/vm/overcommit_memory == 0", we were already able
>>>>>> to configure it for !hugetlbfs globally; this toggle now allows
>>>>>> configuring it more fine-grained, not for the whole system.
>>>>>>
>>>>>> The target use case is virtio-mem, which dynamically exposes memory
>>>>>> inside a large, sparse memory area to the VM.
>>>>>
>>>>> Can you explain this use case in more real world terms, as I'm not
>>>>> understanding what a mgmt app would actually do with this in
>>>>> practice ?
>>>>
>>>> Let's consider huge pages for simplicity. Assume you have 128 free huge
>>>> pages in your hypervisor that you want to dynamically assign to VMs.
>>>>
>>>> Further assume you have two VMs running. A workflow could look like
>>>>
>>>> 1. Assign all huge pages to VM 0
>>>> 2. Reassign 64 huge pages to VM 1
>>>> 3. Reassign another 32 huge pages to VM 1
>>>> 4. Reasssign 16 huge pages to VM 0
>>>> 5. ...
>>>>
>>>> Basically what we're used to doing with "ordinary" memory.
>>>
>>> What does this look like in terms of the memory backend configuration
>>> when you boot VM 0 and VM 1 ?
>>>
>>> Are you saying that we boot both VMs with
>>>
>>>      -object hostmem-memfd,size=128G,hugetlb=yes,hugetlbsize=1G,reserve=off
>>>
>>> and then we have another property set on 'virtio-mem' to tell it
>>> how much/little of that 128 G, to actually give to the guest ?
>>> How do we change that at runtime ?
>>
>> Roughly, yes. We only special-case memory backends managed by virtio-mem devices.
>>
>> An advanced example for a single VM could look like this:
>>
>> sudo build/qemu-system-x86_64 \
>> 	... \
>> 	-m 4G,maxmem=64G \
>> 	-smp sockets=2,cores=2 \
>> 	-object hostmem-memfd,id=bmem0,size=2G,hugetlb=yes,hugetlbsize=2M \
>> 	-numa node,nodeid=0,cpus=0-1,memdev=bmem0 \
>> 	-object hostmem-memfd,id=bmem1,size=2G,hugetlb=yes,hugetlbsize=2M \
>> 	-numa node,nodeid=1,cpus=2-3,memdev=bmem1 \
>> 	... \
>> 	-object hostmem-memfd,id=mem0,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
>> 	-device virtio-mem-pci,id=vmem0,memdev=mem0,node=0,requested-size=0G \
>> 	-object hostmem-memfd,id=mem1,size=30G,hugetlb=yes,hugetlbsize=2M,reserve=off \
>> 	-device virtio-mem-pci,id=vmem1,memdev=mem1,node=1,requested-size=0G \
>> 	... \
>>
>> We can request a size change by adjusting the "requested-size" property (e.g., via qom-set)
>> and observe the current size by reading the "size" property (e.g., qom-get). Think of
>> it as an advanced device-local memory balloon mixed with the concept of a memory hotplug.
> 
> Ok, so in this example, the initial  GB of RAM has normal reserve=on
> so if there's insufficient hugepages we'll see the startup failure IIUC.

Yes, except in some NUMA configurations, as huge page reservation isn't 
NUMA-aware; even with reservation there are cases where we can run out 
of applicable free huge pages. Usually we end up preallocating all 
memory in the memory backend just so we're on the safe side.
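
For example, a sketch of such a setup (reusing the backend syntax from the
example above; the hostmem "prealloc" property enables that preallocation
at startup):

	-object hostmem-memfd,id=bmem0,size=2G,hugetlb=yes,hugetlbsize=2M,prealloc=yes \
	-numa node,nodeid=0,cpus=0-1,memdev=bmem0 \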

> 
> What happens when we set qom-set requested-size=10GB at runtime, but there
> are only 8 GB of hugepages left available ?

This is one of the user errors that will be tackled next by a dynamic 
preallocation (and/or reservation) inside virtio-mem.

Once the guest actually touches more than 8 GiB, we run out of free huge 
pages; if huge page overcommit isn't enabled (or huge page overcommit 
fails the allocation, which can happen easily), we'd essentially 
crash the VM.

Pretty much similar to messing up memory overcommit with "ordinary" 
memory and getting your VM killed by the OOM handler.

The solution is fairly easy: preallocate huge pages when resizing the 
virtio-mem device (making new huge pages available to the VM in this case).

In the simplest case this can be done using fallocate(). If you're 
interested in the dirty details where it's not that easy, take a look 
at my MADV_POPULATE_READ/MADV_POPULATE_WRITE kernel series [1]. Marek is 
working on handling virtio-mem devices via an iothread, so we can do 
preallocation easily "concurrently" while the VM is running, avoiding 
holding the BQL for a long time.

[1] https://lkml.kernel.org/r/20210419135443.12822-1-david@redhat.com
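
As a rough sketch of the simple case (illustrative only, not the actual
virtio-mem code), preallocating a newly plugged range of a hugetlb-backed
memfd with fallocate() and failing before the range is exposed to the
guest, assuming a 64-bit Linux host:

    #define _GNU_SOURCE
    #define _FILE_OFFSET_BITS 64
    #include <fcntl.h>        /* fallocate() */
    #include <stdio.h>
    #include <sys/mman.h>     /* memfd_create(), MFD_HUGETLB */
    #include <unistd.h>

    /* Preallocate [offset, offset + size) of a hugetlb-backed memfd. */
    static int prealloc_range(int fd, off_t offset, off_t size)
    {
        /* mode 0: allocate backing huge pages; fails if the pool is empty. */
        if (fallocate(fd, 0, offset, size) != 0) {
            perror("fallocate");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        const off_t region_size = 30ULL * 1024 * 1024 * 1024; /* sparse region */
        int fd = memfd_create("vmem0", MFD_CLOEXEC | MFD_HUGETLB);

        if (fd < 0 || ftruncate(fd, region_size) != 0) {
            perror("memfd_create/ftruncate");
            return 1;
        }
        /* "Plug" the first 2 GiB: make sure huge pages are actually there. */
        return prealloc_range(fd, 0, 2ULL * 1024 * 1024 * 1024) ? 1 : 0;
    }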

Patch

diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h
index af65b36698..0a7384d15c 100644
--- a/include/qemu/osdep.h
+++ b/include/qemu/osdep.h
@@ -195,6 +195,9 @@  extern "C" {
 #ifndef MAP_FIXED_NOREPLACE
 #define MAP_FIXED_NOREPLACE 0
 #endif
+#ifndef MAP_NORESERVE
+#define MAP_NORESERVE 0
+#endif
 #ifndef ENOMEDIUM
 #define ENOMEDIUM ENODEV
 #endif
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 1efb1d5193..ccc5985324 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -2230,6 +2230,7 @@  void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
                 flags = MAP_FIXED;
                 flags |= block->flags & RAM_SHARED ?
                          MAP_SHARED : MAP_PRIVATE;
+                flags |= block->flags & RAM_NORESERVE ? MAP_NORESERVE : 0;
                 if (block->fd >= 0) {
                     area = mmap(vaddr, length, PROT_READ | PROT_WRITE,
                                 flags, block->fd, offset);
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index d0cf4aaee5..838e286ce5 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -20,6 +20,7 @@ 
 #include "qemu/osdep.h"
 #include "qemu/mmap-alloc.h"
 #include "qemu/host-utils.h"
+#include "qemu/cutils.h"
 #include "qemu/error-report.h"
 
 #define HUGETLBFS_MAGIC       0x958458f6
@@ -83,6 +84,70 @@  size_t qemu_mempath_getpagesize(const char *mem_path)
     return qemu_real_host_page_size;
 }
 
+#define OVERCOMMIT_MEMORY_PATH "/proc/sys/vm/overcommit_memory"
+static bool map_noreserve_effective(int fd, uint32_t qemu_map_flags)
+{
+#if defined(__linux__)
+    const bool readonly = qemu_map_flags & QEMU_MAP_READONLY;
+    const bool shared = qemu_map_flags & QEMU_MAP_SHARED;
+    gchar *content = NULL;
+    const char *endptr;
+    unsigned int tmp;
+
+    /*
+     * hugetlb accounting is different from ordinary swap reservation:
+     * a) Hugetlb pages from the pool are reserved for both private and
+     *    shared mappings. For shared mappings, all mappers have to specify
+     *    MAP_NORESERVE.
+     * b) MAP_NORESERVE is not affected by /proc/sys/vm/overcommit_memory.
+     */
+    if (qemu_fd_getpagesize(fd) != qemu_real_host_page_size) {
+        return true;
+    }
+
+    /*
+     * Accountable mappings in the kernel that can be affected by MAP_NORESERVE
+     * are private writable mappings (see mm/mmap.c:accountable_mapping() in
+     * Linux). For all shared or readonly mappings, MAP_NORESERVE is always
+     * implicitly active -- no reservation; this includes shmem. The only
+     * exception is shared anonymous memory, it is accounted like private
+     * anonymous memory.
+     */
+    if (readonly || (shared && fd >= 0)) {
+        return true;
+    }
+
+    /*
+     * MAP_NORESERVE is globally ignored for applicable !hugetlb mappings when
+     * memory overcommit is set to "never". Sparse memory regions aren't really
+     * possible in this system configuration.
+     *
+     * Bail out now instead of silently committing way more memory than
+     * currently desired by the user.
+     */
+    if (g_file_get_contents(OVERCOMMIT_MEMORY_PATH, &content, NULL, NULL) &&
+        !qemu_strtoui(content, &endptr, 0, &tmp) &&
+        (!endptr || *endptr == '\n')) {
+        if (tmp == 2) {
+            error_report("Skipping reservation of swap space is not supported:"
+                         " \"" OVERCOMMIT_MEMORY_PATH "\" is \"2\"");
+            return false;
+        }
+        return true;
+    }
+    /* this interface has been around since Linux 2.6 */
+    error_report("Skipping reservation of swap space is not supported:"
+                 " Could not read: \"" OVERCOMMIT_MEMORY_PATH "\"");
+    return false;
+#endif
+    /*
+     * E.g., FreeBSD used to define MAP_NORESERVE, never implemented it,
+     * and removed it a while ago.
+     */
+    error_report("Skipping reservation of swap space is not supported");
+    return false;
+}
+
 /*
  * Reserve a new memory region of the requested size to be used for mapping
  * from the given fd (if any).
@@ -131,13 +196,13 @@  static void *mmap_activate(void *ptr, size_t size, int fd,
     int flags = MAP_FIXED;
     void *activated_ptr;
 
-    if (noreserve) {
-        error_report("Skipping reservation of swap space is not supported");
+    if (noreserve && !map_noreserve_effective(fd, qemu_map_flags)) {
         return MAP_FAILED;
     }
 
     flags |= fd == -1 ? MAP_ANONYMOUS : 0;
     flags |= shared ? MAP_SHARED : MAP_PRIVATE;
+    flags |= noreserve ? MAP_NORESERVE : 0;
     if (shared && sync) {
         map_sync_flags = MAP_SYNC | MAP_SHARED_VALIDATE;
     }