
Improve SLOF_alloc_mem_aligned()

Message ID 1473067703-10746-1-git-send-email-thuth@redhat.com
State Accepted

Commit Message

Thomas Huth Sept. 5, 2016, 9:28 a.m. UTC
When loading a file via the spapr-vlan network interface, SLOF
currently leaks quite a lot of memory, about 12kB each time.
This happens mainly because veth_init() uses calls like
SLOF_alloc_mem_aligned(8192, 4096) to get its buffers, and the
extra space allocated for the alignment is never given back later.
An easy way to mitigate this is to improve SLOF_alloc_mem_aligned()
a little bit: the memory we get from SLOF_alloc_mem() is usually
aligned quite well already, thanks to the buddy allocator in SLOF,
so we can first try to get a memory block of the exact size and
only do the alignment dance if that first allocation is not
aligned yet.

Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 slof/helpers.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
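
For illustration, this is roughly what the old alignment dance computes
for the veth_init() call mentioned above. The address is made up, and the
example assumes, purely for the sake of illustration, that SLOF_alloc_mem()
happens to return a block that is not 4096-byte aligned:

	/* Hypothetical example of the pre-patch behaviour of
	 * SLOF_alloc_mem_aligned(8192, 4096) -- not part of the patch. */
	unsigned long addr = (unsigned long)SLOF_alloc_mem(8192 + 4096 - 1);
	                           /* e.g. addr = 0x50001200 (unaligned) */
	addr = addr + 4096 - 1;    /*      addr = 0x500021ff             */
	addr = addr & ~(4096 - 1); /*      addr = 0x50002000 (aligned)   */
	/* The caller only ever sees 0x50002000, so the 0xe00 bytes of
	 * padding in front of it (and the slack behind 0x50002000 + 8192)
	 * can never be handed back to SLOF_free_mem() later. */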

Comments

Alexey Kardashevskiy Sept. 14, 2016, 5:39 a.m. UTC | #1
On 05/09/16 19:28, Thomas Huth wrote:
> When loading a file via the spapr-vlan network interface, SLOF
> currently leaks quite a lot of memory, about 12kB each time.
> This happens mainly because veth_init() uses calls like
> SLOF_alloc_mem_aligned(8192, 4096) to get its buffers, and the
> extra space allocated for the alignment is never given back later.
> An easy way to mitigate this is to improve SLOF_alloc_mem_aligned()
> a little bit: the memory we get from SLOF_alloc_mem() is usually
> aligned quite well already, thanks to the buddy allocator in SLOF,
> so we can first try to get a memory block of the exact size and
> only do the alignment dance if that first allocation is not
> aligned yet.
> 
> Signed-off-by: Thomas Huth <thuth@redhat.com>

I do not like it but since the allocated address does not store the size
somewhere nearby, I'll apply this anyway, thanks.


> ---
>  slof/helpers.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/slof/helpers.c b/slof/helpers.c
> index 48c34a6..b049141 100644
> --- a/slof/helpers.c
> +++ b/slof/helpers.c
> @@ -69,9 +69,14 @@ void *SLOF_alloc_mem(long size)
>  
>  void *SLOF_alloc_mem_aligned(long size, long align)
>  {
> -	unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
> -	addr = addr + align - 1;
> -	addr = addr & ~(align - 1);
> +	unsigned long addr = (unsigned long)SLOF_alloc_mem(size);
> +
> +	if (addr % align) {
> +		SLOF_free_mem((void *)addr, size);
> +		addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
> +		addr = addr + align - 1;
> +		addr = addr & ~(align - 1);
> +	}
>  
>  	return (void *)addr;
>  }
>
Alexey Kardashevskiy Sept. 15, 2016, 3:49 a.m. UTC | #2
On 14/09/16 15:39, Alexey Kardashevskiy wrote:
> On 05/09/16 19:28, Thomas Huth wrote:
>> When loading a file via the spapr-vlan network interface, SLOF
>> currently leaks quite a lot of memory, about 12kB each time.
>> This happens mainly because veth_init() uses calls like
>> SLOF_alloc_mem_aligned(8192, 4096) to get its buffers, and the
>> extra space allocated for the alignment is never given back later.
>> An easy way to mitigate this is to improve SLOF_alloc_mem_aligned()
>> a little bit: the memory we get from SLOF_alloc_mem() is usually
>> aligned quite well already, thanks to the buddy allocator in SLOF,
>> so we can first try to get a memory block of the exact size and
>> only do the alignment dance if that first allocation is not
>> aligned yet.
>>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
> 
> I do not like it but since the allocated address does not store the size
> somewhere nearby, I'll apply this anyway, thanks.

I asked for advice, and we could actually always allocate (size+align-1)
here and then, if the address is not aligned, free the chunk before the
aligned address; if it is already aligned, free the chunk after addr+size.
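
A minimal sketch of that idea (the function name is made up, and it
assumes SLOF_free_mem() can take back a sub-range of an earlier
allocation) could look like this:

	void *SLOF_alloc_mem_aligned_sketch(long size, long align)
	{
		unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
		unsigned long aligned = (addr + align - 1) & ~(align - 1);

		if (aligned != addr)
			/* not aligned: free the chunk before the aligned address */
			SLOF_free_mem((void *)addr, aligned - addr);
		else
			/* already aligned: free the chunk after addr + size */
			SLOF_free_mem((void *)(addr + size), align - 1);

		return (void *)aligned;
	}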



> 
> 
>> ---
>>  slof/helpers.c | 11 ++++++++---
>>  1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/slof/helpers.c b/slof/helpers.c
>> index 48c34a6..b049141 100644
>> --- a/slof/helpers.c
>> +++ b/slof/helpers.c
>> @@ -69,9 +69,14 @@ void *SLOF_alloc_mem(long size)
>>  
>>  void *SLOF_alloc_mem_aligned(long size, long align)
>>  {
>> -	unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
>> -	addr = addr + align - 1;
>> -	addr = addr & ~(align - 1);
>> +	unsigned long addr = (unsigned long)SLOF_alloc_mem(size);
>> +
>> +	if (addr % align) {
>> +		SLOF_free_mem((void *)addr, size);
>> +		addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
>> +		addr = addr + align - 1;
>> +		addr = addr & ~(align - 1);
>> +	}
>>  
>>  	return (void *)addr;
>>  }
>>
> 
>
Thomas Huth Sept. 15, 2016, 7:04 a.m. UTC | #3
On 15.09.2016 05:49, Alexey Kardashevskiy wrote:
> On 14/09/16 15:39, Alexey Kardashevskiy wrote:
>> On 05/09/16 19:28, Thomas Huth wrote:
>>> When loading a file via the spapr-vlan network interface, SLOF
>>> currently leaks quite a lot of memory, about 12kB each time.
>>> This happens mainly because veth_init() uses calls like
>>> SLOF_alloc_mem_aligned(8192, 4096) to get its buffers, and the
>>> extra space allocated for the alignment is never given back later.
>>> An easy way to mitigate this is to improve SLOF_alloc_mem_aligned()
>>> a little bit: the memory we get from SLOF_alloc_mem() is usually
>>> aligned quite well already, thanks to the buddy allocator in SLOF,
>>> so we can first try to get a memory block of the exact size and
>>> only do the alignment dance if that first allocation is not
>>> aligned yet.
>>>
>>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>>
>> I do not like it but since the allocated address does not store the size
>> somewhere nearby, I'll apply this anyway, thanks.
> 
> I asked for advice and we could actually always try allocating
> (size+align-1) here, and then if the address is not aligned - free the
> chunk before aligned address, if aligned - free the chunk after addr+size.

That could work ... but you should also be prepared to free chunks on
both ends of the allocation, e.g. when somebody asks for a region where
"align" is much bigger than "size"...
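
A sketch that trims both ends, under the same assumption that
SLOF_free_mem() can take back arbitrary sub-ranges of a previous
allocation (the name is again made up):

	void *SLOF_alloc_mem_aligned_trim(long size, long align)
	{
		unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
		unsigned long end = addr + size + align - 1;  /* end of the block */
		unsigned long aligned = (addr + align - 1) & ~(align - 1);

		if (aligned > addr)          /* leading padding */
			SLOF_free_mem((void *)addr, aligned - addr);
		if (aligned + size < end)    /* trailing padding */
			SLOF_free_mem((void *)(aligned + size), end - aligned - size);

		return (void *)aligned;
	}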

 Thomas

Patch

diff --git a/slof/helpers.c b/slof/helpers.c
index 48c34a6..b049141 100644
--- a/slof/helpers.c
+++ b/slof/helpers.c
@@ -69,9 +69,14 @@  void *SLOF_alloc_mem(long size)
 
 void *SLOF_alloc_mem_aligned(long size, long align)
 {
-	unsigned long addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
-	addr = addr + align - 1;
-	addr = addr & ~(align - 1);
+	unsigned long addr = (unsigned long)SLOF_alloc_mem(size);
+
+	if (addr % align) {
+		SLOF_free_mem((void *)addr, size);
+		addr = (unsigned long)SLOF_alloc_mem(size + align - 1);
+		addr = addr + align - 1;
+		addr = addr & ~(align - 1);
+	}
 
 	return (void *)addr;
 }