
Performance regression using KVM/ARM

Message ID 57192EED.2040501@suse.de
State New

Commit Message

Alexander Graf April 21, 2016, 7:50 p.m. UTC
On 21.04.16 18:23, Christoffer Dall wrote:
> Hi,
> 
> Commit 9fac18f (oslib: allocate PROT_NONE pages on top of RAM,
> 2015-09-10) had the unfortunate side effect that memory slots registered
> with KVM no longer contain a userspace address that is aligned to a 2M
> boundary, causing the use of THP to fail in the kernel.
> 
> I fail to see where in the QEMU code we should be asking for a 2M
> alignment of our memory region.  Can someone help point me to the
> right place to fix this, or suggest a patch?
> 
> This causes a performance regression of about 62% for hackbench on
> KVM/ARM, compared to the workload running with THP.
> 
> We have verified that this is indeed the cause of the failure by adding
> various prints to QEMU and the kernel, but unfortunately my QEMU
> knowledge is not sufficient for me to fix it myself.
> 
> Any help would be much appreciated!

The code changed quite heavily since I last looked at it, but could you
please try whether the (untested) patch below makes a difference?


Alex

Comments

Christoffer Dall April 22, 2016, 10:01 a.m. UTC | #1
On Thu, Apr 21, 2016 at 09:50:05PM +0200, Alexander Graf wrote:
> 
> 
> On 21.04.16 18:23, Christoffer Dall wrote:
> > Hi,
> > 
> > Commit 9fac18f (oslib: allocate PROT_NONE pages on top of RAM,
> > 2015-09-10) had the unfortunate side effect that memory slots registered
> > with KVM no longer contain a userspace address that is aligned to a 2M
> > boundary, causing the use of THP to fail in the kernel.
> > 
> > I fail to see where in the QEMU code we should be asking for a 2M
> > alignment of our memory region.  Can someone help point me to the
> > right place to fix this, or suggest a patch?
> > 
> > This causes a performance regression of about 62% for hackbench on
> > KVM/ARM, compared to the workload running with THP.
> > 
> > We have verified that this is indeed the cause of the failure by adding
> > various prints to QEMU and the kernel, but unfortunately my QEMU
> > knowledge is not sufficient for me to fix it myself.
> > 
> > Any help would be much appreciated!
> 
> The code changed quite heavily since I last looked at it, but could you
> please try whether the (untested) patch below makes a difference?
> 
> 
Unfortunately this doesn't make any difference.  It feels to me like
we're missing a 2M alignment specification in QEMU somewhere, but I can't
properly understand the links between the actual allocation, registering
mem slots with the KVM part of QEMU, and actually setting up KVM user
memory regions.

What has to happen is that the resulting struct
kvm_userspace_memory_region has the same offset from a 2M boundary (the
huge page size) in its ->guest_phys_addr and ->userspace_addr fields.
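
Concretely, the condition the kernel needs could be sketched like this -
purely illustrative, the constant and helper names below are made up and
not part of QEMU or the kernel:

#include <stdbool.h>
#include <stdint.h>
#include <linux/kvm.h>      /* struct kvm_userspace_memory_region */

/* Assumed huge page size: 2M THP with 4K base pages on ARM. */
#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)

/*
 * Stage-2 huge pages (and thus THP) can only be used for a memory slot
 * when the guest physical address and the QEMU userspace address have
 * the same offset within a huge page, i.e. are congruent modulo 2M.
 */
static bool slot_can_use_thp(const struct kvm_userspace_memory_region *mem)
{
    return (mem->guest_phys_addr & (HUGE_PAGE_SIZE - 1)) ==
           (mem->userspace_addr  & (HUGE_PAGE_SIZE - 1));
}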

Thanks,
-Christoffer
Alexander Graf April 22, 2016, 10:06 a.m. UTC | #2
On 04/22/2016 12:01 PM, Christoffer Dall wrote:
> On Thu, Apr 21, 2016 at 09:50:05PM +0200, Alexander Graf wrote:
>>
>> On 21.04.16 18:23, Christoffer Dall wrote:
>>> Hi,
>>>
>>> Commit 9fac18f (oslib: allocate PROT_NONE pages on top of RAM,
>>> 2015-09-10) had the unfortunate side effect that memory slots registered
>>> with KVM no longer contain a userspace address that is aligned to a 2M
>>> boundary, causing the use of THP to fail in the kernel.
>>>
>>> I fail to see where in the QEMU code we should be asking for a 2M
>>> alignment of our memory region.  Can someone help point me to the
>>> right place to fix this, or suggest a patch?
>>>
>>> This causes a performance regression of about 62% for hackbench on
>>> KVM/ARM, compared to the workload running with THP.
>>>
>>> We have verified that this is indeed the cause of the failure by adding
>>> various prints to QEMU and the kernel, but unfortunately my QEMU
>>> knowledge is not sufficient for me to fix it myself.
>>>
>>> Any help would be much appreciated!
>> The code changed quite heavily since I last looked at it, but could you
>> please try whether the (untested) patch below makes a difference?
>>
>>
> Unfortunately this doesn't make any difference.  It feels to me like
> we're missing a 2M alignment specification in QEMU somewhere, but I can't
> properly understand the links between the actual allocation, registering
> mem slots with the KVM part of QEMU, and actually setting up KVM user
> memory regions.
>
> What has to happen is that the resulting struct
> kvm_userspace_memory_region has the same offset from a 2M boundary (the
> huge page size) in its ->guest_phys_addr and ->userspace_addr fields.

Well, I would expect that the guest address space is always aligned to a
pretty big boundary - definitely at least 2MB - so we're safe there.

That means we only need to align the qemu virtual address. There used to 
be a memalign() call for that, but it got replaced with direct mmap() 
and then a lot of code changed on top. Looking at the logs, I'm sure 
Paolo knows the answer though :)
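
If we do end up having to align it by hand again, the usual trick with
plain mmap() is to over-reserve and trim - roughly like this untested
sketch (not the actual qemu_ram_mmap() code, which per the commit
mentioned above also leaves a PROT_NONE guard area and handles fd-backed
mappings):

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Untested illustration: reserve size + align bytes, return the first
 * 'align'-aligned address inside the reservation, and give the unused
 * head and tail back to the kernel. */
static void *mmap_aligned(size_t size, size_t align)
{
    size_t total = size + align;
    void *ptr = mmap(NULL, total, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    uintptr_t start, aligned;

    if (ptr == MAP_FAILED) {
        return NULL;
    }

    start = (uintptr_t)ptr;
    aligned = (start + align - 1) & ~((uintptr_t)align - 1);

    if (aligned > start) {
        munmap(ptr, aligned - start);                 /* unused head */
    }
    if (start + total > aligned + size) {
        munmap((void *)(aligned + size),
               start + total - (aligned + size));     /* unused tail */
    }
    return (void *)aligned;
}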


Alex

Patch

diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index 0b4cc7f..24e73b1 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -36,7 +36,7 @@  size_t qemu_fd_getpagesize(int fd)
     }
 #endif

-    return getpagesize();
+    return 2 * 1024 * 1024;
 }

 void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)