Message ID | 20160402153421.GA28788@intel.com
---|---
State | New
On 02 Apr 2016 08:34, H.J. Lu wrote:
> __libc_memalign in ld.so allocates one page at a time and tries to
> optimize consecutive __libc_memalign calls by hoping that the next
> mmap is after the current memory allocation.
>
> However, the kernel hands out mmap addresses in top-down order, so
> this optimization in practice never happens, with the result that we
> have more mmap calls and waste a bunch of space for each __libc_memalign.
>
> This change makes __libc_memalign mmap one page extra.  Worst case,
> the kernel never puts a backing page behind it, but best case it allows
> __libc_memalign to operate much much better.  For elf/tst-align --direct,
> it reduces the number of mmap calls from 12 to 9.
>
> --- a/elf/dl-minimal.c
> +++ b/elf/dl-minimal.c
> @@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
>              return NULL;
>            nup = GLRO(dl_pagesize);
>          }
> +      nup += GLRO(dl_pagesize);

should this be in the else case?  also the comment above this code needs
updating.

-mike
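[Editor's note: one way to read the "else case" question, sketched as a
hypothetical alternative hunk (this was not posted to the thread; it only
illustrates the suggested placement).  When nup rounds to zero and n is zero,
nup has already been bumped to a full spare page, so the extra page arguably
only needs to be added when nup was computed from a real request size:

       if (__glibc_unlikely (nup == 0))
         {
           if (n)
             return NULL;
           nup = GLRO(dl_pagesize);
         }
+      else
+        nup += GLRO(dl_pagesize);

The hunk actually posted follows below.]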
diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b..d6f87f1 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
 	    return NULL;
 	  nup = GLRO(dl_pagesize);
 	}
+      nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
 		     MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
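[Editor's note: for readers outside the thread, here is a small standalone
model of the bump allocator being patched.  It is a simplification for
illustration only: minimal_memalign, the globals, and main are made up for
this sketch, and the real code in elf/dl-minimal.c differs in detail.

/* Standalone model of the dl-minimal.c bump allocator; not the glibc code.  */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static char *alloc_ptr;   /* next free byte */
static char *alloc_end;   /* end of the current mapping */

static void *
minimal_memalign (size_t align, size_t n)
{
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);

  /* Round the bump pointer up to the requested alignment.  */
  alloc_ptr = (char *) (((uintptr_t) alloc_ptr + align - 1) & ~(align - 1));

  if (alloc_end == NULL || alloc_ptr + n > alloc_end)
    {
      /* Not enough room left: map enough whole pages for this request.  */
      size_t nup = (n + pagesize - 1) & ~(pagesize - 1);
      if (nup == 0)
        nup = pagesize;

      /* The patch under review adds one extra page here, betting that the
         next call can keep allocating out of the same mapping.  */
      nup += pagesize;

      char *page = mmap (NULL, nup, PROT_READ | PROT_WRITE,
                         MAP_ANON | MAP_PRIVATE, -1, 0);
      if (page == MAP_FAILED)
        return NULL;

      /* Pre-existing "optimization": if the new mapping starts exactly where
         the old one ended, keep the old bump pointer so the tail of the old
         mapping is not wasted.  With top-down mmap this rarely fires, which
         is the point of the commit message above.  */
      if (page != alloc_end)
        alloc_ptr = page;
      alloc_end = page + nup;
    }

  void *result = alloc_ptr;
  alloc_ptr += n;
  return result;
}

int
main (void)
{
  /* Two small allocations: with the extra page the second one is served out
     of the same mapping as the first instead of forcing another mmap.  */
  void *a = minimal_memalign (16, 100);
  void *b = minimal_memalign (16, 100);
  printf ("a=%p b=%p\n", a, b);
  return 0;
}

With the extra page mapped, later calls are usually satisfied from the tail
of the previous mapping rather than triggering another mmap, which is the
effect the patch is after.]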