| Message ID | 20200206144055.16981-1-xiezhipeng1@huawei.com |
|---|---|
| State | New |
| Series | CVE-2019-1010023: add check for load commands mapping [#22851] |
* Zhipeng Xie:

> Glibc assume the following constraint in ELF specification.
>
> | PT_LOAD
> |
> | […] Loadable segment entries in the program header table appear in
> | ascending order, sorted on the p_vaddr member.
>
> http://www.sco.com/developers/gabi/latest/ch5.pheader.html
>
> Some check needed to fix vulnerability in load commands mapping
> reported by
>
> https://sourceware.org/bugzilla/show_bug.cgi?id=22851

Sorry, I don't think this fixes the issue because even with increasing,
non-overlapping load segments, a crafted binary can eventually map over
the dynamic loader, as explained here:

<https://www.sourceware.org/ml/libc-alpha/2019-12/msg00634.html>

I failed to consider in that post that we also need to protect the
dynamic loader heap, which is not reflected in the load segments of the
dynamic loader.

We could use MAP_FIXED_NOREPLACE in ldd mode if the kernel supports it.
Or it's possible that in ldd mode, we can completely drop MAP_FIXED
without ill effects because we do not depend on absolute addresses
anywhere (but that would need some experimentation).

Using MAP_FIXED_NOREPLACE in general is problematic because there might
be legacy binaries with overlapping load segments which will no longer
load afterwards.

Thanks,
Florian
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index ac9f09ab4c..8353fcd730 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -33,6 +33,7 @@ _dl_map_segments (struct link_map *l, int fd,
                   struct link_map *loader)
 {
   const struct loadcmd *c = loadcmds;
+  ElfW(Addr) l_map_end_aligned;

   if (__glibc_likely (type == ET_DYN))
     {
@@ -61,6 +62,8 @@ _dl_map_segments (struct link_map *l, int fd,
         return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;

       l->l_map_end = l->l_map_start + maplength;
+      l_map_end_aligned = ((l->l_map_end + GLRO(dl_pagesize) - 1)
+                           & ~(GLRO(dl_pagesize) - 1));
       l->l_addr = l->l_map_start - c->mapstart;

       if (has_holes)
@@ -85,10 +88,16 @@ _dl_map_segments (struct link_map *l, int fd,
   /* Remember which part of the address space this object uses.  */
   l->l_map_start = c->mapstart + l->l_addr;
   l->l_map_end = l->l_map_start + maplength;
+  l_map_end_aligned = ((l->l_map_end + GLRO(dl_pagesize) - 1)
+                       & ~(GLRO(dl_pagesize) - 1));
   l->l_contiguous = !has_holes;

   while (c < &loadcmds[nloadcmds])
     {
+      if ((l->l_addr + c->mapend) > l_map_end_aligned ||
+          (l->l_addr + c->mapstart) < l->l_map_start)
+        return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
+
       if (c->mapend > c->mapstart
           /* Map the segment contents from the file.  */
           && (__mmap ((void *) (l->l_addr + c->mapstart),
Glibc assumes the following constraint from the ELF specification:

| PT_LOAD
|
| […] Loadable segment entries in the program header table appear in
| ascending order, sorted on the p_vaddr member.

http://www.sco.com/developers/gabi/latest/ch5.pheader.html

A check is needed to fix the vulnerability in load command mapping
reported at

https://sourceware.org/bugzilla/show_bug.cgi?id=22851

Signed-off-by: Zhipeng Xie <xiezhipeng1@huawei.com>
---
 elf/dl-map-segments.h | 9 +++++++++
 1 file changed, 9 insertions(+)