From patchwork Thu Aug 8 15:13:26 2013
X-Patchwork-Submitter: Luiz Capitulino
X-Patchwork-Id: 265752
From: Luiz Capitulino
To: qemu-devel@nongnu.org
Cc: aliguori@us.ibm.com
Date: Thu, 8 Aug 2013 11:13:26 -0400
Message-Id: <1375974809-1757-2-git-send-email-lcapitulino@redhat.com>
In-Reply-To: <1375974809-1757-1-git-send-email-lcapitulino@redhat.com>
References: <1375974809-1757-1-git-send-email-lcapitulino@redhat.com>
Subject: [Qemu-devel] [PULL 1/4] dump: clamp guest-provided mapping lengths to ramblock sizes

From: Laszlo Ersek

Even a trusted & clean-state guest can map more memory than what it was
given. Since the vmcore contains RAMBlocks, mapping sizes should be
clamped to RAMBlock sizes. Otherwise such oversized mappings can exceed
the entire file size, and ELF parsers might refuse even the valid
portion of the PT_LOAD entry.
Related RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=981582

Signed-off-by: Laszlo Ersek
Signed-off-by: Luiz Capitulino
---
 dump.c | 65 ++++++++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 40 insertions(+), 25 deletions(-)

diff --git a/dump.c b/dump.c
index 6a3a72a..9a2f939 100644
--- a/dump.c
+++ b/dump.c
@@ -187,7 +187,8 @@ static int write_elf32_header(DumpState *s)
 }
 
 static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
-                            int phdr_index, hwaddr offset)
+                            int phdr_index, hwaddr offset,
+                            hwaddr filesz)
 {
     Elf64_Phdr phdr;
     int ret;
@@ -197,15 +198,12 @@ static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
     phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
     phdr.p_offset = cpu_convert_to_target64(offset, endian);
     phdr.p_paddr = cpu_convert_to_target64(memory_mapping->phys_addr, endian);
-    if (offset == -1) {
-        /* When the memory is not stored into vmcore, offset will be -1 */
-        phdr.p_filesz = 0;
-    } else {
-        phdr.p_filesz = cpu_convert_to_target64(memory_mapping->length, endian);
-    }
+    phdr.p_filesz = cpu_convert_to_target64(filesz, endian);
     phdr.p_memsz = cpu_convert_to_target64(memory_mapping->length, endian);
     phdr.p_vaddr = cpu_convert_to_target64(memory_mapping->virt_addr, endian);
 
+    assert(memory_mapping->length >= filesz);
+
     ret = fd_write_vmcore(&phdr, sizeof(Elf64_Phdr), s);
     if (ret < 0) {
         dump_error(s, "dump: failed to write program header table.\n");
@@ -216,7 +214,8 @@ static int write_elf64_load(DumpState *s, MemoryMapping *memory_mapping,
 }
 
 static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
-                            int phdr_index, hwaddr offset)
+                            int phdr_index, hwaddr offset,
+                            hwaddr filesz)
 {
     Elf32_Phdr phdr;
     int ret;
@@ -226,15 +225,12 @@ static int write_elf32_load(DumpState *s, MemoryMapping *memory_mapping,
     phdr.p_type = cpu_convert_to_target32(PT_LOAD, endian);
     phdr.p_offset = cpu_convert_to_target32(offset, endian);
     phdr.p_paddr = cpu_convert_to_target32(memory_mapping->phys_addr, endian);
-    if (offset == -1) {
-        /* When the memory is not stored into vmcore, offset will be -1 */
-        phdr.p_filesz = 0;
-    } else {
-        phdr.p_filesz = cpu_convert_to_target32(memory_mapping->length, endian);
-    }
+    phdr.p_filesz = cpu_convert_to_target32(filesz, endian);
     phdr.p_memsz = cpu_convert_to_target32(memory_mapping->length, endian);
     phdr.p_vaddr = cpu_convert_to_target32(memory_mapping->virt_addr, endian);
 
+    assert(memory_mapping->length >= filesz);
+
     ret = fd_write_vmcore(&phdr, sizeof(Elf32_Phdr), s);
     if (ret < 0) {
         dump_error(s, "dump: failed to write program header table.\n");
@@ -418,17 +414,24 @@ static int write_memory(DumpState *s, RAMBlock *block, ram_addr_t start,
     return 0;
 }
 
-/* get the memory's offset in the vmcore */
-static hwaddr get_offset(hwaddr phys_addr,
-                         DumpState *s)
+/* get the memory's offset and size in the vmcore */
+static void get_offset_range(hwaddr phys_addr,
+                             ram_addr_t mapping_length,
+                             DumpState *s,
+                             hwaddr *p_offset,
+                             hwaddr *p_filesz)
 {
     RAMBlock *block;
     hwaddr offset = s->memory_offset;
     int64_t size_in_block, start;
 
+    /* When the memory is not stored into vmcore, offset will be -1 */
+    *p_offset = -1;
+    *p_filesz = 0;
+
     if (s->has_filter) {
         if (phys_addr < s->begin || phys_addr >= s->begin + s->length) {
-            return -1;
+            return;
         }
     }
 
@@ -457,18 +460,26 @@ static hwaddr get_offset(hwaddr phys_addr,
         }
 
         if (phys_addr >= start && phys_addr < start + size_in_block) {
-            return phys_addr - start + offset;
+            *p_offset = phys_addr - start + offset;
+
+            /* The offset range mapped from the vmcore file must not spill over
+             * the RAMBlock, clamp it. The rest of the mapping will be
+             * zero-filled in memory at load time; see
+             * .
+             */
+            *p_filesz = phys_addr + mapping_length <= start + size_in_block ?
+                        mapping_length :
+                        size_in_block - (phys_addr - start);
+            return;
         }
 
         offset += size_in_block;
     }
-
-    return -1;
 }
 
 static int write_elf_loads(DumpState *s)
 {
-    hwaddr offset;
+    hwaddr offset, filesz;
     MemoryMapping *memory_mapping;
     uint32_t phdr_index = 1;
     int ret;
@@ -481,11 +492,15 @@ static int write_elf_loads(DumpState *s)
     }
 
     QTAILQ_FOREACH(memory_mapping, &s->list.head, next) {
-        offset = get_offset(memory_mapping->phys_addr, s);
+        get_offset_range(memory_mapping->phys_addr,
+                         memory_mapping->length,
+                         s, &offset, &filesz);
         if (s->dump_info.d_class == ELFCLASS64) {
-            ret = write_elf64_load(s, memory_mapping, phdr_index++, offset);
+            ret = write_elf64_load(s, memory_mapping, phdr_index++, offset,
+                                   filesz);
        } else {
-            ret = write_elf32_load(s, memory_mapping, phdr_index++, offset);
+            ret = write_elf32_load(s, memory_mapping, phdr_index++, offset,
+                                   filesz);
         }
 
         if (ret < 0) {
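
For readers following the change, the program below restates the clamping
idea that the new get_offset_range() implements, as a small standalone C
sketch. The struct, function names and values are hypothetical and chosen
for illustration only; this is not QEMU code or its API.

/* Minimal sketch: clamp a PT_LOAD p_filesz to the RAM block that backs it. */
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

typedef struct {
    uint64_t start;       /* guest-physical start of the RAM block      */
    uint64_t size;        /* bytes of guest RAM actually in the vmcore  */
    uint64_t file_offset; /* where the block's bytes sit in the vmcore  */
} RamBlockInfo;

/* Compute p_offset/p_filesz for a guest mapping, clamping p_filesz to the
 * RAM block so it never points past the data really present in the file.
 * p_memsz would still be the mapping's full length. */
static void offset_range(uint64_t phys_addr, uint64_t length,
                         const RamBlockInfo *blk,
                         uint64_t *p_offset, uint64_t *p_filesz)
{
    /* mapping start not backed by this block: nothing in the file */
    if (phys_addr < blk->start || phys_addr >= blk->start + blk->size) {
        *p_offset = (uint64_t)-1;
        *p_filesz = 0;
        return;
    }

    *p_offset = blk->file_offset + (phys_addr - blk->start);

    /* clamp: the tail beyond the block is zero-filled at load time
     * (p_memsz > p_filesz); it is never read from the vmcore */
    if (phys_addr + length <= blk->start + blk->size) {
        *p_filesz = length;
    } else {
        *p_filesz = blk->size - (phys_addr - blk->start);
    }
    assert(*p_filesz <= length);
}

int main(void)
{
    /* a 2 MiB block at 1 MiB guest-physical, stored at file offset 0x2000 */
    RamBlockInfo blk = { 0x100000, 0x200000, 0x2000 };
    uint64_t off, filesz;

    /* guest maps 1 MiB starting 0.5 MiB before the block's end: only the
     * in-block half may be claimed as file-backed (filesz = 0x80000) */
    offset_range(0x280000, 0x100000, &blk, &off, &filesz);
    printf("p_offset=0x%" PRIx64 " p_filesz=0x%" PRIx64 "\n", off, filesz);
    return 0;
}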