From patchwork Thu May 23 17:44:42 2013
From: Corey Bryant
X-Patchwork-Submitter: Corey Bryant
X-Patchwork-Id: 245992
To: qemu-devel@nongnu.org
Date: Thu, 23 May 2013 13:44:42 -0400
Message-Id: <1369331087-22345-3-git-send-email-coreyb@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1369331087-22345-1-git-send-email-coreyb@linux.vnet.ibm.com>
References: <1369331087-22345-1-git-send-email-coreyb@linux.vnet.ibm.com>
Cc: kwolf@redhat.com, aliguori@us.ibm.com, stefanb@linux.vnet.ibm.com,
    Corey Bryant, mdroth@linux.vnet.ibm.com, lcapitulino@redhat.com,
    jschopp@linux.vnet.ibm.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 2/7] vnvram: VNVRAM in-memory support

Provides support for in-memory VNVRAM entries.
The in-memory entries are used for fast access to entry data such as
the current or max size of an entry and the disk offset where an
entry's binary blob data is stored.

Signed-off-by: Corey Bryant
---
 vnvram.c | 196 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 195 insertions(+), 1 deletions(-)

diff --git a/vnvram.c b/vnvram.c
index e467198..37b7070 100644
--- a/vnvram.c
+++ b/vnvram.c
@@ -13,6 +13,7 @@
 
 #include "vnvram.h"
 #include "block/block.h"
+#include "monitor/monitor.h"
 
 /*
 #define VNVRAM_DEBUG
@@ -69,6 +70,14 @@ typedef struct VNVRAMDrvEntry {
 
 static int vnvram_drv_entry_create(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
 static int vnvram_drv_entry_update(VNVRAM *, VNVRAMEntry *, uint64_t, uint32_t);
+static int vnvram_register_entry_internal(VNVRAM *, const VNVRAMEntryName *,
+                                          uint64_t, uint32_t, uint32_t);
+static VNVRAMEntry *vnvram_find_entry(VNVRAM *, const VNVRAMEntryName *);
+static uint64_t vnvram_get_size_kb(VNVRAM *);
+
+/* Round a value up to the next SIZE */
+#define ROUNDUP(VAL, SIZE) \
+    (((VAL)+(SIZE)-1) & ~((SIZE)-1))
 
 /*
  * Macros for finding entries and their drive offsets
@@ -154,7 +163,8 @@ static int vnvram_drv_adjust_size(VNVRAM *vnvram)
     int rc = 0;
     int64_t needed_size;
 
-    needed_size = 0;
+    /* qcow2 size needs to be multiple of 512 */
+    needed_size = vnvram_get_size_kb(vnvram) * 1024;
 
     if (bdrv_getlength(vnvram->bds) < needed_size) {
         rc = bdrv_truncate(vnvram->bds, needed_size);
@@ -485,3 +495,187 @@ static bool vnvram_drv_hdr_is_valid(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
 
     return true;
 }
+
+/************************ VNVRAM in-memory ***************************/
+/* High-level VNVRAM functions that work with in-memory entries.     */
+/*********************************************************************/
+
+/*
+ * Check if the specified vnvram has been created
+ */
+static bool vnvram_exists(VNVRAM *vnvram_target)
+{
+    VNVRAM *vnvram;
+
+    QLIST_FOREACH(vnvram, &vnvrams, list) {
+        if (vnvram == vnvram_target) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+/*
+ * Get total size of the VNVRAM
+ */
+static uint64_t vnvram_get_size(VNVRAM *vnvram)
+{
+    const VNVRAMEntry *entry;
+    uint64_t totsize = sizeof(VNVRAMDrvHdr);
+
+    for (entry = VNVRAM_FIRST_ENTRY(vnvram); entry != NULL;
+         entry = VNVRAM_NEXT_ENTRY(entry)) {
+        totsize += sizeof(VNVRAMDrvEntry) + entry->max_size;
+    }
+
+    return totsize;
+}
+
+/*
+ * Get the total size of the VNVRAM in kilobytes (rounded up to the next kb)
+ */
+static uint64_t vnvram_get_size_kb(VNVRAM *vnvram)
+{
+    return ROUNDUP(vnvram_get_size(vnvram), 1024) / 1024;
+}
+
+/*
+ * Check if the VNVRAM entries are valid
+ */
+static bool vnvram_entries_are_valid(VNVRAM *vnvram, uint64_t drv_size)
+{
+    const VNVRAMEntry *i_entry, *j_entry;
+
+    /* Entries must not overlap or point beyond end of drive size */
+    for (i_entry = VNVRAM_FIRST_ENTRY(vnvram); i_entry != NULL;
+         i_entry = VNVRAM_NEXT_ENTRY(i_entry)) {
+
+        uint64_t i_blob_start = i_entry->blob_offset;
+        uint64_t i_blob_end = i_blob_start + i_entry->max_size-1;
+
+        if (i_entry->max_size == 0) {
+            DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n", __func__);
+            return false;
+        }
+
+        if (i_blob_end > drv_size) {
+            DPRINTF("%s: VNVRAM entry blob too large for drive\n", __func__);
+            return false;
+        }
+
+        for (j_entry = VNVRAM_NEXT_ENTRY(i_entry); j_entry != NULL;
+             j_entry = VNVRAM_NEXT_ENTRY(j_entry)) {
+
+            uint64_t j_blob_start = j_entry->blob_offset;
+            uint64_t j_blob_end = j_blob_start + j_entry->max_size-1;
+
+            if (j_entry->max_size == 0) {
+                DPRINTF("%s: VNVRAM entry max size shouldn't be 0\n", __func__);
+                return false;
+            }
+
+            if (j_blob_end > drv_size) {
+                DPRINTF("%s: VNVRAM entry blob too large for drive\n",
+                        __func__);
+                return false;
+            }
+
+            if ((i_blob_start >= j_blob_start && i_blob_start <= j_blob_end) ||
+                (i_blob_end >= j_blob_start && i_blob_end <= j_blob_end)) {
+                DPRINTF("%s: VNVRAM entries overlap\n", __func__);
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
+/*
+ * Synchronize the in-memory VNVRAM entries with those found on the drive.
+ */
+static int vnvram_sync_from_drv(VNVRAM *vnvram, VNVRAMDrvHdr *hdr)
+{
+    int rc = 0, num_entries = 0, i;
+    VNVRAMDrvEntry *drv_entries = NULL;
+
+    rc = vnvram_drv_entries_get(vnvram, hdr, &drv_entries, &num_entries);
+    if (rc != 0) {
+        return rc;
+    }
+
+    for (i = 0; i < num_entries; i++) {
+        rc = vnvram_register_entry_internal(vnvram,
+                          (const VNVRAMEntryName *)&drv_entries[i].name,
+                          drv_entries[i].blob_offset,
+                          drv_entries[i].cur_size,
+                          drv_entries[i].max_size);
+        if (rc != 0) {
+            goto err_exit;
+        }
+    }
+
+    vnvram->end_offset = vnvram_get_size(vnvram);
+
+err_exit:
+    g_free(drv_entries);
+
+    return rc;
+}
+
+/*
+ * Register an entry with the in-memory entry list
+ */
+static int vnvram_register_entry_internal(VNVRAM *vnvram,
+                                          const VNVRAMEntryName *entry_name,
+                                          uint64_t blob_offset,
+                                          uint32_t cur_size,
+                                          uint32_t max_size)
+{
+    VNVRAMEntry *new_entry;
+    const VNVRAMEntry *existing_entry;
+    int rc = 0;
+
+    existing_entry = vnvram_find_entry(vnvram, entry_name);
+    if (existing_entry) {
+        if (existing_entry->max_size != max_size) {
+            qerror_report(ERROR_CLASS_GENERIC_ERROR,
+                          "VNVRAM entry already registered with different size");
+            return -EINVAL;
+        }
+        /* Entry already exists with same max size - success */
+        return 0;
+    }
+
+    new_entry = g_new0(VNVRAMEntry, 1);
+
+    pstrcpy(new_entry->name, sizeof(new_entry->name), (char *)entry_name);
+    new_entry->blob_offset = blob_offset;
+    new_entry->cur_size = cur_size;
+    new_entry->max_size = max_size;
+
+    QLIST_INSERT_HEAD(&vnvram->entries_head, new_entry, next);
+
+    DPRINTF("%s: VNVRAM entry '%s' registered with max_size=%"PRIu32"\n",
+            __func__, new_entry->name, new_entry->max_size);
+
+    return rc;
+}
+
+/*
+ * Find the in-memory VNVRAM entry with the specified name
+ */
+static VNVRAMEntry *vnvram_find_entry(VNVRAM *vnvram,
+                                      const VNVRAMEntryName *entry_name)
+{
+    VNVRAMEntry *entry;
+
+    QLIST_FOREACH(entry, &vnvram->entries_head, next) {
+        if (!strncmp(entry->name, (char *)entry_name, sizeof(*entry_name))) {
+            return entry;
+        }
+    }
+
+    return NULL;
+}