From patchwork Fri Jul 24 06:50:57 2015
X-Patchwork-Submitter: Jeremy Kerr
X-Patchwork-Id: 499606
Message-Id: <1437720657.167279.481730847122.6.gpush@pablo>
In-Reply-To: <1437720657.165281.93625520876.0.gpush@pablo>
To: skiboot@lists.ozlabs.org
From: Jeremy Kerr
Date: Fri, 24 Jul 2015 14:50:57 +0800
Subject: [Skiboot] [PATCH 6/7 v2] core/mem_region: Add mem_range_is_reserved()
List-Id: Mailing list for skiboot development

This change adds a function to check whether a range of memory is
covered by one or more reservations.
Signed-off-by: Jeremy Kerr
---
 core/mem_region.c                     |  43 +++++
 core/test/Makefile.check              |   1
 core/test/run-mem_range_is_reserved.c | 216 ++++++++++++++++++++++++++
 include/mem_region.h                  |   2
 4 files changed, 262 insertions(+)

diff --git a/core/mem_region.c b/core/mem_region.c
index b85b1e3..3ed8006 100644
--- a/core/mem_region.c
+++ b/core/mem_region.c
@@ -789,6 +789,49 @@ struct mem_region *find_mem_region(const char *name)
 	return NULL;
 }
 
+bool mem_range_is_reserved(uint64_t start, uint64_t size)
+{
+	uint64_t end = start + size;
+	struct mem_region *region;
+
+	/* We may have the range covered by a number of regions, which could
+	 * appear in any order. So, we look for a region that covers the
+	 * start address, and bump start up to the end of that region.
+	 *
+	 * We repeat until we've either bumped past the end of the range,
+	 * or we didn't find a matching region.
+	 *
+	 * This has a worst-case of O(n^2), but n is well bounded by the
+	 * small number of reservations.
+	 */
+	for (;;) {
+		bool found = false;
+
+		list_for_each(&regions, region, list) {
+			if (!region_is_reserved(region))
+				continue;
+
+			/* does this region overlap the start address, and
+			 * have a non-zero size? */
+			if (region->start <= start &&
+			    region->start + region->len > start &&
+			    region->len) {
+				start = region->start + region->len;
+				found = true;
+			}
+		}
+
+		/* 'end' is the first byte outside of the range */
+		if (start >= end)
+			return true;
+
+		if (!found)
+			break;
+	}
+
+	return false;
+}
+
 void adjust_cpu_stacks_alloc(void)
 {
 	/* CPU stacks start at 0, then when we know max possible PIR,
diff --git a/core/test/Makefile.check b/core/test/Makefile.check
index dfe7360..c1af4b3 100644
--- a/core/test/Makefile.check
+++ b/core/test/Makefile.check
@@ -8,6 +8,7 @@ CORE_TEST := core/test/run-device \
 	core/test/run-mem_region_release_unused \
 	core/test/run-mem_region_release_unused_noalloc \
 	core/test/run-mem_region_reservations \
+	core/test/run-mem_range_is_reserved \
 	core/test/run-nvram-format \
 	core/test/run-trace core/test/run-msg \
 	core/test/run-pel \
diff --git a/core/test/run-mem_range_is_reserved.c b/core/test/run-mem_range_is_reserved.c
new file mode 100644
index 0000000..b504326
--- /dev/null
+++ b/core/test/run-mem_range_is_reserved.c
@@ -0,0 +1,216 @@
+/* Copyright 2015 IBM Corp.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *	http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ * implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include <config.h>
+
+#define BITS_PER_LONG (sizeof(long) * 8)
+/* Don't include this, it's PPC-specific */
+#define __CPU_H
+static unsigned int cpu_max_pir = 1;
+struct cpu_thread {
+	unsigned int	chip_id;
+};
+
+#include <stdlib.h>
+
+static void *real_malloc(size_t size)
+{
+	return malloc(size);
+}
+
+static void real_free(void *p)
+{
+	return free(p);
+}
+
+#undef malloc
+#undef free
+#undef realloc
+
+#include <skiboot.h>
+#include <mem_region-malloc.h>
+
+/* We need mem_region to accept __location__ */
+#define is_rodata(p) true
+#include "../mem_region.c"
+#include "../malloc.c"
+
+/* But we need device tree to make copies of names. */
+#undef is_rodata
+#define is_rodata(p) false
+#include "../../libc/string/strdup.c"
+
+#include "../device.c"
+#include <assert.h>
+#include <stdio.h>
+
+void lock(struct lock *l)
+{
+	assert(!l->lock_val);
+	l->lock_val++;
+}
+
+void unlock(struct lock *l)
+{
+	assert(l->lock_val);
+	l->lock_val--;
+}
+
+bool lock_held_by_me(struct lock *l)
+{
+	return l->lock_val;
+}
+
+#define TEST_HEAP_ORDER 14
+#define TEST_HEAP_SIZE (1ULL << TEST_HEAP_ORDER)
+
+static void add_mem_node(uint64_t start, uint64_t len)
+{
+	struct dt_node *mem;
+	u64 reg[2];
+	char *name;
+
+	name = (char*)malloc(sizeof("memory@") + STR_MAX_CHARS(reg[0]));
+	assert(name);
+
+	/* reg contains start and length */
+	reg[0] = cpu_to_be64(start);
+	reg[1] = cpu_to_be64(len);
+
+	sprintf(name, "memory@%llx", (long long)start);
+
+	mem = dt_new(dt_root, name);
+	dt_add_property_string(mem, "device_type", "memory");
+	dt_add_property(mem, "reg", reg, sizeof(reg));
+	free(name);
+}
+
+void add_chip_dev_associativity(struct dt_node *dev __attribute__((unused)))
+{
+}
+
+struct test_region {
+	uint64_t	start;
+	uint64_t	end;
+};
+
+static struct test {
+	struct test_region regions[3];
+	bool reserved;
+} tests[] = {
+	/* empty region set */
+	{ { { 0 } }, false },
+
+	/* single exact match */
+	{ { { 0x1000, 0x2000 }, }, true },
+
+	/* overlap downwards */
+	{ { { 0x0fff, 0x2000 }, }, true },
+
+	/* overlap upwards */
+	{ { { 0x1000, 0x2001 }, }, true },
+
+	/* missing first byte */
+	{ { { 0x1001, 0x2000 }, }, false },
+
+	/* missing last byte */
+	{ { { 0x1000, 0x1fff }, }, false },
+
+	/* two regions, full coverage, split before start of range */
+	{ { { 0x0500, 0x1000 }, { 0x1000, 0x2500 } }, true },
+
+	/* two regions, full coverage, split after start of range */
+	{ { { 0x0500, 0x1001 }, { 0x1001, 0x2500 } }, true },
+
+	/* two regions, full coverage, split at middle of range */
+	{ { { 0x0500, 0x1500 }, { 0x1500, 0x2500 } }, true },
+
+	/* two regions, full coverage, split before end of range */
+	{ { { 0x0500, 0x1fff }, { 0x1fff, 0x2500 } }, true },
+
+	/* two regions, full coverage, split after end of range */
+	{ { { 0x0500, 0x2000 }, { 0x2000, 0x2500 } }, true },
+
+	/* two regions, missing byte in middle of range */
+	{ { { 0x0500, 0x14ff }, { 0x1500, 0x2500 } }, false },
+
+	/* two regions, missing byte after start of range */
+	{ { { 0x0500, 0x1000 }, { 0x1001, 0x2500 } }, false },
+
+	/* two regions, missing byte before end of range */
+	{ { { 0x0500, 0x1fff }, { 0x2000, 0x2500 } }, false },
+};
+
+static void run_test(struct test *test)
+{
+	struct test_region *r;
+	bool reserved;
+
+	list_head_init(&regions);
+
+	mem_region_init();
+
+	/* create our reservations */
+	for (r = test->regions; r->start; r++)
+		mem_reserve_hw("r", r->start, r->end - r->start);
+
+	reserved = mem_range_is_reserved(0x1000, 0x1000);
+
+	if (reserved != test->reserved) {
+		struct mem_region *r;
+		fprintf(stderr, "test failed; got %s, expected %s\n",
+				reserved ? "reserved" : "unreserved",
+				test->reserved ? "reserved" : "unreserved");
+
+		fprintf(stderr, "reserved regions:\n");
+
+		list_for_each(&regions, r, list) {
+			fprintf(stderr, "\t: %08llx[%08llx] %s\n",
+					(long long)r->start,
+					(long long)r->len, r->name);
+		}
+		exit(EXIT_FAILURE);
+	}
+}
+
+
+int main(void)
+{
+	unsigned int i;
+	void *buf;
+
+	/* Use malloc for the heap, so valgrind can find issues. */
+	skiboot_heap.start = (long)real_malloc(TEST_HEAP_SIZE);
+	skiboot_heap.len = TEST_HEAP_SIZE;
+
+	/* shift the OS reserve area out of the way of our playground */
+	skiboot_os_reserve.start = 0x100000;
+	skiboot_os_reserve.len = 0x1000;
+
+	dt_root = dt_new_root("");
+	dt_add_property_cells(dt_root, "#address-cells", 2);
+	dt_add_property_cells(dt_root, "#size-cells", 2);
+
+	buf = real_malloc(1024*1024);
+	add_mem_node((unsigned long)buf, 1024*1024);
+
+	for (i = 0; i < ARRAY_SIZE(tests); i++)
+		run_test(&tests[i]);
+
+	dt_free(dt_root);
+	real_free(buf);
+	real_free((void *)(long)skiboot_heap.start);
+	return 0;
+}
diff --git a/include/mem_region.h b/include/mem_region.h
index a5ca315..913dbb6 100644
--- a/include/mem_region.h
+++ b/include/mem_region.h
@@ -74,4 +74,6 @@ void mem_reserve_hw(const char *name, uint64_t start, uint64_t len);
 
 struct mem_region *find_mem_region(const char *name);
 
+bool mem_range_is_reserved(uint64_t start, uint64_t size);
+
 #endif /* __MEMORY_REGION */