From patchwork Mon Dec 17 19:06:32 2018
X-Patchwork-Submitter: Simon Goldschmidt
X-Patchwork-Id: 1014698
From: Simon Goldschmidt
To: Tom Rini, u-boot@lists.denx.de, Joe Hershberger
Cc: Heinrich Schuchardt, Andrea Barisani
Date: Mon, 17 Dec 2018 20:06:32 +0100
Message-Id: <20181217190638.21672-4-simon.k.r.goldschmidt@gmail.com>
In-Reply-To: <20181217190638.21672-1-simon.k.r.goldschmidt@gmail.com>
References: <20181217190638.21672-1-simon.k.r.goldschmidt@gmail.com>
X-Mailer: git-send-email 2.17.1
Subject: [U-Boot] [PATCH v7 3/9] lib: lmb: reserving overlapping regions should fail

lmb_add_region handles overlapping regions incorrectly: instead of
merging or rejecting a new reserved region that overlaps an existing
one, it just adds the new region. Since the same function is used
internally by lmb_alloc, change lmb_add_region to reject overlapping
regions.

Also, to keep reserved memory correct after 'free', reserved entries
created by allocating memory must not round their size up to a multiple
of the alignment but must keep the original size. This ensures the
reserved region is completely removed when the caller calls 'lmb_free',
as that function takes the same size as was passed to 'lmb_alloc' etc.

Add a test to assert this.

Signed-off-by: Simon Goldschmidt
---

Changes in v7:
- add braces around if/else with macros across more than one line

Changes in v6:
- fix size of allocated regions that need alignment padding

Changes in v5:
- added a test for this bug

Changes in v4: None
Changes in v2: None

 lib/lmb.c      | 11 +++---
 test/lib/lmb.c | 95 +++++++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 91 insertions(+), 15 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index 6d3dcf4e09..cd297f8202 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -131,6 +131,9 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 			rgn->region[i].size += size;
 			coalesced++;
 			break;
+		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
+			/* regions overlap */
+			return -1;
 		}
 	}
 
@@ -269,11 +272,6 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
 	return addr & ~(size - 1);
 }
 
-static phys_addr_t lmb_align_up(phys_addr_t addr, ulong size)
-{
-	return (addr + (size - 1)) & ~(size - 1);
-}
-
 phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	long i, j;
@@ -302,8 +300,7 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phy
 		if (j < 0) {
 			/* This area isn't reserved, take it */
 			if (lmb_add_region(&lmb->reserved, base,
-					   lmb_align_up(size,
-							align)) < 0)
+					   size) < 0)
 				return 0;
 			return base;
 		}
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index fb7ca45ef1..e6acb70e76 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -227,13 +227,16 @@ static int lib_test_lmb_big(struct unit_test_state *uts)
 DM_TEST(lib_test_lmb_big, DM_TESTF_SCAN_PDATA | DM_TESTF_SCAN_FDT);
 
 /* Simulate 512 MiB RAM, allocate a block without previous reservation */
-static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram)
+static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
+			   const phys_addr_t alloc_size, const ulong align)
 {
 	const phys_size_t ram_size = 0x20000000;
 	const phys_addr_t ram_end = ram + ram_size;
 	struct lmb lmb;
 	long ret;
 	phys_addr_t a, b;
+	const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) &
+		~(align - 1);
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
@@ -242,20 +245,43 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram)
 
 	ret = lmb_add(&lmb, ram, ram_size);
 	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block */
-	a = lmb_alloc(&lmb, 4, 1);
+	a = lmb_alloc(&lmb, alloc_size, align);
 	ut_assert(a != 0);
-	/* and free it */
-	ret = lmb_free(&lmb, a, 4);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
+		   alloc_size, 0, 0, 0, 0);
+	/* allocate another block */
+	b = lmb_alloc(&lmb, alloc_size, align);
+	ut_assert(b != 0);
+	if (alloc_size == alloc_size_aligned) {
+		ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size -
+			   (alloc_size_aligned * 2), alloc_size * 2, 0, 0, 0,
+			   0);
+	} else {
+		ASSERT_LMB(&lmb, ram, ram_size, 2, ram + ram_size -
+			   (alloc_size_aligned * 2), alloc_size, ram + ram_size
+			   - alloc_size_aligned, alloc_size, 0, 0);
+	}
+	/* and free them */
+	ret = lmb_free(&lmb, b, alloc_size);
 	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
+		   alloc_size, 0, 0, 0, 0);
+	ret = lmb_free(&lmb, a, alloc_size);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block with base*/
-	b = lmb_alloc_base(&lmb, 4, 1, ram_end);
+	b = lmb_alloc_base(&lmb, alloc_size, align, ram_end);
 	ut_assert(a == b);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
+		   alloc_size, 0, 0, 0, 0);
 	/* and free it */
-	ret = lmb_free(&lmb, b, 4);
+	ret = lmb_free(&lmb, b, alloc_size);
 	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	return 0;
 }
@@ -265,16 +291,30 @@ static int lib_test_lmb_noreserved(struct unit_test_state *uts)
 	int ret;
 
 	/* simulate 512 MiB RAM beginning at 1GiB */
-	ret = test_noreserved(uts, 0x40000000);
+	ret = test_noreserved(uts, 0x40000000, 4, 1);
 	if (ret)
 		return ret;
 
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
-	return test_noreserved(uts, 0xE0000000);
+	return test_noreserved(uts, 0xE0000000, 4, 1);
 }
 
 DM_TEST(lib_test_lmb_noreserved, DM_TESTF_SCAN_PDATA | DM_TESTF_SCAN_FDT);
+static int lib_test_lmb_unaligned_size(struct unit_test_state *uts)
+{
+	int ret;
+
+	/* simulate 512 MiB RAM beginning at 1GiB */
+	ret = test_noreserved(uts, 0x40000000, 5, 8);
+	if (ret)
+		return ret;
+
+	/* simulate 512 MiB RAM beginning at 1.5GiB */
+	return test_noreserved(uts, 0xE0000000, 5, 8);
+}
+
+DM_TEST(lib_test_lmb_unaligned_size, DM_TESTF_SCAN_PDATA | DM_TESTF_SCAN_FDT);
 /*
  * Simulate a RAM that starts at 0 and allocate down to address 0, which must
  * fail as '0' means failure for the lmb_alloc functions.
  */
@@ -318,3 +358,42 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts)
 }
 
 DM_TEST(lib_test_lmb_at_0, DM_TESTF_SCAN_PDATA | DM_TESTF_SCAN_FDT);
+
+/* Check that calling lmb_reserve with overlapping regions fails. */
+static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts)
+{
+	const phys_addr_t ram = 0x40000000;
+	const phys_size_t ram_size = 0x20000000;
+	struct lmb lmb;
+	long ret;
+
+	lmb_init(&lmb);
+
+	ret = lmb_add(&lmb, ram, ram_size);
+	ut_asserteq(ret, 0);
+
+	ret = lmb_reserve(&lmb, 0x40010000, 0x10000);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+		   0, 0, 0, 0);
+	/* allocate overlapping region should fail */
+	ret = lmb_reserve(&lmb, 0x40011000, 0x10000);
+	ut_asserteq(ret, -1);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+		   0, 0, 0, 0);
+	/* allocate 3rd region */
+	ret = lmb_reserve(&lmb, 0x40030000, 0x10000);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000,
+		   0x40030000, 0x10000, 0, 0);
+	/* allocate 2nd region */
+	ret = lmb_reserve(&lmb, 0x40020000, 0x10000);
+	ut_assert(ret >= 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000,
+		   0, 0, 0, 0);
+
+	return 0;
+}
+
+DM_TEST(lib_test_lmb_overlapping_reserve,
+	DM_TESTF_SCAN_PDATA | DM_TESTF_SCAN_FDT);
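
Editor's note: for readers following the change outside the U-Boot tree, the
sketch below is a minimal, standalone model of the behaviour the lmb_add_region
hunk introduces: a new reservation is refused as soon as it overlaps an
existing entry. It is not U-Boot code; demo_region, demo_overlaps() and
demo_reserve() are hypothetical names, demo_overlaps() only mirrors the
semantics of lmb_addrs_overlap() as used above, and coalescing of adjacent
regions is left out for brevity.

/*
 * Standalone sketch, not U-Boot code: models only the overlap rejection
 * added to lmb_add_region() in this patch.
 */
#include <stdint.h>
#include <stdio.h>

struct demo_region {
	uint64_t base;
	uint64_t size;
};

/* ranges [b1, b1 + s1) and [b2, b2 + s2) overlap (sizes assumed non-zero) */
static int demo_overlaps(uint64_t b1, uint64_t s1, uint64_t b2, uint64_t s2)
{
	return b1 < b2 + s2 && b2 < b1 + s1;
}

/* refuse a new reservation that overlaps any existing entry */
static int demo_reserve(const struct demo_region *rsv, int cnt,
			uint64_t base, uint64_t size)
{
	int i;

	for (i = 0; i < cnt; i++) {
		if (demo_overlaps(base, size, rsv[i].base, rsv[i].size))
			return -1;	/* regions overlap, as in the patch */
	}

	return 0;	/* a real implementation would record/merge the entry here */
}

int main(void)
{
	const struct demo_region reserved[] = { { 0x40010000, 0x10000 } };

	/* overlaps the existing entry: rejected (-1) */
	printf("%d\n", demo_reserve(reserved, 1, 0x40011000, 0x10000));
	/* disjoint region: accepted (0) */
	printf("%d\n", demo_reserve(reserved, 1, 0x40030000, 0x10000));

	return 0;
}

Built with a hosted C compiler this prints -1 for the overlapping request and
0 for the disjoint one, matching what lib_test_lmb_overlapping_reserve asserts
with ut_asserteq(ret, -1) above.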