From patchwork Fri Feb 26 13:15:31 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Alex Bennée
X-Patchwork-Id: 589009
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, mark.burton@greensocs.com,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com
Date: Fri, 26 Feb 2016 13:15:31 +0000
Message-Id: <1456492533-17171-10-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.7.1
In-Reply-To: <1456492533-17171-1-git-send-email-alex.bennee@linaro.org>
References: <1456492533-17171-1-git-send-email-alex.bennee@linaro.org>
Cc: peter.maydell@linaro.org, drjones@redhat.com,
    a.spyridakis@virtualopensystems.com, claudio.fontana@huawei.com,
    qemu-devel@nongnu.org, will.deacon@arm.com, crosthwaitepeter@gmail.com,
    pbonzini@redhat.com, Alex Bennée <alex.bennee@linaro.org>,
    aurelien@aurel32.net, rth@twiddle.net
Subject: [Qemu-devel] [RFC 09/11] arm/locking-tests: add comprehensive locking test

This test has been written mainly to stress multi-threaded TCG behaviour,
but by default it will demonstrate failure on real hardware as well. The
test takes the following parameters:

  - "lock"    use GCC's locking semantics
  - "atomic"  use GCC's __atomic primitives
  - "wfelock" use WaitForEvent sleep
  - "excl"    use load/store exclusive semantics

Two more options allow the test to be tweaked:

  - "noshuffle"  disables the memory shuffling
  - "count=%ld"  set your own per-CPU increment count

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
v2
  - Don't use thumb-style strexeq stuff
  - Add atomic and wfelock tests
  - Add count/noshuffle test controls
  - Move barrier tests to a separate test file
v4
  - fix up unittests.cfg to use the correct test name
  - move into "locking" group, remove barrier tests
  - use a table to add tests, mark which are expected to work
  - correctly report XFAIL
---
 arm/locking-test.c           | 302 +++++++++++++++++++++++++++++++++++++++++++
 arm/unittests.cfg            |  35 +++++
 config/config-arm-common.mak |   2 +
 3 files changed, 339 insertions(+)
 create mode 100644 arm/locking-test.c

diff --git a/arm/locking-test.c b/arm/locking-test.c
new file mode 100644
index 0000000..fcbe17c
--- /dev/null
+++ b/arm/locking-test.c
@@ -0,0 +1,302 @@
+#include <libcflat.h>
+#include <asm/smp.h>
+#include <asm/cpumask.h>
+#include <asm/barrier.h>
+#include <asm/mmu.h>
+
+#include <prng.h>
+
+#define MAX_CPUS 8
+
+/* Test definition structure
+ *
+ * A simple structure that describes the test name, expected pass and
+ * increment function.
+ */
+
+/* Function pointers for test */
+typedef void (*inc_fn)(int cpu);
+
+typedef struct {
+        const char *test_name;
+        bool should_pass;
+        inc_fn main_fn;
+} test_descr_t;
+
+/* How many increments to do */
+static int increment_count = 10000000;
+static int do_shuffle = 1;
+
+/* Shared value all the tests attempt to safely increment using
+ * various forms of atomic locking and exclusive behaviour.
+ */
+static unsigned int shared_value;
+
+/* PAGE_SIZE * uint32_t means we span several pages */
+__attribute__((aligned(PAGE_SIZE))) static uint32_t memory_array[PAGE_SIZE];
+
+/* We use the alignment of the following to ensure accesses to locking
+ * and synchronisation primitives don't interfere with the page of the
+ * shared value
+ */
+__attribute__((aligned(PAGE_SIZE))) static unsigned int per_cpu_value[MAX_CPUS];
+__attribute__((aligned(PAGE_SIZE))) static cpumask_t smp_test_complete;
+__attribute__((aligned(PAGE_SIZE))) struct isaac_ctx prng_context[MAX_CPUS];
+
+/* Some of the approaches use a global lock to prevent contention.
+ */
+static int global_lock;
+
+/* In any SMP setting this *should* fail due to cores stepping on
+ * each other updating the shared variable
+ */
+static void increment_shared(int cpu)
+{
+        (void)cpu;
+
+        shared_value++;
+}
+
+/* GCC __sync primitives are deprecated in favour of __atomic */
+static void increment_shared_with_lock(int cpu)
+{
+        (void)cpu;
+
+        while (__sync_lock_test_and_set(&global_lock, 1));
+        shared_value++;
+        __sync_lock_release(&global_lock);
+}
+
+/* In practice even __ATOMIC_RELAXED uses ARM's ldxr/stxr exclusive
+ * semantics */
+static void increment_shared_with_atomic(int cpu)
+{
+        (void)cpu;
+
+        __atomic_add_fetch(&shared_value, 1, __ATOMIC_SEQ_CST);
+}
+
+
+/*
+ * Load/store exclusive with WFE (wait-for-event)
+ *
+ * See ARMv8 ARM examples:
+ *   Use of Wait For Event (WFE) and Send Event (SEV) with locks
+ */
+
+static void increment_shared_with_wfelock(int cpu)
+{
+        (void)cpu;
+
+#if defined(__aarch64__)
+        asm volatile(
+        "       mov     w1, #1\n"
+        "       sevl\n"
+        "       prfm    PSTL1KEEP, [%[lock]]\n"
+        "1:     wfe\n"
+        "       ldaxr   w0, [%[lock]]\n"
+        "       cbnz    w0, 1b\n"
+        "       stxr    w0, w1, [%[lock]]\n"
+        "       cbnz    w0, 1b\n"
+        /* lock held */
+        "       ldr     w0, [%[sptr]]\n"
+        "       add     w0, w0, #0x1\n"
+        "       str     w0, [%[sptr]]\n"
+        /* now release */
+        "       stlr    wzr, [%[lock]]\n"
+        : /* out */
+        : [lock] "r" (&global_lock), [sptr] "r" (&shared_value) /* in */
+        : "w0", "w1", "cc");
+#else
+        asm volatile(
+        "       mov     r1, #1\n"
+        "1:     ldrex   r0, [%[lock]]\n"
+        "       cmp     r0, #0\n"
+        "       wfene\n"
+        "       strexeq r0, r1, [%[lock]]\n"
+        "       cmpeq   r0, #0\n"
+        "       bne     1b\n"
+        "       dmb\n"
+        /* lock held */
+        "       ldr     r0, [%[sptr]]\n"
+        "       add     r0, r0, #0x1\n"
+        "       str     r0, [%[sptr]]\n"
+        /* now release */
+        "       mov     r0, #0\n"
+        "       dmb\n"
+        "       str     r0, [%[lock]]\n"
+        "       dsb\n"
+        "       sev\n"
+        : /* out */
+        : [lock] "r" (&global_lock), [sptr] "r" (&shared_value) /* in */
+        : "r0", "r1", "cc");
+#endif
+}
+
+
+/*
+ * Hand-written version of the load/store exclusive
+ */
+static void increment_shared_with_excl(int cpu)
+{
+        (void)cpu;
+
+#if defined(__aarch64__)
+        asm volatile(
+        "1:     ldxr    w0, [%[sptr]]\n"
+        "       add     w0, w0, #0x1\n"
+        "       stxr    w1, w0, [%[sptr]]\n"
+        "       cbnz    w1, 1b\n"
+        : /* out */
+        : [sptr] "r" (&shared_value) /* in */
+        : "w0", "w1", "cc");
+#else
+        asm volatile(
+        "1:     ldrex   r0, [%[sptr]]\n"
+        "       add     r0, r0, #0x1\n"
+        "       strex   r1, r0, [%[sptr]]\n"
+        "       cmp     r1, #0\n"
+        "       bne     1b\n"
+        : /* out */
+        : [sptr] "r" (&shared_value) /* in */
+        : "r0", "r1", "cc");
+#endif
+}
+
+/* Test array */
+static test_descr_t tests[] = {
+        { "none", false, increment_shared },
+        { "lock", true, increment_shared_with_lock },
+        { "atomic", true, increment_shared_with_atomic },
+        { "wfelock", true, increment_shared_with_wfelock },
+        { "excl", true, increment_shared_with_excl }
+};
+
+/* The idea of this is just to generate some random load/store
+ * activity which may or may not race with an un-barriered increment
+ * of the shared counter
+ */
+static void shuffle_memory(int cpu)
+{
+        int i;
+        uint32_t lspat = isaac_next_uint32(&prng_context[cpu]);
+        uint32_t seq = isaac_next_uint32(&prng_context[cpu]);
+        int count = seq & 0x1f;
+        uint32_t val=0;
+
+        seq >>= 5;
+
+        for (i=0; i<count; i++) {
+                uint32_t index = seq & (PAGE_SIZE - 1);
+                if (lspat & 1) {
+                        val ^= memory_array[index];
+                } else {
+                        memory_array[index] = val;
+                }
+                seq >>= PAGE_SHIFT;
+                seq ^= lspat;
+                lspat >>= 1;
+        }
+
+}
+
+static inc_fn increment_function;
+
+static void do_increment(void)
+{
+        int i;
+        int cpu = smp_processor_id();
+
+        printf("CPU%d: online and ++ing\n", cpu);
+
+        for (i=0; i < increment_count; i++) {
+                per_cpu_value[cpu]++;
+                increment_function(cpu);
+
+                if (do_shuffle)
+                        shuffle_memory(cpu);
+        }
+
+        printf("CPU%d: Done, %d incs\n", cpu, per_cpu_value[cpu]);
+
+        cpumask_set_cpu(cpu, &smp_test_complete);
+        if (cpu != 0)
+                halt();
+}
+
+static void setup_and_run_test(test_descr_t *test)
+{
+        unsigned int i, sum = 0;
+        int cpu, cpu_cnt = 0;
+
+        increment_function = test->main_fn;
+
+        /* fill our random page */
+        for (i=0; i<PAGE_SIZE; i++)
+                memory_array[i] = isaac_next_uint32(&prng_context[0]);
+
+        /* boot the secondary CPUs, each running do_increment() */
+        for_each_present_cpu(cpu) {
+                if (cpu == 0)
+                        continue;
+                smp_boot_secondary(cpu, do_increment);
+                cpu_cnt++;
+        }
+
+        /* CPU0 takes part as well */
+        do_increment();
+
+        /* wait for everyone to finish */
+        while (!cpumask_full(&smp_test_complete))
+                cpu_relax();
+
+        /* add up what each CPU thinks it contributed */
+        for_each_present_cpu(cpu)
+                sum += per_cpu_value[cpu];
+
+        if (test->should_pass) {
+                report("total incs %d", sum == shared_value, shared_value);
+        } else {
+                report_xfail("total incs %d", true, sum == shared_value, shared_value);
+        }
+}
+
+int main(int argc, char **argv)
+{
+        static const unsigned char seed[] = "myseed";
+        test_descr_t *test = &tests[0];
+        int i;
+        unsigned int j;
+
+        isaac_init(&prng_context[0], &seed[0], sizeof(seed));
+
+        for (i=0; i
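As an aside, the race that the "none" and "atomic" cases probe inside the
guest can be reproduced on an ordinary host with a few lines of pthreads
code. The sketch below is not part of the patch; the thread and iteration
counts are arbitrary choices, and it only illustrates why the unlocked
increment is expected to lose updates while __atomic_add_fetch() is not:

/*
 * Standalone illustration (not from the patch): NR_THREADS threads bump
 * one counter with a plain ++ and another with __atomic_add_fetch().
 * The plain counter usually comes up short of the expected total; the
 * atomic one never should.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_THREADS 4
#define NR_INCS    1000000

static unsigned int shared_plain;   /* racy, like the "none" test */
static unsigned int shared_atomic;  /* like the "atomic" test */

static void *worker(void *arg)
{
        int i;

        (void)arg;
        for (i = 0; i < NR_INCS; i++) {
                shared_plain++;
                __atomic_add_fetch(&shared_atomic, 1, __ATOMIC_SEQ_CST);
        }
        return NULL;
}

int main(void)
{
        pthread_t threads[NR_THREADS];
        int i;

        for (i = 0; i < NR_THREADS; i++)
                pthread_create(&threads[i], NULL, worker, NULL);
        for (i = 0; i < NR_THREADS; i++)
                pthread_join(threads[i], NULL);

        printf("expected %d, plain %u, atomic %u\n",
               NR_THREADS * NR_INCS, shared_plain, shared_atomic);
        return 0;
}

Built with "gcc -pthread" and run on a multi-core machine, the plain count
typically falls short while the atomic count always matches the expected
total, which is exactly the pass/fail split the tests[] table encodes.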