tests: remove legacy tests already ported to unified

Legacy APIs are being deprecated, so remove the tests that have already
been ported to the unified kernel.

Change-Id: I752e42bc498dfdd0ea29b0b5b7b9da1dac7b1136
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Anas Nashif 2017-03-23 07:05:16 -04:00 committed by Anas Nashif
commit aa70533244
253 changed files with 1 addition and 19308 deletions

View file

@@ -1,5 +1,5 @@
[test]
-tags = legacy core
+tags = kernel core
platform_whitelist = qemu_x86 qemu_cortex_m3
filter = not CONFIG_X86_IAMCU
timeout = 200

View file

@@ -1,4 +0,0 @@
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include $(ZEPHYR_BASE)/Makefile.test

View file

@@ -1,78 +0,0 @@
Title: Context and IRQ APIs
Description:
This test verifies that the nanokernel CPU and context APIs operate as expected.
---------------------------------------------------------------------------
Building and Running Project:
This nanokernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
---------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
---------------------------------------------------------------------------
Sample Output:
tc_start() - Test Nanokernel CPU and thread routines
Initializing nanokernel objects
Testing nano_cpu_idle()
Testing interrupt locking and unlocking
Testing irq_disable() and irq_enable()
Testing sys_thread_self_get() from an ISR and task
Testing sys_execution_context_type_get() from an ISR
Testing sys_execution_context_type_get() from a task
Spawning a fiber from a task
Fiber to test sys_thread_self_get() and sys_execution_context_type_get
Fiber to test fiber_yield()
Testing sys_thread_busy_wait()
fiber busy waiting for 20000 usecs (2 ticks)
fiber busy waiting completed
Testing fiber_sleep()
fiber sleeping for 5 ticks
fiber back from sleep
Testing fiber_delayed_start() without cancellation
fiber (q order: 2, t/o: 50) is running
got fiber (q order: 2, t/o: 50) as expected
fiber (q order: 3, t/o: 75) is running
got fiber (q order: 3, t/o: 75) as expected
fiber (q order: 0, t/o: 100) is running
got fiber (q order: 0, t/o: 100) as expected
fiber (q order: 6, t/o: 125) is running
got fiber (q order: 6, t/o: 125) as expected
fiber (q order: 1, t/o: 150) is running
got fiber (q order: 1, t/o: 150) as expected
fiber (q order: 4, t/o: 175) is running
got fiber (q order: 4, t/o: 175) as expected
fiber (q order: 5, t/o: 200) is running
got fiber (q order: 5, t/o: 200) as expected
Testing fiber_delayed_start() with cancellations
cancelling [q order: 0, t/o: 100, t/o order: 0]
fiber (q order: 3, t/o: 75) is running
got (q order: 3, t/o: 75, t/o order 1074292) as expected
fiber (q order: 0, t/o: 100) is running
got (q order: 0, t/o: 100, t/o order 1074292) as expected
cancelling [q order: 3, t/o: 75, t/o order: 3]
cancelling [q order: 4, t/o: 175, t/o order: 4]
fiber (q order: 4, t/o: 175) is running
got (q order: 4, t/o: 175, t/o order 1074292) as expected
cancelling [q order: 6, t/o: 125, t/o order: 6]
PASS - main.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,4 +0,0 @@
CONFIG_NANO_TIMEOUTS=y
CONFIG_IRQ_OFFLOAD=y
CONFIG_NUM_DYNAMIC_EXC_NOERR_STUBS=1
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = context.o

View file

@@ -1,46 +0,0 @@
APIs tested in this test set
============================
fiber_fiber_start
- start a helper fiber to help with fiber_yield() tests
task_fiber_start
- start a fiber to test fiber related functionality
fiber_yield
- Called by a higher priority fiber when there is another fiber
- Called by an equal priority fiber when there is another fiber
- Called by a lower priority fiber when there is another fiber
sys_thread_self_get
- Called from an ISR (interrupted a task)
- Called from an ISR (interrupted a fiber)
- Called from a task
- Called from a fiber
sys_execution_context_type_get
- Called from an ISR that interrupted a task
- Called from an ISR that interrupted a fiber
- Called from a task
- Called from a fiber
nano_cpu_idle
- CPU to be woken up by tick timer. Thus, after each call, the tick count
should have advanced by one tick.
irq_lock
- 1. Count the number of calls to sys_tick_get_32() before a tick expires.
- 2. Once determined, call sys_tick_get_32() many more times than that
with interrupts locked. Check that the tick count remains unchanged.
irq_unlock
- Continuation of irq_lock(): unlock interrupts, loop and verify the tick
count changes (see the sketch at the end of this file).
irq_offload
- Used when triggering an ISR to perform ISR context work.
irq_enable
irq_disable
- Use these routines to disable and enable timer interrupts so that they can
be tested in the same way as irq_lock() and irq_unlock().
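The same interrupt-locking check can be written against the unified kernel's
timing APIs. A minimal sketch, assuming a tickful configuration in which
k_uptime_get_32() only advances from the timer interrupt (the function below
is illustrative, not taken from the ported test):

#include <zephyr.h>

static int check_locking_stops_time(void)
{
        unsigned int key;
        uint32_t before;
        volatile int i;

        key = irq_lock();
        before = k_uptime_get_32();
        for (i = 0; i < 1000000; i++) {
                /* spin with interrupts locked; the timer ISR cannot run */
        }
        if (k_uptime_get_32() != before) {
                irq_unlock(key);
                return -1;      /* uptime advanced while locked */
        }
        irq_unlock(key);

        /* With interrupts unlocked the counter must advance again. */
        before = k_uptime_get_32();
        for (i = 0; i < 50 && k_uptime_get_32() == before; i++) {
                k_busy_wait(1000);      /* 1 ms at a time, up to 50 ms */
        }
        return (k_uptime_get_32() == before) ? -1 : 0;
}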

View file

@@ -1,901 +0,0 @@
/* thread.c - test nanokernel CPU and thread APIs */
/*
* Copyright (c) 2012-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
* DESCRIPTION
* This module tests the following CPU and thread related routines:
* fiber_fiber_start(), task_fiber_start(), fiber_yield(),
* sys_thread_self_get(), sys_execution_context_type_get(), nano_cpu_idle(),
* irq_lock(), irq_unlock(),
* irq_offload(), nanoCpuExcConnect(),
* irq_enable(), irq_disable(),
*/
#include <tc_util.h>
#include <kernel_structs.h>
#include <arch/cpu.h>
#include <irq_offload.h>
#include <util_test_common.h>
/*
* Include board.h from platform to get IRQ number.
* NOTE: Cortex-M does not need IRQ numbers
*/
#if !defined(CONFIG_CPU_CORTEX_M) && !defined(CONFIG_XTENSA)
#include <board.h>
#endif
#define FIBER_STACKSIZE (384 + CONFIG_TEST_EXTRA_STACKSIZE)
#define FIBER_PRIORITY 4
#define THREAD_SELF_CMD 0
#define EXEC_CTX_TYPE_CMD 1
#define UNKNOWN_COMMAND -1
/*
* Get the timer type dependent IRQ number. If timer type
* is not defined in platform, generate an error
*/
#if defined(CONFIG_HPET_TIMER)
#define TICK_IRQ CONFIG_HPET_TIMER_IRQ
#elif defined(CONFIG_LOAPIC_TIMER)
#if defined(CONFIG_LOAPIC)
#define TICK_IRQ CONFIG_LOAPIC_TIMER_IRQ
#else
/* MVIC case */
#define TICK_IRQ CONFIG_MVIC_TIMER_IRQ
#endif
#elif defined(CONFIG_XTENSA)
#include <xtensa_timer.h>
#define TICK_IRQ XT_TIMER_INTNUM
#elif defined(CONFIG_ALTERA_AVALON_TIMER)
#define TICK_IRQ TIMER_0_IRQ
#elif defined(CONFIG_ARCV2_TIMER)
#define TICK_IRQ IRQ_TIMER0
#elif defined(CONFIG_PULPINO_TIMER)
#define TICK_IRQ PULP_TIMER_A_CMP_IRQ
#elif defined(CONFIG_RISCV_MACHINE_TIMER)
#define TICK_IRQ RISCV_MACHINE_TIMER_IRQ
#elif defined(CONFIG_CPU_CORTEX_M)
/*
* The Cortex-M parts use the SYSTICK exception for the system timer, which is
* not considered an IRQ by the irq_enable()/irq_disable() APIs.
*/
#else
/* generate an error */
#error Timer type is not defined for this platform
#endif
/* Nios II and RISCV32 without CONFIG_RISCV_HAS_CPU_IDLE
* do not have a power saving instruction, so nano_cpu_idle()
* returns immediately
*/
#if !defined(CONFIG_NIOS2) && \
(!defined(CONFIG_RISCV32) || defined(CONFIG_RISCV_HAS_CPU_IDLE))
#define HAS_POWERSAVE_INSTRUCTION
#endif
typedef struct {
int command; /* command to process */
int error; /* error value (if any) */
union {
void *data; /* pointer to data to use or return */
int value; /* value to be passed or returned */
};
} ISR_INFO;
typedef int (*disable_int_func)(int);
typedef void (*enable_int_func)(int);
static struct nano_sem sem_fiber;
static struct nano_timer timer;
static struct nano_sem reply_timeout;
struct nano_fifo timeout_order_fifo;
static int fiber_detected_error;
static int fiber_evidence;
static char __stack fiber_stack1[FIBER_STACKSIZE];
static char __stack fiber_stack2[FIBER_STACKSIZE];
static ISR_INFO isr_info;
/**
*
* @brief Handler to perform various actions from within an ISR context
*
* This routine is the ISR handler for isr_handler_trigger(). It performs
* the command requested in <isr_info.command>.
*
* @return N/A
*/
static void isr_handler(void *data)
{
ARG_UNUSED(data);
switch (isr_info.command) {
case THREAD_SELF_CMD:
isr_info.data = (void *) sys_thread_self_get();
break;
case EXEC_CTX_TYPE_CMD:
isr_info.value = sys_execution_context_type_get();
break;
default:
isr_info.error = UNKNOWN_COMMAND;
break;
}
}
static void isr_handler_trigger(void)
{
irq_offload(isr_handler, NULL);
}
/**
*
* @brief Initialize nanokernel objects
*
* This routine initializes the nanokernel objects used in this module's tests.
*
* @return TC_PASS
*/
static int nano_init_objects(void)
{
nano_sem_init(&sem_fiber);
nano_sem_init(&reply_timeout);
nano_timer_init(&timer, NULL);
nano_fifo_init(&timeout_order_fifo);
return TC_PASS;
}
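/*
 * For comparison, the equivalent unified-kernel objects are set up with the
 * k_sem/k_fifo/k_timer APIs; a minimal sketch (assumes <zephyr.h>; the object
 * names below are illustrative):
 */
static struct k_sem unified_sem_fiber;
static struct k_fifo unified_timeout_order_fifo;
static struct k_timer unified_timer;

static void unified_init_objects(void)
{
        k_sem_init(&unified_sem_fiber, 0, 1);      /* starts empty, max count 1 */
        k_fifo_init(&unified_timeout_order_fifo);
        k_timer_init(&unified_timer, NULL, NULL);  /* no expiry/stop callbacks */
}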
#ifdef HAS_POWERSAVE_INSTRUCTION
/**
*
* @brief Test the nano_cpu_idle() routine
*
* This tests the nano_cpu_idle() routine. The first thing it does is align to
* a tick boundary. The only source of interrupts while the test is running is
* expected to be the tick clock timer which should wake the CPU. Thus after
* each call to nano_cpu_idle(), the tick count should be one higher.
*
* @return TC_PASS on success
* @return TC_FAIL on failure
*/
static int test_nano_cpu_idle(void)
{
int tick; /* current tick count */
int i; /* loop variable */
/* Align to a "tick boundary". */
tick = sys_tick_get_32();
while (tick == sys_tick_get_32()) {
}
tick = sys_tick_get_32();
for (i = 0; i < 5; i++) { /* Repeat the test five times */
nano_cpu_idle();
tick++;
if (sys_tick_get_32() != tick) {
return TC_FAIL;
}
}
return TC_PASS;
}
#endif
/**
*
* @brief A wrapper for irq_lock()
*
* @return irq_lock() return value
*/
int irq_lock_wrapper(int unused)
{
ARG_UNUSED(unused);
return irq_lock();
}
/**
*
* @brief A wrapper for irq_unlock()
*
* @return N/A
*/
void irq_unlock_wrapper(int imask)
{
irq_unlock(imask);
}
/**
*
* @brief A wrapper for irq_disable()
*
* @return <irq>
*/
int irq_disable_wrapper(int irq)
{
irq_disable(irq);
return irq;
}
/**
*
* @brief A wrapper for irq_enable()
*
* @return N/A
*/
void irq_enable_wrapper(int irq)
{
irq_enable(irq);
}
/**
*
* @brief Test routines for disabling and enabling ints
*
* This routine tests the routines for disabling and enabling interrupts.
* These include irq_lock() and irq_unlock(), irq_disable() and irq_enable().
*
* @return TC_PASS on success
* @return TC_FAIL on failure
*/
static int test_nano_interrupts(disable_int_func disable_int,
enable_int_func enable_int, int irq)
{
unsigned long long count = 0;
unsigned long long i = 0;
int tick;
int tick2;
int imask;
/* Align to a "tick boundary" */
tick = sys_tick_get_32();
while (sys_tick_get_32() == tick) {
}
tick++;
while (sys_tick_get_32() == tick) {
count++;
}
/*
* Inflate <count> so that when we loop later, many ticks should have
* elapsed during the loop. This later loop will not exactly match the
* previous loop, but it should be close enough in structure that when
* combined with the inflated count, many ticks will have passed.
*/
count <<= 4;
imask = disable_int(irq);
tick = sys_tick_get_32();
for (i = 0; i < count; i++) {
sys_tick_get_32();
}
tick2 = sys_tick_get_32();
/*
* Re-enable interrupts before returning (for both success and failure
* cases).
*/
enable_int(imask);
if (tick2 != tick) {
return TC_FAIL;
}
/* Now repeat with interrupts unlocked. */
for (i = 0; i < count; i++) {
sys_tick_get_32();
}
return (tick == sys_tick_get_32()) ? TC_FAIL : TC_PASS;
}
/**
*
* @brief Test some nano context routines from a task
*
* This routine tests the sys_thread_self_get() and
* sys_execution_context_type_get() routines from both a task and an ISR (that
* interrupted a task). Checking those routines with fibers is done
* elsewhere.
*
* @return TC_PASS on success
* @return TC_FAIL on failure
*/
static int test_nano_ctx_task(void)
{
nano_thread_id_t self_thread_id;
TC_PRINT("Testing sys_thread_self_get() from an ISR and task\n");
self_thread_id = sys_thread_self_get();
isr_info.command = THREAD_SELF_CMD;
isr_info.error = 0;
/* isr_info is modified by the isr_handler routine */
isr_handler_trigger();
if (isr_info.error || isr_info.data != (void *)self_thread_id) {
/*
* Either the ISR detected an error, or the ISR context ID
* does not match the interrupted task's thread ID.
*/
return TC_FAIL;
}
TC_PRINT("Testing sys_execution_context_type_get() from an ISR\n");
isr_info.command = EXEC_CTX_TYPE_CMD;
isr_info.error = 0;
isr_handler_trigger();
if (isr_info.error || isr_info.value != NANO_CTX_ISR) {
return TC_FAIL;
}
TC_PRINT("Testing sys_execution_context_type_get() from a task\n");
if (sys_execution_context_type_get() != NANO_CTX_TASK) {
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Test the various context/thread routines from a fiber
*
* This routine tests the sys_thread_self_get() and
* sys_execution_context_type_get() routines from both a fiber and an ISR (that
* interrupted a fiber). Checking those routines with tasks is done
* elsewhere.
*
* This routine may set <fiber_detected_error> to the following values:
* 1 - if fiber ID matches that of the task
* 2 - if thread ID taken during ISR does not match that of the fiber
* 3 - sys_execution_context_type_get() when called from an ISR is not
* NANO_CTX_ISR
* 4 - sys_execution_context_type_get() when called from a fiber is not
* NANO_CTX_FIBER
*
* @return TC_PASS on success
* @return TC_FAIL on failure
*/
static int test_nano_fiber(nano_thread_id_t task_thread_id)
{
nano_thread_id_t self_thread_id;
self_thread_id = sys_thread_self_get();
if (self_thread_id == task_thread_id) {
fiber_detected_error = 1;
return TC_FAIL;
}
isr_info.command = THREAD_SELF_CMD;
isr_info.error = 0;
isr_handler_trigger();
if (isr_info.error || isr_info.data != (void *)self_thread_id) {
/*
* Either the ISR detected an error, or the ISR context ID
* does not match the interrupted fiber's thread ID.
*/
fiber_detected_error = 2;
return TC_FAIL;
}
isr_info.command = EXEC_CTX_TYPE_CMD;
isr_info.error = 0;
isr_handler_trigger();
if (isr_info.error || (isr_info.value != NANO_CTX_ISR)) {
fiber_detected_error = 3;
return TC_FAIL;
}
if (sys_execution_context_type_get() != NANO_CTX_FIBER) {
fiber_detected_error = 4;
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Entry point to the fiber's helper
*
* This routine is the entry point to the fiber's helper fiber. It is used to
* help test the behaviour of the fiber_yield() routine.
*
* @param arg1 unused
* @param arg2 unused
*
* @return N/A
*/
#define fiber_priority_set(fiber, new_prio) task_priority_set(fiber, new_prio)
static void fiber_helper(int arg1, int arg2)
{
nano_thread_id_t self_thread_id;
ARG_UNUSED(arg1);
ARG_UNUSED(arg2);
/*
* This fiber starts off at a higher priority than fiber_entry().
* Thus, it should execute immediately.
*/
fiber_evidence++;
/* Test that helper will yield to a fiber of equal priority */
self_thread_id = sys_thread_self_get();
/* Lower priority to that of fiber_entry() */
fiber_priority_set(self_thread_id, self_thread_id->base.prio + 1);
fiber_yield(); /* Yield to fiber of equal priority */
fiber_evidence++;
/* <fiber_evidence> should now be 2 */
}
/**
*
* @brief Test the fiber_yield() routine
*
* This routine tests the fiber_yield() routine. It starts another fiber
* (thus also testing fiber_fiber_start()) and checks that behaviour of
* fiber_yield() against the cases of there being a higher priority fiber,
* a lower priority fiber, and another fiber of equal priority.
*
* On error, it may set <fiber_detected_error> to one of the following values:
* 10 - helper fiber ran prematurely
* 11 - fiber_yield() did not yield to a higher priority fiber
* 12 - fiber_yield() did not yield to an equal priority fiber
* 13 - fiber_yield() yielded to a lower priority fiber
*
* @return TC_PASS on success
* @return TC_FAIL on failure
*/
static int test_fiber_yield(void)
{
nano_thread_id_t self_thread_id;
/*
* Start a fiber of higher priority. Note that since the new fiber is
* being started from a fiber, it will not automatically switch to the
* fiber as it would if done from a task.
*/
self_thread_id = sys_thread_self_get();
fiber_evidence = 0;
fiber_fiber_start(fiber_stack2, FIBER_STACKSIZE, fiber_helper,
0, 0, FIBER_PRIORITY - 1, 0);
if (fiber_evidence != 0) {
/* ERROR! Helper spawned at higher */
fiber_detected_error = 10; /* priority ran prematurely. */
return TC_FAIL;
}
/*
* Test that the fiber will yield to the higher priority helper.
* <fiber_evidence> is still 0.
*/
fiber_yield();
if (fiber_evidence == 0) {
/* ERROR! Did not yield to higher */
fiber_detected_error = 11; /* priority fiber. */
return TC_FAIL;
}
if (fiber_evidence > 1) {
/* ERROR! Helper did not yield to */
fiber_detected_error = 12; /* equal priority fiber. */
return TC_FAIL;
}
/*
* Raise the priority of fiber_entry(). Calling fiber_yield() should
* not result in switching to the helper.
*/
fiber_priority_set(self_thread_id, self_thread_id->base.prio - 1);
fiber_yield();
if (fiber_evidence != 1) {
/* ERROR! Context switched to a lower */
fiber_detected_error = 13; /* priority fiber! */
return TC_FAIL;
}
/*
* Block on <sem_fiber>. This will allow the helper fiber to complete.
* The main task will wake this fiber.
*/
nano_fiber_sem_take(&sem_fiber, TICKS_UNLIMITED);
return TC_PASS;
}
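/*
 * The same yield scenario maps onto k_thread_create() and k_yield() in the
 * unified kernel; a minimal sketch (assumes <zephyr.h>; names are
 * illustrative, not taken from the ported test):
 */
#define UNIFIED_HELPER_STACK_SIZE 512
K_THREAD_STACK_DEFINE(unified_helper_stack, UNIFIED_HELPER_STACK_SIZE);
static struct k_thread unified_helper_thread;
static volatile int unified_helper_ran;

static void unified_yield_helper(void *p1, void *p2, void *p3)
{
        unified_helper_ran = 1;
        k_yield();              /* give way to threads of equal priority */
}

static void unified_yield_demo(void)
{
        /* Cooperative helper at a higher priority than the caller. */
        k_thread_create(&unified_helper_thread, unified_helper_stack,
                        K_THREAD_STACK_SIZEOF(unified_helper_stack),
                        unified_yield_helper, NULL, NULL, NULL,
                        K_PRIO_COOP(4), 0, K_NO_WAIT);
        k_yield();              /* make sure the helper has run */
}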
/**
* @brief Entry point to fiber started by the task
*
* This routine is the entry point to the fiber started by the task.
*
* @param task_thread_id thread ID of the spawning task
* @param arg1 unused
*
* @return N/A
*/
static void fiber_entry(int task_thread_id, int arg1)
{
int rv;
ARG_UNUSED(arg1);
fiber_evidence++; /* Prove to the task that the fiber has run */
nano_fiber_sem_take(&sem_fiber, TICKS_UNLIMITED);
rv = test_nano_fiber((nano_thread_id_t)task_thread_id);
if (rv != TC_PASS) {
return;
}
/* Allow the task to print any messages before the next test runs */
nano_fiber_sem_take(&sem_fiber, TICKS_UNLIMITED);
rv = test_fiber_yield();
if (rv != TC_PASS) {
return;
}
}
/*
* Timeout tests
*
* Test the fiber_sleep() API, as well as the fiber_delayed_start() ones.
*/
#include <tc_nano_timeout_common.h>
struct timeout_order {
void *link_in_fifo;
int32_t timeout;
int timeout_order;
int q_order;
};
struct timeout_order timeouts[] = {
{0, TIMEOUT(2), 2, 0},
{0, TIMEOUT(4), 4, 1},
{0, TIMEOUT(0), 0, 2},
{0, TIMEOUT(1), 1, 3},
{0, TIMEOUT(5), 5, 4},
{0, TIMEOUT(6), 6, 5},
{0, TIMEOUT(3), 3, 6},
};
#define NUM_TIMEOUT_FIBERS ARRAY_SIZE(timeouts)
static char __stack timeout_stacks[NUM_TIMEOUT_FIBERS][FIBER_STACKSIZE];
/* a fiber busy waits, then reports through a fifo */
static void test_busy_wait(int ticks, int unused)
{
uint32_t usecs;
ARG_UNUSED(unused);
usecs = ticks * sys_clock_us_per_tick;
TC_PRINT("Fiber busy waiting for %d usecs (%d ticks)\n", usecs, ticks);
sys_thread_busy_wait(usecs);
TC_PRINT("Fiber busy waiting completed\n");
/*
* Ideally the test should verify that the correct number of ticks
* have elapsed. However, when running under QEMU, the tick interrupt
* may be processed on a very irregular basis, meaning that far
* fewer than the expected number of ticks may occur for a given
* number of clock cycles vs. what would ordinarily be expected.
*
* Consequently, the best we can do for now to test busy waiting is
* to invoke the API and verify that it returns. (If it takes way
* too long, or never returns, the main test task may be able to
* time out and report an error.)
*/
nano_fiber_sem_give(&reply_timeout);
}
/* a fiber sleeps and times out, then reports through a fifo */
static void test_fiber_sleep(int timeout, int unused)
{
int64_t orig_ticks = sys_tick_get();
ARG_UNUSED(unused);
TC_PRINT(" fiber sleeping for %d ticks\n", timeout);
fiber_sleep(timeout);
TC_PRINT(" fiber back from sleep\n");
if (!is_timeout_in_range(orig_ticks, timeout)) {
return;
}
nano_fiber_sem_give(&reply_timeout);
}
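/*
 * In the unified kernel both of the above collapse onto k_busy_wait() and
 * k_sleep(), with a k_sem for the report-back step; a minimal sketch
 * (assumes <zephyr.h>; names are illustrative, not taken from the ported test):
 */
K_SEM_DEFINE(unified_reply_timeout, 0, 1);

static void unified_busy_wait_thread(void *usecs, void *p2, void *p3)
{
        k_busy_wait((uint32_t)(uintptr_t)usecs);   /* spin without yielding */
        k_sem_give(&unified_reply_timeout);
}

static void unified_sleep_thread(void *ms, void *p2, void *p3)
{
        k_sleep(K_MSEC((int32_t)(intptr_t)ms));    /* yield for at least 'ms' */
        k_sem_give(&unified_reply_timeout);
}
/*
 * The waiting side then uses k_sem_take(&unified_reply_timeout, K_MSEC(n))
 * and treats a -EAGAIN return (timeout) as a failure.
 */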
/* a fiber is started with a delay, then it reports that it ran via a fifo */
static void delayed_fiber(int num, int unused)
{
struct timeout_order *timeout = &timeouts[num];
ARG_UNUSED(unused);
TC_PRINT(" fiber (q order: %d, t/o: %d) is running\n",
timeout->q_order, timeout->timeout);
nano_fiber_fifo_put(&timeout_order_fifo, timeout);
}
static int test_timeout(void)
{
struct timeout_order *data;
int32_t timeout;
int rv;
int i;
/* test sys_thread_busy_wait() */
TC_PRINT("Testing sys_thread_busy_wait()\n");
timeout = 2;
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE, test_busy_wait,
(int)timeout, 0, FIBER_PRIORITY, 0);
rv = nano_task_sem_take(&reply_timeout, timeout + 2);
if (!rv) {
TC_ERROR(" *** task timed out waiting for "
"sys_thread_busy_wait()\n");
return TC_FAIL;
}
/* test fiber_sleep() */
TC_PRINT("Testing fiber_sleep()\n");
timeout = 5;
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE, test_fiber_sleep,
(int)timeout, 0, FIBER_PRIORITY, 0);
rv = nano_task_sem_take(&reply_timeout, timeout + 5);
if (!rv) {
TC_ERROR(" *** task timed out waiting for fiber on "
"fiber_sleep().\n");
return TC_FAIL;
}
/* test fiber_delayed_start() without cancellation */
TC_PRINT("Testing fiber_delayed_start() without cancellation\n");
for (i = 0; i < NUM_TIMEOUT_FIBERS; i++) {
task_fiber_delayed_start(timeout_stacks[i], FIBER_STACKSIZE,
delayed_fiber, i, 0, 5, 0,
timeouts[i].timeout);
}
for (i = 0; i < NUM_TIMEOUT_FIBERS; i++) {
data = nano_task_fifo_get(&timeout_order_fifo,
TIMEOUT_TWO_INTERVALS);
if (!data) {
TC_ERROR(" *** timeout while waiting for delayed fiber\n");
return TC_FAIL;
}
if (data->timeout_order != i) {
TC_ERROR(" *** wrong delayed fiber ran (got %d, "
"expected %d)\n", data->timeout_order, i);
return TC_FAIL;
}
TC_PRINT(" got fiber (q order: %d, t/o: %d) as expected\n",
data->q_order, data->timeout);
}
/* ensure no more fibers fire */
data = nano_task_fifo_get(&timeout_order_fifo, TIMEOUT_TWO_INTERVALS);
if (data) {
TC_ERROR(" *** got something unexpected in the fifo\n");
return TC_FAIL;
}
/* test fiber_delayed_start() with cancellation */
TC_PRINT("Testing fiber_delayed_start() with cancellations\n");
int cancellations[] = {0, 3, 4, 6};
int num_cancellations = ARRAY_SIZE(cancellations);
int next_cancellation = 0;
nano_thread_id_t delayed_fibers[NUM_TIMEOUT_FIBERS];
for (i = 0; i < NUM_TIMEOUT_FIBERS; i++) {
nano_thread_id_t id;
id = task_fiber_delayed_start(timeout_stacks[i],
FIBER_STACKSIZE, delayed_fiber, i,
0, 5, 0, timeouts[i].timeout);
delayed_fibers[i] = id;
}
for (i = 0; i < NUM_TIMEOUT_FIBERS; i++) {
int j;
if (i == cancellations[next_cancellation]) {
TC_PRINT(" cancelling "
"[q order: %d, t/o: %d, t/o order: %d]\n",
timeouts[i].q_order, timeouts[i].timeout, i);
for (j = 0; j < NUM_TIMEOUT_FIBERS; j++) {
if (timeouts[j].timeout_order == i) {
break;
}
}
if (j == NUM_TIMEOUT_FIBERS) {
TC_ERROR(" *** array overrun: all timeout order values should have been between the boundaries\n");
return TC_FAIL;
}
task_fiber_delayed_start_cancel(delayed_fibers[j]);
++next_cancellation;
continue;
}
data = nano_task_fifo_get(&timeout_order_fifo,
TIMEOUT_TEN_INTERVALS);
if (!data) {
TC_ERROR(" *** timeout while waiting for delayed fiber\n");
return TC_FAIL;
}
if (data->timeout_order != i) {
TC_ERROR(" *** wrong delayed fiber ran (got %d, "
"expected %d)\n", data->timeout_order, i);
return TC_FAIL;
}
TC_PRINT(" got (q order: %d, t/o: %d, t/o order %d) "
"as expected\n", data->q_order, data->timeout,
data->timeout_order);
}
if (num_cancellations != next_cancellation) {
TC_ERROR(" *** wrong number of cancellations (expected %d, "
"got %d\n", num_cancellations, next_cancellation);
return TC_FAIL;
}
/* ensure no more fibers fire */
data = nano_task_fifo_get(&timeout_order_fifo, TIMEOUT_TWO_INTERVALS);
if (data) {
TC_ERROR(" *** got something unexpected in the fifo\n");
return TC_FAIL;
}
return TC_PASS;
}
/**
* @brief Entry point to the CPU and thread tests
*
* This is the entry point to the CPU and thread tests.
*
* @return N/A
*/
void main(void)
{
int rv; /* return value from tests */
fiber_detected_error = 0;
fiber_evidence = 0;
TC_START("Test Nanokernel CPU and thread routines");
TC_PRINT("Initializing nanokernel objects\n");
rv = nano_init_objects();
if (rv != TC_PASS) {
goto tests_done;
}
#ifdef HAS_POWERSAVE_INSTRUCTION
TC_PRINT("Testing nano_cpu_idle()\n");
rv = test_nano_cpu_idle();
if (rv != TC_PASS) {
goto tests_done;
}
#endif
TC_PRINT("Testing interrupt locking and unlocking\n");
rv = test_nano_interrupts(irq_lock_wrapper, irq_unlock_wrapper, -1);
if (rv != TC_PASS) {
goto tests_done;
}
#ifdef TICK_IRQ
/* Disable interrupts coming from the timer. */
TC_PRINT("Testing irq_disable() and irq_enable()\n");
rv = test_nano_interrupts(irq_disable_wrapper, irq_enable_wrapper,
TICK_IRQ);
if (rv != TC_PASS) {
goto tests_done;
}
#endif
TC_PRINT("Testing some nano context routines\n");
rv = test_nano_ctx_task();
if (rv != TC_PASS) {
goto tests_done;
}
TC_PRINT("Spawning a fiber from a task\n");
fiber_evidence = 0;
task_fiber_start(fiber_stack1, FIBER_STACKSIZE, fiber_entry,
(int)sys_thread_self_get(), 0, FIBER_PRIORITY, 0);
if (fiber_evidence != 1) {
rv = TC_FAIL;
TC_ERROR(" - fiber did not execute as expected!\n");
goto tests_done;
}
/*
* The fiber ran, now wake it so it can test sys_thread_self_get and
* sys_execution_context_type_get.
*/
TC_PRINT("Fiber to test sys_thread_self_get() and "
"sys_execution_context_type_get\n");
nano_task_sem_give(&sem_fiber);
if (fiber_detected_error != 0) {
rv = TC_FAIL;
TC_ERROR(" - failure detected in fiber; "
"fiber_detected_error = %d\n", fiber_detected_error);
goto tests_done;
}
TC_PRINT("Fiber to test fiber_yield()\n");
nano_task_sem_give(&sem_fiber);
if (fiber_detected_error != 0) {
rv = TC_FAIL;
TC_ERROR(" - failure detected in fiber; "
"fiber_detected_error = %d\n", fiber_detected_error);
goto tests_done;
}
nano_task_sem_give(&sem_fiber);
rv = test_timeout();
if (rv != TC_PASS) {
goto tests_done;
}
tests_done:
TC_END_RESULT(rv);
TC_END_REPORT(rv);
}
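/*
 * Approximate unified-kernel counterparts of the legacy calls exercised above
 * (an approximate mapping for orientation, not a strict one-to-one API
 * contract):
 *
 *   task_fiber_start()/fiber_fiber_start()  ->  k_thread_create()
 *   fiber_delayed_start()                   ->  k_thread_create() with a start delay
 *   fiber_yield()                           ->  k_yield()
 *   fiber_sleep()/task_sleep()              ->  k_sleep()
 *   sys_thread_busy_wait()                  ->  k_busy_wait()
 *   nano_cpu_idle()                         ->  k_cpu_idle()
 *   sys_thread_self_get()                   ->  k_current_get()
 *   sys_execution_context_type_get()        ->  k_is_in_isr() (the task/fiber
 *                                               split disappears in the unified kernel)
 *   nano_sem_*, nano_fifo_*, nano_timer_*   ->  k_sem_*, k_fifo_*, k_timer_*
 *   irq_lock()/irq_unlock(), irq_enable()/
 *   irq_disable(), irq_offload()            ->  unchanged
 */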

View file

@@ -1,2 +0,0 @@
[test]
tags = legacy core bat_commit

View file

@@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@@ -1,51 +0,0 @@
Title: Offload to the Kernel Service Fiber
Description:
This test verifies that the microkernel task_offload_to_fiber() API operates as
expected.
This test has two tasks that increment a counter. The routine that
increments the counter is invoked from _k_server() due to the two tasks
calling task_offload_to_fiber(). The final result of the counter is expected
to be the number of times task_offload_to_fiber() was called to increment
the counter as the incrementing was done in the context of _k_server().
This is done with time slicing both disabled and enabled to ensure that the
result always matches the number of times task_offload_to_fiber() is called.
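In the unified kernel there is no _k_server() fiber; one way to get the same
"run everything in one cooperative context" effect is to funnel the increments
through a single workqueue. A minimal sketch of that pattern, assuming the
k_work and k_sem APIs (names are illustrative, not taken from the ported test):

#include <zephyr.h>

static uint32_t critical_var;
K_SEM_DEFINE(sync_sem, 0, 1);

/* Runs in the system workqueue thread, so all increments are serialized. */
static void critical_rtn(struct k_work *unused)
{
        volatile uint32_t x;

        ARG_UNUSED(unused);
        x = critical_var;
        critical_var = x + 1;
        k_sem_give(&sync_sem);
}

K_WORK_DEFINE(critical_work, critical_rtn);

static void offload_once(void)
{
        k_work_submit(&critical_work);      /* queue to the system workqueue */
        k_sem_take(&sync_sem, K_FOREVER);   /* wait until it has run */
}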
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
tc_start() - Test Microkernel Critical Section API
Obtained expected <criticalVar> value of 10209055
Enabling time slicing ...
Obtained expected <criticalVar> value of 15123296
===================================================================
PASS - RegressionTask.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,2 +0,0 @@
CONFIG_NUM_TASK_PRIORITIES=50
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,11 +0,0 @@
% Application : test microkernel critical section API
% TASK NAME PRIO ENTRY STACK GROUPS
% ===================================================
TASK ALTTASK 12 AlternateTask 1024 [EXE]
TASK REGRESSTASK 12 RegressionTask 1024 [EXE]
% SEMA NAME
% ================
SEMA ALT_SEM
SEMA REGRESS_SEM

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = critical.o

View file

@@ -1,154 +0,0 @@
/* critical.c - test the task_offload_to_fiber() API */
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module tests the task_offload_to_fiber() API.
*/
#include <zephyr.h>
#include <tc_util.h>
#include <sections.h>
#define NUM_TICKS 500
#define TEST_TIMEOUT 2000
static uint32_t criticalVar = 0;
static uint32_t altTaskIterations = 0;
/**
*
* @brief Routine to be called from _k_server()
*
* This routine increments the global variable <criticalVar>.
*
* @return 0
*/
int criticalRtn(void)
{
volatile uint32_t x;
x = criticalVar;
criticalVar = x + 1;
return 0;
}
/**
*
* @brief Common code for invoking task_offload_to_fiber()
*
* @param count number of critical section calls made thus far
*
* @return number of critical section calls made by task
*/
uint32_t criticalLoop(uint32_t count)
{
int32_t ticks;
ticks = sys_tick_get_32();
while (sys_tick_get_32() < ticks + NUM_TICKS) {
task_offload_to_fiber(criticalRtn, &criticalVar);
count++;
}
return count;
}
/**
*
* @brief Alternate task
*
* This routine calls task_offload_to_fiber() many times.
*
* @return N/A
*/
void AlternateTask(void)
{
task_sem_take(ALT_SEM, TICKS_UNLIMITED); /* Wait to be activated */
altTaskIterations = criticalLoop(altTaskIterations);
task_sem_give(REGRESS_SEM);
task_sem_take(ALT_SEM, TICKS_UNLIMITED); /* Wait to be re-activated */
altTaskIterations = criticalLoop(altTaskIterations);
task_sem_give(REGRESS_SEM);
}
/**
*
* @brief Regression task
*
* This routine calls task_offload_to_fiber() many times. It also checks to
* ensure that the number of times it is called matches the global variable
* <criticalVar>.
*
* @return N/A
*/
void RegressionTask(void)
{
uint32_t nCalls = 0;
int status;
TC_START("Test Microkernel Critical Section API\n");
task_sem_give(ALT_SEM); /* Activate AlternateTask() */
nCalls = criticalLoop(nCalls);
/* Wait for AlternateTask() to complete */
status = task_sem_take(REGRESS_SEM, TEST_TIMEOUT);
if (status != RC_OK) {
TC_ERROR("Timed out waiting for REGRESS_SEM\n");
goto errorReturn;
}
if (criticalVar != nCalls + altTaskIterations) {
TC_ERROR("Unexpected value for <criticalVar>. Expected %d, got %d\n",
nCalls + altTaskIterations, criticalVar);
goto errorReturn;
}
TC_PRINT("Obtained expected <criticalVar> value of %u\n", criticalVar);
TC_PRINT("Enabling time slicing ...\n");
sys_scheduler_time_slice_set(1, 10);
task_sem_give(ALT_SEM); /* Re-activate AlternateTask() */
nCalls = criticalLoop(nCalls);
/* Wait for AlternateTask() to finish */
status = task_sem_take(REGRESS_SEM, TEST_TIMEOUT);
if (status != RC_OK) {
TC_ERROR("Timed out waiting for REGRESS_SEM\n");
goto errorReturn;
}
if (criticalVar != nCalls + altTaskIterations) {
TC_ERROR("Unexpected value for <criticalVar>. Expected %d, got %d\n",
nCalls + altTaskIterations, criticalVar);
goto errorReturn;
}
TC_PRINT("Obtained expected <criticalVar> value of %u\n", criticalVar);
TC_END_RESULT(TC_PASS);
TC_END_REPORT(TC_PASS);
return;
errorReturn:
TC_END_RESULT(TC_FAIL);
TC_END_REPORT(TC_FAIL);
}

View file

@@ -1,2 +0,0 @@
[test]
tags = legacy core bat_commit

View file

@@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@@ -1,54 +0,0 @@
Title: Microkernel early sleep functionality test
Description:
This test verifies that task_sleep() can be used during system
initialization, and that once the k_server() starts, a task_sleep() call
lets another task run.
For fibers, it verifies that fiber_sleep() called during system
initialization puts a fiber to sleep for the provided number of ticks,
and that fiber_sleep() called from a fiber running on the fully
functioning microkernel does the same.
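A unified-kernel version can measure sleep time the same way this test does,
with the hardware cycle counter. A minimal sketch of that measurement,
assuming k_cycle_get_32() and CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC (the helper
below is illustrative, not taken from the ported test):

#include <zephyr.h>

/* Sleep for 'ms' milliseconds and return roughly how long the thread really
 * slept, in milliseconds, measured with the hardware cycle counter.
 */
static uint32_t measured_sleep_ms(int32_t ms)
{
        uint32_t start = k_cycle_get_32();

        k_sleep(K_MSEC(ms));

        return (uint32_t)(((uint64_t)(k_cycle_get_32() - start) * 1000U) /
                          CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC);
}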
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
tc_start() - Test early and regular task and fiber sleep functionality
Test fiber_sleep() call during the system initialization
Test task_sleep() call during the system initialization
- At SECONDARY level
- At NANOKERNEL level
- At MICROKERNEL level
- At APPLICATION level
Test task_sleep() call on a running system
Test fiber_sleep() call on a running system
===================================================================
PASS - RegressionTask.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,2 +0,0 @@
CONFIG_NANO_TIMEOUTS=y
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,10 +0,0 @@
% Application : test microkernel early sleep functionality
% TASK NAME PRIO ENTRY STACK GROUPS
% ===================================================
TASK REGRESSTASK 5 RegressionTask 1024 [EXE]
TASK ALTERTASK 10 AlternateTask 1024 [EXE]
% SEMA NAME
% ================
SEMA TEST_FIBER_SEM

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = early_sleep.o

View file

@@ -1,359 +0,0 @@
/*
* Copyright (c) 2016 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
* @file
* @brief Test early sleeping microkernel mechanism
*
* This test verifies that both fiber_sleep() and task_sleep()
* can each be used to put the calling thread to sleep for a specified
* number of ticks during system initialization (before k_server() starts)
* as well as after the microkernel initializes (after k_server() starts).
*
* To ensure both that the nanokernel timeout operates correctly during
* system initialization and that it allows fibers to sleep for a specified
* number of ticks, the test has a fiber invoke fiber_sleep() before the init
* task invokes task_sleep(). The fiber sleep time is less than that of the
* task sleep time so that the fiber will wake before the init task wakes.
*/
#include <zephyr.h>
#include <tc_util.h>
#include <init.h>
#define FIBER_TICKS_TO_SLEEP 40
#define TASK_TICKS_TO_SLEEP 50
/* time that the task was actually sleeping */
static int task_actual_sleep_ticks;
static int task_actual_sleep_nano_ticks;
static int task_actual_sleep_micro_ticks;
static int task_actual_sleep_app_ticks;
/* time that the fiber was actually sleeping */
static volatile int fiber_actual_sleep_ticks;
/*
* Flag is changed by the lower priority task to make sure
* that sleeping did not turn into a tight loop
*/
static bool alternate_task_run;
/* test fiber synchronization semaphore */
static struct nano_sem test_fiber_sem;
/**
*
* @brief Put task to sleep and measure time it really slept
*
* @param ticks_to_sleep number of ticks for a task to sleep
*
* @return number of ticks the task actually slept
*/
int test_task_sleep(int ticks_to_sleep)
{
uint32_t start_time;
uint32_t stop_time;
start_time = sys_cycle_get_32();
task_sleep(ticks_to_sleep);
stop_time = sys_cycle_get_32();
return (stop_time - start_time) / sys_clock_hw_cycles_per_tick;
}
/**
*
* @brief Put fiber to sleep and measure time it really slept
*
* @param ticks_to_sleep number of ticks for a fiber to sleep
*
* @return number of ticks the fiber actually slept
*/
int test_fiber_sleep(int ticks_to_sleep)
{
uint32_t start_time;
uint32_t stop_time;
start_time = sys_cycle_get_32();
fiber_sleep(ticks_to_sleep);
stop_time = sys_cycle_get_32();
return (stop_time - start_time) / sys_clock_hw_cycles_per_tick;
}
/**
*
* @brief Early task sleep test
*
* Note: it will be used to test the early sleep at SECONDARY level too
*
* Call task_sleep() and checks the time sleep actually
* took to make sure that task actually slept
*
* @return 0
*/
static int test_early_task_sleep(struct device *unused)
{
ARG_UNUSED(unused);
task_actual_sleep_ticks = test_task_sleep(TASK_TICKS_TO_SLEEP);
return 0;
}
SYS_INIT(test_early_task_sleep, SECONDARY, CONFIG_KERNEL_INIT_PRIORITY_DEVICE);
/**
*
* @brief Early task sleep test in NANOKERNEL level only
*
* Call task_sleep() and checks the time sleep actually
* took to make sure that task actually slept
*
* @return 0
*/
static int test_early_task_sleep_in_nanokernel_level(struct device *unused)
{
ARG_UNUSED(unused);
task_actual_sleep_nano_ticks = test_task_sleep(TASK_TICKS_TO_SLEEP);
return 0;
}
SYS_INIT(test_early_task_sleep_in_nanokernel_level,
NANOKERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE);
/**
*
* @brief Early task sleep test in MICROKERNEL level only
*
* Call task_sleep() and checks the time sleep actually
* took to make sure that task actually slept
*
* @return 0
*/
static int test_early_task_sleep_in_microkernel_level(struct device *unused)
{
ARG_UNUSED(unused);
task_actual_sleep_micro_ticks = test_task_sleep(TASK_TICKS_TO_SLEEP);
return 0;
}
SYS_INIT(test_early_task_sleep_in_microkernel_level,
MICROKERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE);
/**
*
* @brief Early task sleep test in APPLICATION level only
*
* Call task_sleep() and checks the time sleep actually
* took to make sure that task actually slept
*
* @return 0
*/
static int test_early_task_sleep_in_application_level(struct device *unused)
{
ARG_UNUSED(unused);
task_actual_sleep_app_ticks = test_task_sleep(TASK_TICKS_TO_SLEEP);
return 0;
}
SYS_INIT(test_early_task_sleep_in_application_level,
APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEVICE);
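/*
 * The unified kernel replaces these init levels with PRE_KERNEL_1,
 * PRE_KERNEL_2, POST_KERNEL and APPLICATION (the correspondence to
 * SECONDARY/NANOKERNEL/MICROKERNEL assumed here is only approximate).
 * A minimal sketch of the same early-sleep probe at POST_KERNEL level,
 * assuming the 1.x-era SYS_INIT callback signature:
 */
static uint32_t unified_sleep_cycles;

static int unified_early_sleep_probe(struct device *unused)
{
        ARG_UNUSED(unused);

        uint32_t start = k_cycle_get_32();

        k_sleep(K_MSEC(50));    /* the scheduler is already running here */

        unified_sleep_cycles = k_cycle_get_32() - start;
        return 0;
}
SYS_INIT(unified_early_sleep_probe, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEVICE);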
/**
*
* @brief Fiber function that measures fiber sleep time
*
* @return N/A
*/
static void test_fiber(int ticks_to_sleep, int unused)
{
ARG_UNUSED(unused);
while (1) {
fiber_actual_sleep_ticks = test_fiber_sleep(ticks_to_sleep);
fiber_sem_give(TEST_FIBER_SEM);
nano_sem_take(&test_fiber_sem, TICKS_UNLIMITED);
}
}
#define STACKSIZE 512
char __stack test_fiber_stack[STACKSIZE];
/**
*
* @brief Initialize test fiber data
*
* @return 0
*/
static int test_fiber_start(struct device *unused)
{
ARG_UNUSED(unused);
fiber_actual_sleep_ticks = 0;
nano_sem_init(&test_fiber_sem);
task_fiber_start(&test_fiber_stack[0], STACKSIZE,
(nano_fiber_entry_t) test_fiber,
FIBER_TICKS_TO_SLEEP, 0, 7, 0);
return 0;
}
SYS_INIT(test_fiber_start, SECONDARY, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
/**
*
* @brief Lower priority task to make sure that main task really sleeps
*
*
*
* @return N/A
*/
void AlternateTask(void)
{
alternate_task_run = true;
}
/**
*
* @brief Regression task
*
* Checks the results of the early sleep
*
* @return N/A
*/
void RegressionTask(void)
{
TC_START("Test early and regular task and fiber sleep functionality\n");
alternate_task_run = false;
TC_PRINT("Test fiber_sleep() call during the system initialization\n");
/*
* Make sure that the fiber_sleep() called during the
* initialization has returned.
* fiber_sleep() was invoked during initialization for a shorter
* period than task_sleep(), so it should have returned by now.
*/
if (task_sem_take(TEST_FIBER_SEM, TICKS_NONE) != RC_OK) {
TC_ERROR("fiber_sleep() has not returned while expected\n");
}
/*
* Check that the fiber_sleep() called during the system
* initialization put the fiber to sleep for the specified
* amount of time
*
* On heavily loaded systems QEMU may demonstrate a drift
* of hardware clock ticks relative to the system clock. The test verifies
* that the sleep took no less than the requested amount of time.
* Allow up to 1 tick variance as the test may not have put
* the task to sleep on a tick boundary.
*/
if ((fiber_actual_sleep_ticks + 1) < FIBER_TICKS_TO_SLEEP) {
TC_ERROR("fiber_sleep() time is too small: %d\n",
fiber_actual_sleep_ticks);
goto error_out;
}
/*
* Check that the task_sleep() called during the system
* initialization puts the task to sleep for the specified
* amount of time
*/
TC_PRINT("Test task_sleep() call during the system initialization\n");
TC_PRINT("- At SECONDARY level\n");
if ((task_actual_sleep_ticks + 1) < TASK_TICKS_TO_SLEEP) {
TC_ERROR("task_sleep() time is is too small: %d\n",
task_actual_sleep_ticks);
goto error_out;
}
/*
* Check that the task_sleep() called during the system
* initialization at NANOKERNEL level puts the task to sleep for
* the specified amount of time
*/
TC_PRINT("- At NANOKERNEL level\n");
if ((task_actual_sleep_nano_ticks + 1) < TASK_TICKS_TO_SLEEP) {
TC_ERROR("task_sleep() time is is too small: %d\n",
task_actual_sleep_nano_ticks);
goto error_out;
}
/*
* Check that the task_sleep() called during the system
* initialization at MICROKERNEL level puts the task to sleep for
* the specified amount of time
*/
TC_PRINT("- At MICROKERNEL level\n");
if ((task_actual_sleep_micro_ticks + 1) < TASK_TICKS_TO_SLEEP) {
TC_ERROR("task_sleep() time is is too small: %d\n",
task_actual_sleep_micro_ticks);
goto error_out;
}
/*
* Check that the task_sleep() called during the system
* initialization at APPLICATION level puts the task to sleep for
* the specified amount of time
*/
TC_PRINT("- At APPLICATION level\n");
if ((task_actual_sleep_app_ticks + 1) < TASK_TICKS_TO_SLEEP) {
TC_ERROR("task_sleep() time is is too small: %d\n",
task_actual_sleep_app_ticks);
goto error_out;
}
/*
* Check that the task_sleep() called during the normal
* microkernel work put the task to sleep for the specified
* amount of time
*/
TC_PRINT("Test task_sleep() call on a running system\n");
task_actual_sleep_ticks = test_task_sleep(TASK_TICKS_TO_SLEEP);
if ((task_actual_sleep_ticks + 1) < TASK_TICKS_TO_SLEEP) {
TC_ERROR("task_sleep() time is too small: %d\n",
task_actual_sleep_ticks);
goto error_out;
}
/* check that calling task_sleep() allowed the lower priority task to run */
if (!alternate_task_run) {
TC_ERROR("Lower priority task did not run during task_sleep()\n");
goto error_out;
}
/*
* Check that the fiber_sleep() called during the normal
* microkernel work put the fiber to sleep for the specified
* amount of time
*/
TC_PRINT("Test fiber_sleep() call on a running system\n");
fiber_actual_sleep_ticks = 0;
nano_sem_give(&test_fiber_sem);
/* wait for the test fiber return from the sleep */
task_sem_take(TEST_FIBER_SEM, TICKS_UNLIMITED);
if ((fiber_actual_sleep_ticks + 1) < FIBER_TICKS_TO_SLEEP) {
TC_ERROR("fiber_sleep() time is too small: %d\n",
fiber_actual_sleep_ticks);
goto error_out;
}
TC_END_RESULT(TC_PASS);
TC_END_REPORT(TC_PASS);
return;
error_out:
TC_END_RESULT(TC_FAIL);
TC_END_REPORT(TC_FAIL);
}

View file

@@ -1,2 +0,0 @@
[test]
tags = legacy core bat_commit

View file

@@ -1,5 +0,0 @@
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@@ -1,43 +0,0 @@
Title: Test errno
Description:
A simple application verifies the errno value is per-thread and saved during
context switches.
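A unified-kernel version exercises the same property with k_thread-based
threads. A minimal sketch of the core check, assuming per-thread errno support
and the k_thread/k_sleep APIs (thread names and values are illustrative, not
taken from the ported test):

#include <zephyr.h>
#include <errno.h>

#define ERRNO_STACK_SIZE 512
K_THREAD_STACK_DEFINE(errno_stack, ERRNO_STACK_SIZE);
static struct k_thread errno_thread;
static volatile int thread_errno_ok;

static void errno_thread_fn(void *p1, void *p2, void *p3)
{
        errno = 0xbabef00d;              /* this thread's private errno */
        k_sleep(K_MSEC(50));             /* switch away and back */
        thread_errno_ok = (errno == 0xbabef00d);
}

void errno_check(void)
{
        errno = 0xabad1dea;              /* the caller's private errno */

        k_thread_create(&errno_thread, errno_stack,
                        K_THREAD_STACK_SIZEOF(errno_stack),
                        errno_thread_fn, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(5), 0, K_NO_WAIT);

        k_sleep(K_MSEC(100));            /* let the other thread run and finish */

        /* Both values must have survived the context switches. */
        if (!thread_errno_ok || errno != 0xabad1dea) {
                printk("per-thread errno check failed\n");
        }
}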
--------------------------------------------------------------------------------
Building and Running Project:
This nanokernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
task, errno before starting fibers: abad1dea
fiber 0, errno before sleep: babef00d
fiber 1, errno before sleep: deadbeef
fiber 1, errno after sleep: deadbeef
fiber 0, errno after sleep: babef00d
task, errno after running fibers: abad1dea
===================================================================
PASS - main.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,2 +0,0 @@
CONFIG_NANO_TIMEOUTS=y
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = main.o

View file

@@ -1,82 +0,0 @@
/*
* Copyright (c) 2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr.h>
#include <errno.h>
#include <tc_util.h>
#define N_FIBERS 2
#define STACK_SIZE 384
static __stack char stacks[N_FIBERS][STACK_SIZE];
static int errno_values[N_FIBERS + 1] = {
0xbabef00d,
0xdeadbeef,
0xabad1dea,
};
struct result {
void *q;
int pass;
};
struct result result[N_FIBERS];
struct nano_fifo fifo;
static void errno_fiber(int n, int my_errno)
{
errno = my_errno;
printk("fiber %d, errno before sleep: %x\n", n, errno);
fiber_sleep(3 - n);
if (errno == my_errno) {
result[n].pass = 1;
}
printk("fiber %d, errno after sleep: %x\n", n, errno);
nano_fiber_fifo_put(&fifo, &result[n]);
}
void main(void)
{
int rv = TC_PASS;
nano_fifo_init(&fifo);
errno = errno_values[N_FIBERS];
printk("task, errno before starting fibers: %x\n", errno);
for (int ii = 0; ii < N_FIBERS; ii++) {
result[ii].pass = TC_FAIL;
}
for (int ii = 0; ii < N_FIBERS; ii++) {
task_fiber_start(stacks[ii], STACK_SIZE, errno_fiber,
ii, errno_values[ii], ii + 5, 0);
}
for (int ii = 0; ii < N_FIBERS; ii++) {
struct result *p = nano_task_fifo_get(&fifo, 10);
if (!p || !p->pass) {
rv = TC_FAIL;
}
}
printk("task, errno after running fibers: %x\n", errno);
if (errno != errno_values[N_FIBERS]) {
rv = TC_FAIL;
}
TC_END_RESULT(rv);
TC_END_REPORT(rv);
}

View file

@@ -1,5 +0,0 @@
[test]
tags = legacy core
# Make sure it has enough memory
filter = not ((CONFIG_DEBUG or CONFIG_ASSERT)) and ( CONFIG_SRAM_SIZE >= 32
or CONFIG_DCCM_SIZE >= 32 or CONFIG_RAM_SIZE >= 32)

View file

@@ -1,6 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@@ -1,45 +0,0 @@
Title: Event APIs
Description:
This test verifies that the microkernel event APIs operate as expected.
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
tc_start() - Test Microkernel Events
Microkernel objects initialized
Testing task_event_recv(TICKS_NONE) and task_event_send() ...
Testing task_event_recv(TICKS_UNLIMITED) and task_event_send() ...
Testing task_event_recv(timeout) and task_event_send() ...
Testing isr_event_send() ...
Testing fiber_event_send() ...
Testing task_event_handler_set() ...
===================================================================
PASS - RegressionTask.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,3 +0,0 @@
CONFIG_IRQ_OFFLOAD=y
CONFIG_LEGACY_KERNEL=y
CONFIG_BLUETOOTH=n

View file

@@ -1,15 +0,0 @@
% Application : test microkernel event APIs
% TASK NAME PRIO ENTRY STACK GROUPS
% ==================================================
TASK tStartTask 5 RegressionTask 2048 [EXE]
TASK tAlternate 6 AlternateTask 2048 [EXE]
% EVENT NAME ENTRY
% =========================
EVENT EVENT_ID NULL
EVENT ALT_EVENT NULL
% SEMA NAME
% ==================
SEMA ALTERNATE_SEM

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = events.o test_fiber.o

View file

@@ -1,577 +0,0 @@
/*
* Copyright (c) 2012-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Test microkernel event APIs
*
* This modules tests the following event APIs:
* task_event_handler_set()
* task_event_send()
* isr_event_send()
* task_event_recv()
*/
#include <tc_util.h>
#include <zephyr.h>
#include <arch/cpu.h>
#include <toolchain.h>
#include <irq_offload.h>
#include <util_test_common.h>
typedef struct {
kevent_t event;
} ISR_INFO;
static int evidence = 0;
static ISR_INFO isrInfo;
static int handlerRetVal = 0;
extern void testFiberInit(void);
extern struct nano_sem fiberSem; /* semaphore that allows the test to control the fiber */
extern kevent_t _k_event_list_end[];
/**
*
* @brief ISR handler to signal an event
*
* @return N/A
*/
void isr_event_signal_handler(void *data)
{
ISR_INFO *pInfo = (ISR_INFO *) data;
isr_event_send(pInfo->event);
}
static void _trigger_isrEventSignal(void)
{
irq_offload(isr_event_signal_handler, &isrInfo);
}
/**
*
* @brief Release the test fiber
*
* @return N/A
*/
void releaseTestFiber(void)
{
nano_task_sem_give(&fiberSem);
}
/**
*
* @brief Initialize objects used in this microkernel test suite
*
* @return N/A
*/
void microObjectsInit(void)
{
testFiberInit();
TC_PRINT("Microkernel objects initialized\n");
}
/**
*
* @brief Test the task_event_recv(TICKS_NONE) API
*
* There are two cases to be tested here. The first is for testing for an
* event when there is one. The second is for testing for an event when there
* are none. Note that the "consumption" of the event gets confirmed by the
* order in which the latter two checks are done.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int eventNoWaitTest(void)
{
int rv; /* return value from task_event_xxx() calls */
/* Signal an event */
rv = task_event_send(EVENT_ID);
if (rv != RC_OK) {
TC_ERROR("task_event_send() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
/* No event has been signalled */
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_FAIL) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_FAIL);
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Test the task_event_recv(TICKS_UNLIMITED) API
*
* This test checks task_event_recv(TICKS_UNLIMITED) against the following
* cases:
* 1. There is already an event waiting (signalled from a task and ISR).
* 2. The current task must wait on the event until it is signalled
* from either another task, an ISR or a fiber.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int eventWaitTest(void)
{
int rv; /* return value from task_event_xxx() calls */
int i; /* loop counter */
/*
* task_event_recv() to return immediately as there will already be
* an event by a task.
*/
task_event_send(EVENT_ID);
rv = task_event_recv(EVENT_ID, TICKS_UNLIMITED);
if (rv != RC_OK) {
TC_ERROR("Task: task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
/*
* task_event_recv() to return immediately as there will already be
* an event made ready by an ISR.
*/
isrInfo.event = EVENT_ID;
_trigger_isrEventSignal();
rv = task_event_recv(EVENT_ID, TICKS_UNLIMITED);
if (rv != RC_OK) {
TC_ERROR("ISR: task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
/*
* task_event_recv() to return immediately as there will already be
* an event made ready by a fiber.
*/
releaseTestFiber();
rv = task_event_recv(EVENT_ID, TICKS_UNLIMITED);
if (rv != RC_OK) {
TC_ERROR("Fiber: task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
task_sem_give(ALTERNATE_SEM); /* Wake the AlternateTask */
/*
* On the first pass the event is signalled from a task, on the second
* from an ISR, and on the third from a fiber.
*/
for (i = 0; i < 3; i++) {
rv = task_event_recv(EVENT_ID, TICKS_UNLIMITED);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
}
return TC_PASS;
}
/**
*
* @brief Test the task_event_recv(timeout) API
*
* This test checks task_event_recv(timeout) against the following cases:
* 1. The current task times out while waiting for the event.
* 2. There is already an event waiting (signalled from a task).
* 3. The current task must wait on the event until it is signalled
* from either another task, an ISR or a fiber.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int eventTimeoutTest(void)
{
int rv; /* return value from task_event_xxx() calls */
int i; /* loop counter */
/* Timeout while waiting for the event */
rv = task_event_recv(EVENT_ID, MSEC(100));
if (rv != RC_TIME) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_TIME);
return TC_FAIL;
}
/* Let there be an event already waiting to be tested */
task_event_send(EVENT_ID);
rv = task_event_recv(EVENT_ID, MSEC(100));
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
task_sem_give(ALTERNATE_SEM); /* Wake AlternateTask() */
/*
* On the first pass the event is signalled from a task, on the second
* from an ISR, and on the third from a fiber.
*/
for (i = 0; i < 3; i++) {
rv = task_event_recv(EVENT_ID, MSEC(100));
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
}
return TC_PASS;
}
/**
*
* @brief Test the isr_event_send() API
*
* Although other tests have done some testing using isr_event_send(), none
* of them have demonstrated that signalling an event more than once does not
* "queue" events. That is, should two or more signals of the same event occur
* before it is tested, it can only be tested for successfully once.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int isrEventSignalTest(void)
{
int rv; /* return value from task_event_recv() */
/*
* The single case of an event made ready has already been tested.
* Trigger two ISR event signals. Only one should be detected.
*/
isrInfo.event = EVENT_ID;
_trigger_isrEventSignal();
_trigger_isrEventSignal();
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
/* The second event signal should be "lost" */
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_FAIL) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_FAIL);
return TC_FAIL;
}
return TC_PASS;
}
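/*
 * The unified kernel can model this kind of binary, non-queuing signal with a
 * k_sem whose maximum count is one: giving it twice before anyone takes it
 * still satisfies only a single take. A minimal sketch (assumes <zephyr.h>;
 * illustrative only, not taken from the ported test):
 */
K_SEM_DEFINE(unified_event, 0, 1);      /* starts clear, saturates at one */

static void unified_event_demo(void)
{
        k_sem_give(&unified_event);     /* "signal" the event */
        k_sem_give(&unified_event);     /* the second signal is absorbed */

        /* The first take succeeds immediately ... */
        (void)k_sem_take(&unified_event, K_NO_WAIT);

        /* ... the second finds the event clear again and returns -EBUSY. */
        if (k_sem_take(&unified_event, K_NO_WAIT) == 0) {
                printk("unexpected queued event\n");
        }
}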
/**
*
* @brief Test the fiber_event_send() API
*
* Signalling an event by fiber_event_send() more than once does not "queue"
* events. That is, should two or more signals of the same event occur before
* it is tested, it can only be tested for successfully once.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int fiberEventSignalTest(void)
{
int rv; /* return value from task_event_recv(TICKS_NONE) */
/*
* Trigger two fiber event signals. Only one should be detected.
*/
releaseTestFiber();
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_OK);
return TC_FAIL;
}
/* The second event signal should be "lost" */
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_FAIL) {
TC_ERROR("task_event_recv() returned %d, not %d\n", rv, RC_FAIL);
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Handler to run on EVENT_ID event
*
* @param event signalled event
*
* @return <handlerRetVal>
*/
int eventHandler(int event)
{
ARG_UNUSED(event);
evidence++;
return handlerRetVal; /* 0 if not to wake waiting task; 1 if to wake */
}
/**
*
* @brief Handler to run on ALT_EVENT event
*
* @param event signalled event
*
* @return 1
*/
int altEventHandler(int event)
{
ARG_UNUSED(event);
evidence += 100;
return 1;
}
/**
*
* @brief Test the task_event_handler_set() API
*
* This test checks that the event handler is set up properly when
* task_event_handler_set() is called. It shows that event handlers are tied
* to the specified event and that the return value from the handler affects
* whether the event wakes a task waiting upon that event.
*
* @return TC_PASS on success, TC_FAIL on failure
*/
int eventSignalHandlerTest(void)
{
int rv; /* return value from task_event_xxx() calls */
/*
* NOTE: We cannot test for the validity of an event ID, since
* task_event_handler_set() only checks for valid event IDs via an
* __ASSERT() and only in debug kernels (an __ASSERT() stops the system).
*/
/* Expect this call to task_event_handler_set() to succeed */
rv = task_event_handler_set(EVENT_ID, eventHandler);
if (rv != RC_OK) {
TC_ERROR("task_event_handler_set() returned %d not %d\n",
rv, RC_OK);
return TC_FAIL;
}
/* Enable another handler to show that two handlers can be installed */
rv = task_event_handler_set(ALT_EVENT, altEventHandler);
if (rv != RC_OK) {
TC_ERROR("task_event_handler_set() returned %d not %d\n",
rv, RC_OK);
return TC_FAIL;
}
/*
* The alternate task should signal the event, but the handler will
* return 0 and the waiting task will not be woken up. Thus, it should
* time out and get an RC_TIME return code.
*/
task_sem_give(ALTERNATE_SEM); /* Wake alternate task */
rv = task_event_recv(EVENT_ID, MSEC(100));
if (rv != RC_TIME) {
TC_ERROR("task_event_recv() returned %d not %d\n", rv, RC_TIME);
return TC_FAIL;
}
/*
* The alternate task should signal the event, and the handler will
* return 1 this time, which will wake the waiting task.
*/
task_sem_give(ALTERNATE_SEM); /* Wake alternate task again */
rv = task_event_recv(EVENT_ID, MSEC(100));
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d not %d\n", rv, RC_OK);
return TC_FAIL;
}
if (evidence != 2) {
TC_ERROR("Expected event handler evidence to be %d not %d\n",
2, evidence);
return TC_FAIL;
}
/*
* Signal the alternate event. This demonstrates that two event handlers
* can be simultaneously installed for two different events.
*/
task_event_send(ALT_EVENT);
if (evidence != 102) {
TC_ERROR("Expected event handler evidence to be %d not %d\n",
102, evidence);
return TC_FAIL;
}
/* Uninstall the event handlers */
rv = task_event_handler_set(EVENT_ID, NULL);
if (rv != RC_OK) {
TC_ERROR("task_event_handler_set() returned %d not %d\n",
rv, RC_OK);
return TC_FAIL;
}
rv = task_event_handler_set(ALT_EVENT, NULL);
if (rv != RC_OK) {
TC_ERROR("task_event_handler_set() returned %d not %d\n",
rv, RC_OK);
return TC_FAIL;
}
task_event_send(EVENT_ID);
task_event_send(ALT_EVENT);
if (evidence != 102) {
TC_ERROR("Event handlers did not uninstall\n");
return TC_FAIL;
}
/* Clear out the waiting events */
rv = task_event_recv(EVENT_ID, TICKS_NONE);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d not %d\n", rv, RC_OK);
return TC_FAIL;
}
rv = task_event_recv(ALT_EVENT, TICKS_NONE);
if (rv != RC_OK) {
TC_ERROR("task_event_recv() returned %d not %d\n", rv, RC_OK);
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Alternate task to signal various events to a waiting task
*
* @return N/A
*/
void AlternateTask(void)
{
/* Wait for eventWaitTest() to run. */
task_sem_take(ALTERNATE_SEM, TICKS_UNLIMITED);
task_event_send(EVENT_ID);
releaseTestFiber();
_trigger_isrEventSignal();
/* Wait for eventTimeoutTest() to run. */
task_sem_take(ALTERNATE_SEM, TICKS_UNLIMITED);
task_event_send(EVENT_ID);
releaseTestFiber();
_trigger_isrEventSignal();
/*
* Wait for eventSignalHandlerTest() to run.
*
* When <handlerRetVal> is zero (0), the waiting task will not get woken up
* after the event handler for EVENT_ID runs. When it is one (1), the
* waiting task will get woken up after the event handler for EVENT_ID runs.
*/
task_sem_take(ALTERNATE_SEM, TICKS_UNLIMITED);
handlerRetVal = 0;
task_event_send(EVENT_ID);
task_sem_take(ALTERNATE_SEM, TICKS_UNLIMITED);
handlerRetVal = 1;
task_event_send(EVENT_ID);
}
/**
*
* @brief Main entry point to the test suite
*
* @return N/A
*/
void RegressionTask(void)
{
int tcRC; /* test case return code */
TC_START("Test Microkernel Events\n");
microObjectsInit();
TC_PRINT("Testing task_event_recv(TICKS_NONE) and task_event_send() ...\n");
tcRC = eventNoWaitTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
TC_PRINT("Testing task_event_recv(TICKS_UNLIMITED) and task_event_send() ...\n");
tcRC = eventWaitTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
TC_PRINT("Testing task_event_recv(timeout) and task_event_send() ...\n");
tcRC = eventTimeoutTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
TC_PRINT("Testing isr_event_send() ...\n");
tcRC = isrEventSignalTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
TC_PRINT("Testing fiber_event_send() ...\n");
tcRC = fiberEventSignalTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
TC_PRINT("Testing task_event_handler_set() ...\n");
tcRC = eventSignalHandlerTest();
if (tcRC != TC_PASS) {
goto doneTests;
}
doneTests:
TC_END_RESULT(tcRC);
TC_END_REPORT(tcRC);
}

View file

@ -1,70 +0,0 @@
/* test_fiber.c - test fiber functions */
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module implements the functions for the fiber that tests
event signalling.
*/
#include <zephyr.h>
#define N_TESTS 10 /* number of tests to run */
#define FIBER_PRIORITY 6
#define FIBER_STACK_SIZE 1024
/* exports */
struct nano_sem fiberSem; /* semaphore that allows test control the fiber */
static char __stack fiberStack[FIBER_STACK_SIZE]; /* test fiber stack size */
/**
*
* @brief The test fiber entry function
*
* The fiber waits on a semaphore controlled by the test task.
* It signals the event for the eventWaitTest() function (single and cycle
* tests), for eventTimeoutTest(), and twice for fiberEventSignalTest().
*
* @return N/A
*/
static void testFiberEntry(void)
{
/* signal event for eventWaitTest() */
/* single test */
nano_fiber_sem_take(&fiberSem, TICKS_UNLIMITED);
fiber_event_send(EVENT_ID);
/* test in cycle */
nano_fiber_sem_take(&fiberSem, TICKS_UNLIMITED);
fiber_event_send(EVENT_ID);
/* signal event for eventTimeoutTest() */
nano_fiber_sem_take(&fiberSem, TICKS_UNLIMITED);
fiber_event_send(EVENT_ID);
/*
* Signal the same event twice for fiberEventSignalTest().
* It must detect only one signal.
*/
nano_fiber_sem_take(&fiberSem, TICKS_UNLIMITED);
fiber_event_send(EVENT_ID);
fiber_event_send(EVENT_ID);
}
/**
*
* @brief Initializes variables and starts the test fiber
*
* @return N/A
*/
void testFiberInit(void)
{
nano_sem_init(&fiberSem);
task_fiber_start(fiberStack, FIBER_STACK_SIZE, (nano_fiber_entry_t)testFiberEntry,
0, 0, FIBER_PRIORITY, 0);
}

View file

@ -1,2 +0,0 @@
[test]
tags = legacy core bat_commit

View file

@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@ -1,86 +0,0 @@
Title: FIFO APIs
Description:
This test verifies that the microkernel FIFO APIs operate as expected.
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
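Assuming the chosen board is supported by this test, a different emulated
board can be selected by overriding the BOARD variable used by the Makefile,
for example:

    make BOARD=qemu_cortex_m3 qemu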
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
tc_start() - Test Microkernel FIFO
myData[0] = 1,
myData[1] = 101,
myData[2] = 201,
myData[3] = 301,
myData[4] = 401,
===================================================================
PASS - fillFIFO.
verifyQueueData: i=0, successfully get data 1
verifyQueueData: i=1, successfully get data 101
verifyQueueData: i=2, FIFOQ is empty. No data.
===================================================================
PASS - verifyQueueData.
===================================================================
PASS - fillFIFO.
RegressionTask: About to putWT with data 401
RegressionTask: FIFO Put time out as expected for data 401
verifyQueueData: i=0, successfully get data 1
verifyQueueData: i=1, successfully get data 101
===================================================================
PASS - verifyQueueData.
===================================================================
PASS - fillFIFO.
RegressionTask: 2 element in queue
RegressionTask: Successfully purged queue
RegressionTask: confirm 0 element in queue
===================================================================
RegressionTask: About to GetW data
Starts MicroTestFifoTask
MicroTestFifoTask: Puts element 999
RegressionTask: GetW get back 999
MicroTestFifoTask: FIRegressionTask: GetWT timeout expected
===================================================================
PASS - fillFIFO.
RegressionTask: about to putW data 999
FOPut OK for 999
MicroTestFifoTask: About to purge queue
RegressionTask: PutW ok when queue is purged while waiting
===================================================================
PASS - fillFIFO.
RegressionTask: about to putW data 401
MicroTestFifoTask: Successfully purged queue
MicroTestFifoTask: About to dequeue 1 element
RegressionTask: PutW success for data 401
===================================================================
RegressionTask: Get back data 101
RegressionTask: Get back data 401
RegressionTask: queue is empty. Test Done!
MicroTestFifoTask: task_fifo_get got back correct data 1
===================================================================
PASS - MicroTestFifoTask.
===================================================================
PASS - RegressionTask.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@ -1,4 +0,0 @@
CONFIG_ASSERT=y
CONFIG_ASSERT_LEVEL=2
CONFIG_IRQ_OFFLOAD=y
CONFIG_LEGACY_KERNEL=y

View file

@ -1,15 +0,0 @@
% Application : test microkernel FIFO APIs
% TASK NAME PRIO ENTRY STACK GROUPS
% ====================================================
TASK tStartTask 5 RegressionTask 2048 [EXE]
TASK helperTask 7 MicroTestFifoTask 2048 [EXE]
% FIFO NAME DEPTH WIDTH
% ========================
FIFO FIFOQ 2 4
% SEMA NAME
% =============================
SEMA SEMSIG_MicroTestFifoTask
SEMA SEM_TestDone

View file

@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = fifo.o

View file

@ -1,614 +0,0 @@
/*
* Copyright (c) 2012-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Test microkernel FIFO APIs
*
* This module tests the following FIFO routines:
*
* task_fifo_put
* task_fifo_get
* task_fifo_size_get
* task_fifo_purge
*
* Scenarios tested include:
* - Check number of elements in queue when queue is empty, full or
* while it is being dequeued
* - Verify the data being dequeued are in the correct order
* - Verify the return codes are correct for the APIs
*/
#include <tc_util.h>
#include <stdbool.h>
#include <zephyr.h>
#define MULTIPLIER 100 /* Used to initialize myData */
#define NUM_OF_ELEMENT 5 /* Number of elements in myData array */
#define DEPTH_OF_FIFO_QUEUE 2 /* FIFO queue depth--see prj.mdef */
#define SPECIAL_DATA 999 /* Special number to put in queue */
static int myData[NUM_OF_ELEMENT];
static int tcRC = TC_PASS; /* test case return code */
#ifdef TEST_PRIV_FIFO
DEFINE_FIFO(FIFOQ, 2, 4);
#endif
/**
*
* @brief Initialize data array
*
* This routine initializes the myData array used in the FIFO tests.
*
* @return N/A
*/
void initMyData(void)
{
for (int i = 0; i < NUM_OF_ELEMENT; i++) {
myData[i] = i * MULTIPLIER + 1;
} /* for */
} /* initMyData */
/**
*
* @brief Print data array
*
* This routine prints myData array.
*
* @return N/A
*/
void printMyData(void)
{
for (int i = 0; i < NUM_OF_ELEMENT; i++) {
PRINT_DATA("myData[%d] = %d,\n", i, myData[i]);
} /* for */
} /* printMyData */
/**
*
* @brief Verify return value
*
* This routine verifies current value against expected value
* and returns true if they are the same.
*
* @param expectRetValue expect value
* @param currentRetValue current value
*
* @return true, false
*/
bool verifyRetValue(int expectRetValue, int currentRetValue)
{
return (expectRetValue == currentRetValue);
} /* verifyRetValue */
/**
*
* @brief Initialize microkernel objects
*
* This routine initializes the microkernel objects used in the FIFO tests.
*
* @return N/A
*/
void initMicroObjects(void)
{
initMyData();
printMyData();
} /* initMicroObjects */
/**
*
* @brief Fills up the FIFO queue
*
* This routine fills the FIFO queue with myData array. This assumes the
* queue is empty before we put in elements.
*
* @param queue FIFO queue
* @param numElements Number of elements used to inserted into the queue
*
* @return TC_PASS, TC_FAIL
*
* Also updates tcRC when result is TC_FAIL.
*/
int fillFIFO(kfifo_t queue, int numElements)
{
int result = TC_PASS; /* TC_PASS or TC_FAIL for this function */
int retValue; /* return value from task_fifo_xxx APIs */
for (int i = 0; i < numElements; i++) {
retValue = task_fifo_put(queue, &myData[i], TICKS_NONE);
switch (retValue) {
case RC_OK:
/* TC_PRINT("i=%d, successfully put in data=%d\n", i, myData[i]); */
if (i >= DEPTH_OF_FIFO_QUEUE) {
TC_ERROR("Incorrect return value of RC_OK when i = %d\n", i);
result = TC_FAIL;
goto exitTest3;
}
break;
case RC_FAIL:
/* TC_PRINT("i=%d, FIFOQ is full. Cannot put data=%d\n", i, myData[i]); */
if (i < DEPTH_OF_FIFO_QUEUE) {
TC_ERROR("Incorrect return value of RC_FAIL when i = %d\n", i);
result = TC_FAIL;
goto exitTest3;
}
break;
default:
TC_ERROR("Incorrect return value of %d when i = %d\n", retValue, i);
result = TC_FAIL;
goto exitTest3;
} /* switch */
} /* for */
exitTest3:
if (result == TC_FAIL) {
tcRC = TC_FAIL;
}
TC_END_RESULT(result);
return result;
} /* fillFIFO */
/**
*
* @brief Task to test FIFO queue
*
* This routine is run in three context switches:
* - it puts an element to the FIFO queue
* - it purges the FIFO queue
* - it dequeues an element from the FIFO queue
*
* @return N/A
*/
void MicroTestFifoTask(void)
{
int retValue; /* return value of task_fifo_xxx interface */
int locData = SPECIAL_DATA; /* variable to pass data to and from queue */
/* (1) Wait for semaphore: put element test */
task_sem_take(SEMSIG_MicroTestFifoTask, TICKS_UNLIMITED);
TC_PRINT("Starts %s\n", __func__);
/* Put one element */
TC_PRINT("%s: Puts element %d\n", __func__, locData);
retValue = task_fifo_put(FIFOQ, &locData, TICKS_NONE);
/*
* Execution is switched back to RegressionTask (a higher priority task)
* which is no longer blocked.
*/
if (verifyRetValue(RC_OK, retValue)) {
TC_PRINT("%s: FIFOPut OK for %d\n", __func__, locData);
} else {
TC_ERROR("FIFOPut failed, retValue %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest4;
}
/*
* (2) Wait for semaphore: purge queue test. Purge queue while another
* task is in task_fifo_put(TICKS_UNLIMITED). This is to test the return
* value of the task_fifo_put(TICKS_UNLIMITED) interface.
*/
task_sem_take(SEMSIG_MicroTestFifoTask, TICKS_UNLIMITED);
/*
* RegressionTask is waiting to put data into the FIFO queue, which is
* full. We purge the queue here and the task_fifo_put(TICKS_UNLIMITED)
* interface will terminate the wait and return RC_FAIL.
*/
TC_PRINT("%s: About to purge queue\n", __func__);
retValue = task_fifo_purge(FIFOQ);
/*
* Execution is switched back to RegressionTask (a higher priority task)
* which is no longer blocked.
*/
if (verifyRetValue(RC_OK, retValue)) {
TC_PRINT("%s: Successfully purged queue\n", __func__);
} else {
TC_ERROR("Problem purging queue, %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest4;
}
/* (3) Wait for semaphore: get element test */
task_sem_take(SEMSIG_MicroTestFifoTask, TICKS_UNLIMITED);
TC_PRINT("%s: About to dequeue 1 element\n", __func__);
retValue = task_fifo_get(FIFOQ, &locData, TICKS_NONE);
/*
* Execution is switched back to RegressionTask (a higher priority task)
* which is no longer blocked
*/
if ((retValue != RC_OK) || (locData != myData[0])) {
TC_ERROR("task_fifo_get failed,\n retValue %d OR got data %d while expect %d\n"
, retValue, locData, myData[0]);
tcRC = TC_FAIL;
goto exitTest4;
} else {
TC_PRINT("%s: task_fifo_get got back correct data %d\n", __func__, locData);
}
exitTest4:
TC_END_RESULT(tcRC);
/* Allow RegressionTask to print final result of the test */
task_sem_give(SEM_TestDone);
}
/**
*
* @brief Verifies data in queue is correct
*
* This routine assumes that the queue is full when this function is called.
* It counts the number of elements in the queue, dequeues elements and verifies
* that they are in the right order. The expected dequeue order is: myData[0],
* myData[1].
*
* @param loopCnt number of elements passed to the for loop
*
* @return TC_PASS, TC_FAIL
*
* Also updates tcRC when result is TC_FAIL.
*/
int verifyQueueData(int loopCnt)
{
int result = TC_PASS; /* TC_PASS or TC_FAIL for this function */
int retValue; /* task_fifo_xxx interface return value */
int locData; /* local variable used for passing data */
/*
* Counts elements using the task_fifo_size_get interface. Dequeues elements
* from FIFOQ. Tests for the proper return code when the FIFO queue is empty
* using the task_fifo_get interface.
*/
for (int i = 0; i < loopCnt; i++) {
/* Counts number of elements */
retValue = task_fifo_size_get(FIFOQ);
if (!verifyRetValue(DEPTH_OF_FIFO_QUEUE-i, retValue)) {
TC_ERROR("i=%d, incorrect number of FIFO elements in queue: %d, expect %d\n"
, i, retValue, DEPTH_OF_FIFO_QUEUE-i);
result = TC_FAIL;
goto exitTest2;
} else {
/* TC_PRINT("%s: i=%d, %d elements in queue\n", __func__, i, retValue); */
}
/* Dequeues element */
retValue = task_fifo_get(FIFOQ, &locData, TICKS_NONE);
switch (retValue) {
case RC_OK:
if ((i >= DEPTH_OF_FIFO_QUEUE) || (locData != myData[i])) {
TC_ERROR("RC_OK but got wrong data %d for i=%d\n", locData, i);
result = TC_FAIL;
goto exitTest2;
}
TC_PRINT("%s: i=%d, successfully get data %d\n", __func__, i, locData);
break;
case RC_FAIL:
if (i < DEPTH_OF_FIFO_QUEUE) {
TC_ERROR("RC_FAIL but i is only %d\n", i);
result = TC_FAIL;
goto exitTest2;
}
TC_PRINT("%s: i=%d, FIFOQ is empty. No data.\n", __func__, i);
break;
default:
TC_ERROR("i=%d, incorrect return value %d\n", i, retValue);
result = TC_FAIL;
goto exitTest2;
} /* switch */
} /* for */
exitTest2:
if (result == TC_FAIL) {
tcRC = TC_FAIL;
}
TC_END_RESULT(result);
return result;
} /* verifyQueueData */
/**
*
* @brief Main task to test FIFO queue
*
* This routine initializes data, fills the FIFO queue and verifies the
* data in the queue is in correct order when items are being dequeued.
* It also tests the wait (with and without timeouts) to put data into
* queue when the queue is full. The queue is purged at some point
* and checked to see if the number of elements is correct.
* The get wait interfaces (with and without timeouts) are also tested
* and data verified.
*
* @return N/A
*/
void RegressionTask(void)
{
int retValue; /* task_fifo_xxx interface return value */
int locData; /* local variable used for passing data */
int result; /* result from utility functions */
TC_START("Test Microkernel FIFO");
initMicroObjects();
/*
* Checks number of elements in queue, expect 0. Test task_fifo_size_get
* interface.
*/
retValue = task_fifo_size_get(FIFOQ);
if (!verifyRetValue(0, retValue)) {
TC_ERROR("Incorrect number of FIFO elements in queue: %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/*
* FIFOQ is only two elements deep. Test for proper return code when
* FIFO queue is full. Test task_fifo_put(TICKS_NONE) interface.
*/
result = fillFIFO(FIFOQ, NUM_OF_ELEMENT);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed fillFIFO.\n");
goto exitTest;
}
/*
* Checks number of elements in FIFO queue, should be full. Also verifies
* data is in correct order. Test task_fifo_size_get and task_fifo_get interface.
*/
result = verifyQueueData(DEPTH_OF_FIFO_QUEUE + 1);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed verifyQueueData.\n");
goto exitTest;
}
/*----------------------------------------------------------------------------*/
/* Fill FIFO queue */
result = fillFIFO(FIFOQ, NUM_OF_ELEMENT);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed fillFIFO.\n");
goto exitTest;
}
/*
* Put myData[4] into queue with wait, test task_fifo_put(timeout)
* interface. Queue is full, so this data did not make it into queue.
* Expect return code of RC_TIME.
*/
TC_PRINT("%s: About to putWT with data %d\n", __func__, myData[4]);
retValue = task_fifo_put(FIFOQ, &myData[4], 2); /* wait for 2 ticks */
if (verifyRetValue(RC_TIME, retValue)) {
TC_PRINT("%s: FIFO Put time out as expected for data %d\n"
, __func__, myData[4]);
} else {
TC_ERROR("Failed task_fifo_put for data %d, retValue %d\n",
myData[4], retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/* Queue is full at this stage. Verify data is correct. */
result = verifyQueueData(DEPTH_OF_FIFO_QUEUE);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed verifyQueueData.\n");
goto exitTest;
}
/*----------------------------------------------------------------------------*/
/* Fill FIFO queue. Check number of elements in queue, should be 2. */
result = fillFIFO(FIFOQ, NUM_OF_ELEMENT);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed fillFIFO.\n");
goto exitTest;
}
retValue = task_fifo_size_get(FIFOQ);
if (verifyRetValue(DEPTH_OF_FIFO_QUEUE, retValue)) {
TC_PRINT("%s: %d element in queue\n", __func__, retValue);
} else {
TC_ERROR("Incorrect number of FIFO elements in queue: %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/*
* Purge queue, check number of elements in queue. Test task_fifo_purge
* interface.
*/
retValue = task_fifo_purge(FIFOQ);
if (verifyRetValue(RC_OK, retValue)) {
TC_PRINT("%s: Successfully purged queue\n", __func__);
} else {
TC_ERROR("Problem purging queue, %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/* Count number of elements in queue */
retValue = task_fifo_size_get(FIFOQ);
if (verifyRetValue(0, retValue)) {
TC_PRINT("%s: confirm %d element in queue\n", __func__, retValue);
} else {
TC_ERROR("Incorrect number of FIFO elements in queue: %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest;
}
PRINT_LINE;
/*----------------------------------------------------------------------------*/
/*
* Semaphore to allow MicroTestFifoTask to run, but MicroTestFifoTask is lower
* priority, so it won't run until this current task is blocked
* in task_fifo_get interface later.
*/
task_sem_give(SEMSIG_MicroTestFifoTask);
/*
* Test task_fifo_get interface.
* Expect MicroTestFifoTask to run and insert SPECIAL_DATA into queue.
*/
TC_PRINT("%s: About to GetW data\n", __func__);
retValue = task_fifo_get(FIFOQ, &locData, TICKS_UNLIMITED);
if ((retValue != RC_OK) || (locData != SPECIAL_DATA)) {
TC_ERROR("Failed task_fifo_get interface for data %d, retValue %d\n"
, locData, retValue);
tcRC = TC_FAIL;
goto exitTest;
} else {
TC_PRINT("%s: GetW get back %d\n", __func__, locData);
}
/* MicroTestFifoTask may have modified tcRC */
if (tcRC == TC_FAIL) { /* terminate test */
TC_ERROR("tcRC failed.");
goto exitTest;
}
/*
* Test task_fifo_get(timeout) interface. Try to get more data, but
* there is none before it times out.
*/
retValue = task_fifo_get(FIFOQ, &locData, 2);
if (verifyRetValue(RC_TIME, retValue)) {
TC_PRINT("%s: GetWT timeout expected\n", __func__);
} else {
TC_ERROR("Failed task_fifo_get interface for retValue %d\n", retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/*----------------------------------------------------------------------------*/
/* Fill FIFO queue */
result = fillFIFO(FIFOQ, NUM_OF_ELEMENT);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed fillFIFO.\n");
goto exitTest;
}
/* Semaphore to allow MicroTestFifoTask to run */
task_sem_give(SEMSIG_MicroTestFifoTask);
/* MicroTestFifoTask may have modified tcRC */
if (tcRC == TC_FAIL) { /* terminate test */
TC_ERROR("tcRC failed.");
goto exitTest;
}
/* Queue is full */
locData = SPECIAL_DATA;
TC_PRINT("%s: about to putW data %d\n", __func__, locData);
retValue = task_fifo_put(FIFOQ, &locData, TICKS_UNLIMITED);
/*
* Execution is switched to MicroTestFifoTask, which will purge the queue.
* When the queue is purged while other tasks are waiting to put data into
* the queue, the return value will be RC_FAIL.
*/
if (verifyRetValue(RC_FAIL, retValue)) {
TC_PRINT("%s: PutW ok when queue is purged while waiting\n", __func__);
} else {
TC_ERROR("Failed task_fifo_put interface when queue is purged, retValue %d\n"
, retValue);
tcRC = TC_FAIL;
goto exitTest;
}
/*----------------------------------------------------------------------------*/
/* Fill FIFO queue */
result = fillFIFO(FIFOQ, NUM_OF_ELEMENT);
if (result == TC_FAIL) { /* terminate test */
TC_ERROR("Failed fillFIFO.\n");
goto exitTest;
}
/* Semaphore to allow MicroTestFifoTask to run */
task_sem_give(SEMSIG_MicroTestFifoTask);
/* MicroTestFifoTask may have modified tcRC */
if (tcRC == TC_FAIL) { /* terminate test */
TC_ERROR("tcRC failed.");
goto exitTest;
}
/* Queue is full */
TC_PRINT("%s: about to putW data %d\n", __func__, myData[4]);
retValue = task_fifo_put(FIFOQ, &myData[4], TICKS_UNLIMITED);
/* Execution is switched to MicroTestFifoTask, which will dequeue one element */
if (verifyRetValue(RC_OK, retValue)) {
TC_PRINT("%s: PutW success for data %d\n", __func__, myData[4]);
} else {
TC_ERROR("Failed task_fifo_put interface for data %d, retValue %d\n"
, myData[4], retValue);
tcRC = TC_FAIL;
goto exitTest;
}
PRINT_LINE;
/*----------------------------------------------------------------------------*/
/*
* Dequeue all data to check. Expect data in the queue to be:
* myData[1], myData[4]. myData[0] was dequeued by MicroTestFifoTask.
*/
/* Get first data */
retValue = task_fifo_get(FIFOQ, &locData, TICKS_NONE);
if ((retValue != RC_OK) || (locData != myData[1])) {
TC_ERROR("Get back data %d, retValue %d\n", locData, retValue);
tcRC = TC_FAIL;
goto exitTest;
} else {
TC_PRINT("%s: Get back data %d\n", __func__, locData);
}
/* Get second data */
retValue = task_fifo_get(FIFOQ, &locData, TICKS_NONE);
if ((retValue != RC_OK) || (locData != myData[4])) {
TC_ERROR("Get back data %d, retValue %d\n", locData, retValue);
tcRC = TC_FAIL;
goto exitTest;
} else {
TC_PRINT("%s: Get back data %d\n", __func__, locData);
}
/* Queue should be empty */
retValue = task_fifo_get(FIFOQ, &locData, TICKS_NONE);
if (retValue != RC_FAIL) {
TC_ERROR("%s: incorrect retValue %d\n", __func__, retValue);
tcRC = TC_FAIL;
goto exitTest;
} else {
TC_PRINT("%s: queue is empty. Test Done!\n", __func__);
}
task_sem_take(SEM_TestDone, TICKS_UNLIMITED);
exitTest:
TC_END_RESULT(tcRC);
TC_END_REPORT(tcRC);
} /* RegressionTask */

View file

@ -1,2 +0,0 @@
[test]
tags = legacy core bat_commit

View file

@ -1,4 +0,0 @@
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@ -1,121 +0,0 @@
Title: FIFO APIs
Description:
This test verifies that the nanokernel FIFO APIs operate as expected.
---------------------------------------------------------------------------
Building and Running Project:
This nanokernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
---------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
---------------------------------------------------------------------------
Sample Output:
tc_start() - Test Nanokernel FIFO
Test Task FIFO Put
TASK FIFO Put Order: 001056dc, 00104ed4, 001046c0, 00103e80,
===================================================================
Test Fiber FIFO Get
FIBER FIFO Get: count = 0, ptr is 001056dc
FIBER FIFO Get: count = 1, ptr is 00104ed4
FIBER FIFO Get: count = 2, ptr is 001046c0
FIBER FIFO Get: count = 3, ptr is 00103e80
PASS - fiber1.
===================================================================
Test Fiber FIFO Put
FIBER FIFO Put Order: 00103e80, 001046c0, 00104ed4, 001056dc,
===================================================================
Test Task FIFO Get
TASK FIFO Get: count = 0, ptr is 00103e80
TASK FIFO Get: count = 1, ptr is 001046c0
TASK FIFO Get: count = 2, ptr is 00104ed4
TASK FIFO Get: count = 3, ptr is 001056dc
===================================================================
Test Task FIFO Get Wait Interfaces
TASK FIFO Put to queue2: 001056dc
Test Fiber FIFO Get Wait Interfaces
FIBER FIFO Get from queue2: 001056dc
FIBER FIFO Put to queue1: 00104ed4
TASK FIFO Get from queue1: 00104ed4
TASK FIFO Put to queue2: 001046c0
FIBER FIFO Get from queue2: 001046c0
FIBER FIFO Put to queue1: 00103e80
PASS - testFiberFifoGetW.
===================================================================
Test ISR FIFO (invoked from Fiber)
ISR FIFO Get from queue1: 00103e80
ISR FIFO (running in fiber context) Put Order:
001056dc, 00104ed4, 001046c0, 00103e80,
PASS - testIsrFifoFromFiber.
PASS - fiber2.
PASS - testTaskFifoGetW.
===================================================================
Test ISR FIFO (invoked from Task)
Get from queue1: count = 0, ptr is 001056dc
Get from queue1: count = 1, ptr is 00104ed4
Get from queue1: count = 2, ptr is 001046c0
Get from queue1: count = 3, ptr is 00103e80
Test ISR FIFO (invoked from Task) - put 001056dc and get back 001056dc
PASS - testIsrFifoFromTask.
===================================================================
test nano_task_fifo_get with timeout > 0
nano_task_fifo_get timed out as expected
nano_task_fifo_get got fifo in time, as expected
testing timeouts of 5 fibers on same fifo
got fiber (q order: 2, t/o: 10, fifo 200049c0) as expected
got fiber (q order: 3, t/o: 15, fifo 200049c0) as expected
got fiber (q order: 0, t/o: 20, fifo 200049c0) as expected
got fiber (q order: 4, t/o: 25, fifo 200049c0) as expected
got fiber (q order: 1, t/o: 30, fifo 200049c0) as expected
testing timeouts of 9 fibers on different fifos
got fiber (q order: 0, t/o: 10, fifo 200049cc) as expected
got fiber (q order: 5, t/o: 15, fifo 200049c0) as expected
got fiber (q order: 7, t/o: 20, fifo 200049c0) as expected
got fiber (q order: 1, t/o: 25, fifo 200049c0) as expected
got fiber (q order: 8, t/o: 30, fifo 200049cc) as expected
got fiber (q order: 2, t/o: 35, fifo 200049c0) as expected
got fiber (q order: 6, t/o: 40, fifo 200049c0) as expected
got fiber (q order: 4, t/o: 45, fifo 200049cc) as expected
got fiber (q order: 3, t/o: 50, fifo 200049cc) as expected
testing 5 fibers timing out, but obtaining the data in time
(except the last one, which times out)
got fiber (q order: 0, t/o: 20, fifo 200049c0) as expected
got fiber (q order: 1, t/o: 30, fifo 200049c0) as expected
got fiber (q order: 2, t/o: 10, fifo 200049c0) as expected
got fiber (q order: 3, t/o: 15, fifo 200049c0) as expected
got fiber (q order: 4, t/o: 25, fifo 200049c0) as expected
===================================================================
PASS - test_timeout.
===================================================================
===================================================================
PASS - main.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@ -1,5 +0,0 @@
CONFIG_NANO_TIMEOUTS=y
CONFIG_ASSERT=y
CONFIG_ASSERT_LEVEL=2
CONFIG_IRQ_OFFLOAD=y
CONFIG_LEGACY_KERNEL=y

View file

@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = fifo.o fifo_timeout.o

View file

@ -1,829 +0,0 @@
/*
* Copyright (c) 2012-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
* @file
* @brief Test nanokernel FIFO APIs
*
* This module tests four basic scenarios with the usage of the following FIFO
* routines:
*
* nano_fiber_fifo_get, nano_fiber_fifo_put
* nano_task_fifo_get, nano_task_fifo_put
* nano_isr_fifo_get, nano_isr_fifo_put
*
* Scenario #1
* Task enters items into a queue, starts the fiber and waits for a semaphore.
* Fiber extracts all items from the queue and enters some items back into
* the queue. The fiber gives the semaphore for the task to continue. Once
* control is returned to the task, it extracts all items from the queue.
* Scenario #2
* Task enters an item into queue2, starts a fiber and extracts an item from
* queue1 once the item is there. The fiber will extract an item from queue2
* once the item is there and enter an item into queue1. The flow of control
* goes from task to fiber and so forth.
* Scenario #3
* Tests the ISR interfaces. Function testIsrFifoFromFiber gets an item from
* the fifo queue in ISR context. It then enters four items into the queue
* and finishes execution. Control is returned back to function
* testTaskFifoGetW which also finishes its execution and returns to main.
* Finally function testIsrFifoFromTask is run and it gets all data from
* the queue, then puts one last item into the queue and gets it back. All
* these are run in ISR context.
*
* Scenario #4:
* Timeout scenarios with multiple FIFOs and fibers.
*/
#include <zephyr.h>
#include <tc_util.h>
#include <misc/__assert.h>
#include <misc/util.h>
#include <irq_offload.h>
#include <sections.h>
#include <util_test_common.h>
#define FIBER_STACKSIZE 384
#define NUM_FIFO_ELEMENT 4
#define INVALID_DATA NULL
#define TCERR1(count) TC_ERROR("Didn't get back correct FIFO, count %d\n", count)
#define TCERR2 TC_ERROR("Didn't get back correct FIFO\n")
#define TCERR3 TC_ERROR("The queue should be empty!\n")
struct isr_fifo_info {
struct nano_fifo *fifo_ptr; /* FIFO */
void *data; /* pointer to data to add */
};
char __stack fiberStack1[FIBER_STACKSIZE];
char __stack fiberStack2[FIBER_STACKSIZE];
char __stack fiberStack3[FIBER_STACKSIZE];
struct nano_fifo nanoFifoObj;
struct nano_fifo nanoFifoObj2;
struct nano_sem nanoSemObj1; /* Used to block/wake-up fiber1 */
struct nano_sem nanoSemObj2; /* Used to block/wake-up fiber2 */
struct nano_sem nanoSemObj3; /* Used to block/wake-up fiber3 */
struct nano_sem nanoSemObjTask; /* Used to block/wake-up task */
struct nano_timer timer;
void *timerData[1];
int myFifoData1[4];
int myFifoData2[2];
int myFifoData3[4];
int myFifoData4[2];
void * const pMyFifoData1 = (void *)myFifoData1;
void * const pMyFifoData2 = (void *)myFifoData2;
void * const pMyFifoData3 = (void *)myFifoData3;
void * const pMyFifoData4 = (void *)myFifoData4;
void * const pPutList1[NUM_FIFO_ELEMENT] = {
(void *)myFifoData1,
(void *)myFifoData2,
(void *)myFifoData3,
(void *)myFifoData4
};
void * const pPutList2[NUM_FIFO_ELEMENT] = {
(void *)myFifoData4,
(void *)myFifoData3,
(void *)myFifoData2,
(void *)myFifoData1
};
/* for put_list tests */
struct nano_fifo fifo_list;
struct nano_sem sem_list;
struct packet_list {
void *next; /* reserved: used by the FIFO as the link field */
int n;
};
int retCode = TC_PASS;
static struct isr_fifo_info isrFifoInfo = {&nanoFifoObj, NULL};
void fiber1(void);
void fiber2(void);
void fiber3(void);
void initNanoObjects(void);
void testTaskFifoGetW(void);
extern int test_fifo_timeout(void);
/**
*
* @brief Add an item to a FIFO
*
* This routine is the ISR handler for _trigger_nano_isr_fifo_put(). It adds
* an item to the FIFO in the context of an ISR.
*
* @param parameter pointer to ISR handler parameter
*
* @return N/A
*/
void isr_fifo_put(void *parameter)
{
struct isr_fifo_info *pInfo = (struct isr_fifo_info *) parameter;
nano_isr_fifo_put(pInfo->fifo_ptr, pInfo->data);
}
static void _trigger_nano_isr_fifo_put(void)
{
irq_offload(isr_fifo_put, &isrFifoInfo);
}
/**
*
* @brief Get an item from a FIFO
*
* This routine is the ISR handler for _trigger_nano_isr_fifo_get(). It gets
* an item from the FIFO in the context of an ISR.
*
* @param parameter pointer to ISR handler parameter
*
* @return N/A
*/
void isr_fifo_get(void *parameter)
{
struct isr_fifo_info *pInfo = (struct isr_fifo_info *) parameter;
pInfo->data = nano_isr_fifo_get(pInfo->fifo_ptr, TICKS_NONE);
}
static void _trigger_nano_isr_fifo_get(void)
{
irq_offload(isr_fifo_get, &isrFifoInfo);
}
/**
*
* @brief Entry point for the first fiber
*
* @return N/A
*/
void fiber1(void)
{
void *pData; /* pointer to FIFO object get from the queue */
int count = 0; /* counter */
/* Wait for fiber1 to be activated. */
nano_fiber_sem_take(&nanoSemObj1, TICKS_UNLIMITED);
/* Wait for data to be added to <nanoFifoObj> by task */
pData = nano_fiber_fifo_get(&nanoFifoObj, TICKS_UNLIMITED);
if (pData != pPutList1[0]) {
TC_ERROR("fiber1 (1) - expected %p, got %p\n",
pPutList1[0], pData);
retCode = TC_FAIL;
return;
}
/* Wait for data to be added to <nanoFifoObj2> by fiber3 */
pData = nano_fiber_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
if (pData != pPutList2[0]) {
TC_ERROR("fiber1 (2) - expected %p, got %p\n",
pPutList2[0], pData);
retCode = TC_FAIL;
return;
}
/* Wait for fiber1 to be reactivated */
nano_fiber_sem_take(&nanoSemObj1, TICKS_UNLIMITED);
TC_PRINT("Test Fiber FIFO Get\n\n");
/* Get all FIFOs */
while ((pData = nano_fiber_fifo_get(&nanoFifoObj, TICKS_NONE)) != NULL) {
TC_PRINT("FIBER FIFO Get: count = %d, ptr is %p\n", count, pData);
if ((count >= NUM_FIFO_ELEMENT) || (pData != pPutList1[count])) {
TCERR1(count);
retCode = TC_FAIL;
return;
}
count++;
}
TC_END_RESULT(retCode);
PRINT_LINE;
/*
* Entries in the FIFO queue have to be unique.
* Put data.
*/
TC_PRINT("Test Fiber FIFO Put\n");
TC_PRINT("\nFIBER FIFO Put Order: ");
for (int i = 0; i < NUM_FIFO_ELEMENT; i++) {
nano_fiber_fifo_put(&nanoFifoObj, pPutList2[i]);
TC_PRINT(" %p,", pPutList2[i]);
}
TC_PRINT("\n");
PRINT_LINE;
/* Give semaphore to allow the main task to run */
nano_fiber_sem_give(&nanoSemObjTask);
} /* fiber1 */
/**
*
* @brief Test the nano_fiber_fifo_get(TICKS_UNLIMITED) interface
*
* This function tests the fifo put and get wait interfaces in a fiber.
* It gets data from nanoFifoObj2 queue and puts data to nanoFifoObj queue.
*
* @return N/A
*/
void testFiberFifoGetW(void)
{
void *pGetData; /* pointer to FIFO object get from the queue */
void *pPutData; /* pointer to FIFO object to put to the queue */
TC_PRINT("Test Fiber FIFO Get Wait Interfaces\n\n");
pGetData = nano_fiber_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
TC_PRINT("FIBER FIFO Get from queue2: %p\n", pGetData);
/* Verify results */
if (pGetData != pMyFifoData1) {
retCode = TC_FAIL;
TCERR2;
return;
}
pPutData = pMyFifoData2;
TC_PRINT("FIBER FIFO Put to queue1: %p\n", pPutData);
nano_fiber_fifo_put(&nanoFifoObj, pPutData);
pGetData = nano_fiber_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
TC_PRINT("FIBER FIFO Get from queue2: %p\n", pGetData);
/* Verify results */
if (pGetData != pMyFifoData3) {
retCode = TC_FAIL;
TCERR2;
return;
}
pPutData = pMyFifoData4;
TC_PRINT("FIBER FIFO Put to queue1: %p\n", pPutData);
nano_fiber_fifo_put(&nanoFifoObj, pPutData);
TC_END_RESULT(retCode);
} /* testFiberFifoGetW */
/**
*
* @brief Test ISR FIFO routines (triggered from fiber)
*
* This function tests the fifo put and get interfaces in the ISR context.
* It is invoked from a fiber.
*
* We use nanoFifoObj queue to put and get data.
*
* @return N/A
*/
void testIsrFifoFromFiber(void)
{
void *pGetData; /* pointer to FIFO object get from the queue */
TC_PRINT("Test ISR FIFO (invoked from Fiber)\n\n");
/* This is data pushed by function testFiberFifoGetW */
_trigger_nano_isr_fifo_get();
pGetData = isrFifoInfo.data;
TC_PRINT("ISR FIFO Get from queue1: %p\n", pGetData);
if (isrFifoInfo.data != pMyFifoData4) {
retCode = TC_FAIL;
TCERR2;
return;
}
/* Verify that the queue is empty */
_trigger_nano_isr_fifo_get();
pGetData = isrFifoInfo.data;
if (pGetData != NULL) {
TC_PRINT("Get from queue1: %p\n", pGetData);
retCode = TC_FAIL;
TCERR3;
return;
}
/* Put more item into queue */
TC_PRINT("\nISR FIFO (running in fiber) Put Order:\n");
for (int i = 0; i < NUM_FIFO_ELEMENT; i++) {
isrFifoInfo.data = pPutList1[i];
TC_PRINT(" %p,", pPutList1[i]);
_trigger_nano_isr_fifo_put();
}
TC_PRINT("\n");
TC_END_RESULT(retCode);
} /* testIsrFifoFromFiber */
/**
*
* @brief Test ISR FIFO routines (triggered from task)
*
* This function tests the fifo put and get interfaces in the ISR context.
* It is invoked from a task.
*
* We use nanoFifoObj queue to put and get data.
*
* @return N/A
*/
void testIsrFifoFromTask(void)
{
void *pGetData; /* pointer to FIFO object get from the queue */
void *pPutData; /* pointer to FIFO object put to queue */
int count = 0; /* counter */
TC_PRINT("Test ISR FIFO (invoked from Task)\n\n");
/* This is data pushed by function testIsrFifoFromFiber
* Get all FIFOs
*/
_trigger_nano_isr_fifo_get();
pGetData = isrFifoInfo.data;
while (pGetData != NULL) {
TC_PRINT("Get from queue1: count = %d, ptr is %p\n", count, pGetData);
if ((count >= NUM_FIFO_ELEMENT) || (pGetData != pPutList1[count])) {
TCERR1(count);
retCode = TC_FAIL;
return;
}
/* Get the next element */
_trigger_nano_isr_fifo_get();
pGetData = isrFifoInfo.data;
count++;
} /* while */
/* Put data into queue and get it again */
pPutData = pPutList2[3];
isrFifoInfo.data = pPutData;
_trigger_nano_isr_fifo_put();
isrFifoInfo.data = NULL; /* force data to a new value */
/* Get data from queue */
_trigger_nano_isr_fifo_get();
pGetData = isrFifoInfo.data;
/* Verify data */
if (pGetData != pPutData) {
retCode = TC_FAIL;
TCERR2;
return;
}
TC_PRINT("\nTest ISR FIFO (invoked from Task) - put %p and get back %p\n",
pPutData, pGetData);
TC_END_RESULT(retCode);
} /* testIsrFifoFromTask */
/**
*
* @brief Entry point for the second fiber
*
* @return N/A
*/
void fiber2(void)
{
void *pData; /* pointer to FIFO object from the queue */
/* Wait for fiber2 to be activated */
nano_fiber_sem_take(&nanoSemObj2, TICKS_UNLIMITED);
/* Wait for data to be added to <nanoFifoObj> */
pData = nano_fiber_fifo_get(&nanoFifoObj, TICKS_UNLIMITED);
if (pData != pPutList1[1]) {
TC_ERROR("fiber2 (1) - expected %p, got %p\n",
pPutList1[1], pData);
retCode = TC_FAIL;
return;
}
/* Wait for data to be added to <nanoFifoObj2> by fiber3 */
pData = nano_fiber_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
if (pData != pPutList2[1]) {
TC_ERROR("fiber2 (2) - expected %p, got %p\n",
pPutList2[1], pData);
retCode = TC_FAIL;
return;
}
/* Wait for fiber2 to be reactivated */
nano_fiber_sem_take(&nanoSemObj2, TICKS_UNLIMITED);
/* Fiber #2 has been reactivated by main task */
for (int i = 0; i < 4; i++) {
pData = nano_fiber_fifo_get(&nanoFifoObj, TICKS_UNLIMITED);
if (pData != pPutList1[i]) {
TC_ERROR("fiber2 (3) - iteration %d expected %p, got %p\n",
i, pPutList1[i], pData);
retCode = TC_FAIL;
return;
}
}
nano_fiber_sem_give(&nanoSemObjTask); /* Wake main task */
/* Wait for fiber2 to be reactivated */
nano_fiber_sem_take(&nanoSemObj2, TICKS_UNLIMITED);
testFiberFifoGetW();
PRINT_LINE;
testIsrFifoFromFiber();
TC_END_RESULT(retCode);
} /* fiber2 */
/**
*
* @brief Entry point for the third fiber
*
* @return N/A
*/
void fiber3(void)
{
void *pData;
/* Wait for fiber3 to be activated */
nano_fiber_sem_take(&nanoSemObj3, TICKS_UNLIMITED);
/* Put two items onto <nanoFifoObj2> to unblock fibers #1 and #2. */
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[0]); /* Wake fiber1 */
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[1]); /* Wake fiber2 */
/* Wait for fiber3 to be re-activated */
nano_fiber_sem_take(&nanoSemObj3, TICKS_UNLIMITED);
/* Immediately get the data from <nanoFifoObj2>. */
pData = nano_fiber_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
if (pData != pPutList2[0]) {
retCode = TC_FAIL;
TC_ERROR("fiber3 (1) - got %p from <nanoFifoObj2>, expected %p\n",
pData, pPutList2[0]);
}
/* Put three items onto the FIFO for the task to get */
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[0]);
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[1]);
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[2]);
/* Sleep for 2 seconds */
nano_fiber_timer_start(&timer, SECONDS(2));
nano_fiber_timer_test(&timer, TICKS_UNLIMITED);
/* Put final item onto the FIFO for the task to get */
nano_fiber_fifo_put(&nanoFifoObj2, pPutList2[3]);
/* Wait for fiber3 to be re-activated (not expected to occur) */
nano_fiber_sem_take(&nanoSemObj3, TICKS_UNLIMITED);
}
/**
*
* @brief Test the nano_task_fifo_get(TICKS_UNLIMITED) interface
*
* This is in a task. It puts data to nanoFifoObj2 queue and gets
* data from nanoFifoObj queue.
*
* @return N/A
*/
void testTaskFifoGetW(void)
{
void *pGetData; /* pointer to FIFO object get from the queue */
void *pPutData; /* pointer to FIFO object to put to the queue */
PRINT_LINE;
TC_PRINT("Test Task FIFO Get Wait Interfaces\n\n");
pPutData = pMyFifoData1;
TC_PRINT("TASK FIFO Put to queue2: %p\n", pPutData);
nano_task_fifo_put(&nanoFifoObj2, pPutData);
/* Activate fiber2 */
nano_task_sem_give(&nanoSemObj2);
pGetData = nano_task_fifo_get(&nanoFifoObj, TICKS_UNLIMITED);
TC_PRINT("TASK FIFO Get from queue1: %p\n", pGetData);
/* Verify results */
if (pGetData != pMyFifoData2) {
retCode = TC_FAIL;
TCERR2;
return;
}
pPutData = pMyFifoData3;
TC_PRINT("TASK FIFO Put to queue2: %p\n", pPutData);
nano_task_fifo_put(&nanoFifoObj2, pPutData);
TC_END_RESULT(retCode);
} /* testTaskFifoGetW */
/**
*
* @brief Initialize nanokernel objects
*
* This routine initializes the nanokernel objects used in the FIFO tests.
*
* @return N/A
*/
void initNanoObjects(void)
{
nano_fifo_init(&nanoFifoObj);
nano_fifo_init(&nanoFifoObj2);
nano_fifo_init(&fifo_list);
nano_sem_init(&nanoSemObj1);
nano_sem_init(&nanoSemObj2);
nano_sem_init(&nanoSemObj3);
nano_sem_init(&nanoSemObjTask);
nano_sem_init(&sem_list);
nano_timer_init(&timer, timerData);
} /* initNanoObjects */
/* fifo_put_list */
sys_slist_t list;
struct packet_list packets[8];
char __stack __noinit stacks_list[2][512];
void fiber_list_0(int a, int b)
{
ARG_UNUSED(a);
ARG_UNUSED(b);
struct packet_list *p;
p = nano_fiber_fifo_get(&fifo_list, TICKS_UNLIMITED);
if (p->n != 0) {
retCode = TC_FAIL;
TC_ERROR(" *** %s did not get expected element %d\n",
__func__, 0);
return;
}
printk("%s got element %d, as expected\n", __func__, 0);
p = nano_fiber_fifo_get(&fifo_list, TICKS_UNLIMITED);
if (p->n != 2) {
retCode = TC_FAIL;
TC_ERROR(" *** %s did not get expected element %d\n",
__func__, 2);
return;
}
printk("%s got element %d, as expected\n", __func__, 2);
sys_slist_init(&list);
for (int ii = 3; ii < 8; ii++) {
sys_slist_append(&list, (sys_snode_t *)&packets[ii]);
}
fiber_yield(); /* colleague takes 1 */
nano_fiber_fifo_put_slist(&fifo_list, &list);
fiber_yield(); /* colleague takes 3 */
/* I take the rest */
for (int ii = 4; ii < 8; ii++) {
p = nano_fiber_fifo_get(&fifo_list, SECONDS(1));
if (p->n != ii) {
TC_ERROR(" *** %s did not get expected element %d\n",
__func__, ii);
retCode = TC_FAIL;
return;
}
printk("%s got element %d, as expected\n",
__func__, ii);
}
nano_fiber_sem_give(&sem_list);
}
static void fiber_list_1(int a, int b)
{
ARG_UNUSED(a);
ARG_UNUSED(b);
struct packet_list *p;
p = nano_fiber_fifo_get(&fifo_list, TICKS_UNLIMITED);
if (p->n != 1) {
retCode = TC_FAIL;
TC_ERROR(" *** %s did not get expected element %d\n",
__func__, 1);
return;
}
printk("%s got element %d, as expected\n", __func__, 1);
p = nano_fiber_fifo_get(&fifo_list, TICKS_UNLIMITED);
if (p->n != 3) {
retCode = TC_FAIL;
TC_ERROR(" *** %s did not get expected element %d\n",
__func__, 3);
return;
}
printk("%s got element %d, as expected\n", __func__, 3);
}
static void test_fifo_put_list(void)
{
PRINT_LINE;
task_fiber_start(stacks_list[0], 512, fiber_list_0, 0, 0, 7, 0);
task_fiber_start(stacks_list[1], 512, fiber_list_1, 0, 0, 7, 0);
for (int ii = 0; ii < 8; ii++) {
packets[ii].n = ii;
}
packets[0].next = &packets[1];
packets[1].next = &packets[2];
packets[2].next = NULL;
nano_task_fifo_put_list(&fifo_list, &packets[0], &packets[2]);
nano_task_sem_take(&sem_list, SECONDS(5));
TC_END_RESULT(retCode);
}
/**
*
* @brief Entry point to FIFO tests
*
* This is the entry point to the FIFO tests.
*
* @return N/A
*/
void main(void)
{
void *pData; /* pointer to FIFO object get from the queue */
int count = 0; /* counter */
TC_START("Test Nanokernel FIFO");
/* Initialize the FIFO queues and semaphore */
initNanoObjects();
/* Create and start the three (3) fibers. */
task_fiber_start(&fiberStack1[0], FIBER_STACKSIZE, (nano_fiber_entry_t) fiber1,
0, 0, 7, 0);
task_fiber_start(&fiberStack2[0], FIBER_STACKSIZE, (nano_fiber_entry_t) fiber2,
0, 0, 7, 0);
task_fiber_start(&fiberStack3[0], FIBER_STACKSIZE, (nano_fiber_entry_t) fiber3,
0, 0, 7, 0);
/*
* The three fibers have each blocked on a different semaphore. Giving
* the semaphore nanoSemObjX will unblock fiberX (where X = {1, 2, 3}).
*
* Activate fibers #1 and #2. They will each block on nanoFifoObj.
*/
nano_task_sem_give(&nanoSemObj1);
nano_task_sem_give(&nanoSemObj2);
/* Put two items into <nanoFifoObj> to unblock fibers #1 and #2. */
nano_task_fifo_put(&nanoFifoObj, pPutList1[0]); /* Wake fiber1 */
nano_task_fifo_put(&nanoFifoObj, pPutList1[1]); /* Wake fiber2 */
/* Activate fiber #3 */
nano_task_sem_give(&nanoSemObj3);
/*
* All three fibers should be blocked on their semaphores. Put data into
* <nanoFifoObj2>. Fiber #3 will read it after it is reactivated.
*/
nano_task_fifo_put(&nanoFifoObj2, pPutList2[0]);
nano_task_sem_give(&nanoSemObj3); /* Reactivate fiber #3 */
for (int i = 0; i < 4; i++) {
pData = nano_task_fifo_get(&nanoFifoObj2, TICKS_UNLIMITED);
if (pData != pPutList2[i]) {
TC_ERROR("nano_task_fifo_get() expected %p, got %p\n",
pPutList2[i], pData);
goto exit;
}
}
/* Add items to <nanoFifoObj> for fiber #2 */
for (int i = 0; i < 4; i++) {
nano_task_fifo_put(&nanoFifoObj, pPutList1[i]);
}
nano_task_sem_give(&nanoSemObj2); /* Activate fiber #2 */
/* Wait for fibers to finish */
nano_task_sem_take(&nanoSemObjTask, TICKS_UNLIMITED);
if (retCode == TC_FAIL) {
goto exit;
}
/*
* Entries in the FIFO queue have to be unique.
* Put data to queue.
*/
TC_PRINT("Test Task FIFO Put\n");
TC_PRINT("\nTASK FIFO Put Order: ");
for (int i = 0; i < NUM_FIFO_ELEMENT; i++) {
nano_task_fifo_put(&nanoFifoObj, pPutList1[i]);
TC_PRINT(" %p,", pPutList1[i]);
}
TC_PRINT("\n");
PRINT_LINE;
nano_task_sem_give(&nanoSemObj1); /* Activate fiber1 */
if (retCode == TC_FAIL) {
goto exit;
}
/*
* Wait for fiber1 to complete execution. (Using a semaphore gives
* the fiber the freedom to do blocking-type operations if it wants to.)
*/
nano_task_sem_take(&nanoSemObjTask, TICKS_UNLIMITED);
TC_PRINT("Test Task FIFO Get\n");
/* Get all FIFOs */
while ((pData = nano_task_fifo_get(&nanoFifoObj, TICKS_NONE)) != NULL) {
TC_PRINT("TASK FIFO Get: count = %d, ptr is %p\n", count, pData);
if ((count >= NUM_FIFO_ELEMENT) || (pData != pPutList2[count])) {
TCERR1(count);
retCode = TC_FAIL;
goto exit;
}
count++;
}
/* Test FIFO Get Wait interfaces*/
testTaskFifoGetW();
PRINT_LINE;
testIsrFifoFromTask();
PRINT_LINE;
/* test timeouts */
if (test_fifo_timeout() != TC_PASS) {
retCode = TC_FAIL;
goto exit;
}
PRINT_LINE;
/* test put_list/slist */
test_fifo_put_list();
exit:
TC_END_RESULT(retCode);
TC_END_REPORT(retCode);
}

View file

@ -1,490 +0,0 @@
/*
* Copyright (c) 2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr.h>
#include <tc_util.h>
#include <misc/__assert.h>
#include <misc/util.h>
/* timeout tests
*
* Test the nano_xxx_fifo_get() APIs with timeouts.
*
* First, the task waits with a timeout and times out. Then it waits with a
* timeout, but gets the data in time.
*
* Then, multiple timeout tests are done for the fibers, to test the ordering
* of queueing/dequeueing when timeout occurs, first on one fifo, then on
* multiple fifos.
*
* Finally, multiple fibers pend on one fifo, and they all get the
* data in time, except the last one: this tests that the timeout is
* recomputed correctly when timeouts are aborted.
*/
#include <tc_nano_timeout_common.h>
#define FIBER_PRIORITY 5
#if defined(CONFIG_DEBUG) && defined(CONFIG_ASSERT)
#define FIBER_STACKSIZE 512
#else
#define FIBER_STACKSIZE 384
#endif
struct scratch_fifo_packet {
void *link_in_fifo;
void *data_if_needed;
};
struct reply_packet {
void *link_in_fifo;
int reply;
};
#define NUM_SCRATCH_FIFO_PACKETS 20
struct scratch_fifo_packet scratch_fifo_packets[NUM_SCRATCH_FIFO_PACKETS];
struct nano_fifo scratch_fifo_packets_fifo;
void *get_scratch_packet(void)
{
void *packet = nano_fifo_get(&scratch_fifo_packets_fifo, TICKS_NONE);
__ASSERT_NO_MSG(packet);
return packet;
}
void put_scratch_packet(void *packet)
{
nano_fifo_put(&scratch_fifo_packets_fifo, packet);
}
static struct nano_fifo fifo_timeout[2];
struct nano_fifo timeout_order_fifo;
struct timeout_order_data {
void *link_in_fifo;
struct nano_fifo *fifo;
int32_t timeout;
int timeout_order;
int q_order;
};
struct timeout_order_data timeout_order_data[] = {
{0, &fifo_timeout[0], TIMEOUT(2), 2, 0},
{0, &fifo_timeout[0], TIMEOUT(4), 4, 1},
{0, &fifo_timeout[0], TIMEOUT(0), 0, 2},
{0, &fifo_timeout[0], TIMEOUT(1), 1, 3},
{0, &fifo_timeout[0], TIMEOUT(3), 3, 4},
};
struct timeout_order_data timeout_order_data_mult_fifo[] = {
{0, &fifo_timeout[1], TIMEOUT(0), 0, 0},
{0, &fifo_timeout[0], TIMEOUT(3), 3, 1},
{0, &fifo_timeout[0], TIMEOUT(5), 5, 2},
{0, &fifo_timeout[1], TIMEOUT(8), 8, 3},
{0, &fifo_timeout[1], TIMEOUT(7), 7, 4},
{0, &fifo_timeout[0], TIMEOUT(1), 1, 5},
{0, &fifo_timeout[0], TIMEOUT(6), 6, 6},
{0, &fifo_timeout[0], TIMEOUT(2), 2, 7},
{0, &fifo_timeout[1], TIMEOUT(4), 4, 8},
};
#define TIMEOUT_ORDER_NUM_FIBERS ARRAY_SIZE(timeout_order_data_mult_fifo)
static char __stack timeout_stacks[TIMEOUT_ORDER_NUM_FIBERS][FIBER_STACKSIZE];
/* a fiber sleeps then puts data on the fifo */
static void test_fiber_put_timeout(int fifo, int timeout)
{
fiber_sleep((int32_t)timeout);
nano_fiber_fifo_put((struct nano_fifo *)fifo, get_scratch_packet());
}
/* a fiber pends on a fifo then times out */
static void test_fiber_pend_and_timeout(int data, int unused)
{
struct timeout_order_data *d = (void *)data;
int32_t orig_ticks = sys_tick_get();
void *packet;
ARG_UNUSED(unused);
packet = nano_fiber_fifo_get(d->fifo, d->timeout);
if (packet) {
TC_ERROR(" *** timeout of %d did not time out.\n",
d->timeout);
return;
}
if (!is_timeout_in_range(orig_ticks, d->timeout)) {
return;
}
nano_fiber_fifo_put(&timeout_order_fifo, d);
}
/* the task spins several fibers that pend and timeout on fifos */
static int test_multiple_fibers_pending(struct timeout_order_data *test_data,
int test_data_size)
{
int ii;
for (ii = 0; ii < test_data_size; ii++) {
task_fiber_start(timeout_stacks[ii], FIBER_STACKSIZE,
test_fiber_pend_and_timeout,
(int)&test_data[ii], 0,
FIBER_PRIORITY, 0);
}
for (ii = 0; ii < test_data_size; ii++) {
struct timeout_order_data *data =
nano_task_fifo_get(&timeout_order_fifo, TICKS_UNLIMITED);
if (data->timeout_order == ii) {
TC_PRINT(" got fiber (q order: %d, t/o: %d, fifo %p) as expected\n",
data->q_order, data->timeout, data->fifo);
} else {
TC_ERROR(" *** fiber %d woke up, expected %d\n",
data->timeout_order, ii);
return TC_FAIL;
}
}
return TC_PASS;
}
/* a fiber pends on a fifo with a timeout and gets the data in time */
static void test_fiber_pend_and_get_data(int data, int unused)
{
struct timeout_order_data *d = (void *)data;
void *packet;
ARG_UNUSED(unused);
packet = nano_fiber_fifo_get(d->fifo, d->timeout);
if (!packet) {
TC_PRINT(" *** fiber (q order: %d, t/o: %d, fifo %p) timed out!\n",
d->q_order, d->timeout, d->fifo);
return;
}
put_scratch_packet(packet);
nano_fiber_fifo_put(&timeout_order_fifo, d);
}
/* the task spins fibers that get fifo data in time, except the last one */
static int test_multiple_fibers_get_data(struct timeout_order_data *test_data,
int test_data_size)
{
struct timeout_order_data *data;
int ii;
for (ii = 0; ii < test_data_size-1; ii++) {
task_fiber_start(timeout_stacks[ii], FIBER_STACKSIZE,
test_fiber_pend_and_get_data,
(int)&test_data[ii], 0,
FIBER_PRIORITY, 0);
}
task_fiber_start(timeout_stacks[ii], FIBER_STACKSIZE,
test_fiber_pend_and_timeout,
(int)&test_data[ii], 0,
FIBER_PRIORITY, 0);
for (ii = 0; ii < test_data_size-1; ii++) {
nano_task_fifo_put(test_data[ii].fifo, get_scratch_packet());
data = nano_task_fifo_get(&timeout_order_fifo, TICKS_UNLIMITED);
if (data->q_order == ii) {
TC_PRINT(" got fiber (q order: %d, t/o: %d, fifo %p) as expected\n",
data->q_order, data->timeout, data->fifo);
} else {
TC_ERROR(" *** fiber %d woke up, expected %d\n",
data->q_order, ii);
return TC_FAIL;
}
}
data = nano_task_fifo_get(&timeout_order_fifo, TICKS_UNLIMITED);
if (data->q_order == ii) {
TC_PRINT(" got fiber (q order: %d, t/o: %d, fifo %p) as expected\n",
data->q_order, data->timeout, data->fifo);
} else {
TC_ERROR(" *** fiber %d woke up, expected %d\n",
data->timeout_order, ii);
return TC_FAIL;
}
return TC_PASS;
}
/* try getting data on fifo with special timeout value, return result in fifo */
static void test_fiber_ticks_special_values(int packet, int special_value)
{
struct reply_packet *reply_packet = (void *)packet;
reply_packet->reply =
!!nano_fiber_fifo_get(&fifo_timeout[0], special_value);
nano_fiber_fifo_put(&timeout_order_fifo, reply_packet);
}
/* the timeout test entry point */
int test_fifo_timeout(void)
{
int64_t orig_ticks;
int32_t timeout;
int rv;
void *packet, *scratch_packet;
int test_data_size;
int ii;
struct reply_packet reply_packet;
nano_fifo_init(&fifo_timeout[0]);
nano_fifo_init(&fifo_timeout[1]);
nano_fifo_init(&timeout_order_fifo);
nano_fifo_init(&scratch_fifo_packets_fifo);
for (ii = 0; ii < NUM_SCRATCH_FIFO_PACKETS; ii++) {
scratch_fifo_packets[ii].data_if_needed = (void *)ii;
nano_task_fifo_put(&scratch_fifo_packets_fifo,
&scratch_fifo_packets[ii]);
}
/* test nano_task_fifo_get() with timeout */
timeout = 10;
orig_ticks = sys_tick_get();
packet = nano_task_fifo_get(&fifo_timeout[0], timeout);
if (packet) {
TC_ERROR(" *** timeout of %d did not time out.\n", timeout);
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
if ((sys_tick_get() - orig_ticks) < timeout) {
TC_ERROR(" *** task did not wait long enough on timeout of %d.\n",
timeout);
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
/* test nano_task_fifo_get with timeout of 0 */
packet = nano_task_fifo_get(&fifo_timeout[0], 0);
if (packet) {
TC_ERROR(" *** timeout of 0 did not time out.\n");
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
/* test nano_task_fifo_get with timeout > 0 */
TC_PRINT("test nano_task_fifo_get with timeout > 0\n");
timeout = 3;
orig_ticks = sys_tick_get();
packet = nano_task_fifo_get(&fifo_timeout[0], timeout);
if (packet) {
TC_ERROR(" *** timeout of %d did not time out.\n",
timeout);
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
if (!is_timeout_in_range(orig_ticks, timeout)) {
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
TC_PRINT("nano_task_fifo_get timed out as expected\n");
/*
* test nano_task_fifo_get with a timeout and fiber that puts
* data on the fifo on time
*/
timeout = 5;
orig_ticks = sys_tick_get();
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE,
test_fiber_put_timeout, (int)&fifo_timeout[0],
timeout,
FIBER_PRIORITY, 0);
packet = nano_task_fifo_get(&fifo_timeout[0], (int)(timeout + 5));
if (!packet) {
TC_ERROR(" *** data put in time did not return valid pointer.\n");
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
put_scratch_packet(packet);
if (!is_timeout_in_range(orig_ticks, timeout)) {
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
TC_PRINT("nano_task_fifo_get got fifo in time, as expected\n");
/*
* test nano_task_fifo_get with TICKS_NONE and no data
* available.
*/
if (nano_task_fifo_get(&fifo_timeout[0], TICKS_NONE)) {
TC_ERROR("task with TICKS_NONE got data, but shouldn't have\n");
return TC_FAIL;
}
TC_PRINT("task with TICKS_NONE did not get data, as expected\n");
/*
* test nano_task_fifo_get with TICKS_NONE and some data
* available.
*/
scratch_packet = get_scratch_packet();
nano_task_fifo_put(&fifo_timeout[0], scratch_packet);
if (!nano_task_fifo_get(&fifo_timeout[0], TICKS_NONE)) {
TC_ERROR("task with TICKS_NONE did not get available data\n");
return TC_FAIL;
}
put_scratch_packet(scratch_packet);
TC_PRINT("task with TICKS_NONE got available data, as expected\n");
/*
* test nano_task_fifo_get with TICKS_UNLIMITED and the
* data available.
*/
TC_PRINT("Trying to take available data with TICKS_UNLIMITED:\n"
" will hang the test if it fails.\n");
scratch_packet = get_scratch_packet();
nano_task_fifo_put(&fifo_timeout[0], scratch_packet);
if (!nano_task_fifo_get(&fifo_timeout[0], TICKS_UNLIMITED)) {
TC_ERROR(" *** This will never be hit!!! .\n");
return TC_FAIL;
}
put_scratch_packet(scratch_packet);
TC_PRINT("task with TICKS_UNLIMITED got available data, as expected\n");
/* test fiber with timeout of TICKS_NONE not getting data on empty fifo */
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE,
test_fiber_ticks_special_values,
(int)&reply_packet, TICKS_NONE, FIBER_PRIORITY, 0);
if (!nano_task_fifo_get(&timeout_order_fifo, TICKS_NONE)) {
TC_ERROR(" *** fiber should have run and filled the fifo.\n");
return TC_FAIL;
}
if (reply_packet.reply != 0) {
TC_ERROR(" *** fiber should not have obtained the data.\n");
return TC_FAIL;
}
TC_PRINT("fiber with TICKS_NONE did not get data, as expected\n");
/* test fiber with timeout of TICKS_NONE getting data when available */
scratch_packet = get_scratch_packet();
nano_task_fifo_put(&fifo_timeout[0], scratch_packet);
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE,
test_fiber_ticks_special_values,
(int)&reply_packet, TICKS_NONE, FIBER_PRIORITY, 0);
put_scratch_packet(scratch_packet);
if (!nano_task_fifo_get(&timeout_order_fifo, TICKS_NONE)) {
TC_ERROR(" *** fiber should have run and filled the fifo.\n");
return TC_FAIL;
}
if (reply_packet.reply != 1) {
TC_ERROR(" *** fiber should have obtained the data.\n");
return TC_FAIL;
}
TC_PRINT("fiber with TICKS_NONE got available data, as expected\n");
/* test fiber with TICKS_UNLIMITED timeout getting data when available */
scratch_packet = get_scratch_packet();
nano_task_fifo_put(&fifo_timeout[0], scratch_packet);
task_fiber_start(timeout_stacks[0], FIBER_STACKSIZE,
test_fiber_ticks_special_values,
(int)&reply_packet, TICKS_UNLIMITED, FIBER_PRIORITY, 0);
put_scratch_packet(scratch_packet);
if (!nano_task_fifo_get(&timeout_order_fifo, TICKS_NONE)) {
TC_ERROR(" *** fiber should have run and filled the fifo.\n");
return TC_FAIL;
}
if (reply_packet.reply != 1) {
TC_ERROR(" *** fiber should have obtained the data.\n");
return TC_FAIL;
}
TC_PRINT("fiber with TICKS_UNLIMITED got available data, as expected\n");
/* test multiple fibers pending on the same fifo with different timeouts */
test_data_size = ARRAY_SIZE(timeout_order_data);
TC_PRINT("testing timeouts of %d fibers on same fifo\n", test_data_size);
rv = test_multiple_fibers_pending(timeout_order_data, test_data_size);
if (rv != TC_PASS) {
TC_ERROR(" *** fibers did not time out in the right order\n");
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
/* test mult. fibers pending on different fifos with different timeouts */
test_data_size = ARRAY_SIZE(timeout_order_data_mult_fifo);
TC_PRINT("testing timeouts of %d fibers on different fifos\n",
test_data_size);
rv = test_multiple_fibers_pending(timeout_order_data_mult_fifo,
test_data_size);
if (rv != TC_PASS) {
TC_ERROR(" *** fibers did not time out in the right order\n");
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
/*
* test multiple fibers pending on same fifo with different timeouts, but
* getting the data in time, except the last one.
*/
test_data_size = ARRAY_SIZE(timeout_order_data);
TC_PRINT("testing %d fibers timing out, but obtaining the data in time\n"
"(except the last one, which times out)\n",
test_data_size);
rv = test_multiple_fibers_get_data(timeout_order_data, test_data_size);
if (rv != TC_PASS) {
TC_ERROR(" *** fibers did not get the data in the right order\n");
TC_END_RESULT(TC_FAIL);
return TC_FAIL;
}
TC_END_RESULT(TC_PASS);
return TC_PASS;
}
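/*
* Summary sketch (illustrative only, not part of the original test): a
* hypothetical helper showing the three timeout modes of the legacy
* nanokernel FIFO API that test_fifo_timeout() above exercises from task
* context. The fifo is assumed to already be initialized with
* nano_fifo_init().
*/
static inline void fifo_timeout_modes_sketch(struct nano_fifo *fifo)
{
void *packet;
/* TICKS_NONE: poll; returns NULL immediately if the fifo is empty */
packet = nano_task_fifo_get(fifo, TICKS_NONE);
/* positive timeout: wait up to that many ticks; NULL if nothing arrives */
packet = nano_task_fifo_get(fifo, 10);
/* TICKS_UNLIMITED: block until data is available; never returns NULL */
packet = nano_task_fifo_get(fifo, TICKS_UNLIMITED);
ARG_UNUSED(packet);
}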

View file

@ -1,5 +0,0 @@
[test]
tags = legacy core
# Make sure it has enough memory
filter = not ((CONFIG_DEBUG or CONFIG_ASSERT)) and ( CONFIG_SRAM_SIZE >= 32
or CONFIG_DCCM_SIZE >= 32 or CONFIG_RAM_SIZE >= 32)

View file

@ -1,8 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
SOURCE_DIR := $(ZEPHYR_BASE)/tests/legacy/kernel/test_fifo/microkernel/src
# Enable testing for private microkernel FIFOs
CFLAGS = -DTEST_PRIV_FIFO
include ${ZEPHYR_BASE}/Makefile.test

View file

@ -1,87 +0,0 @@
Title: Private FIFOs
Description:
This test verifies that the microkernel FIFO APIs operate as expected.
It also verifies the mechanism for defining a private FIFO and its usage.
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
tc_start() - Test Microkernel FIFO
myData[0] = 1,
myData[1] = 101,
myData[2] = 201,
myData[3] = 301,
myData[4] = 401,
===================================================================
PASS - fillFIFO.
verifyQueueData: i=0, successfully get data 1
verifyQueueData: i=1, successfully get data 101
verifyQueueData: i=2, FIFOQ is empty. No data.
===================================================================
PASS - verifyQueueData.
===================================================================
PASS - fillFIFO.
RegressionTask: About to putWT with data 401
RegressionTask: FIFO Put time out as expected for data 401
verifyQueueData: i=0, successfully get data 1
verifyQueueData: i=1, successfully get data 101
===================================================================
PASS - verifyQueueData.
===================================================================
PASS - fillFIFO.
RegressionTask: 2 element in queue
RegressionTask: Successfully purged queue
RegressionTask: confirm 0 element in queue
===================================================================
RegressionTask: About to GetW data
Starts MicroTestFifoTask
MicroTestFifoTask: Puts element 999
RegressionTask: GetW get back 999
MicroTestFifoTask: FIRegressionTask: GetWT timeout expected
===================================================================
PASS - fillFIFO.
RegressionTask: about to putW data 999
FOPut OK for 999
MicroTestFifoTask: About to purge queue
RegressionTask: PutW ok when queue is purged while waiting
===================================================================
PASS - fillFIFO.
RegressionTask: about to putW data 401
MicroTestFifoTask: Successfully purged queue
MicroTestFifoTask: About to dequeue 1 element
RegressionTask: PutW success for data 401
===================================================================
RegressionTask: Get back data 101
RegressionTask: Get back data 401
RegressionTask: queue is empty. Test Done!
MicroTestFifoTask: task_fifo_get got back correct data 1
===================================================================
PASS - MicroTestFifoTask.
===================================================================
PASS - RegressionTask.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@ -1,2 +0,0 @@
# empty
CONFIG_LEGACY_KERNEL=y

View file

@ -1,21 +0,0 @@
% Please keep this in sync with ../test_fifo/microkernel/prj.mdef,
% except for the items specified below
% Application : test microkernel FIFO APIs
% TASK NAME PRIO ENTRY STACK GROUPS
% ====================================================
TASK tStartTask 5 RegressionTask 2048 [EXE]
TASK helperTask 7 MicroTestFifoTask 2048 [EXE]
% FIFOQ is defined in source code. So keep this
% commented out.
%
% FIFO NAME DEPTH WIDTH
% ========================
% FIFO FIFOQ 2 4
% SEMA NAME
% =============================
SEMA SEMSIG_MicroTestFifoTask
SEMA SEM_TestDone

View file

@ -1,2 +0,0 @@
[test]
tags = legacy bat_commit core

View file

@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@ -1,65 +0,0 @@
Title: Shared Floating Point Support
Description:
This test uses two tasks to independently compute pi, while two other tasks
load and store floating point registers and check for corruption. This tests
the ability of tasks to safely share floating point hardware resources, even
when switching occurs preemptively. (Note that both sets of tests run
concurrently even though they report their progress at different times.)
The demonstration utilizes microkernel mutex APIs, timers, semaphores,
round robin scheduling, and floating point support.
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Advanced:
Depending upon the board's speed, the frequency of test output may range from
every few seconds to every few minutes. The speed of the test can be controlled
through the variable PI_NUM_ITERATIONS (default 700000). Lowering this value
will increase the test's speed, but at the expense of the calculation's
precision.
make qemu PI_NUM_ITERATIONS=100000
--------------------------------------------------------------------------------
Sample Output:
Floating point sharing tests started
===================================================================
Pi calculation OK after 50 (high) + 1 (low) tests (computed 3.141594)
Load and store OK after 100 (high) + 29695 (low) tests
Pi calculation OK after 150 (high) + 2 (low) tests (computed 3.141594)
Load and store OK after 200 (high) + 47593 (low) tests
Pi calculation OK after 250 (high) + 4 (low) tests (computed 3.141594)
Load and store OK after 300 (high) + 66674 (low) tests
Pi calculation OK after 350 (high) + 5 (low) tests (computed 3.141594)
Load and store OK after 400 (high) + 83352 (low) tests
Pi calculation OK after 450 (high) + 7 (low) tests (computed 3.141594)
Load and store OK after 500 (high) + 92290 (low) tests
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@ -1,6 +0,0 @@
CONFIG_FLOAT=y
CONFIG_SSE=y
CONFIG_FP_SHARING=y
CONFIG_SSE_FP_MATH=y
CONFIG_STDOUT_CONSOLE=y
CONFIG_LEGACY_KERNEL=y

View file

@ -1,8 +0,0 @@
% Application : floating point sharing test
% TASK NAME PRIO ENTRY STACK GROUPS
% =======================================================
TASK load_low 10 load_store_low 2048 [EXE]
TASK load_high 5 load_store_high 2048 [EXE]
TASK pi_low 10 calculate_pi_low 2048 [EXE]
TASK pi_high 5 calculate_pi_high 2048 [EXE]

View file

@ -1,11 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y += main.o pi.o
# Some boards are significantly slower than others resulting in the test
# output being in the range of every few seconds to every few minutes. To
# compensate for this, one can control the number of iterations in the PI
# calculation through PI_NUM_ITERATIONS. Lowering this value will increase
# the speed of the test but it will come at the expense of precision.
PI_NUM_ITERATIONS ?= 700000
ccflags-y += "-DPI_NUM_ITERATIONS=${PI_NUM_ITERATIONS}"
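# For example (mirroring the usage shown in this test's README), the default
# can be overridden on the make command line to trade precision for speed:
#
#   make qemu PI_NUM_ITERATIONS=100000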

View file

@ -1,120 +0,0 @@
/**
* @file
* @brief common definitions for the FPU sharing test application
*/
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOATCONTEXT_H
#define _FLOATCONTEXT_H
/*
* Each architecture must define the following structures (which may be empty):
* 'struct fp_volatile_register_set'
* 'struct fp_non_volatile_register_set'
*
* Each architecture must also define the following macros:
* SIZEOF_FP_VOLATILE_REGISTER_SET
* SIZEOF_FP_NON_VOLATILE_REGISTER_SET
* Those macros are used as sizeof(<an empty structure>) is compiler specific;
* that is, it may evaluate to a non-zero value.
*
* Each architecture shall also have custom implementations of:
* _load_all_float_registers()
* _load_then_store_all_float_registers()
* _store_all_float_registers()
*/
#if defined(CONFIG_ISA_IA32)
#define FP_OPTION 0
/*
* In the future, the struct definitions may need to be refined based on the
* specific IA-32 processor, but for now only the Pentium4 is supported:
*
* 8 x 80 bit floating point registers (ST[0] -> ST[7])
* 8 x 128 bit XMM registers (XMM[0] -> XMM[7])
*
* All these registers are considered volatile across a function invocation.
*/
struct fp_register {
unsigned char reg[10];
};
struct xmm_register {
unsigned char reg[16];
};
struct fp_volatile_register_set {
struct xmm_register xmm[8]; /* XMM[0] -> XMM[7] */
struct fp_register st[8]; /* ST[0] -> ST[7] */
};
struct fp_non_volatile_register_set {
/* No non-volatile floating point registers */
};
#define SIZEOF_FP_VOLATILE_REGISTER_SET sizeof(struct fp_volatile_register_set)
#define SIZEOF_FP_NON_VOLATILE_REGISTER_SET 0
#elif defined(CONFIG_CPU_CORTEX_M4)
#define FP_OPTION 0
/*
* Registers s0..s15 are volatile and do not
* need to be preserved across function calls.
*/
struct fp_volatile_register_set {
float s[16];
};
/*
* Registers s16..s31 are non-volatile and
* need to be preserved across function calls.
*/
struct fp_non_volatile_register_set {
float s[16];
};
#define SIZEOF_FP_VOLATILE_REGISTER_SET \
sizeof(struct fp_volatile_register_set)
#define SIZEOF_FP_NON_VOLATILE_REGISTER_SET \
sizeof(struct fp_non_volatile_register_set)
#else
#error "Architecture must provide the following definitions:\n" \
"\t'struct fp_volatile_registers'\n" \
"\t'struct fp_non_volatile_registers'\n" \
"\t'SIZEOF_FP_VOLATILE_REGISTER_SET'\n" \
"\t'SIZEOF_FP_NON_VOLATILE_REGISTER_SET'\n"
#endif /* CONFIG_ISA_IA32 */
/* the set of ALL floating point registers */
struct fp_register_set {
struct fp_volatile_register_set fp_volatile;
struct fp_non_volatile_register_set fp_non_volatile;
};
#define SIZEOF_FP_REGISTER_SET \
(SIZEOF_FP_VOLATILE_REGISTER_SET + SIZEOF_FP_NON_VOLATILE_REGISTER_SET)
/*
* The following constants define the initial byte value used by the background
* task, and the fiber when loading up the floating point registers.
*/
#define MAIN_FLOAT_REG_CHECK_BYTE (unsigned char)0xe5
#define FIBER_FLOAT_REG_CHECK_BYTE (unsigned char)0xf9
extern int fpu_sharing_error;
#endif /* _FLOATCONTEXT_H */

View file

@ -1,90 +0,0 @@
/**
* @file
* @brief ARM Cortex-M4 GCC specific floating point register macros
*/
/*
* Copyright (c) 2016, Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOAT_REGS_ARM_GCC_H
#define _FLOAT_REGS_ARM_GCC_H
#if !defined(__GNUC__) || !defined(CONFIG_CPU_CORTEX_M4)
#error __FILE__ goes only with Cortex-M4 GCC
#endif
#include <toolchain.h>
#include "float_context.h"
/**
*
* @brief Load all floating point registers
*
* This function loads ALL floating point registers pointed to by @a regs.
* It is expected that a subsequent call to _store_all_float_registers()
* will be issued to dump the floating point registers to memory.
*
* The format/organization of 'struct fp_register_set' is not important; the
* generic C test code (main.c) merely treats the register set as an array
* of bytes.
*
* The only requirement is that the arch specific implementations of
* _load_all_float_registers() and _store_all_float_registers() agree
* on the format.
*
* @return N/A
*/
static inline void _load_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"vldmia %0, {s0-s15};\n\t"
"vldmia %1, {s16-s31};\n\t"
:: "r" (&regs->fp_volatile), "r" (&regs->fp_non_volatile)
);
}
/**
*
* @brief Dump all floating point registers to memory
*
* This function stores ALL floating point registers to the memory buffer
* specified by @a regs. It is expected that a previous invocation of
* _load_all_float_registers() occurred to load all the floating point
* registers from a memory buffer.
*
* @return N/A
*/
static inline void _store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"vstmia %0, {s0-s15};\n\t"
"vstmia %1, {s16-s31};\n\t"
:: "r" (&regs->fp_volatile), "r" (&regs->fp_non_volatile)
: "memory"
);
}
/**
*
* @brief Load then dump all float registers to memory
*
* This function loads ALL floating point registers from the memory buffer
* specified by @a regs, and then stores them back to that buffer.
*
* This routine is called by a high priority thread prior to calling a primitive
* that pends and triggers a co-operative context switch to a low priority
* thread.
*
* @return N/A
*/
static inline void _load_then_store_all_float_registers(struct fp_register_set *regs)
{
_load_all_float_registers(regs);
_store_all_float_registers(regs);
}
#endif /* _FLOAT_REGS_ARM_GCC_H */

View file

@ -1,157 +0,0 @@
/**
* @file
* @brief Intel x86 GCC specific floating point register macros
*/
/*
* Copyright (c) 2015, Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOAT_REGS_X86_GCC_H
#define _FLOAT_REGS_X86_GCC_H
#if !defined(__GNUC__) || !defined(CONFIG_ISA_IA32)
#error __FILE__ goes only with x86 GCC
#endif
#include <toolchain.h>
#include "float_context.h"
/**
*
* @brief Load all floating point registers
*
* This function loads ALL floating point registers pointed to by @a regs.
* It is expected that a subsequent call to _store_all_float_registers()
* will be issued to dump the floating point registers to memory.
*
* The format/organization of 'struct fp_register_set' is not important; the
* generic C test code (main.c) merely treats the register set as an array
* of bytes.
*
* The only requirement is that the arch specific implementations of
* _load_all_float_registers(), _store_all_float_registers() and
* _load_then_store_all_float_registers() agree on the format.
*
* @return N/A
*/
static inline void _load_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu 0(%0), %%xmm0\n\t;"
"movdqu 16(%0), %%xmm1\n\t;"
"movdqu 32(%0), %%xmm2\n\t;"
"movdqu 48(%0), %%xmm3\n\t;"
"movdqu 64(%0), %%xmm4\n\t;"
"movdqu 80(%0), %%xmm5\n\t;"
"movdqu 96(%0), %%xmm6\n\t;"
"movdqu 112(%0), %%xmm7\n\t;"
"fldt 128(%0)\n\t;"
"fldt 138(%0)\n\t;"
"fldt 148(%0)\n\t;"
"fldt 158(%0)\n\t;"
"fldt 168(%0)\n\t;"
"fldt 178(%0)\n\t;"
"fldt 188(%0)\n\t;"
"fldt 198(%0)\n\t;"
:: "r" (regs)
);
}
/**
*
* @brief Load then dump all float registers to memory
*
* This function loads ALL floating point registers from the memory buffer
* specified by @a regs, and then stores them back to that buffer.
*
* This routine is called by a high priority thread prior to calling a primitive
* that pends and triggers a co-operative context switch to a low priority
* thread. Because the kernel doesn't save floating point context for
* co-operative context switches, the x87 FPU register stack must be put back
* in an empty state before the switch occurs in case the next task to perform
* floating point operations was also co-operatively switched out and simply
* inherits the existing x87 FPU state (expecting the stack to be empty).
*
* @return N/A
*/
static inline void _load_then_store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu 0(%0), %%xmm0\n\t;"
"movdqu 16(%0), %%xmm1\n\t;"
"movdqu 32(%0), %%xmm2\n\t;"
"movdqu 48(%0), %%xmm3\n\t;"
"movdqu 64(%0), %%xmm4\n\t;"
"movdqu 80(%0), %%xmm5\n\t;"
"movdqu 96(%0), %%xmm6\n\t;"
"movdqu 112(%0), %%xmm7\n\t;"
"fldt 128(%0)\n\t;"
"fldt 138(%0)\n\t;"
"fldt 148(%0)\n\t;"
"fldt 158(%0)\n\t;"
"fldt 168(%0)\n\t;"
"fldt 178(%0)\n\t;"
"fldt 188(%0)\n\t;"
"fldt 198(%0)\n\t;"
/* pop the x87 FPU registers back to memory */
"fstpt 198(%0)\n\t;"
"fstpt 188(%0)\n\t;"
"fstpt 178(%0)\n\t;"
"fstpt 168(%0)\n\t;"
"fstpt 158(%0)\n\t;"
"fstpt 148(%0)\n\t;"
"fstpt 138(%0)\n\t;"
"fstpt 128(%0)\n\t;"
:: "r" (regs)
);
}
/**
*
* @brief Dump all floating point registers to memory
*
* This function stores ALL floating point registers to the memory buffer
* specified by @a regs. It is expected that a previous invocation of
* _load_all_float_registers() occurred to load all the floating point
* registers from a memory buffer.
*
* @return N/A
*/
static inline void _store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu %%xmm0, 0(%0)\n\t;"
"movdqu %%xmm1, 16(%0)\n\t;"
"movdqu %%xmm2, 32(%0)\n\t;"
"movdqu %%xmm3, 48(%0)\n\t;"
"movdqu %%xmm4, 64(%0)\n\t;"
"movdqu %%xmm5, 80(%0)\n\t;"
"movdqu %%xmm6, 96(%0)\n\t;"
"movdqu %%xmm7, 112(%0)\n\t;"
"fstpt 198(%0)\n\t;"
"fstpt 188(%0)\n\t;"
"fstpt 178(%0)\n\t;"
"fstpt 168(%0)\n\t;"
"fstpt 158(%0)\n\t;"
"fstpt 148(%0)\n\t;"
"fstpt 138(%0)\n\t;"
"fstpt 128(%0)\n\t;"
:: "r" (regs) : "memory"
);
}
#endif /* _FLOAT_REGS_X86_GCC_H */

View file

@ -1,325 +0,0 @@
/* main.c - load/store portion of FPU sharing test */
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module implements the load/store portion of the FPU sharing test. The
microkernel version of this test utilizes a pair of tasks, while the nanokernel
version utilizes a task and a fiber.
The load/store test validates the nanokernel's floating point unit context
save/restore mechanism. This test utilizes a pair of threads of different
priorities that each use the floating point registers. The context
switching that occurs exercises the kernel's ability to properly preserve the
floating point registers. The test also exercises the kernel's ability to
automatically enable floating point support for a task, if supported.
FUTURE IMPROVEMENTS
On architectures where the non-integer capabilities are provided in a hierarchy,
for example on IA-32 the USE_FP and USE_SSE options are provided, this test
should be enhanced to ensure that the architectures' _Swap() routine doesn't
context switch more registers than it needs to (which would represent a
performance issue). For example, on the IA-32, the test should issue
a fiber_fp_disable() from main(), and then indicate that only x87 FPU
registers will be utilized (fiber_fp_enable()). The fiber should continue
to load ALL non-integer registers, but main() should validate that only the
x87 FPU registers are being saved/restored.
*/
#ifndef CONFIG_FLOAT
#error Rebuild with the FLOAT config option enabled
#endif
#ifndef CONFIG_FP_SHARING
#error Rebuild with the FP_SHARING config option enabled
#endif
#if defined(CONFIG_ISA_IA32)
#ifndef CONFIG_SSE
#error Rebuild with the SSE config option enabled
#endif
#endif /* CONFIG_ISA_IA32 */
#include <zephyr.h>
#if defined(CONFIG_ISA_IA32)
#if defined(__GNUC__)
#include <float_regs_x86_gcc.h>
#else
#include <float_regs_x86_other.h>
#endif /* __GNUC__ */
#elif defined(CONFIG_CPU_CORTEX_M4)
#if defined(__GNUC__)
#include <float_regs_arm_gcc.h>
#else
#include <float_regs_arm_other.h>
#endif /* __GNUC__ */
#endif
#include <arch/cpu.h>
#include <tc_util.h>
#include "float_context.h"
#include <stddef.h>
#include <string.h>
#define MAX_TESTS 500
/* space for float register load/store area used by low priority task */
static struct fp_register_set float_reg_set_load;
static struct fp_register_set float_reg_set_store;
/* space for float register load/store area used by high priority thread */
static struct fp_register_set float_reg_set;
/* flag indicating that an error has occurred */
int fpu_sharing_error;
/*
* Test counters are "volatile" because GCC may not update them properly
* otherwise. (See description of pi calculation test for more details.)
*/
static volatile unsigned int load_store_low_count = 0;
static volatile unsigned int load_store_high_count = 0;
/**
*
* @brief Low priority FPU load/store thread
*
* @return N/A
*/
void load_store_low(void)
{
unsigned int i;
unsigned char init_byte;
unsigned char *store_ptr = (unsigned char *)&float_reg_set_store;
unsigned char *load_ptr = (unsigned char *)&float_reg_set_load;
volatile char volatile_stack_var = 0;
PRINT_DATA("Floating point sharing tests started\n");
PRINT_LINE;
/*
* For microkernel builds, preemption tasks are specified in the .mdef
* file.
*
* Enable round robin scheduling to allow both the low priority pi
* computation and load/store tasks to execute. The high priority pi
* computation and load/store tasks will preempt the low priority tasks
* periodically.
*/
sys_scheduler_time_slice_set(1, 10);
/*
* Initialize floating point load buffer to known values;
* these values must be different than the value used in other threads.
*/
init_byte = MAIN_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
load_ptr[i] = init_byte++;
}
/* Keep cranking forever, or until an error is detected. */
for (load_store_low_count = 0; ; load_store_low_count++) {
/*
* Clear store buffer to erase all traces of any previous
* floating point values that have been saved.
*/
memset(&float_reg_set_store, 0, SIZEOF_FP_REGISTER_SET);
/*
* Utilize an architecture specific function to load all the
* floating point registers with known values.
*/
_load_all_float_registers(&float_reg_set_load);
/*
* Waste some cycles to give the high priority load/store
* thread an opportunity to run when the low priority thread is
* using the floating point registers.
*
* IMPORTANT: This logic requires that sys_tick_get_32() not
* perform any floating point operations!
*/
while ((sys_tick_get_32() % 5) != 0) {
/*
* Use a volatile variable to prevent compiler
* optimizing out the spin loop.
*/
++volatile_stack_var;
}
/*
* Utilize an architecture specific function to dump the
* contents of all floating point registers to memory.
*/
_store_all_float_registers(&float_reg_set_store);
/*
* Compare each byte of buffer to ensure the expected value is
* present, indicating that the floating point registers weren't
* impacted by the operation of the high priority thread(s).
*
* Display error message and terminate if discrepancies are
* detected.
*/
init_byte = MAIN_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
if (store_ptr[i] != init_byte) {
TC_ERROR("load_store_low found 0x%x instead of 0x%x @ offset 0x%x\n",
store_ptr[i],
init_byte, i);
TC_ERROR("Discrepancy found during iteration %d\n",
load_store_low_count);
fpu_sharing_error = 1;
}
init_byte++;
}
/*
* Terminate if a test error has been reported.
*/
if (fpu_sharing_error) {
TC_END_RESULT(TC_FAIL);
TC_END_REPORT(TC_FAIL);
return;
}
#if defined(CONFIG_ISA_IA32)
/*
* After every 1000 iterations (arbitrarily chosen), explicitly
* disable floating point operations for the task. The
* subsequent execution of _load_all_float_registers() will result
* in an exception to automatically re-enable floating point
* support for the task.
*
* The purpose of this part of the test is to exercise the
* task_float_disable() API, and to also continue exercising
* the (exception-based) floating point enabling mechanism.
*/
if ((load_store_low_count % 1000) == 0) {
task_float_disable(sys_thread_self_get());
}
#elif defined(CONFIG_CPU_CORTEX_M4)
/*
* The routine task_float_disable() allows for thread-level
* granularity for disabling floating point. Furthermore, it
* is useful for testing on-the-fly enabling of floating point for a
* thread. Neither of these capabilities is currently supported
* for ARM.
*/
#endif
}
}
/**
*
* @brief High priority FPU load/store thread
*
* @return N/A
*/
void load_store_high(void)
{
unsigned int i;
unsigned char init_byte;
unsigned char *reg_set_ptr = (unsigned char *)&float_reg_set;
/* test until the specified time limit, or until an error is detected */
while (1) {
/*
* Initialize the float_reg_set structure by treating it as
* a simple array of bytes (the arrangement and actual number
* of registers is not important for this generic C code). The
* structure is initialized by using the byte value specified
* by the constant FIBER_FLOAT_REG_CHECK_BYTE, and then
* incrementing the value for each successive location in the
* float_reg_set structure.
*
* The initial byte value, and thus the contents of the entire
* float_reg_set structure, must be different for each
* thread to effectively test the nanokernel's ability to
* properly save/restore the floating point values during a
* context switch.
*/
init_byte = FIBER_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
reg_set_ptr[i] = init_byte++;
}
/*
* Utilize an architecture specific function to load all the
* floating point registers with the contents of the
* float_reg_set structure.
*
* The goal of loading all floating point registers with
* values that differ from the values used in other threads is
* to help determine whether the floating point register
* save/restore mechanism in the nanokernel's context switcher
* is operating correctly.
*
* When a subsequent task_sleep() invocation is performed, a
* context switch back to the
* preempted task will occur. This context switch should result
* in restoring the state of the task's floating point
* registers when the task was swapped out due to the
* occurrence of the timer tick.
*/
_load_then_store_all_float_registers(&float_reg_set);
/*
* Relinquish the processor for the remainder of the current
* system clock tick, so that lower priority threads get a
* chance to run.
*
* This exercises the ability of the nanokernel to restore the
* FPU state of a low priority thread _and_ the ability of the
* nanokernel to provide a "clean" FPU state to this thread
* once the sleep ends.
*/
task_sleep(1);
/* periodically issue progress report */
if ((++load_store_high_count % 100) == 0) {
PRINT_DATA("Load and store OK after %u (high) + %u (low) tests\n",
load_store_high_count,
load_store_low_count);
}
/* terminate testing if specified limit has been reached */
if (load_store_high_count == MAX_TESTS) {
TC_END_RESULT(TC_PASS);
TC_END_REPORT(TC_PASS);
return;
}
}
}

View file

@ -1,157 +0,0 @@
/* pi.c - pi computation portion of FPU sharing test */
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module is used for the microkernel version of the FPU sharing test,
and supplements the basic load/store test by incorporating two additional
threads that utilize the floating point unit.
Testing utilizes a pair of tasks that independently compute pi. The lower
priority task is regularly preempted by the higher priority task, thereby
testing whether floating point context information is properly preserved.
The following formula is used to compute pi:
pi = 4 * (1 - 1/3 + 1/5 - 1/7 + 1/9 - ... )
This series converges to pi very slowly. For example, performing 50,000
iterations results in an accuracy of 3 decimal places.
A reference value of pi is computed once at the start of the test. All
subsequent computations must produce the same value, otherwise an error
has occurred.
*/
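/*
* Worked example (illustrative only, not part of the original source):
* truncating the series after three terms gives
* pi ~= 4 * (1 - 1/3 + 1/5) ~= 3.4667, still roughly 0.33 away from 3.14159,
* which is why the default of 700000 iterations (PI_NUM_ITERATIONS) is used.
*/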
#include <zephyr.h>
#include <stdio.h>
#include <tc_util.h>
#include <float_context.h>
/*
* PI_NUM_ITERATIONS: This macro is defined in the project's Makefile and
* is configurable from the command line.
*/
static double reference_pi = 0.0f;
/*
* Test counters are "volatile" because GCC wasn't updating
* calc_pi_low_count properly when calculate_pi_low() contained a "return"
* in its error handling logic -- the value was incremented in a register,
* but never written back to memory. (Seems to be a compiler bug!)
*/
static volatile unsigned int calc_pi_low_count = 0;
static volatile unsigned int calc_pi_high_count = 0;
/**
*
* @brief Entry point for the low priority pi compute task
*
* @return N/A
*/
void calculate_pi_low(void)
{
volatile double pi; /* volatile to avoid optimizing out of loop */
double divisor = 3.0;
double sign = -1.0;
unsigned int ix;
/* loop forever, unless an error is detected */
while (1) {
sign = -1.0;
pi = 1.0;
divisor = 3.0;
for (ix = 0; ix < PI_NUM_ITERATIONS; ix++) {
pi += sign / divisor;
divisor += 2.0;
sign *= -1.0;
}
pi *= 4;
if (reference_pi == 0.0f) {
reference_pi = pi;
} else if (reference_pi != pi) {
TC_ERROR("Computed pi %1.6f, reference pi %1.6f\n",
pi, reference_pi);
fpu_sharing_error = 1;
return;
}
++calc_pi_low_count;
}
}
/**
*
* @brief Entry point for the high priority pi compute task
*
* @return N/A
*/
void calculate_pi_high(void)
{
volatile double pi; /* volatile to avoid optimizing out of loop */
double divisor = 3.0;
double sign = -1.0;
unsigned int ix;
/* loop forever, unless an error is detected */
while (1) {
sign = -1.0;
pi = 1.0;
divisor = 3.0;
for (ix = 0; ix < PI_NUM_ITERATIONS; ix++) {
pi += sign / divisor;
divisor += 2.0;
sign *= -1.0;
}
/*
* Relinquish the processor for the remainder of the current
* system clock tick, so that lower priority threads get a
* chance to run.
*
* This exercises the ability of the nanokernel to restore the
* FPU state of a low priority thread _and_ the ability of the
* nanokernel to provide a "clean" FPU state to this thread
* once the sleep ends.
*/
task_sleep(1);
pi *= 4;
if (reference_pi == 0.0f) {
reference_pi = pi;
} else if (reference_pi != pi) {
TC_ERROR("Computed pi %1.6f, reference pi %1.6f\n",
pi, reference_pi);
fpu_sharing_error = 1;
return;
}
/* periodically issue progress report */
if ((++calc_pi_high_count % 100) == 50) {
printf("Pi calculation OK after %u (high) + %u (low) tests (computed %1.6f)\n",
calc_pi_high_count, calc_pi_low_count, pi);
}
}
}

View file

@ -1,15 +0,0 @@
[test_x86]
tags = legacy core
platform_whitelist = qemu_x86
slow = true
# One may expect this test to take about two or three minutes to finish
# under normal circumstances. On a heavily loaded machine, extra time
# may be required--hence the 10 minute timeout.
timeout = 600
[test_arm]
tags = legacy core
platform_whitelist = frdm_k64f
slow = true
extra_args = PI_NUM_ITERATIONS=70000
timeout = 600

View file

@ -1,4 +0,0 @@
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include $(ZEPHYR_BASE)/Makefile.test

View file

@ -1,44 +0,0 @@
Title: Shared Floating Point Support
Description:
This test uses the background task and a fiber to independently load and
store floating point registers and check for corruption. This tests the
ability of contexts to safely share floating point hardware resources, even
when switching occurs preemptively.
--------------------------------------------------------------------------------
Building and Running Project:
This nanokernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
Floating point sharing tests started
===================================================================
Load and store OK after 100 (high) + 83270 (low) tests
Load and store OK after 200 (high) + 164234 (low) tests
Load and store OK after 300 (high) + 245956 (low) tests
Load and store OK after 400 (high) + 330408 (low) tests
Load and store OK after 500 (high) + 411981 (low) tests
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@ -1,5 +0,0 @@
CONFIG_FLOAT=y
CONFIG_SSE=y
CONFIG_FP_SHARING=y
CONFIG_SSE_FP_MATH=y
CONFIG_LEGACY_KERNEL=y

View file

@ -1,11 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y += main.o
# Some boards are significantly slower than others resulting in the test
# output being in the range of every few seconds to every few minutes. To
# compensate for this, one can control the number of iterations in the PI
# calculation through PI_NUM_ITERATIONS. Lowering this value will increase
# the speed of the test but it will come at the expense of precision.
PI_NUM_ITERATIONS ?= 700000
ccflags-y += "-DPI_NUM_ITERATIONS=${PI_NUM_ITERATIONS}"

View file

@ -1,120 +0,0 @@
/**
* @file
* @brief common definitions for the FPU sharing test application
*/
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOATCONTEXT_H
#define _FLOATCONTEXT_H
/*
* Each architecture must define the following structures (which may be empty):
* 'struct fp_volatile_register_set'
* 'struct fp_non_volatile_register_set'
*
* Each architecture must also define the following macros:
* SIZEOF_FP_VOLATILE_REGISTER_SET
* SIZEOF_FP_NON_VOLATILE_REGISTER_SET
* Those macros are used as sizeof(<an empty structure>) is compiler specific;
* that is, it may evaluate to a non-zero value.
*
* Each architecture shall also have custom implementations of:
* _load_all_float_registers()
* _load_then_store_all_float_registers()
* _store_all_float_registers()
*/
#if defined(CONFIG_ISA_IA32)
#define FP_OPTION 0
/*
* In the future, the struct definitions may need to be refined based on the
* specific IA-32 processor, but for now only the Pentium4 is supported:
*
* 8 x 80 bit floating point registers (ST[0] -> ST[7])
* 8 x 128 bit XMM registers (XMM[0] -> XMM[7])
*
* All these registers are considered volatile across a function invocation.
*/
struct fp_register {
unsigned char reg[10];
};
struct xmm_register {
unsigned char reg[16];
};
struct fp_volatile_register_set {
struct xmm_register xmm[8]; /* XMM[0] -> XMM[7] */
struct fp_register st[8]; /* ST[0] -> ST[7] */
};
struct fp_non_volatile_register_set {
/* No non-volatile floating point registers */
};
#define SIZEOF_FP_VOLATILE_REGISTER_SET sizeof(struct fp_volatile_register_set)
#define SIZEOF_FP_NON_VOLATILE_REGISTER_SET 0
#elif defined(CONFIG_CPU_CORTEX_M4)
#define FP_OPTION 0
/*
* Registers s0..s15 are volatile and do not
* need to be preserved across function calls.
*/
struct fp_volatile_register_set {
float s[16];
};
/*
* Registers s16..s31 are non-volatile and
* need to be preserved across function calls.
*/
struct fp_non_volatile_register_set {
float s[16];
};
#define SIZEOF_FP_VOLATILE_REGISTER_SET \
sizeof(struct fp_volatile_register_set)
#define SIZEOF_FP_NON_VOLATILE_REGISTER_SET \
sizeof(struct fp_non_volatile_register_set)
#else
#error "Architecture must provide the following definitions:\n" \
"\t'struct fp_volatile_registers'\n" \
"\t'struct fp_non_volatile_registers'\n" \
"\t'SIZEOF_FP_VOLATILE_REGISTER_SET'\n" \
"\t'SIZEOF_FP_NON_VOLATILE_REGISTER_SET'\n"
#endif /* CONFIG_ISA_IA32 */
/* the set of ALL floating point registers */
struct fp_register_set {
struct fp_volatile_register_set fp_volatile;
struct fp_non_volatile_register_set fp_non_volatile;
};
#define SIZEOF_FP_REGISTER_SET \
(SIZEOF_FP_VOLATILE_REGISTER_SET + SIZEOF_FP_NON_VOLATILE_REGISTER_SET)
/*
* The following constants define the initial byte value used by the background
* task, and the fiber when loading up the floating point registers.
*/
#define MAIN_FLOAT_REG_CHECK_BYTE (unsigned char)0xe5
#define FIBER_FLOAT_REG_CHECK_BYTE (unsigned char)0xf9
extern int fpu_sharing_error;
#endif /* _FLOATCONTEXT_H */

View file

@ -1,90 +0,0 @@
/**
* @file
* @brief ARM Cortex-M4 GCC specific floating point register macros
*/
/*
* Copyright (c) 2016, Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOAT_REGS_ARM_GCC_H
#define _FLOAT_REGS_ARM_GCC_H
#if !defined(__GNUC__) || !defined(CONFIG_CPU_CORTEX_M4)
#error __FILE__ goes only with Cortex-M4 GCC
#endif
#include <toolchain.h>
#include "float_context.h"
/**
*
* @brief Load all floating point registers
*
* This function loads ALL floating point registers pointed to by @a regs.
* It is expected that a subsequent call to _store_all_float_registers()
* will be issued to dump the floating point registers to memory.
*
* The format/organization of 'struct fp_register_set' is not important; the
* generic C test code (main.c) merely treats the register set as an array
* of bytes.
*
* The only requirement is that the arch specific implementations of
* _load_all_float_registers() and _store_all_float_registers() agree
* on the format.
*
* @return N/A
*/
static inline void _load_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"vldmia %0, {s0-s15};\n\t"
"vldmia %1, {s16-s31};\n\t"
:: "r" (&regs->fp_volatile), "r" (&regs->fp_non_volatile)
);
}
/**
*
* @brief Dump all floating point registers to memory
*
* This function stores ALL floating point registers to the memory buffer
* specified by @a regs. It is expected that a previous invocation of
* _load_all_float_registers() occurred to load all the floating point
* registers from a memory buffer.
*
* @return N/A
*/
static inline void _store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"vstmia %0, {s0-s15};\n\t"
"vstmia %1, {s16-s31};\n\t"
:: "r" (&regs->fp_volatile), "r" (&regs->fp_non_volatile)
: "memory"
);
}
/**
*
* @brief Load then dump all float registers to memory
*
* This function loads ALL floating point registers from the memory buffer
* specified by @a regs, and then stores them back to that buffer.
*
* This routine is called by a high priority thread prior to calling a primitive
* that pends and triggers a co-operative context switch to a low priority
* thread.
*
* @return N/A
*/
static inline void _load_then_store_all_float_registers(struct fp_register_set *regs)
{
_load_all_float_registers(regs);
_store_all_float_registers(regs);
}
#endif /* _FLOAT_REGS_ARM_GCC_H */

View file

@ -1,157 +0,0 @@
/**
* @file
* @brief Intel x86 GCC specific floating point register macros
*/
/*
* Copyright (c) 2015, Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _FLOAT_REGS_X86_GCC_H
#define _FLOAT_REGS_X86_GCC_H
#if !defined(__GNUC__) || !defined(CONFIG_ISA_IA32)
#error __FILE__ goes only with x86 GCC
#endif
#include <toolchain.h>
#include "float_context.h"
/**
*
* @brief Load all floating point registers
*
* This function loads ALL floating point registers pointed to by @a regs.
* It is expected that a subsequent call to _store_all_float_registers()
* will be issued to dump the floating point registers to memory.
*
* The format/organization of 'struct fp_register_set' is not important; the
* generic C test code (main.c) merely treats the register set as an array
* of bytes.
*
* The only requirement is that the arch specific implementations of
* _load_all_float_registers(), _store_all_float_registers() and
* _load_then_store_all_float_registers() agree on the format.
*
* @return N/A
*/
static inline void _load_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu 0(%0), %%xmm0\n\t;"
"movdqu 16(%0), %%xmm1\n\t;"
"movdqu 32(%0), %%xmm2\n\t;"
"movdqu 48(%0), %%xmm3\n\t;"
"movdqu 64(%0), %%xmm4\n\t;"
"movdqu 80(%0), %%xmm5\n\t;"
"movdqu 96(%0), %%xmm6\n\t;"
"movdqu 112(%0), %%xmm7\n\t;"
"fldt 128(%0)\n\t;"
"fldt 138(%0)\n\t;"
"fldt 148(%0)\n\t;"
"fldt 158(%0)\n\t;"
"fldt 168(%0)\n\t;"
"fldt 178(%0)\n\t;"
"fldt 188(%0)\n\t;"
"fldt 198(%0)\n\t;"
:: "r" (regs)
);
}
/**
*
* @brief Load then dump all float registers to memory
*
* This function loads ALL floating point registers from the memory buffer
* specified by @a regs, and then stores them back to that buffer.
*
* This routine is called by a high priority thread prior to calling a primitive
* that pends and triggers a co-operative context switch to a low priority
* thread. Because the kernel doesn't save floating point context for
* co-operative context switches, the x87 FPU register stack must be put back
* in an empty state before the switch occurs in case the next task to perform
* floating point operations was also co-operatively switched out and simply
* inherits the existing x87 FPU state (expecting the stack to be empty).
*
* @return N/A
*/
static inline void _load_then_store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu 0(%0), %%xmm0\n\t;"
"movdqu 16(%0), %%xmm1\n\t;"
"movdqu 32(%0), %%xmm2\n\t;"
"movdqu 48(%0), %%xmm3\n\t;"
"movdqu 64(%0), %%xmm4\n\t;"
"movdqu 80(%0), %%xmm5\n\t;"
"movdqu 96(%0), %%xmm6\n\t;"
"movdqu 112(%0), %%xmm7\n\t;"
"fldt 128(%0)\n\t;"
"fldt 138(%0)\n\t;"
"fldt 148(%0)\n\t;"
"fldt 158(%0)\n\t;"
"fldt 168(%0)\n\t;"
"fldt 178(%0)\n\t;"
"fldt 188(%0)\n\t;"
"fldt 198(%0)\n\t;"
/* pop the x87 FPU registers back to memory */
"fstpt 198(%0)\n\t;"
"fstpt 188(%0)\n\t;"
"fstpt 178(%0)\n\t;"
"fstpt 168(%0)\n\t;"
"fstpt 158(%0)\n\t;"
"fstpt 148(%0)\n\t;"
"fstpt 138(%0)\n\t;"
"fstpt 128(%0)\n\t;"
:: "r" (regs)
);
}
/**
*
* @brief Dump all floating point registers to memory
*
* This function stores ALL floating point registers to the memory buffer
* specified by @a regs. It is expected that a previous invocation of
* _load_all_float_registers() occurred to load all the floating point
* registers from a memory buffer.
*
* @return N/A
*/
static inline void _store_all_float_registers(struct fp_register_set *regs)
{
__asm__ volatile (
"movdqu %%xmm0, 0(%0)\n\t;"
"movdqu %%xmm1, 16(%0)\n\t;"
"movdqu %%xmm2, 32(%0)\n\t;"
"movdqu %%xmm3, 48(%0)\n\t;"
"movdqu %%xmm4, 64(%0)\n\t;"
"movdqu %%xmm5, 80(%0)\n\t;"
"movdqu %%xmm6, 96(%0)\n\t;"
"movdqu %%xmm7, 112(%0)\n\t;"
"fstpt 198(%0)\n\t;"
"fstpt 188(%0)\n\t;"
"fstpt 178(%0)\n\t;"
"fstpt 168(%0)\n\t;"
"fstpt 158(%0)\n\t;"
"fstpt 148(%0)\n\t;"
"fstpt 138(%0)\n\t;"
"fstpt 128(%0)\n\t;"
:: "r" (regs) : "memory"
);
}
#endif /* _FLOAT_REGS_X86_GCC_H */

View file

@ -1,345 +0,0 @@
/* main.c - load/store portion of FPU sharing test */
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module implements the load/store portion of the FPU sharing test. The
microkernel version of this test utilizes a pair of tasks, while the nanokernel
version utilizes a task and a fiber.
The load/store test validates the nanokernel's floating point unit context
save/restore mechanism. This test utilizes a pair of threads of different
priorities that each use the floating point registers. The context
switching that occurs exercises the kernel's ability to properly preserve the
floating point registers. The test also exercises the kernel's ability to
automatically enable floating point support for a task, if supported.
FUTURE IMPROVEMENTS
On architectures where the non-integer capabilities are provided in a hierarchy,
for example on IA-32 the USE_FP and USE_SSE options are provided, this test
should be enhanced to ensure that the architectures' _Swap() routine doesn't
context switch more registers than it needs to (which would represent a
performance issue). For example, on the IA-32, the test should issue
a fiber_fp_disable() from main(), and then indicate that only x87 FPU
registers will be utilized (fiber_fp_enable()). The fiber should continue
to load ALL non-integer registers, but main() should validate that only the
x87 FPU registers are being saved/restored.
*/
#ifndef CONFIG_FLOAT
#error Rebuild with the FLOAT config option enabled
#endif
#ifndef CONFIG_FP_SHARING
#error Rebuild with the FP_SHARING config option enabled
#endif
#if defined(CONFIG_ISA_IA32)
#ifndef CONFIG_SSE
#error Rebuild with the SSE config option enabled
#endif
#endif /* CONFIG_ISA_IA32 */
#include <zephyr.h>
#if defined(CONFIG_ISA_IA32)
#if defined(__GNUC__)
#include <float_regs_x86_gcc.h>
#else
#include <float_regs_x86_other.h>
#endif /* __GNUC__ */
#elif defined(CONFIG_CPU_CORTEX_M4)
#if defined(__GNUC__)
#include <float_regs_arm_gcc.h>
#else
#include <float_regs_arm_other.h>
#endif /* __GNUC__ */
#endif
#include <arch/cpu.h>
#include <tc_util.h>
#include "float_context.h"
#include <stddef.h>
#include <string.h>
#define MAX_TESTS 500
/* space for float register load/store area used by low priority task */
static struct fp_register_set float_reg_set_load;
static struct fp_register_set float_reg_set_store;
/* space for float register load/store area used by high priority thread */
static struct fp_register_set float_reg_set;
/* stack for high priority fiber (also use .bss for float_reg_set) */
static char __stack fiber_stack[1024];
static struct nano_timer fiber_timer;
static void *dummy_timer_data; /* allocate just enough room for a pointer */
/* flag indicating that an error has occurred */
int fpu_sharing_error;
/*
* Test counters are "volatile" because GCC may not update them properly
* otherwise. (See description of pi calculation test for more details.)
*/
static volatile unsigned int load_store_low_count = 0;
static volatile unsigned int load_store_high_count = 0;
static void load_store_high(int, int);
/**
*
* @brief Low priority FPU load/store thread
*
* @return N/A
*/
void main(void)
{
unsigned int i;
unsigned char init_byte;
unsigned char *store_ptr = (unsigned char *)&float_reg_set_store;
unsigned char *load_ptr = (unsigned char *)&float_reg_set_load;
volatile char volatile_stack_var = 0;
PRINT_DATA("Floating point sharing tests started\n");
PRINT_LINE;
/*
* Start a single fiber which will regularly preempt the background
* task, and perform floating point register manipulations similar to
* those the background task performs, except that a different constant
* is loaded into the floating point registers.
*/
task_fiber_start(fiber_stack,
sizeof(fiber_stack),
load_store_high,
0, /* arg1 */
0, /* arg2 */
5, /* priority */
FP_OPTION /* options */
);
/*
* Initialize floating point load buffer to known values;
* these values must be different than the value used in other threads.
*/
init_byte = MAIN_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
load_ptr[i] = init_byte++;
}
/* Keep cranking forever, or until an error is detected. */
for (load_store_low_count = 0; ; load_store_low_count++) {
/*
* Clear store buffer to erase all traces of any previous
* floating point values that have been saved.
*/
memset(&float_reg_set_store, 0, SIZEOF_FP_REGISTER_SET);
/*
* Utilize an architecture specific function to load all the
* floating point registers with known values.
*/
_load_all_float_registers(&float_reg_set_load);
/*
* Waste some cycles to give the high priority load/store
* thread an opportunity to run when the low priority thread is
* using the floating point registers.
*
* IMPORTANT: This logic requires that sys_tick_get_32() not
* perform any floating point operations!
*/
while ((sys_tick_get_32() % 5) != 0) {
/*
* Use a volatile variable to prevent compiler
* optimizing out the spin loop.
*/
++volatile_stack_var;
}
/*
* Utilize an architecture specific function to dump the
* contents of all floating point registers to memory.
*/
_store_all_float_registers(&float_reg_set_store);
/*
* Compare each byte of buffer to ensure the expected value is
* present, indicating that the floating point registers weren't
* impacted by the operation of the high priority thread(s).
*
* Display error message and terminate if discrepancies are
* detected.
*/
init_byte = MAIN_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
if (store_ptr[i] != init_byte) {
TC_ERROR("load_store_low found 0x%x instead of 0x%x @ offset 0x%x\n",
store_ptr[i],
init_byte, i);
TC_ERROR("Discrepancy found during iteration %d\n",
load_store_low_count);
fpu_sharing_error = 1;
}
init_byte++;
}
/*
* Terminate if a test error has been reported.
*/
if (fpu_sharing_error) {
TC_END_RESULT(TC_FAIL);
TC_END_REPORT(TC_FAIL);
return;
}
#if defined(CONFIG_ISA_IA32)
/*
* After every 1000 iterations (arbitrarily chosen), explicitly
* disable floating point operations for the task. The
* subsequent execution of _load_all_float_registers() will result
* in an exception to automatically re-enable floating point
* support for the task.
*
* The purpose of this part of the test is to exercise the
* task_float_disable() API, and to also continue exercising
* the (exception-based) floating point enabling mechanism.
*/
if ((load_store_low_count % 1000) == 0) {
task_float_disable(sys_thread_self_get());
}
#elif defined(CONFIG_CPU_CORTEX_M4)
/*
* The routine task_float_disable() allows for thread-level
* granularity for disabling floating point. Furthermore, it
* is useful for testing on-the-fly enabling of floating point for a
* thread. Neither of these capabilities is currently supported
* for ARM.
*/
#endif
}
}
/**
*
* @brief High priority FPU load/store thread
*
* @return N/A
*/
void load_store_high(int unused1, int unused2)
{
unsigned int i;
unsigned char init_byte;
unsigned char *reg_set_ptr = (unsigned char *)&float_reg_set;
ARG_UNUSED(unused1);
ARG_UNUSED(unused2);
/* initialize timer; data field is not used */
nano_timer_init(&fiber_timer, (void *)dummy_timer_data);
/* test until the specified time limit, or until an error is detected */
while (1) {
/*
* Initialize the float_reg_set structure by treating it as
* a simple array of bytes (the arrangement and actual number
* of registers is not important for this generic C code). The
* structure is initialized by using the byte value specified
* by the constant FIBER_FLOAT_REG_CHECK_BYTE, and then
* incrementing the value for each successive location in the
* float_reg_set structure.
*
* The initial byte value, and thus the contents of the entire
* float_reg_set structure, must be different for each
* thread to effectively test the nanokernel's ability to
* properly save/restore the floating point values during a
* context switch.
*/
init_byte = FIBER_FLOAT_REG_CHECK_BYTE;
for (i = 0; i < SIZEOF_FP_REGISTER_SET; i++) {
reg_set_ptr[i] = init_byte++;
}
/*
* Utilize an architecture specific function to load all the
* floating point registers with the contents of the
* float_reg_set structure.
*
* The goal of loading all floating point registers with
* values that differ from the values used in other threads is
* to help determine whether the floating point register
* save/restore mechanism in the nanokernel's context switcher
* is operating correctly.
*
* When a subsequent nano_fiber_timer_test() invocation is
* performed, a (cooperative) context switch back to the
* preempted task will occur. This context switch should result
* in restoring the state of the task's floating point
* registers when the task was swapped out due to the
* occurrence of the timer tick.
*/
_load_then_store_all_float_registers(&float_reg_set);
/*
* Relinquish the processor for the remainder of the current
* system clock tick, so that lower priority threads get a
* chance to run.
*
* This exercises the ability of the nanokernel to restore the
* FPU state of a low priority thread _and_ the ability of the
* nanokernel to provide a "clean" FPU state to this thread
* once the sleep ends.
*/
nano_fiber_timer_start(&fiber_timer, 1);
nano_fiber_timer_test(&fiber_timer, TICKS_UNLIMITED);
/* periodically issue progress report */
if ((++load_store_high_count % 100) == 0) {
PRINT_DATA("Load and store OK after %u (high) + %u (low) tests\n",
load_store_high_count,
load_store_low_count);
}
/* terminate testing if specified limit has been reached */
if (load_store_high_count == MAX_TESTS) {
TC_END_RESULT(TC_PASS);
TC_END_REPORT(TC_PASS);
return;
}
}
}
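For readers tracking the port of this test to the unified kernel, the nano timer yield above maps onto the unified timeout API. The snippet below is only a rough sketch of that pattern; the function name, the thread-entry signature, and the 1 ms sleep value are illustrative rather than taken from the ported test.

#include <zephyr.h>

/* Sketch: unified-kernel version of "yield for the rest of the tick" */
static void load_store_high_unified(void *p1, void *p2, void *p3)
{
        ARG_UNUSED(p1);
        ARG_UNUSED(p2);
        ARG_UNUSED(p3);

        while (1) {
                /* load and verify the FP register set here, as above */

                /* let lower-priority threads run for the rest of the tick */
                k_sleep(K_MSEC(1));
        }
}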

View file

@@ -1,157 +0,0 @@
/* pi.c - pi computation portion of FPU sharing test */
/*
* Copyright (c) 2011-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module is used for the microkernel version of the FPU sharing test,
and supplements the basic load/store test by incorporating two additional
threads that utilize the floating point unit.
Testing utilizes a pair of tasks that independently compute pi. The lower
priority task is regularly preempted by the higher priority task, thereby
testing whether floating point context information is properly preserved.
The following formula is used to compute pi:
pi = 4 * (1 - 1/3 + 1/5 - 1/7 + 1/9 - ... )
This series converges to pi very slowly. For example, performing 50,000
iterations results in an accuracy of 3 decimal places.
A reference value of pi is computed once at the start of the test. All
subsequent computations must produce the same value, otherwise an error
has occurred.
*/
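/*
 * Back-of-the-envelope check on the convergence claim (added for reference):
 * the series is alternating, so the truncation error after N terms is
 * bounded by the first omitted term, 4 / (2N + 1). With N = 50,000 that
 * bound is roughly 4e-5, i.e. only a handful of correct decimal places.
 */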
#include <zephyr.h>
#include <stdio.h>
#include <tc_util.h>
#include <float_context.h>
/*
* PI_NUM_ITERATIONS: This macro is defined in the project's Makefile and
* is configurable from the command line.
*/
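/*
 * For example, the build could inject it along these lines (illustrative
 * value and flag placement, not the project's actual Makefile contents):
 *
 *     ccflags-y += -DPI_NUM_ITERATIONS=700000
 */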
static double reference_pi = 0.0f;
/*
* Test counters are "volatile" because GCC wasn't updating
* calc_pi_low_count properly when calculate_pi_low() contained a "return"
* in its error handling logic -- the value was incremented in a register,
* but never written back to memory. (Seems to be a compiler bug!)
*/
static volatile unsigned int calc_pi_low_count = 0;
static volatile unsigned int calc_pi_high_count = 0;
/**
*
* @brief Entry point for the low priority pi compute task
*
* @return N/A
*/
void calculate_pi_low(void)
{
volatile double pi; /* volatile to avoid optimizing out of loop */
double divisor = 3.0;
double sign = -1.0;
unsigned int ix;
/* loop forever, unless an error is detected */
while (1) {
sign = -1.0;
pi = 1.0;
divisor = 3.0;
for (ix = 0; ix < PI_NUM_ITERATIONS; ix++) {
pi += sign / divisor;
divisor += 2.0;
sign *= -1.0;
}
pi *= 4;
if (reference_pi == 0.0f) {
reference_pi = pi;
} else if (reference_pi != pi) {
TC_ERROR("Computed pi %1.6f, reference pi %1.6f\n",
pi, reference_pi);
fpu_sharing_error = 1;
return;
}
++calc_pi_low_count;
}
}
/**
*
* @brief Entry point for the high priority pi compute task
*
* @return N/A
*/
void calculate_pi_high(void)
{
volatile double pi; /* volatile to avoid optimizing out of loop */
double divisor = 3.0;
double sign = -1.0;
unsigned int ix;
/* loop forever, unless an error is detected */
while (1) {
sign = -1.0;
pi = 1.0;
divisor = 3.0;
for (ix = 0; ix < PI_NUM_ITERATIONS; ix++) {
pi += sign / divisor;
divisor += 2.0;
sign *= -1.0;
}
/*
* Relinquish the processor for the remainder of the current
* system clock tick, so that lower priority threads get a
* chance to run.
*
* This exercises the ability of the nanokernel to restore the
* FPU state of a low priority thread _and_ the ability of the
* nanokernel to provide a "clean" FPU state to this thread
* once the sleep ends.
*/
task_sleep(1);
pi *= 4;
if (reference_pi == 0.0f) {
reference_pi = pi;
} else if (reference_pi != pi) {
TC_ERROR("Computed pi %1.6f, reference pi %1.6f\n",
pi, reference_pi);
fpu_sharing_error = 1;
return;
}
/* periodically issue progress report */
if ((++calc_pi_high_count % 100) == 50) {
printf("Pi calculation OK after %u (high) + %u (low) tests (computed %1.6f)\n",
calc_pi_high_count, calc_pi_low_count, pi);
}
}
}

View file

@@ -1,7 +0,0 @@
[test_x86]
tags = legacy core
platform_whitelist = qemu_x86
[test_arm]
tags = legacy core
platform_whitelist = frdm_k64f

View file

@@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

View file

@@ -1,49 +0,0 @@
Title: Kernel Access to Standard Libraries
Description:
This test verifies kernel access to the standard C libraries.
It is intended to catch issues in which a library is completely absent
or non-functional, and is NOT intended to be a comprehensive test suite
of all functionality provided by the libraries.
--------------------------------------------------------------------------------
Building and Running Project:
This microkernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
--------------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
--------------------------------------------------------------------------------
Sample Output:
Starting standard libraries tests
===================================================================
Validating access to supported libraries
Testing ctype.h library ...
Testing inttypes.h library ...
Testing iso646.h library ...
Testing limits.h library ...
Testing stdbool.h library ...
Testing stddef.h library ...
Testing stdint.h library ...
Testing string.h library ...
Validation complete
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1 +0,0 @@
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,11 +0,0 @@
% Application : test standard libraries
% TASK NAME PRIO ENTRY STACK GROUPS
% ===================================================
TASK MONITORTASK 4 MonitorTaskEntry 2048 [EXE]
TASK tStartTask 5 RegressionTaskEntry 2048 [EXE]
% SEMA NAME
% =================
SEMA SEM_TASKDONE
SEMA SEM_TASKFAIL
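For comparison, the unified kernel declares these objects directly in C instead of an MDEF file. A minimal sketch of that mapping (stack sizes and priorities copied from the MDEF above; the entry function names are hypothetical and must use the unified three-argument thread signature):

#include <zephyr.h>

void monitor_entry(void *p1, void *p2, void *p3);     /* hypothetical entry */
void regression_entry(void *p1, void *p2, void *p3);  /* hypothetical entry */

K_THREAD_DEFINE(monitortask, 2048, monitor_entry, NULL, NULL, NULL,
                4, 0, K_NO_WAIT);
K_THREAD_DEFINE(tstarttask, 2048, regression_entry, NULL, NULL, NULL,
                5, 0, K_NO_WAIT);

K_SEM_DEFINE(sem_taskdone, 0, 1);
K_SEM_DEFINE(sem_taskfail, 0, 1);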

View file

@@ -1,4 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = libraries.o
obj-y += main.o

View file

@@ -1,405 +0,0 @@
/* libraries.c - test access to the minimal C libraries */
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module verifies that the various minimal C libraries can be used.
IMPORTANT: The module only ensures that each supported library is present,
and that a bare minimum of its functionality is operating correctly. It does
NOT guarantee that ALL standards-defined functionality is present, nor does
it guarantee that ALL functionality provided is working correctly.
*/
#include <zephyr.h>
#include <misc/__assert.h>
#include <tc_util.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
/*
* variables used during limits library testing; must be marked as "volatile"
* to prevent compiler from computing results at compile time
*/
volatile long longMax = LONG_MAX;
volatile long longOne = 1L;
/**
*
* @brief Test implementation-defined constants library
*
* @return TC_PASS or TC_FAIL
*/
int limitsTest(void)
{
TC_PRINT("Testing limits.h library ...\n");
if (longMax + longOne != LONG_MIN) {
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Test boolean types and values library
*
* @return TC_PASS or TC_FAIL
*/
int stdboolTest(void)
{
TC_PRINT("Testing stdbool.h library ...\n");
if ((true != 1) || (false != 0)) {
return TC_FAIL;
}
return TC_PASS;
}
/*
* variables used during stddef library testing; must be marked as "volatile"
* to prevent compiler from computing results at compile time
*/
volatile long longVariable;
volatile size_t sizeOfLongVariable = sizeof(longVariable);
/**
*
* @brief Test standard type definitions library
*
* @return TC_PASS or TC_FAIL
*/
int stddefTest(void)
{
TC_PRINT("Testing stddef.h library ...\n");
if (sizeOfLongVariable != 4) {
return TC_FAIL;
}
return TC_PASS;
}
/*
* variables used during stdint library testing; must be marked as "volatile"
* to prevent compiler from computing results at compile time
*/
volatile uint8_t unsignedByte = 0xff;
volatile uint32_t unsignedInt = 0xffffff00;
/**
*
* @brief Test integer types library
*
* @return TC_PASS or TC_FAIL
*/
int stdintTest(void)
{
TC_PRINT("Testing stdint.h library ...\n");
if (unsignedInt + unsignedByte + 1u != 0) {
return TC_FAIL;
}
return TC_PASS;
}
/*
* variables used during string library testing
*/
#define BUFSIZE 10
char buffer[BUFSIZE];
/**
*
* @brief Test string memset
*
* @return TC_PASS or TC_FAIL
*/
int memset_test(void)
{
TC_PRINT("\tmemset ...\t");
memset(buffer, 'a', BUFSIZE);
if (buffer[0] != 'a' || buffer[BUFSIZE-1] != 'a') {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test string length function
*
* @return TC_PASS or TC_FAIL
*/
int strlen_test(void)
{
TC_PRINT("\tstrlen ...\t");
memset(buffer, '\0', BUFSIZE);
memset(buffer, 'b', 5); /* 5 is BUFSIZE / 2 */
if (strlen(buffer) != 5) {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test string compare function
*
* @return TC_PASS or TC_FAIL
*/
int strcmp_test(void)
{
strcpy(buffer, "eeeee");
TC_PRINT("\tstrcmp less ...\t");
if (strcmp(buffer, "fffff") >= 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
TC_PRINT("\tstrcmp equal ...\t");
if (strcmp(buffer, "eeeee") != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
TC_PRINT("\tstrcmp greater ...\t");
if (strcmp(buffer, "ddddd") <= 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
return TC_PASS;
}
/**
*
* @brief Test string N compare function
*
* @return TC_PASS or TC_FAIL
*/
int strncmp_test(void)
{
const char pattern[] = "eeeeeeeeeeee";
/* Note: subtract 1 so we don't count the terminating '\0' that sizeof() includes */
__ASSERT_NO_MSG(sizeof(pattern) - 1 > BUFSIZE);
memcpy(buffer, pattern, BUFSIZE);
TC_PRINT("\tstrncmp 0 ...\t");
if (strncmp(buffer, "fffff", 0) != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
TC_PRINT("\tstrncmp 3 ...\t");
if (strncmp(buffer, "eeeff", 3) != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
TC_PRINT("\tstrncmp 10 ...\t");
if (strncmp(buffer, "eeeeeeeeeeeff", BUFSIZE) != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
} else {
TC_PRINT("passed\n");
}
return TC_PASS;
}
/**
*
* @brief Test string copy function
*
* @return TC_PASS or TC_FAIL
*/
int strcpy_test(void)
{
TC_PRINT("\tstrcpy ...\t");
memset(buffer, '\0', BUFSIZE);
strcpy(buffer, "10 chars!\0");
if (strcmp(buffer, "10 chars!\0") != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test string N copy function
*
* @return TC_PASS or TC_FAIL
*/
int strncpy_test(void)
{
TC_PRINT("\tstrncpy ...\t");
memset(buffer, '\0', BUFSIZE);
strncpy(buffer, "This is over 10 characters", BUFSIZE);
/* Purposely different values */
if (strncmp(buffer, "This is over 20 characters", BUFSIZE) != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test string scanning function
*
* @return TC_PASS or TC_FAIL
*/
int strchr_test(void)
{
char *rs = NULL;
TC_PRINT("\tstrchr ...\t");
memset(buffer, '\0', BUFSIZE);
strncpy(buffer, "Copy 10", BUFSIZE);
rs = strchr(buffer, '1');
if (!rs) {
TC_PRINT("failed\n");
return TC_FAIL;
}
if (strncmp(rs, "10", 2) != 0) {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test memory comparison function
*
* @return TC_PASS or TC_FAIL
*/
int memcmp_test(void)
{
unsigned char m1[5] = { 1, 2, 3, 4, 5 };
unsigned char m2[5] = { 1, 2, 3, 4, 6 };
TC_PRINT("\tmemcmp ...\t");
if (memcmp(m1, m2, 4)) {
TC_PRINT("failed\n");
return TC_FAIL;
}
if (!memcmp(m1, m2, 5)) {
TC_PRINT("failed\n");
return TC_FAIL;
}
TC_PRINT("passed\n");
return TC_PASS;
}
/**
*
* @brief Test string operations library
*
* @return TC_PASS or TC_FAIL
*/
int stringTest(void)
{
TC_PRINT("Testing string.h library ...\n");
if (memset_test() || strlen_test() || strcmp_test() || strcpy_test() ||
strncpy_test() || strncmp_test() || strchr_test() ||
memcmp_test()) {
return TC_FAIL;
}
return TC_PASS;
}
/**
*
* @brief Main task in the test suite
*
* This is the entry point to the main task used by the standard libraries test
* suite. It tests each library in turn until a failure is detected or all
* libraries have been tested successfully.
*
* @return TC_PASS or TC_FAIL
*/
int RegressionTask(void)
{
TC_PRINT("Validating access to supported libraries\n");
if (limitsTest() || stdboolTest() || stddefTest() ||
stdintTest() || stringTest()) {
TC_PRINT("Library validation failed\n");
return TC_FAIL;
}
TC_PRINT("Validation complete\n");
return TC_PASS;
}

View file

@@ -1,94 +0,0 @@
/* main.c - test access to standard libraries */
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/*
DESCRIPTION
This module contains the entry points for the tasks used by the standard
libraries test application.
Each test task entry point invokes a test routine that returns a success/failure
indication, then gives a corresponding semaphore. An additional task monitors
these semaphores until it detects a failure or the completion of all test tasks,
then announces the result of the test.
NOTE: At present only a single test task is used, but more tasks may be added
in the future to enhance test coverage.
*/
#include <tc_util.h>
#include <zephyr.h>
#include <util_test_common.h>
#define NUM_TEST_TASKS 1 /* # of test tasks to monitor */
/* # ticks to wait for test completion */
#define TIMEOUT (60 * sys_clock_ticks_per_sec)
/*
* Note that semaphore group entries are arranged so that resultSems[TC_PASS]
* refers to SEM_TASKDONE and resultSems[TC_FAIL] refers to SEM_TASKFAIL.
*/
static ksem_t resultSems[] = { SEM_TASKDONE, SEM_TASKFAIL, ENDLIST };
/**
*
* @brief Entry point for RegressionTask
*
* This routine signals "task done" or "task fail", based on the return code of
* RegressionTask.
*
* @return N/A
*/
void RegressionTaskEntry(void)
{
extern int RegressionTask(void);
task_sem_give(resultSems[RegressionTask()]);
}
/**
*
* @brief Entry point for MonitorTask
*
* This routine keeps tabs on the progress of the tasks doing the actual testing
* and generates the final test case summary message.
*
* @return N/A
*/
void MonitorTaskEntry(void)
{
ksem_t result;
int tasksDone;
PRINT_DATA("Starting standard libraries tests\n");
PRINT_LINE;
/*
* the various test tasks start executing automatically;
* wait for all tasks to complete or a failure to occur,
* then issue the appropriate test case summary message
*/
for (tasksDone = 0; tasksDone < NUM_TEST_TASKS; tasksDone++) {
result = task_sem_group_take(resultSems, TIMEOUT);
if (result != resultSems[TC_PASS]) {
if (result != resultSems[TC_FAIL]) {
TC_ERROR("Monitor task timed out\n");
}
TC_END_REPORT(TC_FAIL);
return;
}
}
TC_END_RESULT(TC_PASS);
TC_END_REPORT(TC_PASS);
}
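The unified kernel has no direct counterpart to task_sem_group_take(), so a port of this monitor typically blocks on a single k_sem and carries pass/fail separately. A minimal sketch of that approach (the names, the flag-based status, and the 60-second timeout are assumptions, not the actual ported test):

#include <zephyr.h>
#include <tc_util.h>

K_SEM_DEFINE(result_sem, 0, 1);
static volatile int test_failed;

void regression_entry(void *p1, void *p2, void *p3)
{
        extern int RegressionTask(void);

        test_failed = (RegressionTask() != TC_PASS);
        k_sem_give(&result_sem);
}

void monitor_entry(void *p1, void *p2, void *p3)
{
        /* wait up to 60 seconds for the test thread to report in */
        if (k_sem_take(&result_sem, K_SECONDS(60)) != 0 || test_failed) {
                TC_END_REPORT(TC_FAIL);
                return;
        }
        TC_END_REPORT(TC_PASS);
}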

View file

@@ -1,2 +0,0 @@
[test]
tags = legacy bat_commit core

View file

@@ -1,4 +0,0 @@
CONF_FILE = prj.conf
BOARD ?= qemu_x86
include $(ZEPHYR_BASE)/Makefile.test

View file

@@ -1,85 +0,0 @@
Title: LIFO APIs
Description:
This test verifies that the nanokernel LIFO APIs operate as expected.
---------------------------------------------------------------------------
Building and Running Project:
This nanokernel project outputs to the console. It can be built and executed
on QEMU as follows:
make qemu
---------------------------------------------------------------------------
Troubleshooting:
Problems caused by out-dated project information can be addressed by
issuing one of the following commands then rebuilding the project:
make clean # discard results of previous builds
# but keep existing configuration info
or
make pristine # discard results of previous builds
# and restore pre-defined configuration info
---------------------------------------------------------------------------
Sample Output:
tc_start() - Test Nanokernel LIFO
Nano objects initialized
Fiber waiting on an empty LIFO
Task waiting on an empty LIFO
Fiber to get LIFO items without waiting
Task to get LIFO items without waiting
ISR to get LIFO items without waiting
First pass
multiple-waiter fiber 0 receiving item...
multiple-waiter fiber 1 receiving item...
multiple-waiter fiber 2 receiving item...
multiple-waiter fiber 0 got correct item, giving semaphore
multiple-waiter fiber 1 got correct item, giving semaphore
multiple-waiter fiber 2 got correct item, giving semaphore
Task took multi-waiter reply semaphore 3 times, as expected.
Second pass
multiple-waiter fiber 0 receiving item...
multiple-waiter fiber 0 got correct item, giving semaphore
multiple-waiter fiber 1 receiving item...
multiple-waiter fiber 2 receiving item...
multiple-waiter fiber 1 got correct item, giving semaphore
multiple-waiter fiber 2 got correct item, giving semaphore
Task took multi-waiter reply semaphore 3 times, as expected.
test nano_task_lifo_get() with timeout > 0
nano_task_lifo_get() timed out as expected
nano_task_lifo_get() got lifo in time, as expected
testing timeouts of 5 fibers on same lifo
got fiber (q order: 2, t/o: 10, lifo 20005ff0) as expected
got fiber (q order: 3, t/o: 15, lifo 20005ff0) as expected
got fiber (q order: 0, t/o: 20, lifo 20005ff0) as expected
got fiber (q order: 4, t/o: 25, lifo 20005ff0) as expected
got fiber (q order: 1, t/o: 30, lifo 20005ff0) as expected
testing timeouts of 9 fibers on different lifos
got fiber (q order: 0, t/o: 10, lifo 20005ffc) as expected
got fiber (q order: 5, t/o: 15, lifo 20005ff0) as expected
got fiber (q order: 7, t/o: 20, lifo 20005ff0) as expected
got fiber (q order: 1, t/o: 25, lifo 20005ff0) as expected
got fiber (q order: 8, t/o: 30, lifo 20005ffc) as expected
got fiber (q order: 2, t/o: 35, lifo 20005ff0) as expected
got fiber (q order: 6, t/o: 40, lifo 20005ff0) as expected
got fiber (q order: 4, t/o: 45, lifo 20005ffc) as expected
got fiber (q order: 3, t/o: 50, lifo 20005ffc) as expected
testing 5 fibers timing out, but obtaining the data in time
(except the last one, which times out)
got fiber (q order: 0, t/o: 20, lifo 20005ff0) as expected
got fiber (q order: 1, t/o: 30, lifo 20005ff0) as expected
got fiber (q order: 2, t/o: 10, lifo 20005ff0) as expected
got fiber (q order: 3, t/o: 15, lifo 20005ff0) as expected
got fiber (q order: 4, t/o: 25, lifo 20005ff0) as expected
===================================================================
PASS - main.
===================================================================
PROJECT EXECUTION SUCCESSFUL

View file

@@ -1,5 +0,0 @@
CONFIG_NANO_TIMEOUTS=y
CONFIG_ASSERT=y
CONFIG_ASSERT_LEVEL=2
CONFIG_IRQ_OFFLOAD=y
CONFIG_LEGACY_KERNEL=y

View file

@@ -1,3 +0,0 @@
ccflags-y += -I${ZEPHYR_BASE}/tests/include
obj-y = lifo.o

View file

@@ -1,35 +0,0 @@
This LIFO test set covers the following scenarios.
nano_fiber_lifo_get(TICKS_UNLIMITED)
- Getting an item from an empty LIFO (involves blocking and waking)
- Getting an item from a non-empty LIFO (no blocking)
nano_task_lifo_get(TICKS_UNLIMITED)
- Getting an item from an empty LIFO (involves blocking and waking)
- Getting an item from a non-empty LIFO (no blocking)
nano_isr_lifo_get(TICKS_NONE)
- Getting an item from a non-empty LIFO (no blocking)
- Getting an item from an empty LIFO (no blocking, returns NULL)
nano_fiber_lifo_get(TICKS_NONE)
- Getting an item from a non-empty LIFO (no blocking)
- Getting an item from an empty LIFO (no blocking, returns NULL)
nano_task_lifo_get(TICKS_NONE)
- Getting an item from a non-empty LIFO (no blocking)
- Getting an item from an empty LIFO (no blocking, returns NULL)
nano_fiber_lifo_put
- Waking a task blocked on an empty LIFO
- Putting an item into an empty LIFO upon which nothing is blocked
- Putting an item into a non-empty LIFO
nano_task_lifo_put
- Waking a fiber blocked on an empty LIFO
- Putting an item into an empty LIFO upon which nothing is blocked
- Putting an item into a non-empty LIFO
nano_isr_lifo_put
- Putting an item into an empty LIFO upon which nothing is blocked
- Putting an item into a non-empty LIFO
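The unified-kernel equivalents of these scenarios use the k_lifo API. A minimal sketch of the basic calls (the item layout, names, and values are illustrative):

#include <zephyr.h>

K_LIFO_DEFINE(test_lifo);

struct lifo_item {
        void *reserved;   /* first word of a LIFO item is reserved for the kernel */
        int value;
};

static struct lifo_item item = { .value = 42 };

void lifo_smoke_test(void)
{
        /* non-blocking get on an empty LIFO returns NULL */
        if (k_lifo_get(&test_lifo, K_NO_WAIT) != NULL) {
                return;
        }

        k_lifo_put(&test_lifo, &item);

        /* blocking get returns the most recently added item */
        struct lifo_item *got = k_lifo_get(&test_lifo, K_FOREVER);

        ARG_UNUSED(got);
}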

File diff suppressed because it is too large

View file

@@ -1,5 +0,0 @@
[test]
tags = legacy core
# Make sure it has enough memory
filter = not ((CONFIG_DEBUG or CONFIG_ASSERT)) and ( CONFIG_SRAM_SIZE >= 32
or CONFIG_DCCM_SIZE >= 32 or CONFIG_RAM_SIZE >= 32)

View file

@@ -1,5 +0,0 @@
MDEF_FILE = prj.mdef
BOARD ?= qemu_x86
CONF_FILE = prj.conf
include ${ZEPHYR_BASE}/Makefile.test

Some files were not shown because too many files have changed in this diff