kernel: delete old micro and nanokernel documentation

Change-Id: Id1685930dd11f4b5038d5f98da978c6348b67966
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Andrew Boie 2016-11-03 12:13:27 -07:00 committed by Anas Nashif
commit fcfddc0f5d
44 changed files with 1 addition and 6451 deletions


@@ -17,8 +17,6 @@ The use of the Zephyr APIs is the same for all SoCs and boards.
.. toctree::
:maxdepth: 2
-   nanokernel_api.rst
-   microkernel_api.rst
device.rst
bluetooth.rst
io_interfaces.rst


@@ -1,155 +0,0 @@
.. _microkernel_api:
Microkernel APIs
################
.. contents::
:depth: 1
:local:
:backlinks: top
Events
******
The microkernel's :dfn:`event` objects are an implementation of traditional
binary semaphores.
For an overview and important information about Events, see :ref:`microkernel_events`.
------
.. doxygengroup:: microkernel_event
:project: Zephyr
:content-only:
FIFOs
*****
The microkernel's :dfn:`FIFO` object type is an implementation of a traditional
first in, first out queue.
A FIFO allows tasks to asynchronously send and receive fixed-size data items.
For an overview and important information about FIFOs, see :ref:`microkernel_fifos`.
------
.. doxygengroup:: microkernel_fifo
:project: Zephyr
:content-only:
Pipes
*****
The microkernel's :dfn:`pipe` object type is an implementation of a traditional
anonymous pipe. A pipe allows a task to send a byte stream to another task.
For an overview and important information about Pipes, see :ref:`microkernel_pipes`.
------
.. doxygengroup:: microkernel_pipe
:project: Zephyr
:content-only:
Mailboxes
*********
The microkernel's :dfn:`mailbox` object type is an implementation of a
traditional message queue that allows tasks to exchange messages.
For an overview and important information about Mailboxes, see :ref:`microkernel_mailboxes`.
------
.. doxygengroup:: microkernel_mailbox
:project: Zephyr
:content-only:
Memory Maps
***********
The microkernel's memory map objects provide dynamic allocation and
release of fixed-size memory blocks.
For an overview and important information about Memory Maps, see :ref:`microkernel_memory_maps`.
------
.. doxygengroup:: microkernel_memorymap
:project: Zephyr
:content-only:
Memory Pools
************
The microkernel's :dfn:`memory pool` objects provide dynamic allocation and
release of variable-size memory blocks.
For an overview and important information about Memory Pools, see :ref:`microkernel_memory_pools`.
------
.. doxygengroup:: microkernel_memorypool
:project: Zephyr
:content-only:
Mutexes
*******
The microkernel's :dfn:`mutex` objects provide reentrant mutex
capabilities with basic priority inheritance. A mutex allows
tasks to safely share a resource by ensuring mutual exclusivity
while the resource is being accessed by a task.
For an overview and important information about Mutexes, see :ref:`microkernel_mutexes`.
------
.. doxygengroup:: microkernel_mutex
:project: Zephyr
:content-only:
Semaphores
**********
The microkernel's :dfn:`semaphore` objects are an implementation of traditional
counting semaphores.
For an overview and important information about Semaphores, see :ref:`microkernel_semaphores`.
------
.. doxygengroup:: microkernel_semaphore
:project: Zephyr
:content-only:
Timers
******
A :dfn:`microkernel timer` allows a task to determine whether or not a
specified time limit has been reached while the task is busy performing
other work. The timer uses the kernel's system clock, measured in
ticks, to monitor the passage of time.
For an overview and important information about Timers, see :ref:`microkernel_timers`.
------
.. doxygengroup:: microkernel_timer
:project: Zephyr
:content-only:
Tasks
*****
A task is a preemptible thread of execution that implements a portion of
an application's processing. It is normally used for processing that
is too lengthy or too complex to be performed by a fiber or an ISR.
For an overview and important information about Tasks, see :ref:`microkernel_tasks`.
------
.. doxygengroup:: microkernel_task
:project: Zephyr
:content-only:


@@ -1,141 +0,0 @@
.. _nanokernel_api:
Nanokernel APIs
###############
.. contents::
:depth: 1
:local:
:backlinks: top
Fibers
******
A :dfn:`fiber` is a lightweight, non-preemptible thread of execution that
implements a portion of an application's processing. Fibers are often
used in device drivers and for performance-critical work.
For an overview and important information about Fibers, see :ref:`nanokernel_fibers`.
------
.. doxygengroup:: nanokernel_fiber
:project: Zephyr
:content-only:
Tasks
******
A :dfn:`task` is a preemptible thread of execution that implements a portion of
an application's processing. It is normally used to perform processing that is
too lengthy or too complex to be performed by a fiber or an ISR.
For an overview and important information about Tasks, see :ref:`nanokernel_tasks`.
------
.. doxygengroup:: nanokernel_task
:project: Zephyr
:content-only:
Semaphores
**********
The nanokernel's :dfn:`semaphore` object type is an implementation of a
traditional counting semaphore. It is mainly intended for use by fibers.
For an overview and important information about Semaphores, see :ref:`nanokernel_synchronization`.
------
.. doxygengroup:: nanokernel_semaphore
:project: Zephyr
:content-only:
LIFOs
*****
The nanokernel's LIFO object type is an implementation of a traditional
last in, first out queue. It is mainly intended for use by fibers.
For an overview and important information about LIFOs, see :ref:`nanokernel_lifos`.
------
.. doxygengroup:: nanokernel_lifo
:project: Zephyr
:content-only:
FIFOs
*****
The nanokernel's FIFO object type is an implementation of a traditional
first in, first out queue. It is mainly intended for use by fibers.
For an overview and important information about FIFOs, see :ref:`nanokernel_fifos`.
------
.. doxygengroup:: nanokernel_fifo
:project: Zephyr
:content-only:
Ring Buffers
************
The ring buffer is an array-based circular buffer that stores data in
first-in, first-out order. Concurrency control of ring buffers is not
implemented at this level.
For an overview and important information about ring buffers, see :ref:`nanokernel_ring_buffers`.
------
.. doxygengroup:: nanokernel_ringbuffer
:project: Zephyr
:content-only:
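The behavior described above, an array-based circular buffer with first-in, first-out ordering and no built-in concurrency control, can be sketched in standalone C. This is an illustrative model only; ``struct ring``, ``ring_put()``, and ``ring_get()`` are hypothetical names, not the kernel's API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standalone model of an array-based circular buffer in
 * FIFO order; names are illustrative, not the Zephyr API. Concurrency
 * control is deliberately omitted, as in the kernel's ring buffer. */
#define RING_SIZE 8u            /* power of two keeps the index math cheap */

struct ring {
    uint32_t buf[RING_SIZE];
    uint32_t head;              /* total items ever written */
    uint32_t tail;              /* total items ever read */
};

int ring_put(struct ring *r, uint32_t item)
{
    if (r->head - r->tail == RING_SIZE) {
        return -1;                          /* buffer full */
    }
    r->buf[r->head++ % RING_SIZE] = item;
    return 0;
}

int ring_get(struct ring *r, uint32_t *item)
{
    if (r->head == r->tail) {
        return -1;                          /* buffer empty */
    }
    *item = r->buf[r->tail++ % RING_SIZE];
    return 0;
}
```

Because the indices only ever grow and are reduced modulo a power-of-two size, a full buffer can be distinguished from an empty one without a separate count field.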
Stacks
******
The nanokernel's stack object type is an implementation of a traditional
last in, first out queue for a limited number of 32-bit data values.
It is mainly intended for use by fibers.
For an overview and important information about stacks, see :ref:`nanokernel_stacks`.
------
.. doxygengroup:: nanokernel_stack
:project: Zephyr
:content-only:
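A similar standalone sketch can model the stack object described above, a fixed-capacity LIFO holding 32-bit values (``struct stack32``, ``stack_push()``, and ``stack_pop()`` are illustrative names, not the kernel's API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standalone model of a fixed-capacity LIFO of 32-bit
 * values; names are illustrative, not the Zephyr API. */
#define STACK_DEPTH 4

struct stack32 {
    uint32_t data[STACK_DEPTH];
    int top;                    /* number of values currently stored */
};

int stack_push(struct stack32 *s, uint32_t value)
{
    if (s->top == STACK_DEPTH) {
        return -1;              /* stack full */
    }
    s->data[s->top++] = value;
    return 0;
}

int stack_pop(struct stack32 *s, uint32_t *value)
{
    if (s->top == 0) {
        return -1;              /* stack empty */
    }
    *value = s->data[--s->top];
    return 0;
}
```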
Timers
******
The nanokernel's :dfn:`timer` object type uses the kernel's system clock to
monitor the passage of time, as measured in ticks. It is mainly intended for use
by fibers.
For an overview and important information about timers, see :ref:`nanokernel_timers`.
------
.. doxygengroup:: nanokernel_timer
:project: Zephyr
:content-only:
Kernel Event Logger
*******************
The kernel event logger is a standardized mechanism to record events within the
Kernel while providing a single interface for the user to collect the data.
For an overview and important information about the kernel event logger API,
see :ref:`nanokernel_event_logger`.
------
.. doxygengroup:: nanokernel_event_logger
:project: Zephyr
:content-only:


@@ -31,7 +31,6 @@ Sections
introduction/introducing_zephyr.rst
getting_started/getting_started.rst
board/board.rst
-   kernel/kernel.rst
kernel_v2/kernel.rst
drivers/drivers.rst
subsystems/subsystems.rst


@@ -1,15 +0,0 @@
.. _common:
Common Kernel Services
######################
This section describes kernel services that are provided in both
microkernel applications and nanokernel applications.
.. toctree::
:maxdepth: 1
common_contexts.rst
common_kernel_clocks.rst
common_atomic.rst
common_float.rst


@@ -1,105 +0,0 @@
.. _common_atomic:
Atomic Services
###############
Concepts
********
The kernel supports an atomic 32-bit data type called :c:type:`atomic_t`.
A variable of this type can be read and modified by any task, fiber, or ISR
in an uninterruptible manner. This guarantees that the desired operation
will not be interfered with due to the scheduling of a higher priority context,
even if the higher priority context manipulates the same variable.
Purpose
*******
Use the atomic services to implement critical section processing that only
requires the manipulation of a single 32-bit data item.
.. note::
Using an atomic variable is typically far more efficient than using
other techniques to implement critical sections such as using
a microkernel mutex, offloading the processing to a fiber, or
locking interrupts.
Usage
*****
Example: Implementing an Uninterruptible Counter
================================================
This code shows how a function can keep track of the number of times
it has been invoked. Since the count is incremented atomically, there is
no risk that it will become corrupted in mid-increment if the routine is
interrupted by the scheduling of a higher priority context that also
calls the routine.
.. code-block:: c

   atomic_t call_count;

   int call_counting_routine(void)
   {
       /* increment invocation counter */
       atomic_inc(&call_count);

       /* do rest of routine's processing */
       ...
   }
APIs
****
The following atomic operation APIs are provided by :file:`atomic.h`:
:c:func:`atomic_get()`
Reads an atomic variable.
:c:func:`atomic_set()`
Writes an atomic variable.
:c:func:`atomic_clear()`
Clears an atomic variable.
:c:func:`atomic_add()`
Performs an addition operation on an atomic variable.
:c:func:`atomic_sub()`
Performs a subtraction operation on an atomic variable.
:c:func:`atomic_inc()`
Performs an increment operation on an atomic variable.
:c:func:`atomic_dec()`
Performs a decrement operation on an atomic variable.
:c:func:`atomic_and()`
Performs an "and" operation on an atomic variable.
:c:func:`atomic_or()`
Performs an "or" operation on an atomic variable.
:c:func:`atomic_xor()`
Performs an "xor" operation on an atomic variable.
:c:func:`atomic_nand()`
Performs a "nand" operation on an atomic variable.
:c:func:`atomic_cas()`
Performs a compare-and-set operation on an atomic variable.
:c:func:`atomic_set_bit()`
Sets the specified bit of an atomic variable to 1.
:c:func:`atomic_clear_bit()`
Sets the specified bit of an atomic variable to 0.
:c:func:`atomic_test_bit()`
Reads the specified bit of an atomic variable.
:c:func:`atomic_test_and_set_bit()`
Reads the specified bit of an atomic variable and sets it to 1.
:c:func:`atomic_test_and_clear_bit()`
Reads the specified bit of an atomic variable and sets it to 0.
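The call-counting example above has a direct analogue in standard C11 atomics. The following standalone sketch uses ``<stdatomic.h>`` rather than the kernel's :file:`atomic.h`, so the names differ from the Zephyr API; ``atomic_fetch_add()`` plays the role of :c:func:`atomic_inc()`:

```c
#include <assert.h>
#include <stdatomic.h>

/* Standalone analogue of the kernel's atomic counter, using C11
 * <stdatomic.h>; atomic_fetch_add() stands in for atomic_inc(). */
atomic_int call_count;

void call_counting_routine(void)
{
    /* increment invocation counter; the read-modify-write cannot be
     * torn by a concurrently scheduled thread or interrupt */
    atomic_fetch_add(&call_count, 1);

    /* do rest of routine's processing */
}
```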


@@ -1,101 +0,0 @@
.. _common_contexts:
Execution Context Services
##########################
Concepts
********
Every kernel execution context has an associated *type* that indicates whether
the context is a task, a fiber, or the kernel's interrupt handling context.
All task and fiber contexts have a unique *thread identifier* value used to
uniquely identify them. Each task and fiber can also support a 32-bit *thread
custom data* value. This value is accessible only by the task or fiber itself,
and may be used by the application for any purpose. The default custom data
value for a task or fiber is zero.
.. note::
The custom data value is not available to ISRs because these operate
only within the shared kernel interrupt handling context.
The kernel permits a task or a fiber to perform a ``busy wait``, thus delaying
its processing for a specified time period. Such a delay occurs without
requiring the kernel to perform context switching, as it typically does with
timer and timeout services.
Purpose
*******
Use kernel execution context services when writing code that should
operate differently when it is executed by different contexts.
Use the ``busy wait`` service when the required delay is too short to
warrant context switching to another task or fiber. The ``busy wait``
service may also be used when performing a delay as part of the
nanokernel's background task in a nanokernel-only system; this task is
not allowed to voluntarily relinquish the CPU.
Usage
*****
Configuring Custom Data Support
===============================
Use the :option:`CONFIG_THREAD_CUSTOM_DATA` configuration option
to enable support for thread custom data. By default, custom data
support is disabled.
Example: Performing Execution Context-Specific Processing
=========================================================
This code shows how a routine can use a thread's custom data value
to limit the number of times a thread may call the routine. Note that
counting is not performed when the routine is called by an ISR; ISRs
do not have a custom data value.
.. note::
Obviously, only a single routine can use this technique
since it monopolizes the use of the custom data value.
.. code-block:: c

   #define CALL_LIMIT 7

   int call_tracking_routine(void)
   {
       uint32_t call_count;

       if (sys_execution_context_type_get() != NANO_CTX_ISR) {
           call_count = (uint32_t)sys_thread_custom_data_get();
           if (call_count == CALL_LIMIT)
               return -1;
           call_count++;
           sys_thread_custom_data_set((void *)call_count);
       }

       /* do rest of routine's processing */
       ...
   }
APIs
****
The following kernel execution context APIs are common to both
:file:`microkernel.h` and :file:`nanokernel.h`:
:c:func:`sys_thread_self_get()`
Gets thread identifier of currently executing task or fiber.
:c:func:`sys_execution_context_type_get()`
Gets type of currently executing context (i.e. task, fiber, or ISR).
:c:func:`sys_thread_custom_data_set()`
Writes custom data for currently executing task or fiber.
:c:func:`sys_thread_custom_data_get()`
Reads custom data for currently executing task or fiber.
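As a rough standalone analogue of the example above, C11 thread-local storage gives each thread a private counter. This is an illustrative sketch only — the kernel instead stores a single ``void *`` of custom data per thread, and the names below are hypothetical:

```c
#include <assert.h>

/* Hypothetical standalone analogue of per-thread custom data, using a
 * C11 _Thread_local variable; each thread sees its own counter. */
#define CALL_LIMIT 7

_Thread_local unsigned int calls_made;

int call_tracking_routine(void)
{
    if (calls_made == CALL_LIMIT) {
        return -1;              /* this thread's limit reached */
    }
    calls_made++;

    /* do rest of routine's processing */
    return 0;
}
```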


@@ -1,198 +0,0 @@
.. _common_float:
Floating Point Services
#######################
.. note::
Floating point services are currently available only for boards
based on the ARM Cortex-M4 or the Intel x86 architectures. The
services provided are architecture specific.
Concepts
********
The kernel allows an application's tasks and fibers to use floating point
registers on board configurations that support these registers.
.. note::
The kernel does not support the use of floating point registers by ISRs.
The kernel can be configured to provide only the floating point services
required by an application. Three modes of operation are supported,
which are described below. In addition, the kernel's support for the SSE
registers can be included or omitted, as desired.
No FP registers mode
====================
This mode is used when the application has no tasks or fibers that use
floating point registers. It is the kernel's default floating point services
mode.
If a task or fiber uses any floating point register,
the kernel generates a fatal error condition and aborts the thread.
Unshared FP registers mode
==========================
This mode is used when the application has only a single task or fiber
that uses floating point registers.
The kernel initializes the floating point registers so they can be used
by any task or fiber. The floating point registers are left unchanged
whenever a context switch occurs.
.. note::
Incorrect operation may result if two or more tasks or fibers use
floating point registers, as the kernel does not attempt to detect
(or prevent) multiple threads from using these registers.
Shared FP registers mode
========================
This mode is used when the application has two or more threads that use
floating point registers. Depending upon the underlying CPU architecture,
the kernel supports one or more of the following thread sub-classes:
* non-user: A thread that cannot use any floating point registers
* FPU user: A thread that can use the standard floating point registers
* SSE user: A thread that can use both the standard floating point registers
and SSE registers
The kernel initializes the floating point registers so they can be used
by any task or fiber, then saves and restores these registers during
context switches to ensure the computations performed by each FPU user
or SSE user are not impacted by the computations performed by the other users.
On the ARM Cortex-M4 architecture the kernel treats *all* tasks and fibers
as FPU users when shared FP registers mode is enabled. This means that the
floating point registers are saved and restored during a context switch, even
when the associated threads are not using them. Each task and fiber must
provide an extra 132 bytes of stack space where these register values can
be saved.
On the x86 architecture the kernel treats each task and fiber as a non-user,
FPU user or SSE user on a case-by-case basis. A "lazy save" algorithm is used
during context switching which updates the floating point registers only when
it is absolutely necessary. For example, the registers are *not* saved when
switching from an FPU user to a non-user thread, and then back to the original
FPU user. The following table indicates the amount of additional stack space a
thread must provide so the registers can be saved properly.
=========== =============== ==========================
Thread type FP register use Extra stack space required
=========== =============== ==========================
fiber any 0 bytes
task none 0 bytes
task FPU 108 bytes
task SSE 464 bytes
=========== =============== ==========================
The x86 kernel automatically detects that a given task or fiber is using
the floating point registers the first time the thread accesses them.
The thread is tagged as an SSE user if the kernel has been configured
to support the SSE registers, or as an FPU user if the SSE registers are
not supported. If this would result in a thread that is an FPU user being
tagged as an SSE user, or if the application wants to avoid the exception
handling overhead involved in auto-tagging threads, it is possible to
pre-tag a thread using one of the techniques listed below.
* An x86 task or fiber can tag itself as an FPU user or SSE user by calling
:c:func:`task_float_enable()` or :c:func:`fiber_float_enable()`
once it has started executing.
* An x86 fiber can be tagged as an FPU user or SSE user by its creator
by calling :c:func:`fiber_start()` with the :c:macro:`USE_FP` or
:c:macro:`USE_SSE` option, respectively.
* A microkernel task can be tagged as an FPU user or SSE user by adding it
to the :c:macro:`FPU` task group or the :c:macro:`SSE` task group
when the task is defined.
.. note::
Adding the task to the :c:macro:`FPU` or :c:macro:`SSE` task groups
by calling :c:func:`task_group_join()` does *not* tag the task
as an FPU user or SSE user.
If an x86 thread uses the floating point registers infrequently it can call
:c:func:`task_float_disable()` or :c:func:`fiber_float_disable()` as
appropriate to remove its tagging as an FPU user or SSE user. This eliminates
the need for the kernel to take steps to preserve the contents of the floating
point registers during context switches when there is no need to do so.
When the thread again needs to use the floating point registers it can re-tag
itself as an FPU user or SSE user using one of the techniques listed above.
Purpose
*******
Use the kernel floating point services when an application needs to
perform floating point operations.
Usage
*****
Configuring Floating Point Services
===================================
To configure unshared FP registers mode, enable the :option:`CONFIG_FLOAT`
configuration option and leave the :option:`CONFIG_FP_SHARING` configuration option
disabled.
To configure shared FP registers mode, enable both the :option:`CONFIG_FLOAT`
configuration option and the :option:`CONFIG_FP_SHARING` configuration option.
Also, ensure that any task that uses the floating point registers has
sufficient added stack space for saving floating point register values
during context switches, as described above.
Use the :option:`CONFIG_SSE` configuration option to enable support for
SSEx instructions (x86 only).
Example: Performing Floating Point Arithmetic
=============================================
This code shows how a routine can use floating point arithmetic to avoid
overflow issues when computing the average of a series of integer values.
Note that no special coding is required if the kernel is properly configured.
.. code-block:: c

   int average(int *values, int num_values)
   {
       double sum;
       int i;

       sum = 0.0;

       for (i = 0; i < num_values; i++) {
           sum += *values;
           values++;
       }

       return (int)((sum / num_values) + 0.5);
   }
APIs
****
The following floating point services APIs (x86 only) are provided by
:file:`microkernel.h` and by :file:`nanokernel.h`:
:c:func:`fiber_float_enable()`
Tells the kernel that the specified task or fiber is now an FPU user
or SSE user.
:c:func:`task_float_enable()`
Tells the kernel that the specified task or fiber is now an FPU user
or SSE user.
:c:func:`fiber_float_disable()`
Tells the kernel that the specified task or fiber is no longer an FPU user
or SSE user.
:c:func:`task_float_disable()`
Tells the kernel that the specified task or fiber is no longer an FPU user
or SSE user.


@@ -1,144 +0,0 @@
.. _common_kernel_clocks:
Kernel Clocks
#############
Concepts
********
The kernel supports two distinct clocks.
* A 64-bit *system clock*, which is the foundation for the kernel's
time-based services. This clock is a counter measured in *ticks*,
and increments at a frequency determined by the application.
The kernel allows this clock to be accessed directly by reading
the timer. It can also be accessed indirectly by using a kernel
timer or timeout capability.
* A 32-bit *hardware clock*, which is used as the source of the ticks
for the system clock. This clock is a counter measured in unspecified
units (called *cycles*), and increments at a frequency determined by
the hardware.
The kernel allows this clock to be accessed directly by reading
the timer.
The kernel also provides a number of variables that can be used
to convert the time units used by the clocks into standard time units
(e.g. seconds, milliseconds, nanoseconds, etc), and to convert between
the two types of clock time units.
Purpose
*******
Use the system clock for time-based processing that does not require
high precision, such as implementing time limits or time delays.
Use the hardware clock for time-based processing that requires higher
precision than the system clock can provide, such as fine-grained
time measurements.
.. note::
The high frequency of the hardware clock, combined with its 32-bit size,
means that counter rollover must be taken into account when taking
high-precision measurements over an extended period of time.
Usage
*****
Configuring the Kernel Clocks
=============================
Use the :option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC` configuration option
to specify how many ticks occur every second. Setting this value
to zero disables all system clock and hardware clock capabilities.
.. note::
Making the system clock frequency value larger allows the system clock
to provide finer-grained timing, but also increases the amount of work
the kernel has to do to process ticks (since they occur more frequently).
Example: Measuring Time with Normal Precision
=============================================
This code uses the system clock to determine how many ticks have elapsed
between two points in time.
.. code-block:: c

   int64_t time_stamp;
   int64_t ticks_spent;

   /* capture initial time stamp */
   time_stamp = sys_tick_get();

   /* do work for some (extended) period of time */
   ...

   /* compute how long the work took & update time stamp */
   ticks_spent = sys_tick_delta(&time_stamp);
Example: Measuring Time with High Precision
===========================================
This code uses the hardware clock to determine how many ticks have elapsed
between two points in time.
.. code-block:: c

   uint32_t start_time;
   uint32_t stop_time;
   uint32_t cycles_spent;
   uint32_t nanoseconds_spent;

   /* capture initial time stamp */
   start_time = sys_cycle_get_32();

   /* do work for some (short) period of time */
   ...

   /* capture final time stamp */
   stop_time = sys_cycle_get_32();

   /* compute how long the work took (assumes no counter rollover) */
   cycles_spent = stop_time - start_time;
   nanoseconds_spent = SYS_CLOCK_HW_CYCLES_TO_NS(cycles_spent);
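The "(assumes no counter rollover)" caveat above is less restrictive than it may appear: because unsigned 32-bit subtraction is defined modulo 2^32, ``stop_time - start_time`` still yields the correct cycle count across a single wraparound of the hardware clock. A standalone sketch (``cycles_elapsed()`` is an illustrative helper, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper (not a kernel API): elapsed cycles between two
 * 32-bit hardware clock readings. Modular arithmetic makes the result
 * correct even if the counter wrapped once between the readings. */
uint32_t cycles_elapsed(uint32_t start, uint32_t stop)
{
    return stop - start;        /* well-defined modulo 2^32 */
}
```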
APIs
****
Kernel clock APIs common to both :file:`microkernel.h` and :file:`nanokernel.h`
===============================================================================
:cpp:func:`sys_tick_get()`, :cpp:func:`sys_tick_get_32()`
Read the system clock.
:cpp:func:`sys_tick_delta()`, :cpp:func:`sys_tick_delta_32()`
Compute the elapsed time since an earlier system clock reading.
:cpp:func:`sys_cycle_get_32()`
Read hardware clock.
Kernel clock variables common to both :file:`microkernel.h` and :file:`nanokernel.h`
====================================================================================
:c:data:`sys_clock_ticks_per_sec`
The number of system clock ticks in a single second.
:c:data:`sys_clock_hw_cycles_per_sec`
The number of hardware clock cycles in a single second.
:c:data:`sys_clock_us_per_tick`
The number of microseconds in a single system clock tick.
:c:data:`sys_clock_hw_cycles_per_tick`
The number of hardware clock cycles in a single system clock tick.


@@ -1,31 +0,0 @@
.. _kernel:
Zephyr Kernel Primer
####################
This section describes the major features of the Zephyr kernel
and how to use them.
.. toctree::
:maxdepth: 2
overview/overview.rst
common/common.rst
microkernel/microkernel.rst
nanokernel/nanokernel.rst
.. rubric:: Abbreviations
+---------------+-------------------------------------------------------------------+
| Abbreviations | Definition                                                        |
+===============+===================================================================+
| API           | Application Program Interface: typically a defined set            |
|               | of routines and protocols for building software inputs and output |
|               | mechanisms.                                                       |
+---------------+-------------------------------------------------------------------+
| ISR           | Interrupt Service Routine                                         |
+---------------+-------------------------------------------------------------------+
| IDT           | Interrupt Descriptor Table                                        |
+---------------+-------------------------------------------------------------------+
| XIP           | eXecute In Place                                                  |
+---------------+-------------------------------------------------------------------+


@@ -1,18 +0,0 @@
.. _microkernel:
Microkernel Services
####################
This section describes the various services provided by the microkernel.
These services are available in microkernel applications, but not
nanokernel applications.
.. toctree::
:maxdepth: 1
microkernel_tasks
microkernel_fibers.rst
microkernel_timers
microkernel_memory
microkernel_synchronization
microkernel_data


@@ -1,13 +0,0 @@
.. _microkernel_data:
Data Passing Services
#####################
This section contains information about the data passing services available in the microkernel.
.. toctree::
:maxdepth: 2
microkernel_fifos
microkernel_mailboxes
microkernel_pipes


@@ -1,232 +0,0 @@
.. _microkernel_events:
Events
######
Concepts
********
The microkernel's :dfn:`event` objects are an implementation of traditional
binary semaphores.
Any number of events can be defined in a microkernel system. An event is
typically *sent* by a task, fiber, or ISR and *received* by a task, which then
takes some action in response. Events are the easiest and most efficient way to
synchronize operations between two different execution contexts.
Each event has a **name** that uniquely identifies it, and an associated
**event state**. Each event starts off in the ``clear`` state. Once that event
gets sent, it is placed into the ``set`` state (where it remains) until it is
received. When the event is received, it reverts back to the ``clear`` state.
Sending an event that is already set is permitted; however, this does not affect
the existing state, and it does not allow the receiving task to recognize whether
the event has been sent more than once.
The receiving task can test the state of an event and decide whether or not
to wait for it. The kernel allows only a single receiving task to wait for a given
event; if a second task attempts to wait, its receive operation immediately
returns a failure indication.
Each event also has an optional **event handler** function, which is executed
by the microkernel server fiber when the event is sent. An event handler
function lets an event be processed without requiring the kernel to schedule
a receiving task; this allows an event to be processed more quickly.
When an event handler determines that the event can be ignored, or that it
can process the event without the assistance of a task, the event handler
returns a value of zero, and the event's state is left unchanged. When an event
handler determines that additional processing *is* required, it returns a
non-zero value, and the event's state is changed to *set* (if it isn't already
set).
An event handler function can be used to improve the efficiency of event
processing by the receiving task. In some situations, event handlers can even
eliminate the need for a receiving task. Any event that does not require
an event handler can specify the :c:macro:`NULL` function. The event handler
function is passed the name of the event being sent each time it is invoked,
allowing the same function to be shared by multiple events. An event's event
handler function is specified at compile-time, but can be changed subsequently
at run-time.
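The set/clear state machine and handler semantics described above can be modeled in plain C. The sketch below is a standalone illustration with hypothetical names (``event_send()`` and ``event_recv()`` here are not the microkernel API); a handler returning zero consumes the event, while a non-zero return latches the ``set`` state until a receive clears it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical standalone model of a microkernel event: a binary flag
 * plus an optional handler; names are illustrative, not the kernel API. */
typedef int (*event_handler_t)(int event_id);

struct event {
    bool set;
    event_handler_t handler;
};

/* Sending runs the handler first; a zero return means the handler fully
 * processed the event, so the state is left unchanged. */
void event_send(struct event *evt, int id)
{
    if (evt->handler != NULL && evt->handler(id) == 0) {
        return;                 /* handler consumed the event */
    }
    evt->set = true;            /* re-sending a set event is a no-op */
}

/* Receiving tests and clears the flag; returns false if nothing sent. */
bool event_recv(struct event *evt)
{
    if (!evt->set) {
        return false;
    }
    evt->set = false;           /* revert to the clear state */
    return true;
}
```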
Purpose
*******
Use an event to signal a task to take action in response to a condition
detected by another task, a fiber, or an ISR.
Use an event handler to allow the microkernel server fiber to handle an event,
prior to (or instead of) letting a task handle the event.
Usage
*****
Defining an Event
=================
The following parameters must be defined:
*name*
This specifies a unique name for the event.
*handler*
This specifies the name of the event handler function,
which should have the following form:
.. code-block:: c

   int <entry_point>(int event)
   {
       /* start handling event; return zero if all done, */
       /* or non-zero to let receiving task handle event */
       ...
   }
If no event handler is required specify :c:macro:`NULL`.
Public Event
------------
Define the event in the application's MDEF using the following syntax:
.. code-block:: console

   EVENT name handler
For example, the file :file:`projName.mdef` defines two events as follows:
.. code-block:: console

   % EVENT NAME          ENTRY
   % ==========================================
     EVENT KEYPRESS      validate_keypress
     EVENT BUTTONPRESS   NULL
A public event can be referenced by name from any source file that includes
the file :file:`zephyr.h`.
Private Event
-------------
Define the event in a source file with the following syntax:
.. code-block:: c

   DEFINE_EVENT(name, handler);
Example: Defining a Private Event, Enabling it from Elsewhere in the Application
================================================================================
This code defines a private event named ``PRIV_EVENT`` which has no associated
event handler function.
.. code-block:: c

   DEFINE_EVENT(PRIV_EVENT, NULL);
To enable this event from a different source file, use the following syntax:
.. code-block:: c

   extern const kevent_t PRIV_EVENT;
Example: Signaling an Event from an ISR
=======================================
This code signals an event during the processing of an interrupt.
.. code-block:: c

   void keypress_interrupt_handler(void *arg)
   {
       ...
       isr_event_send(KEYPRESS);
       ...
   }
Example: Consuming an Event using a Task
========================================
This code processes events of a single type using a task.
.. code-block:: c
void keypress_task(void)
{
/* consume key presses */
while (1) {
/* wait for a key press to be signalled */
task_event_recv(KEYPRESS, TICKS_UNLIMITED);
/* determine what key was pressed */
char c = get_keypress();
/* process key press */
...
}
}
Example: Filtering Event Signals using an Event Handler
=======================================================
This code registers an event handler to filter out unwanted events,
allowing the receiving task to wake up only when needed.
.. code-block:: c
int validate_keypress(int event_id_is_unused)
{
/* determine what key was pressed */
char c = get_keypress();
/* signal task only if key pressed was a digit */
if ((c >= '0') && (c <= '9')) {
/* save key press information */
...
/* event is signalled to task */
return 1;
} else {
/* event is not signalled to task */
return 0;
}
}
void keypress_task(void)
{
/* register the filtering routine */
task_event_handler_set(KEYPRESS, validate_keypress);
/* consume key presses */
while (1) {
/* wait for a key press to be signalled */
task_event_recv(KEYPRESS, TICKS_UNLIMITED);
/* process saved key press, which must be a digit */
...
}
}
APIs
****
Event APIs provided by :file:`microkernel.h`
============================================
:cpp:func:`isr_event_send()`
Signal an event from an ISR.
:cpp:func:`fiber_event_send()`
Signal an event from a fiber.
:cpp:func:`task_event_send()`
Signal an event from a task.
:cpp:func:`task_event_recv()`
Wait for an event signal for a specified time period.
:cpp:func:`task_event_handler_set()`
Register an event handler function for an event.
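As an illustrative sketch (the fiber name and its wait logic are assumptions,
not part of the API), a device driver fiber could forward device activity to
the ``KEYPRESS`` event defined earlier:

.. code-block:: c

   void driver_fiber(int arg1, int arg2)
   {
       while (1) {
           /* wait for device activity using nanokernel services */
           ...
           /* signal the event; any registered handler runs before
            * (or instead of) waking the receiving task
            */
           fiber_event_send(KEYPRESS);
       }
   }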
.. _microkernel_fibers:
Fiber Services
##############
Concepts
********
A :dfn:`fiber` is a lightweight, non-preemptible thread of execution that
implements a portion of an application's processing. Fiber-based services are
often used in device drivers and for performance-critical work.
A microkernel application can use all of the fiber capabilities that are
available to a nanokernel application; for more information see
:ref:`nanokernel_fibers`.
While a fiber often uses one or more nanokernel object types to carry
out its work, it also can interact with microkernel events and semaphores
to a limited degree. For example, a fiber can signal a task by giving a
microkernel semaphore, but it cannot take a microkernel semaphore. For more
information see :ref:`microkernel_events` and :ref:`microkernel_semaphores`.
.. _microkernel_server_fiber:
Microkernel Server Fiber
========================
The microkernel automatically spawns a system thread, known as the
*microkernel server* fiber, which performs most operations involving
microkernel objects. The nanokernel scheduler decides which fibers
get scheduled and when; it will schedule the microkernel server fiber
when there are no fibers of a higher priority.
By default, the microkernel server fiber has priority 0 (that is,
the highest priority). However, this can be changed. If you drop its
priority, the nanokernel scheduler will give precedence to other,
higher-priority fibers, such as time-sensitive device driver or
application fibers.
Both the fiber's stack size and scheduling priority can be configured
with the :option:`CONFIG_MICROKERNEL_SERVER_STACK_SIZE` and
:option:`CONFIG_MICROKERNEL_SERVER_PRIORITY` configuration options,
respectively.
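For example, a project configuration fragment might lower the server fiber's
priority and enlarge its stack; the values shown here are purely illustrative:

.. code-block:: console

   CONFIG_MICROKERNEL_SERVER_STACK_SIZE=2048
   CONFIG_MICROKERNEL_SERVER_PRIORITY=2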
See also :ref:`microkernel_server`.
.. _microkernel_fifos:
FIFOs
#####
Concepts
********
The microkernel's :dfn:`FIFO` object type is an implementation of a traditional
first in, first out queue.
A FIFO allows tasks to asynchronously send and receive fixed-size data items.
Each FIFO has an associated ring buffer used to hold data items that have been
sent but not yet received.
Any number of FIFOs can be defined in a microkernel system. Each FIFO needs:
* A **name** that uniquely identifies it.
* A **maximum quantity** of data items that can be queued in its ring buffer.
* The **data item size**, measured in bytes, of each data item it can handle.
A task sends a data item by specifying a pointer to an area containing the data
to be sent; the size of the data area must equal the FIFO's data item size.
The data is either given directly to a receiving task (if one is waiting), or
copied to the FIFO's ring buffer (if space is available). When a FIFO is full,
the sending task can wait for space to become available.
Any number of tasks may wait on a full FIFO simultaneously; when space for
a data item becomes available, that space is given to the highest-priority
task that has waited the longest.
A task receives a data item by specifying a pointer to an area to receive
the data; the size of the receiving area must equal the FIFO's data item size.
The data is copied from the FIFO's ring buffer (if it contains data items)
or taken directly from a sending task (if the FIFO is empty). When a FIFO
is empty the task may choose to wait for a data item to become available.
Any number of tasks may wait on an empty FIFO simultaneously; when a data item
becomes available it is given to the highest priority task that has waited
the longest.
Purpose
*******
Use a FIFO to transfer small data items between tasks in an asynchronous and
anonymous manner.
.. note::
A FIFO can be used to transfer large data items, if desired. However,
it is often preferable to send pointers to large data items to avoid
copying the data. The microkernel's memory map and memory pool object
types can be helpful for data transfers of this sort.
A synchronous transfer can be achieved by using the microkernel's mailbox
object type.
A non-anonymous transfer can be achieved by having the sending task
embed its name in the data it sends, where it can be retrieved by
the receiving task. However, there is no straightforward way for the
sending task to determine the name of the task that received its data.
The microkernel's mailbox object type *does* support non-anonymous data
transfer.
Usage
*****
Defining a FIFO
===============
The following parameters must be defined:
*name*
This specifies a unique name for the FIFO.
*depth*
This specifies the maximum number of data items
that can exist at any one time.
*width*
This specifies the size (in bytes) of each data item.
Public FIFO
-----------
Define the FIFO in the application's :file:`.MDEF` file with the
following syntax:
.. code-block:: console
FIFO name depth width
For example, the file :file:`projName.mdef` defines a FIFO
that holds up to 10 items that are each 12 bytes long as follows:
.. code-block:: console
% FIFO NAME DEPTH WIDTH
% =============================
FIFO SIGNAL_FIFO 10 12
A public FIFO can be referenced by name from any source file that includes
the file :file:`zephyr.h`.
Private FIFO
------------
Define the FIFO in a source file using the following syntax:
.. code-block:: c
DEFINE_FIFO(fifo_name, depth, width);
For example, the following code defines a private FIFO named ``PRIV_FIFO``.
.. code-block:: c
DEFINE_FIFO(PRIV_FIFO, 10, 12);
To access this FIFO from a different source file, use the following syntax:
.. code-block:: c
extern const kfifo_t PRIV_FIFO;
Example: Writing to a FIFO
==========================
This code uses a FIFO to pass data items from a producing task to
one or more consuming tasks. If the FIFO fills up because the consumers
can't keep up, the producer discards all existing data so that newer data
can be saved.
.. code-block:: c
void producer_task(void)
{
struct data_item_t data;
while (1) {
/* create data item to send (e.g. measurement, timestamp, ...) */
data = ...
/* send data to consumers */
while (task_fifo_put(SIGNAL_FIFO, &data, TICKS_NONE) != RC_OK) {
/* FIFO is full */
task_fifo_purge(SIGNAL_FIFO);
}
/* data item was successfully added to FIFO */
}
}
Example: Reading from a FIFO
============================
This code uses a FIFO to process data items generated by one or more
producing tasks.
.. code-block:: c
void consumer_task(void)
{
struct data_item_t data;
while (1) {
/* get a data item */
task_fifo_get(SIGNAL_FIFO, &data, TICKS_UNLIMITED);
/* process data item */
...
}
}
APIs
****
FIFO APIs provided by :file:`microkernel.h`
===========================================
:cpp:func:`task_fifo_put()`
Write item to a FIFO, or wait for a specified time period if the FIFO is
full.
:cpp:func:`task_fifo_get()`
Read item from a FIFO, or wait for a specified time period if the FIFO is
empty.
:c:func:`task_fifo_purge()`
Discard all items in a FIFO and unblock any tasks waiting to read or write
an item.
:c:func:`task_fifo_size_get()`
Read the number of items currently in a FIFO.
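As a sketch of the size query (the monitoring task and its polling interval
are assumptions), a task could watch the ``SIGNAL_FIFO`` defined earlier for
a backlog:

.. code-block:: c

   void monitor_task(void)
   {
       while (1) {
           /* warn when the 10-entry FIFO is more than half full */
           if (task_fifo_size_get(SIGNAL_FIFO) > 5) {
               printf("FIFO backlog building up\n");
           }
           /* hypothetical polling interval, in ticks */
           task_sleep(MONITOR_INTERVAL);
       }
   }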
.. _microkernel_mailboxes:
Mailboxes
#########
Concepts
********
The microkernel's :dfn:`mailbox` object type is an implementation of a
traditional message queue.
A mailbox allows tasks to exchange messages. A task that sends a message is
known as the *sending task*, while a task that receives the message is known
as the *receiving task*. Messages may not be sent or received by fibers or
ISRs, nor may a given message be received by more than one task;
point-to-multipoint messaging is not supported.
A mailbox has a queue of messages that have been sent, but not yet received.
The messages in the queue are sorted by priority, allowing a higher priority
message to be received before a lower priority message that was sent earlier.
Messages of equal priority are handled in a first in, first out manner.
Any number of mailboxes can be defined in a microkernel system. Each mailbox
needs a **name** that uniquely identifies it. A mailbox does not limit the
number of messages it can queue, nor does it place limits on the size of the
message it handles.
The content of a message is stored in an array of bytes, called the
*message data*. The size and format of the message data is application-defined,
and can vary from one message to the next. Message data may be stored in a
buffer provided by the task that sends or receives the message, or in a memory
pool block. The message data portion of a message is optional; a message without
any message data is called an *empty message*.
The life cycle of a message is fairly simple. A message is created when it
is given to a mailbox by the sending task. The message is then owned
by the mailbox until it is given to a receiving task. The receiving task may
retrieve the message data when it receives the message from the mailbox,
or it may perform data retrieval during a second, subsequent mailbox operation.
Only when data retrieval has been performed is the message deleted by the
mailbox.
A message can be exchanged non-anonymously or anonymously between a :dfn:`sending`
and :dfn:`receiving` task. A sending task can specify the name of the task to which
the message is being sent, or it can specify that any task may receive the message.
Likewise, a receiving task can specify the name of the task from which it wishes to
receive a message, or it can specify that it is willing to receive a message from
any task. A message is exchanged only when the requirements for both the sending task
and receiving task are satisfied; such tasks are said to be *compatible*.
For example, task A sends a message to task B, but it will be received by task B
only if the latter tries to receive a message from task A (or from any task). The
exchange will not occur if task B tries to receive a message from task C. The message
can never be received by task C, even if it is trying to receive a message from task
A (or from any task).
Messages can be exchanged :dfn:`synchronously` or :dfn:`asynchronously`. In a
synchronous exchange, the sending task blocks until the message has been fully
processed by the receiving task. In an asynchronous exchange, the sending task
does not wait until the message has been received by another task before continuing;
this allows the task to do other work (such as gather data that will be used
in the next message) *before* the message is given to a receiving task and
fully processed. The technique used for a given message exchange is determined
by the sending task.
The synchronous exchange technique provides an inherent form of flow control,
preventing a sending task from generating messages faster than they can be
consumed by receiving tasks. The asynchronous exchange technique provides an
optional form of flow control, which allows a sending task to determine
if a previously sent message still exists before sending a subsequent message.
Message Descriptor
==================
A :dfn:`message descriptor` is a data structure that specifies where a message's
data is located, and how the message is to be handled by the mailbox. Both the
sending task and the receiving task pass a message descriptor to the mailbox
when accessing a mailbox. The mailbox uses both message descriptors to perform
a message exchange between compatible sending and receiving tasks. The mailbox
also updates some fields of the descriptors during the exchange, allowing both
tasks to know what occurred.
A message descriptor is a structure of type :c:type:`struct k_msg`. The fields
listed below are available for application use; all other fields are for
kernel use only.
*info*
A 32-bit value that is exchanged by the message sender and receiver,
and whose meaning is defined by the application. This exchange is
bi-directional, allowing the sender to pass a value to the receiver
during any message exchange, and allowing the receiver to pass a value
to the sender during a synchronous message exchange.
*size*
The message data size, in bytes. Set it to zero when sending an empty
message, or when discarding the message data of a received message.
The mailbox updates this field with the actual number of data bytes
exchanged once the message is received.
*tx_data*
A pointer to the sending task's message data buffer. Set it to
:c:macro:`NULL` when sending a memory pool block, or when sending
an empty message. (Not used when receiving a message.)
*tx_block*
The descriptor for the memory pool block containing the sending task's
message data. (Not used when sending a message data buffer,
or when sending an empty message. Not used when receiving a message.)
*rx_data*
A pointer to the receiving task's message data buffer. Set it to
:c:macro:`NULL` when the message's data is not wanted, or when it will be
retrieved by a subsequent mailbox operation. (Not used when sending
a message.)
*tx_task*
The name of the sending task. Set it to :c:macro:`ANYTASK` to receive
a message sent by any task. The mailbox updates this field with the
actual sender's name once the message is received. (Not used when
sending a message.)
*rx_task*
The name of the receiving task. Set it to :c:macro:`ANYTASK` to allow
any task to receive the message. The mailbox updates this field with
the actual receiver's name once the message is received, but only if
the message is sent synchronously. (Not used when receiving a message.)
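To illustrate, a sending task might fill in the application-visible fields
as follows before a synchronous send from a buffer (the payload array and
the info value are assumptions):

.. code-block:: c

   struct k_msg msg;
   char payload[32];

   msg.info = MSG_TYPE_DATA;   /* hypothetical application-defined value */
   msg.size = sizeof(payload); /* number of bytes to transfer */
   msg.tx_data = payload;      /* send from a buffer, not a memory pool block */
   msg.rx_task = ANYTASK;      /* any task may receive the message */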
Sending a Message
=================
A task sends a message by first creating the message data to be sent (if any).
The data may be placed in a message buffer -- such as an array or structure
variable -- whose contents are copied to an area supplied by the receiving task
during the message exchange. Alternatively, the data may be placed in a block
allocated from a memory pool, which gets handed off to the receiving task
during the exchange. A message buffer is typically used when the data volume
flowing through is small, and the cost of copying the data is less than the
cost of allocating and freeing a memory pool block. A memory pool block *must*
be used when a non-empty message is sent asynchronously.
Next, the task creates a message descriptor that characterizes the message
to be sent, as described in the previous section.
Finally, the task calls one of the mailbox send APIs to initiate the
message exchange. The message is immediately given to a compatible receiving
task, if one is currently waiting for a message. Otherwise, the message is added
to the mailbox's queue of messages, according to the priority specified by
the sending task. Typically, a sending task sets the message priority to
its own task priority level, allowing messages sent by higher priority tasks
to take precedence over those sent by lower priority tasks.
For a synchronous send operation, the operation normally completes when a
receiving task has both received the message and retrieved the message data.
If the message is not received before the waiting period specified by the
sending task is reached, the message is removed from the mailbox's queue
and the sending task continues processing. When a send operation completes
successfully the sending task can examine the message descriptor to determine
which task received the message and how much data was exchanged, as well as
the application-defined info value supplied by the receiving task.
.. note::
A synchronous send operation may block the sending task indefinitely -- even
when the task specifies a maximum waiting period -- since the waiting period
only limits how long the mailbox waits before the message is received
by another task. Once a message is received there is no limit to the time
the receiving task may take to retrieve the message data and unblock
the sending task.
For an asynchronous send operation, the operation always completes immediately.
This allows the sending task to continue processing regardless of whether the
message is immediately given to a receiving task or is queued by the mailbox.
The sending task may optionally specify a semaphore that the mailbox gives
when the message is deleted by the mailbox (for example, when the message has been
received and its data retrieved by a receiving task). The use of a semaphore
allows the sending task to easily implement a flow control mechanism that
ensures that the mailbox holds no more than an application-specified number
of messages from a sending task (or set of sending tasks) at any point in time.
Receiving a Message
===================
A task receives a message by first creating a message descriptor that
characterizes the message it wants to receive. It then calls one of the
mailbox receive APIs. The mailbox searches its queue of messages
and takes the first one it finds that satisfies both the sending and
receiving tasks' message descriptor criteria. If no compatible message
exists, the receiving task may choose to wait for one to be sent. If no
compatible message appears before the waiting period specified
by the receiving task is reached, the receive operation fails and
the receiving task continues processing. Once a receive operation completes
successfully the receiving task can examine the message descriptor
to determine which task sent the message, how much data was exchanged,
and the application-defined info value supplied by the sending task.
The receiving task controls both the quantity of data it retrieves from an
incoming message and where the data ends up. The task may choose to take
all of the data in the message, to take only the initial part of the data,
or to take no data at all. Similarly, the task may choose to have the data
copied into a buffer area of its choice or to have it placed in a memory
pool block. A message buffer is typically used when the volume of data
involved is small, and the cost of copying the data is less than the cost
of allocating and freeing a memory pool block.
The following sections outline various approaches a receiving task may use
when retrieving message data.
Retrieving Data Immediately into a Buffer
-----------------------------------------
The most straightforward way for a task to retrieve message data is to
specify a buffer when the message is received. The task indicates
both the location of the buffer (which must not be :c:macro:`NULL`)
and its size (which must be greater than zero).
The mailbox copies the message's data to the buffer as part of the
receive operation. If the buffer is not big enough to contain all of the
message's data, any uncopied data is lost. If the message is not big enough
to fill all of the buffer with data, the unused portion of the buffer is
left unchanged. In all cases the mailbox updates the receiving task's
message descriptor to indicate how many data bytes were copied (if any).
The immediate data retrieval technique is best suited for applications involving
small messages where the maximum size of a message is known in advance.
.. note::
This technique can be used when the message data is actually located
in a memory pool block supplied by the sending task. The mailbox copies
the data into the buffer specified by the receiving task, then automatically
frees the block back to its memory pool. This allows a receiving task
to retrieve message data without having to know whether the data
was sent using a buffer or a block.
Retrieving Data Subsequently into a Buffer
------------------------------------------
A receiving task may choose to retrieve no message data at the time the message
is received, so that it can retrieve the data into a buffer at a later time.
The task does this by specifying a buffer location of :c:macro:`NULL`
and a size indicating the maximum amount of data it is willing to retrieve
later (which must be greater than or equal to zero).
The mailbox does not copy any message data as part of the receive operation.
However, the mailbox still updates the receiving task's message descriptor
to indicate how many data bytes are available for retrieval.
The receiving task must then respond as follows:
* If the message descriptor size is zero, then either the received message is
an empty message or the receiving task did not want to receive any
message data. The receiving task does not need to take any further action
since the mailbox has already completed data retrieval and deleted the
message.
* If the message descriptor size is non-zero and the receiving task still
wants to retrieve the data, the task must supply a buffer large enough
to hold the data. The task first sets the message descriptor's
*rx_data* field to the address of the buffer, then calls
:c:func:`task_mbox_data_get()`. This instructs the mailbox to copy the data
and delete the message.
* If the message descriptor size is non-zero and the receiving task does *not*
want to retrieve the data, the task sets the message descriptor's
*size* field to zero and calls :c:func:`task_mbox_data_get()`.
This instructs the mailbox to delete the message without copying the data.
The subsequent data retrieval technique is suitable for applications where
immediate retrieval of message data is undesirable. For example, it can be
used when memory limitations make it impractical for the receiving task to
always supply a buffer capable of holding the largest possible incoming message.
.. note::
This technique can be used when the message data is actually located
in a memory pool block supplied by the sending task. The mailbox copies
the data into the buffer specified by the receiving task, then automatically
frees the block back to its memory pool. This allows a receiving task
to retrieve message data without having to know whether the data
was sent using a buffer or a block.
Retrieving Data Subsequently into a Block
-----------------------------------------
A receiving task may choose to retrieve message data into a memory pool block,
rather than a buffer area of its choice. This is done in much the same way
as retrieving data subsequently into a buffer -- the receiving task first
receives the message without its data, then retrieves the data by calling
:c:func:`task_mbox_data_block_get()`. The latter call fills in the block
descriptor supplied by the receiving task, allowing the task to access the data.
This call also causes the mailbox to delete the received message, since
data retrieval has been completed. The receiving task is then responsible
for freeing the block back to the memory pool when the data is no longer needed.
This technique is best suited for applications where the message data has
been sent using a memory pool block, either because a large amount of data
is involved or because the message was sent asynchronously.
.. note::
This technique can be used when the message data is located in a buffer
supplied by the sending task. The mailbox automatically allocates a memory
pool block and copies the message data into it. However, this is much less
efficient than simply retrieving the data into a buffer supplied by the
receiving task. In addition, the receiving task must be designed to handle
cases where the data retrieval operation fails because the mailbox cannot
allocate a suitable block from the memory pool. If such cases are possible,
the receiving task can call :c:func:`task_mbox_data_block_get()` with a
waiting period, permitting the task to wait until a suitable block can be
allocated. Alternatively, the task can use
:c:func:`task_mbox_data_get()` to inform the mailbox that it no longer wishes
to receive the data at all, allowing the mailbox to release the message.
Purpose
*******
Use a mailbox to transfer data items between tasks whenever the capabilities
of a FIFO are insufficient.
Usage
*****
Defining a Mailbox
==================
The following parameters must be defined:
*name*
This specifies a unique name for the mailbox.
Public Mailbox
--------------
Define the mailbox in the application's MDEF using the following syntax:
.. code-block:: console
MAILBOX name
For example, the file :file:`projName.mdef` defines a mailbox as follows:
.. code-block:: console
% MAILBOX NAME
% ==========================
MAILBOX REQUEST_BOX
A public mailbox can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
Private Mailbox
---------------
Define the mailbox in a source file using the following syntax:
.. code-block:: c
DEFINE_MAILBOX(name);
For example, the following code defines a private mailbox named ``PRIV_MBX``.
.. code-block:: c
DEFINE_MAILBOX(PRIV_MBX);
The mailbox ``PRIV_MBX`` can be used in the same style as those
defined in the MDEF.
To use this mailbox from a different source file, use the following syntax:
.. code-block:: c
extern const kmbox_t PRIV_MBX;
Example: Sending a Variable-Sized Mailbox Message
=================================================
This code uses a mailbox to synchronously pass variable-sized requests
from a producing task to any consuming task that wants it. The message
"info" field is used to exchange information about the maximum size buffer
that each task can handle.
.. code-block:: c
void producer_task(void)
{
char buffer[100];
int buffer_bytes_used;
struct k_msg send_msg;
kpriority_t send_priority = task_priority_get();
while (1) {
/* generate data to send */
...
buffer_bytes_used = ... ;
memcpy(buffer, source, buffer_bytes_used);
/* prepare to send message */
send_msg.info = buffer_bytes_used;
send_msg.size = buffer_bytes_used;
send_msg.tx_data = buffer;
send_msg.rx_task = ANYTASK;
/* send message and wait until a consumer receives it */
task_mbox_put(REQUEST_BOX, send_priority,
&send_msg, TICKS_UNLIMITED);
/* info, size, and rx_task fields have been updated */
/* verify that message data was fully received */
if (send_msg.size < buffer_bytes_used) {
printf("some message data dropped during transfer!");
printf("receiver only had room for %d bytes", send_msg.info);
}
}
}
Example: Receiving a Variable-Sized Mailbox Message
===================================================
This code uses a mailbox to process variable-sized requests from any
producing task, using the immediate data retrieval technique. The message
"info" field is used to exchange information about the maximum size buffer
that each task can handle.
.. code-block:: c
void consumer_task(void)
{
struct k_msg recv_msg;
char buffer[100];
int i;
int total;
while (1) {
/* prepare to receive message */
recv_msg.info = 100;
recv_msg.size = 100;
recv_msg.rx_data = buffer;
recv_msg.rx_task = ANYTASK;
/* get a data item, waiting as long as needed */
task_mbox_get(REQUEST_BOX, &recv_msg, TICKS_UNLIMITED);
/* info, size, and tx_task fields have been updated */
/* verify that message data was fully received */
if (recv_msg.info != recv_msg.size) {
printf("some message data dropped during transfer!");
printf("sender tried to send %d bytes", recv_msg.info);
}
/* compute sum of all message bytes (from 0 to 100 of them) */
total = 0;
for (i = 0; i < recv_msg.size; i++) {
total += buffer[i];
}
}
}
Example: Sending an Empty Mailbox Message
=========================================
This code uses a mailbox to synchronously pass 4 byte random values
to any consuming task that wants one. The message "info" field is
large enough to carry the information being exchanged, so the data buffer
portion of the message isn't used.
.. code-block:: c
void producer_task(void)
{
struct k_msg send_msg;
kpriority_t send_priority = task_priority_get();
while (1) {
/* generate random value to send */
uint32_t random_value = sys_rand32_get();
/* prepare to send empty message */
send_msg.info = random_value;
send_msg.size = 0;
send_msg.tx_data = NULL;
send_msg.rx_task = ANYTASK;
/* send message and wait until a consumer receives it */
task_mbox_put(REQUEST_BOX, send_priority,
&send_msg, TICKS_UNLIMITED);
/* no need to examine the receiver's "info" value */
}
}
Example: Deferring the Retrieval of Message Data
================================================
This code uses a mailbox's subsequent data retrieval mechanism to get message
data from a producing task only if the message meets certain criteria,
thereby eliminating unneeded data copying. The message "info" field supplied
by the sender is used to classify the message.
.. code-block:: c
void consumer_task(void)
{
struct k_msg recv_msg;
char buffer[10000];
while (1) {
/* prepare to receive message */
recv_msg.size = 10000;
recv_msg.rx_data = NULL;
recv_msg.rx_task = ANYTASK;
/* get message, but not its data */
task_mbox_get(REQUEST_BOX, &recv_msg, TICKS_UNLIMITED);
/* get message data for only certain types of messages */
if (is_message_type_ok(recv_msg.info)) {
/* retrieve message data and delete the message */
recv_msg.rx_data = buffer;
task_mbox_data_get(&recv_msg);
/* process data in "buffer" */
...
} else {
/* ignore message data and delete the message */
recv_msg.size = 0;
task_mbox_data_get(&recv_msg);
}
}
}
Example: Sending an Asynchronous Mailbox Message
================================================
This code uses a mailbox to send asynchronous messages using memory blocks
obtained from ``TXPOOL``, thereby eliminating unneeded data copying when
exchanging large messages. The optional semaphore capability is used to hold off
the sending of a new message until the previous message has been consumed,
so that a backlog of messages doesn't build up when the consuming task is unable
to keep up.
.. code-block:: c
void producer_task(void)
{
struct k_msg send_msg;
kpriority_t send_priority = task_priority_get();
volatile char *hw_buffer;
/* indicate that all previous messages have been processed */
task_sem_give(MY_SEMA);
while (1) {
/* allocate memory block that will hold message data */
task_mem_pool_alloc(&send_msg.tx_block, TXPOOL,
4096, TICKS_UNLIMITED);
/* keep saving hardware-generated data in the memory block */
/* until the previous message has been received by the consumer */
do {
memcpy(send_msg.tx_block.pointer_to_data, hw_buffer, 4096);
} while (task_sem_take(MY_SEMA, TICKS_NONE) != RC_OK);
/* finish preparing to send message */
send_msg.size = 4096;
send_msg.rx_task = ANYTASK;
/* send message containing most current data and loop around */
task_mbox_block_put(REQUEST_BOX, send_priority, &send_msg, MY_SEMA);
}
}
Example: Receiving an Asynchronous Mailbox Message
==================================================
This code uses a mailbox to receive messages sent asynchronously using a
memory block, thereby eliminating unneeded data copying when processing
a large message.
.. code-block:: c
void consumer_task(void)
{
struct k_msg recv_msg;
struct k_block recv_block;
int total;
char *data_ptr;
int i;
while (1) {
/* prepare to receive message */
recv_msg.size = 10000;
recv_msg.rx_data = NULL;
recv_msg.rx_task = ANYTASK;
/* get message, but not its data */
task_mbox_get(REQUEST_BOX, &recv_msg, TICKS_UNLIMITED);
/* get message data as a memory block and discard message */
task_mbox_data_block_get(&recv_msg, &recv_block, RXPOOL,
TICKS_UNLIMITED);
/* compute sum of all message bytes in memory block */
total = 0;
data_ptr = (char *)(recv_block.pointer_to_data);
for (i = 0; i < recv_msg.size; i++) {
total += *data_ptr++;
}
/* release memory block containing data */
task_mem_pool_free(&recv_block);
}
}
.. note::
An incoming message that was sent synchronously is also processed correctly
by this algorithm, since the mailbox automatically creates a memory block
containing the message data using ``RXPOOL``. However, the performance benefit
of using the asynchronous approach is lost.
APIs
****
The following APIs for mailbox operations are provided by the kernel:
:cpp:func:`task_mbox_put()`
Send synchronous message to a receiving task, with time limited waiting.
:c:func:`task_mbox_block_put()`
Send asynchronous message to a receiving task, or to a mailbox queue.
:cpp:func:`task_mbox_get()`
Get message from a mailbox, with time limited waiting.
:c:func:`task_mbox_data_get()`
Retrieve message data into a buffer.
:cpp:func:`task_mbox_data_block_get()`
Retrieve message data into a block, with time limited waiting.
.. _microkernel_memory:
Memory Management Services
##########################
This section contains the information about the memory management services available in the
microkernel.
.. toctree::
:maxdepth: 2
microkernel_memory_maps
microkernel_memory_pools
.. _microkernel_memory_maps:
Memory Maps
###########
Concepts
********
The microkernel's memory map objects provide dynamic allocation and
release of fixed-size memory blocks.
Any number of memory maps can be defined in a microkernel system. Each
memory map has:
* A **name** that uniquely identifies it.
* The **number of blocks** it contains.
* **Block size** of a single block, measured in bytes.
The number of blocks and block size values cannot be zero. On most
processors, the block size must be defined as a multiple of the word size.
A task that needs to use a memory block simply allocates it from a memory
map. When all the blocks are currently in use, the task can wait
for one to become available. When the task finishes with a memory block,
it must release the block back to the memory map that allocated it so that
the block can be reused.
Any number of tasks can wait on an empty memory map simultaneously; when a
memory block becomes available, it is given to the highest-priority task that
has waited the longest.
The microkernel manages memory blocks in an efficient and deterministic
manner that eliminates the risk of memory fragmentation problems which can
arise when using variable-size blocks.
Unlike a heap, more than one memory map can be defined, if needed. This
allows for a memory map with smaller blocks and others with larger-sized
blocks. Alternatively, a memory pool object may be used.
Purpose
*******
Use a memory map to allocate and free memory in fixed-size blocks.
Usage
*****
Defining a Memory Map
=====================
The following parameters must be defined:
*name*
This specifies a unique name for the memory map.
*num_blocks*
This specifies the number of memory blocks in the memory map.
*block_size*
This specifies the size in bytes of each memory block.
Public Memory Map
-----------------
Define the memory map in the application's MDEF using the following
syntax:
.. code-block:: console
MAP name num_blocks block_size
For example, the file :file:`projName.mdef` defines a pair of memory maps
as follows:
.. code-block:: console
% MAP NAME NUMBLOCKS BLOCKSIZE
% ======================================
MAP MYMAP 4 1024
MAP YOURMAP 6 200
A public memory map can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
Private Memory Map
------------------
Define the memory map in a source file using the following syntax:
.. code-block:: c
DEFINE_MEM_MAP(name, num_blocks, block_size);
Example: Defining a Memory Map, Referencing it from Elsewhere in the Application
================================================================================
This code defines a private memory map named ``PRIV_MEM_MAP``:
.. code-block:: c
DEFINE_MEM_MAP(PRIV_MEM_MAP, 6, 200);
To reference the map from a different source file, use the following syntax:
.. code-block:: c
extern const kmemory_map_t PRIV_MEM_MAP;
Example: Requesting a Memory Block from a Map with No Conditions
================================================================
This code waits indefinitely for a memory block to become available
when all the memory blocks are in use.
.. code-block:: c
void *block_ptr;
task_mem_map_alloc(MYMAP, &block_ptr, TICKS_UNLIMITED);
Example: Requesting a Memory Block from a Map with a Conditional Time-out
=========================================================================
This code waits a specified amount of time for a memory block to become
available and gives a warning when the memory block does not become available
in the specified time.
.. code-block:: c
void *block_ptr;
if (task_mem_map_alloc(MYMAP, &block_ptr, 5) == RC_OK) {
/* utilize memory block */
} else {
printf("Memory allocation time-out");
}
Example: Requesting a Memory Block from a Map with a No Blocking Condition
==========================================================================
This code gives an immediate warning when all memory blocks are in use.
.. code-block:: c
void *block_ptr;
if (task_mem_map_alloc(MYMAP, &block_ptr, TICKS_NONE) == RC_OK) {
/* utilize memory block */
} else {
display_warning(); /* and do not allocate memory block*/
}
Example: Freeing a Memory Block back to a Map
=============================================
This code releases a memory block back when it is no longer needed.
.. code-block:: c
void *block_ptr;
task_mem_map_alloc(MYMAP, &block_ptr, TICKS_UNLIMITED);
/* use memory block */
task_mem_map_free(MYMAP, &block_ptr);
APIs
****
The following Memory Map APIs are provided by :file:`microkernel.h`:
:cpp:func:`task_mem_map_alloc()`
Wait on a block of memory for the period of time defined by the time-out
parameter.
:c:func:`task_mem_map_free()`
Return a block to a memory map.
:cpp:func:`task_mem_map_used_get()`
Return the number of used blocks in a memory map.
.. _microkernel_memory_pools:
Memory Pools
############
Concepts
********
The microkernel's :dfn:`memory pool` objects provide dynamic allocation and
release of variable-size memory blocks.
Unlike :ref:`memory map <microkernel_memory_maps>` objects, which support
memory blocks of only a *single* size, a memory pool can support memory blocks
of *various* sizes. The memory pool does this by subdividing blocks into smaller
chunks, where possible, to more closely match the actual needs of a requesting
task.
Any number of memory pools can be defined in a microkernel system. Each memory
pool has:
* A **name** that uniquely identifies it.
* A **minimum** and **maximum** block size, in bytes, of memory blocks
within the pool.
* The **number of maximum-size memory blocks** initially available.
A task that needs to use a memory block simply allocates it from a memory
pool. When a block of the desired size is unavailable, the task can wait
for one to become available. Following a successful allocation, the
:c:data:`pointer_to_data` field of the block descriptor supplied by the
task indicates the starting address of the memory block. When the task is
finished with a memory block, it must release the block back to the memory
pool that allocated it so that the block can be reused.
Any number of tasks can wait on a memory pool simultaneously; when a
memory block becomes available, it is given to the highest-priority task
that has waited the longest.
When a request for memory is sufficiently smaller than an available
memory pool block, the memory pool will automatically split the block into
4 smaller blocks. The resulting smaller blocks can also be split repeatedly,
until a block just larger than the needed size is available, or the minimum
block size, as specified in the MDEF, is reached.
If the memory pool cannot find an available block that is at least
the requested size, it will attempt to create one by merging adjacent
free blocks. If a suitable block can't be created, the request fails.
Although a memory pool uses efficient algorithms to manage its blocks,
the splitting of available blocks and merging of free blocks takes time
and increases block allocation overhead. The larger the allowable
number of splits, the larger the overhead. However, the minimum and maximum
block-size parameters specified for a pool can be used to control the amount
of splitting, and thus the amount of overhead.
Unlike a heap, more than one memory pool can be defined, if needed. For
example, different applications can utilize different memory pools; this
can help prevent one application from hijacking resources to allocate all
of the available blocks.
Purpose
*******
Use memory pools to allocate memory in variable-size blocks.
Use memory pool blocks when sending data to a mailbox asynchronously.
Usage
*****
Defining a Memory Pool
======================
The following parameters must be defined:
*name*
This specifies a unique name for the memory pool.
*min_block_size*
This specifies the minimum memory block size in bytes.
It should be a multiple of the processor's word size.
*max_block_size*
This specifies the maximum memory block size in bytes.
It should be a power of 4 times larger than *min_block_size*;
that is, max_block_size = min_block_size * 4^n, where n >= 0.
*num_max*
This specifies the number of maximum size memory blocks
available at startup.
Public Memory Pool
------------------
Define the memory pool in the application's MDEF with the following
syntax:
.. code-block:: console
POOL name min_block_size max_block_size num_max
For example, the file :file:`projName.mdef` defines two memory pools
as follows:
.. code-block:: console
% POOL NAME MIN MAX NMAX
% =======================================
POOL MY_POOL 32 8192 1
POOL SECOND_POOL_ID 64 1024 5
A public memory pool can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
.. note::
Private memory pools are not supported by the Zephyr kernel.
Example: Requesting a Memory Block from a Pool with No Conditions
=================================================================
This code waits indefinitely for an 80 byte memory block to become
available, then fills it with zeroes.
.. code-block:: c
struct k_block block;
task_mem_pool_alloc(&block, MYPOOL, 80, TICKS_UNLIMITED);
memset(block.pointer_to_data, 0, 80);
Example: Requesting a Memory Block from a Pool with a Conditional Time-out
==========================================================================
This code waits up to 5 ticks for an 80 byte memory block to become
available and gives a warning if a suitable memory block is not obtained
in that time.
.. code-block:: c
struct k_block block;
if (task_mem_pool_alloc(&block, MYPOOL, 80, 5) == RC_OK) {
/* use memory block */
} else {
printf("Memory allocation timeout");
}
Example: Requesting a Memory Block from a Pool with a No-Blocking Condition
===========================================================================
This code gives an immediate warning when it can not satisfy the request for
a memory block of 80 bytes.
.. code-block:: c
struct k_block block;
if (task_mem_pool_alloc(&block, MYPOOL, 80, TICKS_NONE) == RC_OK) {
/* use memory block */
} else {
printf("Memory allocation timeout");
}
Example: Freeing a Memory Block Back to a Pool
==============================================
This code releases a memory block back to a pool when it is no longer needed.
.. code-block:: c
struct k_block block;
task_mem_pool_alloc(&block, MYPOOL, size, TICKS_NONE);
/* use memory block */
task_mem_pool_free(&block);
Example: Manually Defragmenting a Memory Pool
=============================================
This code instructs the memory pool to concatenate any unused memory blocks
that can be merged. Doing a full defragmentation of the entire memory pool
before allocating a number of memory blocks may be more efficient than doing
an implicit partial defragmentation of the memory pool each time a memory
block allocation occurs.
.. code-block:: c
task_mem_pool_defragment(MYPOOL);
APIs
****
Memory Pools APIs provided by :file:`microkernel.h`
===================================================
:cpp:func:`task_mem_pool_alloc()`
Wait for a block of memory; wait the period of time defined by the time-out
parameter.
:cpp:func:`task_mem_pool_free()`
Return a block of memory to a memory pool.
:cpp:func:`task_mem_pool_defragment()`
Defragment a memory pool.
.. _microkernel_mutexes:
Mutexes
#######
Concepts
********
The microkernel's :dfn:`mutex` objects provide reentrant mutex
capabilities with basic priority inheritance.
Each mutex allows multiple tasks to safely share an associated
resource by ensuring mutual exclusivity while the resource is
being accessed by a task.
Any number of mutexes can be defined in a microkernel system.
Each mutex needs a **name** that uniquely identifies it. Typically,
the name should relate to the resource being shared, although this
is not a requirement.
A task that needs to use a shared resource must first gain exclusive
access by locking the associated mutex. If the mutex is already locked
by another task, the requesting task can wait for the mutex to be
unlocked.
After obtaining the mutex, the task may safely use the shared
resource for as long as needed. And when the task no longer needs
the resource, it must release the associated mutex to allow
other tasks to use the resource.
Any number of tasks may wait on a locked mutex. When more than one
task is waiting, the mutex is given to the highest-priority
task that has waited the longest; this is known as
:dfn:`priority-based waiting`. The order is decided when a task decides
to wait on the object: it is queued in priority order.
The task currently owning the mutex is also eligible for :dfn:`priority inheritance`.
Priority inheritance is the concept by which a task of lower priority gets its
priority *temporarily* elevated to the priority of the highest-priority
task that is waiting on a mutex held by the lower priority task. Thus, the
lower-priority task can complete its work and release the mutex as quickly
as possible. Once the mutex has been released, the lower-priority task resets
its task priority to the priority it had before locking that mutex.
.. note::
The :option:`CONFIG_PRIORITY_CEILING` configuration option limits
how high the kernel can raise a task's priority due to priority
inheritance. The default value of 0 permits unlimited elevation.
When two or more tasks wait on a mutex held by a lower priority task, the
kernel adjusts the owning task's priority each time a task begins waiting
(or gives up waiting). When the mutex is eventually released, the owning
task's priority correctly reverts to its original non-elevated priority.
The kernel does *not* fully support priority inheritance when a task holds
two or more mutexes simultaneously. This situation can result in the task's
priority not reverting to its original non-elevated priority when all mutexes
have been released. Preferably, a task holds only a single mutex when multiple
mutexes are shared between tasks of different priorities.
The microkernel also allows a task to repeatedly lock a mutex it has already
locked. This ensures that the task can access the resource at a point in its
execution when the resource may or may not already be locked. A mutex that is
repeatedly locked must be unlocked an equal number of times before the mutex
can release the resource completely.
Purpose
*******
Use mutexes to provide exclusive access to a resource, such as a physical
device.
Usage
*****
Defining a Mutex
================
The following parameters must be defined:
*name*
This specifies a unique name for the mutex.
Public Mutex
------------
Define the mutex in the application's MDEF file with the following syntax:
.. code-block:: console
MUTEX name
For example, the file :file:`projName.mdef` defines a single mutex as follows:
.. code-block:: console
% MUTEX NAME
% ===============
MUTEX DEVICE_X
A public mutex can be referenced by name from any source file that includes
the file :file:`zephyr.h`.
Private Mutex
-------------
Define the mutex in a source file using the following syntax:
.. code-block:: c
DEFINE_MUTEX(name);
For example, the following code defines a private mutex named ``XYZ``.
.. code-block:: c
DEFINE_MUTEX(XYZ);
The following syntax allows this mutex to be accessed from a different
source file:
.. code-block:: c
extern const kmutex_t XYZ;
Example: Locking a Mutex with No Conditions
===========================================
This code waits indefinitely for the mutex to become available if the
mutex is in use.
.. code-block:: c
task_mutex_lock(XYZ, TICKS_UNLIMITED);
moveto(100,100);
lineto(200,100);
task_mutex_unlock(XYZ);
Example: Locking a Mutex with a Conditional Timeout
===================================================
This code waits for a mutex to become available for a specified
time, and gives a warning if the mutex does not become available
in the specified amount of time.
.. code-block:: c
if (task_mutex_lock(XYZ, 100) == RC_OK) {
moveto(100,100);
lineto(200,100);
task_mutex_unlock(XYZ);
} else {
printf("Cannot lock XYZ display\n");
}
Example: Locking a Mutex with a No Blocking Condition
=====================================================
This code gives an immediate warning when a mutex is in use.
.. code-block:: c
if (task_mutex_lock(XYZ, TICKS_NONE) == RC_OK) {
do_something();
task_mutex_unlock(XYZ); /* and unlock mutex */
} else {
display_warning(); /* and do not unlock mutex */
}
APIs
****
Mutex APIs provided by :file:`microkernel.h`
============================================
:cpp:func:`task_mutex_lock()`
Wait on a locked mutex for the period of time defined by the timeout
parameter. Lock the mutex and increment the lock count if the mutex
becomes available during that period.
:cpp:func:`task_mutex_unlock()`
Decrement a mutex lock count, and unlock the mutex when the count
reaches zero.
.. _microkernel_pipes:
Pipes
#####
Concepts
********
The microkernel's :dfn:`pipe` object type is an implementation of a traditional
anonymous pipe.
A pipe allows a task to send a byte stream to another task. The pipe can be
configured with a ring buffer which holds data that has been sent
but not yet received; alternatively, the pipe may have no ring buffer.
Pipes can be used to transfer chunks of data in whole or in part, and either
synchronously or asynchronously.
Any number of pipes can be defined in a microkernel system. Each pipe
needs:
* A **name** to uniquely identify it.
* A **size**, in bytes, of the ring buffer. Note that a size of zero defines
a pipe with no ring buffer.
Sending Data
============
A task sends data to a pipe by specifying a pointer to the data bytes
to be sent. It also specifies both the number of data bytes available
and a :dfn:`pipe option` that indicates the minimum number of data bytes
the pipe must accept. The following pipe option values are supported:
``_ALL_N``
Specifies that **all** available data bytes must be accepted by the pipe.
When this requirement is not fulfilled, the send request either fails or
performs a partial send.
``_1_TO_N``
Specifies that **at least one** data byte must be accepted by the pipe.
When this requirement is not fulfilled, the send request fails.
``_0_TO_N``
Specifies that **any number** of data bytes, including zero, may be accepted
by the pipe; the send request never fails.
The pipe accepts data bytes from the sending task if they can be delivered
by copying them directly to the receiving task. If the sending task is unable
to wait, or has waited as long as it can, the pipe can also accept data bytes
by copying them to its ring buffer for later delivery. The ring buffer is used
only when necessary to minimize copying of data bytes.
Upon the completion of a send operation, a :dfn:`return code` is provided to
indicate whether the send request was satisfied. The sending task can also read
the ``bytes written`` argument attribute to determine how many data bytes were
accepted by the pipe, allowing it to deal with any unsent data
bytes.
Data sent to a pipe that does not have a ring buffer is sent synchronously;
that is, when the send operation is complete, the sending task knows that the
receiving task has received the data that was sent. Data sent to a pipe
that has a ring buffer is sent asynchronously; that is, when the send operation
is complete, some or all of the data that was sent may still be in the pipe
waiting for the receiving task to receive it.
Incomplete Send Requests
------------------------
Although a pipe endeavors to accept all available data bytes when the
``_ALL_N`` pipe option is specified, it does not guarantee that the
data bytes will be accepted in an "all or nothing" manner. When the pipe
is able to accept at least one data byte, it returns :c:macro:`RC_INCOMPLETE`
to notify the sending task that its request was not fully satisfied. When
the pipe is unable to accept any data bytes, it returns :c:macro:`RC_FAIL`.
One example of a situation that can result in an incomplete send is a
time-limited send request through an unbuffered pipe. If the receiving task
chooses to receive only a subset of the sender's data bytes, and the send
operation times out before the receiving task attempts to receive the
remainder, an incomplete send occurs.
Sending using a Memory Pool Block
---------------------------------
A task that sends large chunks of data through a pipe may be able to improve
its efficiency by placing the data into a memory pool block and sending
the block. The pipe treats the memory block as a temporary addition to
its ring buffer, allowing it to immediately accept the data bytes without
copying them. Once all of the data bytes in the block have been delivered
to the receiving task, the pipe automatically frees the block back to the
memory pool.
Data sent using a memory pool block is always sent asynchronously, even for
a pipe with no ring buffer of its own. Likewise, the pipe always accepts all
of the available data in the block -- a partial transfer never occurs.
Receiving Data
==============
A task receives from a pipe by specifying a pointer to an area to receive
the data bytes that were sent. It also specifies both the desired number
of data bytes and a :dfn:`pipe option` that indicates the minimum number of
data bytes the pipe must deliver. The following pipe option values
are supported:
``_ALL_N``
Specifies that **all** of the desired data bytes must be received.
When this requirement is not fulfilled, the receive request either fails or
performs a partial receive.
``_1_TO_N``
Specifies that at least one data byte must be received. When this requirement
is not fulfilled, the receive request fails.
``_0_TO_N``
Specifies that any number of data bytes (including zero) may be
received; the receive request never fails.
The pipe delivers data bytes by copying them directly from the sending task
or from the pipe's ring buffer. Data bytes taken from the ring buffer are
delivered in a first in, first out manner.
When a pipe is unable to deliver the specified minimum number of data bytes,
the receiving task may choose to wait until they can be delivered.
Upon completion of a receive operation, a :dfn:`return code` is provided to
indicate whether the receive request was satisfied. The receiving task also
can read the ``bytes read`` argument attribute to determine how many
data bytes were delivered by the pipe.
Incomplete Receive Requests
---------------------------
Although a pipe endeavors to deliver all desired data bytes when the
``_ALL_N`` pipe option is specified, it does not guarantee that the
data bytes will be delivered in an "all or nothing" manner. When the pipe
is able to deliver at least one data byte, it returns :c:macro:`RC_INCOMPLETE`
to notify the receiving task that its request was not fully satisfied. When
the pipe is unable to deliver any data bytes, it returns :c:macro:`RC_FAIL`.
An example of a situation that can result in an incomplete receive is a
time-limited receive request through an unbuffered pipe. If the sending task
sends fewer than the desired number of data bytes, and the receive
operation times out before the sending task attempts to send the remainder,
an incomplete receive occurs.
Receiving using a Memory Pool Block
-----------------------------------
A task can achieve the effect of receiving data from a pipe into a memory pool
block by pre-allocating a block and then receiving the data into it.
Sharing a Pipe
==============
A pipe is typically used by a single sending task and a single receiving
task; however, it is possible for a pipe to be shared by multiple sending
tasks or multiple receiving tasks.
Care must be taken when a pipe is shared by multiple sending tasks to
ensure the data bytes they send do not become interleaved unexpectedly;
using the ``_ALL_N`` pipe option helps to ensure that each data chunk is
transferred in a single operation. The same is true when multiple receiving
tasks are reading from the same pipe.
Purpose
*******
Use a pipe to transfer data when the receiving task needs the ability
to split or merge the data items generated by the sending task.
Usage
*****
Defining a Pipe
===============
The following parameters must be defined:
*name*
This specifies a unique name for the pipe.
*buffer_size*
This specifies the size in bytes of the pipe's ring buffer.
If no ring buffer is to be used specify zero.
Public Pipe
-----------
Define the pipe in the application's MDEF using the following syntax:
.. code-block:: console
PIPE name buffer_size
For example, the file :file:`projName.mdef` defines a pipe with a 1 KB ring
buffer as follows:
.. code-block:: console
% PIPE NAME BUFFERSIZE
% ===============================
PIPE DATA_PIPE 1024
A public pipe can be referenced by name from any source file that includes
the file :file:`zephyr.h`.
Private Pipe
------------
Define the pipe in a source file using the following syntax:
.. code-block:: c
DEFINE_PIPE(name, size);
For example, the following code defines a private pipe named ``PRIV_PIPE``.
.. code-block:: c
DEFINE_PIPE(PRIV_PIPE, 1024);
To use this pipe from a different source file use the following syntax:
.. code-block:: c
extern const kpipe_t PRIV_PIPE;
Example: Writing Fixed-Size Data Items to a Pipe
================================================
This code uses a pipe to send a series of fixed-size data items
to a consuming task.
.. code-block:: c
void producer_task(void)
{
struct item_type data_item;
int amount_written;
while (1) {
/* generate a data item to send */
data_item = ... ;
/* write the entire data item to the pipe */
task_pipe_put(DATA_PIPE, &data_item, sizeof(data_item),
&amount_written, _ALL_N, TICKS_UNLIMITED);
}
}
Example: Reading Fixed-Size Data Items from a Pipe
==================================================
This code uses a pipe to receive a series of fixed-size data items
from a producing task. To improve performance, the consuming task
waits until 20 data items are available then reads them as a group,
rather than reading them individually.
.. code-block:: c
void consumer_task(void)
{
struct item_type data_items[20];
int amount_read;
int i;
while (1) {
/* read 20 complete data items at once */
task_pipe_get(DATA_PIPE, &data_items, sizeof(data_items),
&amount_read, _ALL_N, TICKS_UNLIMITED);
/* process the data items one at a time */
for (i = 0; i < 20; i++) {
... = data_items[i];
...
}
}
}
Example: Reading a Stream of Data Bytes from a Pipe
===================================================
This code uses a pipe to process a stream of data bytes from a
producing task. The pipe is read in a non-blocking manner to allow
the consuming task to perform other work when there are no
unprocessed data bytes in the pipe.
.. code-block:: c
void consumer_task(void)
{
char data_area[20];
int amount_read;
int i;
while (1) {
/* consume any data bytes currently in the pipe */
while (task_pipe_get(DATA_PIPE, &data_area, sizeof(data_area),
&amount_read, _1_TO_N, TICKS_NONE) == RC_OK) {
/* now have from 1 to 20 data bytes */
for (i = 0; i < amount_read; i++) {
... = data_area[i];
...
}
}
/* do other processing */
...
}
}
APIs
****
Pipe APIs provided by :file:`microkernel.h`
===========================================
:cpp:func:`task_pipe_put()`
Write data to a pipe, with time limited waiting.
:c:func:`task_pipe_block_put()`
Write data to a pipe from a memory pool block.
:cpp:func:`task_pipe_get()`
Read data from a pipe, with time limited waiting.
.. _microkernel_semaphores:
Semaphores
##########
Concepts
********
The microkernel's :dfn:`semaphore` objects are an implementation of traditional
counting semaphores.
Any number of semaphores can be defined in a microkernel system. Each semaphore
has a **name** that uniquely identifies it.
A semaphore starts off with a count of zero. This count is incremented each
time the semaphore is given, and is decremented each time the semaphore is taken.
However, a semaphore cannot be taken when it has a count of zero; this makes
it unavailable.
Semaphores may be given by tasks, fibers, or ISRs.
Semaphores may be taken by tasks only. A task that attempts to take an unavailable
semaphore may wait for the semaphore to be given. Any number of tasks may wait on
an unavailable semaphore simultaneously; and when the semaphore becomes available,
it is given to the highest priority task that has waited the longest.
The kernel allows a task to give multiple semaphores in a single operation using a
*semaphore group*. The task specifies the members of a semaphore group with an array
of semaphore names, terminated by the symbol :c:macro:`ENDLIST`. This technique
allows the task to give the semaphores more efficiently than giving them individually.
A task can also use a semaphore group to take a single semaphore from a set
of semaphores in a single operation. This technique allows the task to
monitor multiple synchronization sources at the same time, similar to the way
:c:func:`select()` can be used to read input from a set of file descriptors
in a POSIX-compliant operating system. The kernel does *not* define the order
in which semaphores are taken when more than one semaphore in a semaphore group
is available; the semaphore that is taken by the task may not be the one
that was given first.
There is no limit on the number of semaphore groups used by a task, or
on the number of semaphores belonging to any given semaphore group. Semaphore
groups may also be shared by multiple tasks, if desired.
Purpose
*******
Use a semaphore to control access to a set of resources by multiple tasks.
Use a semaphore to synchronize processing between a producing task, fiber,
or ISR and one or more consuming tasks.
Use a semaphore group to allow a task to signal or to monitor multiple
semaphores simultaneously.
Usage
*****
Defining a Semaphore
====================
The following parameters must be defined:
*name*
This specifies a unique name for the semaphore.
Public Semaphore
----------------
Define the semaphore in the application's MDEF with the following syntax:
.. code-block:: console
SEMA name
For example, the file :file:`projName.mdef` defines two semaphores as follows:
.. code-block:: console
% SEMA NAME
% ================
SEMA INPUT_DATA
SEMA WORK_DONE
A public semaphore can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
Private Semaphore
-----------------
Define the semaphore in a source file using the following syntax:
.. code-block:: c
DEFINE_SEMAPHORE(name);
For example, the following code defines a private semaphore named ``PRIV_SEM``.
.. code-block:: c
DEFINE_SEMAPHORE(PRIV_SEM);
To reference this semaphore from a different source file, use the following syntax:
.. code-block:: c
extern const ksem_t PRIV_SEM;
Example: Giving a Semaphore from a Task
=======================================
This code uses a semaphore to indicate that a unit of data
is available for processing by a consumer task.
.. code-block:: c
void producer_task(void)
{
/* save data item in a buffer */
...
/* notify task that an additional data item is available */
task_sem_give(INPUT_DATA);
...
}
Example: Taking a Semaphore with a Conditional Time-out
=======================================================
This code waits up to 500 ticks for a semaphore to be given,
and gives a warning if it is not obtained in that time.
.. code-block:: c
void consumer_task(void)
{
...
if (task_sem_take(INPUT_DATA, 500) == RC_TIME) {
printf("Input data not available!");
} else {
/* extract saved data item from buffer and process it */
...
}
...
}
Example: Monitoring Multiple Semaphores at Once
===============================================
This code waits on two semaphores simultaneously, and then takes
action depending on which one was given.
.. code-block:: c
ksem_t my_sem_group[3] = { INPUT_DATA, WORK_DONE, ENDLIST };
void consumer_task(void)
{
ksem_t sem_id;
...
sem_id = task_sem_group_take(my_sem_group, TICKS_UNLIMITED);
if (sem_id == WORK_DONE) {
printf("Shutting down!");
return;
} else {
/* process input data */
...
}
...
}
Example: Giving Multiple Semaphores at Once
===========================================
This code uses a semaphore group to allow a controlling task to signal
the semaphores used by four other tasks in a single operation.
.. code-block:: c
ksem_t my_sem_group[5] = { SEM1, SEM2, SEM3, SEM4, ENDLIST };
void control_task(void)
{
...
task_sem_group_give(my_sem_group);
...
}
APIs
****
All of the following APIs are provided by :file:`microkernel.h`:
APIs for an individual semaphore
================================
:cpp:func:`isr_sem_give()`
Give a semaphore (from an ISR).
:cpp:func:`fiber_sem_give()`
Give a semaphore (from a fiber).
:cpp:func:`task_sem_give()`
Give a semaphore.
:cpp:func:`task_sem_take()`
Take a semaphore, with time limited waiting.
:cpp:func:`task_sem_reset()`
Set the semaphore count to zero.
:cpp:func:`task_sem_count_get()`
Read the count for a semaphore.
APIs for semaphore groups
=========================
:cpp:func:`task_sem_group_give()`
Give each semaphore in a group.
:cpp:func:`task_sem_group_take()`
Wait up to a specified time period for a semaphore from a group.
:cpp:func:`task_sem_group_reset()`
Set the count to zero for each semaphore in a group.
.. _microkernel_synchronization:
Synchronization Services
########################
This section contains the information about the synchronization services
available in the microkernel.
.. toctree::
:maxdepth: 2
microkernel_events
microkernel_semaphores
microkernel_mutexes
.. _microkernel_tasks:
Task Services
#############
Concepts
********
A task is a preemptible thread of execution that implements a portion of
an application's processing. It is normally used for processing that
is too lengthy or too complex to be performed by a fiber or an ISR.
A microkernel application can define any number of application tasks. Each
task has a name that uniquely identifies it, allowing it to be directly
referenced by other tasks. For each microkernel task, the following
properties must be specified:
* A **memory region** to be used for stack and execution context information.
* A **function** to be invoked when the task starts executing.
* The **priority** to be used by the microkernel scheduler.
A task's entry point function takes no arguments, so there is no need to
define any argument values for it.
The microkernel automatically defines a system task, known as the *idle task*,
at the lowest priority. This task is used during system initialization,
and subsequently executes only when there is no other work for the system to do.
The idle task is anonymous and must not be referenced by application tasks.
.. note::
A nanokernel application can define only a single application task, known
as the *background task*, which is very different from the microkernel tasks
described in this section. For more information see
:ref:`Nanokernel Task Services <nanokernel_tasks>`.
Task Lifecycle
==============
The kernel automatically starts a task during system initialization if the task
belongs to the :c:macro:`EXE` task group; see `Task Groups`_ below.
A task that is not started automatically must be started by another task
using :c:func:`task_start()`.
Once a task is started it normally executes forever. A task may terminate
gracefully by simply returning from its entry point function. If it does,
it is the task's responsibility to release any system resources it may own
(such as mutexes and dynamically allocated memory blocks) prior to returning,
since the kernel does *not* attempt to reclaim them so they can be reused.
A task may also terminate non-gracefully by *aborting*. The kernel
automatically aborts a task when it generates a fatal error condition,
such as dereferencing a null pointer. A task can also be explicitly aborted
using :c:func:`task_abort()`. As with graceful task termination,
the kernel does not attempt to reclaim system resources owned by the task.
A task may optionally register an *abort handler function* to be invoked
by the kernel when the task terminates (including during graceful termination).
The abort handler can be used to record information about the terminating
task or to assist in reclaiming system resources owned by the task. The abort
handler function is invoked by the microkernel server fiber, so it cannot
directly call kernel APIs that must be invoked by a task; instead, it must
coordinate with another task to invoke such APIs indirectly.
.. note::
The kernel does not currently make any claims regarding an application's
ability to restart a terminated task.
Task Scheduling
===============
The microkernel's scheduler selects which of the system's tasks is allowed
to execute; this task is known as the *current task*. The nanokernel's scheduler
permits the current task to execute only when no fibers or ISRs are available
to execute; fiber and ISR executions always take precedence.
When a context switch to a different task, fiber, or ISR occurs, the kernel
automatically saves the current task's CPU register values; these values are
restored when the task later resumes execution.
Task State
----------
Each microkernel task has an associated *state* that determines whether or not
it can be scheduled for execution. The state records factors that could prevent
the task from executing, such as:
* The task has not been started.
* The task is waiting (for a semaphore, for a timeout, ...).
* The task has been suspended.
* The task has terminated.
A task whose state has no factors that prevent its execution is said to be
*executable*.
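The state described above can be pictured as a set of blocking factors, with a task being executable exactly when the set is empty. This is an illustrative model only; the flag names and encoding below are invented, and the real kernel's state representation is internal:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative model only: each bit records one factor that prevents
 * the task from executing. These names are not kernel symbols. */
#define TASK_NOT_STARTED (1u << 0)
#define TASK_WAITING     (1u << 1)  /* e.g. on a semaphore or timeout */
#define TASK_SUSPENDED   (1u << 2)
#define TASK_TERMINATED  (1u << 3)

/* A task is executable only when no blocking factor is recorded. */
bool task_is_executable(uint32_t state)
{
    return state == 0;
}
```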
Task Priorities
---------------
A microkernel application can be configured to support any number of
task priority levels using the :option:`CONFIG_NUM_TASK_PRIORITIES`
configuration option.
An application task can have any priority from 0 (highest priority)
down to :option:`CONFIG_NUM_TASK_PRIORITIES`\-2. The lowest priority
level, :option:`CONFIG_NUM_TASK_PRIORITIES`\-1, is reserved for the
microkernel's idle task.
A task's original priority can be altered up or down after the task has been
started.
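The priority numbering above can be checked with simple arithmetic. The configuration value below is an assumed example (the real value comes from the project configuration), and the helper is illustrative, not a kernel API:

```c
#include <stdbool.h>

/* Assumed example value; the real option is set via project config. */
#define CONFIG_NUM_TASK_PRIORITIES 16

/* The lowest level is reserved for the microkernel's idle task. */
#define IDLE_TASK_PRIORITY (CONFIG_NUM_TASK_PRIORITIES - 1)

/* Application tasks may use 0 (highest) through
 * CONFIG_NUM_TASK_PRIORITIES - 2 (lowest application level). */
bool is_valid_app_priority(int prio)
{
    return prio >= 0 && prio <= CONFIG_NUM_TASK_PRIORITIES - 2;
}
```

With 16 priority levels configured, application tasks use priorities 0 through 14 and the idle task runs at 15.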
Scheduling Algorithm
--------------------
The microkernel's scheduler always selects the highest priority executable task
to be the current task. When multiple executable tasks of the same priority are
available, the scheduler chooses the one that has been waiting longest.
Once a task becomes the current task it remains scheduled for execution
by the microkernel until one of the following occurs:
* The task is supplanted by a higher-priority task that becomes ready to
execute.
* The task is supplanted by an equal-priority task that is ready to execute,
either because the current task explicitly calls :c:func:`task_yield()`
or because the kernel implicitly calls :c:func:`task_yield()` after the
scheduler's time slice expired.
* The task is supplanted by an equal or lower-priority task that is ready
to execute because the current task called a kernel API that blocked its
own execution. For example, the task attempted to take a semaphore that
was unavailable.
* The task terminates itself by returning from its entry point function.
* The task aborts itself by performing an operation that causes a fatal error,
or by calling :c:func:`task_abort()`.
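The selection rule above (highest priority first, longest-waiting on a tie) can be sketched as a toy model in plain C. This is only a model of the rule, not the kernel's scheduler implementation; all names are invented:

```c
#include <stddef.h>

/* Toy model of the selection rule only. */
struct toy_task {
    int priority;    /* 0 is the highest priority */
    int wait_ticks;  /* how long the task has been ready to run */
    int executable;  /* nonzero when no blocking factors apply */
};

/* Return the index of the task to run: the highest-priority executable
 * task, preferring the one that has waited longest on a priority tie.
 * Returns -1 if no task is executable. */
int toy_schedule(const struct toy_task *tasks, size_t n)
{
    int best = -1;

    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].executable) {
            continue;
        }
        if (best < 0 ||
            tasks[i].priority < tasks[best].priority ||
            (tasks[i].priority == tasks[best].priority &&
             tasks[i].wait_ticks > tasks[best].wait_ticks)) {
            best = (int)i;
        }
    }
    return best;
}
```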
Time Slicing
------------
The microkernel's scheduler supports an optional time slicing capability
that prevents a task from monopolizing the CPU when other tasks of the
same priority are ready to execute.
The scheduler divides time into a series of *time slices*, where
slices are measured in system clock ticks. The time slice size is
specified with the :option:`CONFIG_TIMESLICE_SIZE` configuration
option, but this size can also be changed dynamically, while the
application is running.
At the end of every time slice, the scheduler implicitly invokes
:c:func:`task_yield()` on behalf of the current task; this gives
any other task of that priority the opportunity to execute before the
current task can once again be scheduled. If one or more equal-priority
tasks are ready to execute, the current task is preempted to allow those
tasks to execute. If no tasks of equal priority are ready to execute,
the current task remains the current task, and it continues to execute.
Tasks with a priority higher than that specified by the
:option:`CONFIG_TIMESLICE_PRIORITY` configuration option are exempt
from time slicing, and are never preempted by a task of equal
priority. This capability allows an application to use time slicing
only for lower priority tasks that are less time-sensitive.
.. note::
The microkernel's time slicing algorithm does *not* ensure that a set
of equal-priority tasks will receive an equitable amount of CPU time,
since it does not measure the amount of time a task actually gets to
execute. For example, a task may become the current task just before
the end of a time slice and then immediately have to yield the CPU.
On the other hand, the microkernel's scheduler *does* ensure that a task
never executes for longer than a single time slice without being required
to yield.
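Because the scheduler implicitly yields on behalf of the current task at the end of every slice, a set of ready equal-priority tasks simply rotates. A minimal sketch of that rotation, assuming the tasks are numbered 0 to N-1 and all remain ready (not the kernel's implementation):

```c
/* Assumed example: three equal-priority tasks, all continuously ready. */
#define NUM_EQUAL_TASKS 3

/* Which task runs in a given slice, if task `first` runs in slice 0:
 * each slice ends with an implicit yield, so the tasks rotate. */
int task_in_slice(int first, int slice)
{
    return (first + slice) % NUM_EQUAL_TASKS;
}
```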
Task Suspension
---------------
The microkernel allows a task to be *suspended*, which prevents the task
from executing for an indefinite period of time. The :c:func:`task_suspend()`
API allows an application task to suspend any other task, including itself.
Suspending a task that is already suspended has no additional effect.
Once suspended, a task cannot be scheduled until another task calls
:c:func:`task_resume()` to remove the suspension.
.. note::
A task can prevent itself from executing for a specified period of time
using :c:func:`task_sleep()`. However, this is different from suspending
a task since a sleeping task becomes executable automatically when the
time limit is reached.
Task Groups
===========
The kernel allows a set of related tasks, known as a *task group*, to be
manipulated as a single unit, rather than individually. This simplifies
the work required to start related tasks, to suspend and resume them, or
to abort them.
The kernel supports a maximum of 32 distinct task groups. Each task group
has a name that uniquely identifies it, allowing it to be directly referenced
by tasks.
The task groups a task belongs to are specified when the task is defined.
A task may belong to a single task group, to multiple task groups, or to
no task group. A task's group memberships can also be changed dynamically
while the application is running.
The task group designations listed below are pre-defined by the kernel;
additional task groups can be defined by the application.
:c:macro:`EXE`
This task group is started automatically by the kernel during system
initialization.
:c:macro:`SYS`
This task group is a set of system tasks that continues to execute
during system debugging.
:c:macro:`FPU`
This task group is a set of tasks that requires the kernel to save
x87 FPU and MMX floating point context information during context switches.
:c:macro:`SSE`
This task group is a set of tasks that requires the kernel to save SSE
floating point context information during context switches. (Tasks with
this group designation are implicitly members of the :c:macro:`FPU` task
group too.)
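Since the kernel supports at most 32 distinct task groups, a task's memberships fit naturally in a single 32-bit mask, with join and leave as bitwise operations. The group values and helpers below are illustrative stand-ins, not the kernel's actual encodings or APIs:

```c
#include <stdint.h>

/* Illustrative stand-in group bits, not the kernel's real values. */
#define TOY_EXE (1u << 0)
#define TOY_FPU (1u << 2)
#define TOY_SSE (1u << 3)

/* Join one or more groups; per the description above, SSE membership
 * implies FPU membership. */
uint32_t toy_group_join(uint32_t memberships, uint32_t groups)
{
    if (groups & TOY_SSE) {
        groups |= TOY_FPU;
    }
    return memberships | groups;
}

/* Leave one or more groups. */
uint32_t toy_group_leave(uint32_t memberships, uint32_t groups)
{
    return memberships & ~groups;
}
```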
Usage
*****
Defining a Task
===============
The following parameters must be defined:
*name*
This specifies a unique name for the task.
*priority*
This specifies the scheduling priority of the task.
*entry_point*
This specifies the name of the task's entry point function,
which should have the following form:
.. code-block:: c
void <entry_point>(void)
{
/* task mainline processing */
...
/* (optional) normal task termination */
return;
}
*stack_size*
This specifies the size of the memory region used for the task's
stack and for other execution context information, in bytes.
*groups*
This specifies the task groups the task belongs to.
Public Task
-----------
Define the task in the application's MDEF using the following syntax:
.. code-block:: console
TASK name priority entry_point stack_size groups
The task groups are specified using a comma-separated list of task group names
enclosed in square brackets, with no embedded spaces. If the task does not
belong to any task group, specify an empty list; i.e. :literal:`[]`.
For example, the file :file:`projName.mdef` defines a system comprised
of six tasks as follows:
.. code-block:: console
% TASK NAME PRIO ENTRY STACK GROUPS
% ===================================================================
TASK MAIN_TASK 6 keypad_main 1024 [KEYPAD_TASKS,EXE]
TASK PROBE_TASK 2 probe_main 400 []
TASK SCREEN1_TASK 8 screen_1_main 4096 [VIDEO_TASKS]
TASK SCREEN2_TASK 8 screen_2_main 4096 [VIDEO_TASKS]
TASK SPEAKER1_TASK 10 speaker_1_main 1024 [AUDIO_TASKS]
TASK SPEAKER2_TASK 10 speaker_2_main 1024 [AUDIO_TASKS]
A public task can be referenced by name from any source file that includes
the file :file:`zephyr.h`.
Private Task
------------
Define the task in a source file using the following syntax:
.. code-block:: c
DEFINE_TASK(PRIV_TASK, priority, entry, stack_size, groups);
The task groups are specified using a list of task group names separated by
:literal:`|`; i.e. the bitwise OR operator. If the task does not belong to any
task group, specify NULL.
For example, the following code can be used to define a private task named
``PRIV_TASK``.
.. code-block:: c
DEFINE_TASK(PRIV_TASK, 10, priv_task_main, 800, EXE);
To utilize this task from a different source file use the following syntax:
.. code-block:: c
extern const ktask_t PRIV_TASK;
Defining a Task Group
=====================
The following parameters must be defined:
*name*
This specifies a unique name for the task group.
Public Task Group
-----------------
Define the task group in the application's .MDEF file using the following
syntax:
.. code-block:: console
TASKGROUP name
For example, the file :file:`projName.mdef` defines three new task groups
as follows:
.. code-block:: console
% TASKGROUP NAME
% ========================
TASKGROUP VIDEO_TASKS
TASKGROUP AUDIO_TASKS
TASKGROUP KEYPAD_TASKS
A public task group can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
.. note::
Private task groups are not supported by the Zephyr kernel.
Example: Starting a Task from Another Task
==========================================
This code shows how the currently-executing task can start another task.
.. code-block:: c
void keypad_main(void)
{
/* begin system initialization */
...
/* start task to monitor temperature */
task_start(PROBE_TASK);
/* continue to bring up and operate system */
...
}
Example: Suspending and Resuming a Set of Tasks
===============================================
This code shows how the currently-executing task can temporarily suspend
the execution of all tasks belonging to the designated task groups.
.. code-block:: c
void probe_main(void)
{
int was_overheated = 0;
int now_overheated;
/* continuously monitor temperature */
while (1) {
now_overheated = overheating_update();
/* suspend non-essential tasks when overheating is detected */
if (now_overheated && !was_overheated) {
task_group_suspend(VIDEO_TASKS | AUDIO_TASKS);
was_overheated = 1;
}
/* resume non-essential tasks when overheating abates */
if (!now_overheated && was_overheated) {
task_group_resume(VIDEO_TASKS | AUDIO_TASKS);
was_overheated = 0;
}
/* wait 10 ticks of system clock before checking again */
task_sleep(10);
}
}
Example: Offloading Work to the Microkernel Server Fiber
========================================================
This code shows how the currently-executing task can perform critical section
processing by offloading it to the microkernel server. Since the critical
section function is being executed by a fiber, once the function begins
executing it cannot be interrupted by any other fiber or task that wants
to log an alarm.
.. code-block:: c
/* alarm logging subsystem */
#define MAX_ALARMS 100
struct alarm_info alarm_log[MAX_ALARMS];
int num_alarms = 0;
int log_an_alarm(struct alarm_info *new_alarm)
{
/* ensure alarm log isn't full */
if (num_alarms == MAX_ALARMS) {
return 0;
}
/* add new alarm to alarm log */
alarm_log[num_alarms] = *new_alarm;
num_alarms++;
/* pass back alarm identifier to indicate successful logging */
return num_alarms;
}
/* task that generates an alarm */
void XXX_main(void)
{
struct alarm_info my_alarm = { ... };
...
/* record alarm in system's database */
if (task_offload_to_fiber(log_an_alarm, &my_alarm) == 0) {
printf("Unable to log alarm!");
}
...
}
APIs
****
All of the following Microkernel APIs are provided by :file:`microkernel.h`.
APIs Affecting the Currently-Executing Task
===========================================
:cpp:func:`task_id_get()`
Gets the task's ID.
:c:func:`isr_task_id_get()`
Gets the task's ID from an ISR.
:cpp:func:`task_priority_get()`
Gets the task's priority.
:c:func:`isr_task_priority_get()`
Gets the task's priority from an ISR.
:cpp:func:`task_group_mask_get()`
Gets the task's group memberships.
:c:func:`isr_task_group_mask_get()`
Gets the task's group memberships from an ISR.
:cpp:func:`task_abort_handler_set()`
Installs the task's abort handler.
:cpp:func:`task_yield()`
Yields CPU to equal-priority tasks.
:cpp:func:`task_sleep()`
Yields CPU for a specified time period.
:cpp:func:`task_offload_to_fiber()`
Instructs the microkernel server fiber to execute a function.
APIs Affecting a Specified Task
===============================
:cpp:func:`task_priority_set()`
Sets a task's priority.
:cpp:func:`task_entry_set()`
Sets a task's entry point.
:c:func:`task_start()`
Starts execution of a task.
:c:func:`task_suspend()`
Suspends execution of a task.
:c:func:`task_resume()`
Resumes execution of a task.
:c:func:`task_abort()`
Aborts execution of a task.
:cpp:func:`task_group_join()`
Adds a task to the specified task group(s).
:cpp:func:`task_group_leave()`
Removes a task from the specified task group(s).
APIs Affecting Multiple Tasks
=============================
:cpp:func:`sys_scheduler_time_slice_set()`
Sets the time slice period used in round-robin task scheduling.
:c:func:`task_group_start()`
Starts execution of all tasks in the specified task groups.
:c:func:`task_group_suspend()`
Suspends execution of all tasks in the specified task groups.
:c:func:`task_group_resume()`
Resumes execution of all tasks in the specified task groups.
:c:func:`task_group_abort()`
Aborts execution of all tasks in the specified task groups.
.. _microkernel_timers:
Timer Services
##############
Concepts
********
A :dfn:`microkernel timer` allows a task to determine whether or not a
specified time limit has been reached while the task is busy performing
other work. The timer uses the kernel's system clock, measured in
ticks, to monitor the passage of time.
Any number of microkernel timers can be defined in a microkernel system.
Each timer has a unique identifier, which allows it to be distinguished
from other timers.
A task that wants to use a timer must first allocate an unused timer
from the set of microkernel timers. A task can allocate more than one timer
when it needs to monitor multiple time intervals simultaneously.
A timer is started by specifying:
* A :dfn:`duration` is the number of ticks the timer counts before it
expires for the first time.
* A :dfn:`period` is the number of ticks the timer counts before it expires
each time thereafter.
* The :dfn:`microkernel semaphore identifier` names the semaphore that the
timer gives each time the timer expires.
The semaphore's state can be examined by the task any time the task needs to
determine whether or not the given time limit has been reached.
When the timer's period is set to zero, the timer stops automatically
after reaching the duration and giving the semaphore. When the period is set to
any number of ticks other than zero, the timer restarts automatically with
a new duration that is equal to its period. When this new duration has elapsed,
the timer gives the semaphore again and restarts. For example, a timer can be
set to expire after 5 ticks, and to then re-expire every 20 ticks thereafter,
resulting in the semaphore being given 3 times after 45 ticks have elapsed.
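The arithmetic behind that example can be written out directly: a timer started with a duration and a nonzero period gives its semaphore at ``duration``, ``duration + period``, ``duration + 2*period``, and so on. This is plain arithmetic, not a kernel API:

```c
/* Tick at which a periodic timer expires for the n-th time (n = 1 is
 * the first expiry, which occurs after `duration` ticks). */
int nth_expiry_tick(int duration, int period, int n)
{
    return duration + (n - 1) * period;
}
```

For the example above, a duration of 5 and a period of 20 give the third expiry at tick 45.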
.. note::
Care must be taken when specifying the duration of a microkernel timer.
The first tick measured by the timer after it is started will be
less than a full-tick interval. For example, when the system clock period
is 10 milliseconds, starting a timer that expires after 1 tick will result
in the semaphore being given anywhere from a fraction of a millisecond
later to just slightly less than 10 milliseconds later. To ensure that a
timer doesn't expire for at least ``N`` ticks, it is necessary to specify
a duration of ``N+1`` ticks. This adjustment is not required when specifying
the period of a timer, which always corresponds to full-tick intervals.
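The adjustment described in the note reduces to adding one tick; a trivial helper (illustrative, not a kernel API) makes the rule explicit:

```c
/* Because the first counted tick is a partial one, guaranteeing that at
 * least `n` full ticks pass before expiry requires one extra tick. */
int duration_for_min_ticks(int n)
{
    return n + 1;
}
```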
A running microkernel timer can be cancelled or restarted by a task prior to
its expiration. Cancelling a timer that has already expired does not affect
the state of the associated semaphore. Likewise, restarting a timer that has
already expired is equivalent to stopping the timer and starting it afresh.
When a task no longer needs a timer it should free the timer. This makes
the timer available for reallocation.
Purpose
*******
Use a microkernel timer to determine whether or not a specified number of
system clock ticks have elapsed while the task is busy performing other work.
.. note::
If a task has no other work to perform while waiting for time to pass
it can simply call :cpp:func:`task_sleep()`.
.. note::
The microkernel provides additional APIs that allow a task to monitor
both the system clock and the higher-precision hardware clock, without
using a microkernel timer.
Usage
*****
Configuring Microkernel Timers
==============================
Set the :option:`CONFIG_NUM_TIMER_PACKETS` configuration option to
specify the number of timer-related command packets available in the
application. This value should be **equal to** or **greater than** the
sum of the following quantities:
* The number of microkernel timers.
* The number of tasks.
.. note::
Unlike most other microkernel object types, microkernel timers are defined
as a group using a configuration option, rather than as individual public
objects in an MDEF or private objects in a source file.
Example: Allocating a Microkernel Timer
=======================================
This code allocates an unused timer.
.. code-block:: c
ktimer_t timer_id;
timer_id = task_timer_alloc();
Example: Starting a One Shot Microkernel Timer
==============================================
This code uses a timer to limit the amount of time a task spends on gathering
data. It works by monitoring the status of a microkernel semaphore that is set
when the timer expires. Since the timer is started with a period of zero, it
stops automatically once it expires.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire in 10 ticks */
task_timer_start(timer_id, 10, 0, my_sem);
/* gather data until timer expires */
do {
...
} while (task_sem_take(my_sem, TICKS_NONE) != RC_OK);
/* process the new data */
...
Example: Starting a Periodic Microkernel Timer
==============================================
This code is similar to the previous example, except that the timer
automatically restarts every time it expires. This approach eliminates
the overhead of having the task explicitly issue a request to
reactivate the timer.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire every 10 ticks */
task_timer_start(timer_id, 10, 10, my_sem);
while (1) {
/* gather data until timer expires */
do {
...
} while (task_sem_take(my_sem, TICKS_NONE) != RC_OK);
/* process the new data, then loop around to get more */
...
}
Example: Cancelling a Microkernel Timer
=======================================
This code illustrates how an active timer can be stopped prematurely.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire in 10 ticks */
task_timer_start(timer_id, 10, 0, my_sem);
/* do work while waiting for input to arrive */
...
/* now have input, so stop the timer if it is still running */
task_timer_stop(timer_id);
/* check to see if the timer expired before it was stopped */
if (task_sem_take(my_sem, TICKS_NONE) == RC_OK) {
printf("Warning: Input took too long to arrive!");
}
Example: Freeing a Microkernel Timer
====================================
This code allows a task to relinquish a previously-allocated timer
so it can be used by other tasks.
.. code-block:: c
task_timer_free(timer_id);
APIs
****
The following microkernel timer APIs are provided by :file:`microkernel.h`:
:cpp:func:`task_timer_alloc()`
Allocates an unused timer.
:cpp:func:`task_timer_start()`
Starts a timer.
:cpp:func:`task_timer_restart()`
Restarts a timer.
:cpp:func:`task_timer_stop()`
Cancels a timer.
:cpp:func:`task_timer_free()`
Marks timer as unused.
.. _nanokernel:
Nanokernel Services
###################
This section describes the various services provided by the nanokernel.
Unless otherwise noted, these services are available in both microkernel
applications and nanokernel applications.
.. toctree::
:maxdepth: 1
nanokernel_tasks.rst
nanokernel_fibers
nanokernel_timers
nanokernel_synchronization
nanokernel_data
nanokernel_interrupts
nanokernel_kernel_event_logger
nanokernel_example
.. _nanokernel_data:
Data Passing Services
#####################
This section contains the information about all data passing services
provided by the nanokernel.
.. toctree::
:maxdepth: 2
nanokernel_fifos
nanokernel_lifos
nanokernel_stacks
nanokernel_ring_buffers
.. _nanokernel_example:
Semaphore, Timer, and Fiber Example
###################################
The following example is intended to provide a basic picture of how Zephyr's
semaphores, timers, and fibers work. The actual implementations of the
standard hello_world sample are much simpler; see
:file:`ZEPHYR_BASE/samples/hello_world`.
Example Code
************
.. code-block:: c
#include <nanokernel.h>
#include <nanokernel/cpu.h>
/* specify delay between greetings (in ms); compute equivalent in ticks */
#define SLEEPTIME 500   /* any delay works; 500 ms used here */
#define SLEEPTICKS (SLEEPTIME * CONFIG_TICKFREQ / 1000)
#define STACKSIZE 2000
char fiberStack[STACKSIZE];
struct nano_sem nanoSemTask;
struct nano_sem nanoSemFiber;
void fiberEntry (void)
{
struct nano_timer timer;
uint32_t data[2] = {0, 0};
nano_sem_init (&nanoSemFiber);
nano_timer_init (&timer, data);
while (1)
{
/* wait for task to let us have a turn */
nano_fiber_sem_take(&nanoSemFiber, TICKS_UNLIMITED);
/* say "hello" */
PRINT ("%s: Hello World!\n", __FUNCTION__);
/* wait a while, then let task have a turn */
nano_fiber_timer_start (&timer, SLEEPTICKS);
nano_fiber_timer_test (&timer, TICKS_UNLIMITED);
nano_fiber_sem_give (&nanoSemTask);
}
}
void main (void)
{
struct nano_timer timer;
uint32_t data[2] = {0, 0};
task_fiber_start (&fiberStack[0], STACKSIZE,
(nano_fiber_entry_t) fiberEntry, 0, 0, 7, 0);
nano_sem_init (&nanoSemTask);
nano_timer_init (&timer, data);
while (1)
{
/* say "hello" */
PRINT ("%s: Hello World!\n", __FUNCTION__);
/* wait a while, then let fiber have a turn */
nano_task_timer_start (&timer, SLEEPTICKS);
nano_task_timer_test (&timer, TICKS_UNLIMITED);
nano_task_sem_give (&nanoSemFiber);
/* now wait for fiber to let us have a turn */
nano_task_sem_take (&nanoSemTask, TICKS_UNLIMITED);
}
}
Step-by-Step Description
************************
A quick breakdown of the major objects in use by this sample includes:
- One fiber, executing in the :c:func:`fiberEntry()` routine.
- The background task, executing in the :c:func:`main()` routine.
- Two semaphores (*nanoSemTask*, *nanoSemFiber*),
- Two timers:
+ One local to the fiber (timer)
+ One local to background task (timer)
First, the background task starts executing main(). The background task
calls task_fiber_start(), initializing and starting the fiber. Since a
fiber is available to run, the background task is preempted and the
fiber begins running.
Execution jumps to fiberEntry. The fiber initializes nanoSemFiber and
the fiber-local timer, then drops into the while loop, where it takes
and waits on nanoSemFiber. Since the semaphore is unavailable, the
fiber blocks, and the background task resumes after its call to
task_fiber_start().
The background task initializes nanoSemTask and the task-local timer.
The following steps repeat endlessly:
#. The background task execution begins at the top of the main while
loop and prints, “main: Hello World!”
#. The background task then starts a timer for SLEEPTICKS in the
future, and waits for that timer to expire.
#. Once the timer expires, it signals the fiber by giving the
nanoSemFiber semaphore, which in turn marks the fiber as runnable.
#. The fiber, now marked as runnable, preempts the background
task, allowing execution to jump to the fiber, which resumes
after the call to nano_fiber_sem_take().
#. The fiber then prints, “fiberEntry: Hello World!” It starts a timer
for SLEEPTICKS in the future and waits for that timer to expire. The
fiber is marked as not runnable, and execution jumps to the
background task.
#. The background task then takes and waits on the nanoSemTask
semaphore.
#. Once the timer expires, the fiber signals the background task by
giving the nanoSemTask semaphore. The background task is marked as
runnable, but code execution continues in the fiber, since fibers
take priority over the background task. The fiber execution
continues to the top of the while loop, where it takes and waits on
nanoSemFiber. The fiber is marked as not runnable, and the
background task is scheduled.
#. The background task execution picks up after the call to
:c:func:`nano_task_sem_take()`. It jumps to the top of the
while loop.
.. _nanokernel_fibers:
Fiber Services
##############
Concepts
********
A :dfn:`fiber` is a lightweight, non-preemptible thread of execution that
implements a portion of an application's processing. Fibers are often
used in device drivers and for performance-critical work.
Fibers can be used by microkernel applications, as well as by nanokernel
applications. However, fibers can interact with microkernel object types
to only a limited degree; for more information see :ref:`microkernel_fibers`.
An application can use any number of fibers. Each fiber is anonymous, and
cannot be directly referenced by other fibers or tasks once it has started
executing. The properties that must be specified when a fiber is spawned
include:
* A **memory region** to be used for stack and execution context information.
* A **function** to be invoked when the fiber starts executing.
* A **pair of arguments** to be passed to that entry point function.
* A **priority** to be used by the nanokernel scheduler.
* A **set of options** that will apply to the fiber.
The kernel may automatically spawn zero or more system fibers during system
initialization. The specific set of fibers spawned depends upon both:
#. The kernel capabilities that have been configured by the application.
#. The board configuration used to build the application image.
Fiber Lifecycle
===============
A fiber can be spawned by another fiber, by a task, or by the kernel itself
during system initialization. A fiber typically becomes executable immediately;
however, it is possible to delay the scheduling of a newly-spawned fiber for a
specified time period. For example, scheduling can be delayed to allow device
hardware which the fiber uses to become available. The kernel also supports a
delayed start cancellation capability, which prevents a newly-spawned fiber from
executing if the fiber becomes unnecessary before its full delay period is reached.
Once a fiber is started it normally executes forever. A fiber may terminate
itself gracefully by simply returning from its entry point function. When this
happens, it is the fiber's responsibility to release any system resources it may
own (such as a nanokernel semaphore being used in a mutex-like manner) prior
to returning, since the kernel does *not* attempt to reclaim them so they can
be reused.
A fiber may also terminate non-gracefully by *aborting*. The kernel
automatically aborts a fiber when it generates a fatal error condition,
such as dereferencing a null pointer. A fiber can also explicitly abort itself
using :cpp:func:`fiber_abort()`. As with graceful fiber termination, the kernel
does not attempt to reclaim system resources owned by the fiber.
.. note::
The kernel does not currently make any claims regarding an application's
ability to restart a terminated fiber.
Fiber Scheduling
================
The nanokernel's scheduler selects which of the system's threads is allowed
to execute; this thread is known as the :dfn:`current context`. The nanokernel's
scheduler permits threads to execute only when no ISR needs to execute; execution
of ISRs takes precedence.
When executing threads, the nanokernel's scheduler gives fiber execution
precedence over task execution. The scheduler preempts task execution
whenever a fiber needs to execute, but never preempts the execution of a fiber
to allow another fiber to execute -- even if it is a higher priority fiber.
The kernel automatically saves an executing fiber's CPU register values when
making a context switch to a different fiber, a task, or an ISR; these values
get restored when the fiber later resumes execution.
Fiber State
-----------
A fiber has an implicit *state* that determines whether or not it can be
scheduled for execution. The state records all factors that can prevent
the fiber from executing, such as:
* The fiber has not been spawned.
* The fiber is waiting for a kernel service, for example, a semaphore or a timer.
* The fiber has terminated.
A fiber whose state has no factors that prevent its execution is said to be
*executable*.
Fiber Priorities
----------------
The kernel supports a virtually unlimited number of fiber priority levels,
ranging from 0 (highest priority) to 2^31-1 (lowest priority). Negative
priority levels must not be used.
A fiber's original priority cannot be altered up or down after it has been
spawned.
Fiber Scheduling Algorithm
--------------------------
Whenever possible, the nanokernel's scheduler selects the highest priority
executable fiber to be the current context. When multiple executable fibers
of that priority are available, the scheduler chooses the one that has been
waiting longest.
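The selection rule described above can be modeled with a short plain-C sketch. This is illustrative only: :c:type:`struct ready_fiber` and :c:func:`pick_next_fiber()` are not Zephyr APIs, and ``arrival_order`` simply stands in for how long each fiber has been waiting.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the ready list: priority 0 is highest, and a
 * lower arrival_order means the fiber has been waiting longer. */
struct ready_fiber {
    uint32_t priority;      /* 0 = highest priority */
    uint32_t arrival_order; /* lower = waiting longer */
};

/* Select the index of the next fiber to run: highest priority first,
 * longest-waiting fiber among equals. Returns -1 if the list is empty. */
int pick_next_fiber(const struct ready_fiber *f, size_t n)
{
    int best = -1;

    for (size_t i = 0; i < n; i++) {
        if (best < 0 ||
            f[i].priority < f[best].priority ||
            (f[i].priority == f[best].priority &&
             f[i].arrival_order < f[best].arrival_order)) {
            best = (int)i;
        }
    }
    return best;
}
```

With two fibers at priority 2, the one with the smaller ``arrival_order`` (longest wait) is chosen over a lower-priority fiber at 5 or 7.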
When no executable fibers exist, the scheduler selects the current task
to be the current context. The current task selected depends upon whether the
application is a nanokernel application or a microkernel application. In nanokernel
applications, the current task is always the background task. In microkernel
applications, the current task is the current task selected by the microkernel's
scheduler. The current task is always executable.
Once a fiber becomes the current context, it remains scheduled for execution
by the nanokernel until one of the following occurs:
* The fiber is supplanted by another thread because it calls a kernel API
that blocks its own execution. (For example, the fiber attempts to take
a nanokernel semaphore that is unavailable.)
* The fiber terminates itself by returning from its entry point function.
* The fiber aborts itself by performing an operation that causes a fatal error,
or by calling :cpp:func:`fiber_abort()`.
Once the current task becomes the current context, it remains scheduled for
execution by the nanokernel until it is supplanted by a fiber.
.. note::
The current task is **never** directly supplanted by another task, since the
microkernel scheduler uses the microkernel server fiber to initiate a
change from one microkernel task to another.
Cooperative Time Slicing
------------------------
Due to the non-preemptive nature of the nanokernel's scheduler, a fiber that
performs lengthy computations may cause an unacceptable delay in the scheduling
of other fibers, including higher priority and equal priority ones. To overcome
such problems, the fiber can choose to voluntarily relinquish the CPU from time
to time to permit other fibers to execute.
A fiber can relinquish the CPU in two ways:
* Calling :cpp:func:`fiber_yield()` places the fiber back in the nanokernel
scheduler's list of executable fibers and then invokes the scheduler.
All executable fibers whose priority is higher or equal to that of the
yielding fiber are then allowed to execute before the yielding fiber is
rescheduled. If no such executable fibers exist, the scheduler immediately
reschedules the yielding fiber without context switching.
* Calling :cpp:func:`fiber_sleep()` blocks the execution of the fiber for
a specified time period. Executable fibers of all priorities are then
allowed to execute, although there is no guarantee that fibers whose
priority is lower than that of the sleeping fiber will actually be scheduled
before the time period expires and the sleeping fiber becomes executable
once again.
Fiber Options
=============
The kernel supports several :dfn:`fiber options` that may be used to inform
the kernel of special treatment the fiber requires.
The set of kernel options associated with a fiber are specified when the fiber
is spawned. If the fiber uses multiple options, they are separated with
:literal:`|`, the logical ``OR`` operator. A fiber that does not use any
options is spawned using an options value of 0.
The fiber options listed below are pre-defined by the kernel.
:c:macro:`USE_FP`
Instructs the kernel to save the fiber's x87 FPU and MMX floating point
context information during context switches.
:c:macro:`USE_SSE`
Instructs the kernel to save the fiber's SSE floating point context
information during context switches. A fiber with this option
implicitly uses the :c:macro:`USE_FP` option, as well.
Usage
*****
Defining a Fiber
================
The following properties must be defined when spawning a fiber:
*stack_name*
This specifies the memory region used for the fiber's stack and for
other execution context information. To ensure proper memory alignment,
it should have the following form:
.. code-block:: c
char __stack <stack_name>[<stack_size>];
*stack_size*
This specifies the size of the *stack_name* memory region, in bytes.
*entry_point*
This specifies the name of the fiber's entry point function,
which should have the following form:
.. code-block:: c
void <entry_point>(int arg1, int arg2)
{
/* fiber mainline processing */
...
/* (optional) normal fiber termination */
return;
}
*arguments*
This specifies the two arguments passed to *entry_point* when the fiber
begins executing. Non-integer arguments can be passed in by casting to
an integer type.
*priority*
This specifies the scheduling priority of the fiber.
*options*
This specifies the fiber's options.
Example: Spawning a Fiber from a Task
=====================================
This code shows how the currently executing task can spawn multiple fibers,
each dedicated to processing data from a different communication channel.
.. code-block:: c
#define COMM_STACK_SIZE 512
#define NUM_COMM_CHANNELS 8
struct descriptor {
...;
};
char __stack comm_stack[NUM_COMM_CHANNELS][COMM_STACK_SIZE];
struct descriptor comm_desc[NUM_COMM_CHANNELS] = { ... };
...
void comm_fiber(int desc_arg, int unused)
{
ARG_UNUSED(unused);
struct descriptor *desc = (struct descriptor *) desc_arg;
while (1) {
/* process packet of data from comm channel */
...
}
}
void comm_main(void)
{
...
for (int i = 0; i < NUM_COMM_CHANNELS; i++) {
task_fiber_start(&comm_stack[i][0], COMM_STACK_SIZE,
comm_fiber, (int) &comm_desc[i], 0,
10, 0);
}
...
}
APIs
****
APIs affecting the currently-executing fiber are provided
by :file:`microkernel.h` and by :file:`nanokernel.h`:
:cpp:func:`fiber_yield()`
Yield the CPU to higher priority and equal priority fibers.
:cpp:func:`fiber_sleep()`
Yield the CPU for a specified time period.
:cpp:func:`fiber_abort()`
Terminate fiber execution.
APIs affecting a specified fiber are provided by :file:`microkernel.h`
and by :file:`nanokernel.h`:
:cpp:func:`task_fiber_start()`, :cpp:func:`fiber_fiber_start()`,
:cpp:func:`fiber_start()`
Spawn a new fiber.
:cpp:func:`task_fiber_delayed_start()`,
:cpp:func:`fiber_fiber_delayed_start()`,
:cpp:func:`fiber_delayed_start()`
Spawn a new fiber after a specified time period.
:cpp:func:`task_fiber_delayed_start_cancel()`,
:cpp:func:`fiber_fiber_delayed_start_cancel()`,
:cpp:func:`fiber_delayed_start_cancel()`
Cancel spawning of a new fiber, if not already started.

.. _nanokernel_fifos:
Nanokernel FIFOs
################
Concepts
********
The nanokernel's FIFO object type is an implementation of a traditional
first in, first out queue. It is mainly intended for use by fibers.
A nanokernel FIFO allows data items of any size to be sent and received
asynchronously. The FIFO uses a linked list to hold data items that have been
sent but not yet received.
FIFO data items must be aligned on a 4-byte boundary, as the kernel reserves
the first 32 bits of each item for use as a pointer to the next data item
in the FIFO's linked list. Consequently, a data item that holds N bytes
of application data requires N+4 bytes of memory.
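The reserved-first-word layout described above can be sketched in plain C. This is a minimal illustration of the intrusive linked list, not the Zephyr implementation; the ``sketch_fifo`` names are invented for this example.

```c
#include <stddef.h>

/* Each data item donates its first word as the "next" pointer, so an
 * item carrying N bytes of payload occupies N + sizeof(void *) bytes. */
struct fifo_item {
    struct fifo_item *next; /* reserved first word */
    int payload;
};

struct sketch_fifo {
    struct fifo_item *head; /* oldest item, removed first */
    struct fifo_item *tail; /* newest item */
};

void sketch_fifo_init(struct sketch_fifo *f)
{
    f->head = f->tail = NULL;
}

/* Append an item at the tail (no allocation: the item itself is linked). */
void sketch_fifo_put(struct sketch_fifo *f, struct fifo_item *item)
{
    item->next = NULL;
    if (f->tail) {
        f->tail->next = item;
    } else {
        f->head = item;
    }
    f->tail = item;
}

/* Non-blocking get: returns NULL when the FIFO is empty. */
struct fifo_item *sketch_fifo_get(struct sketch_fifo *f)
{
    struct fifo_item *item = f->head;

    if (item) {
        f->head = item->next;
        if (!f->head) {
            f->tail = NULL;
        }
    }
    return item;
}
```

Because the links live inside the items, put and get never allocate memory, which is why the kernel can offer them to ISRs.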
Any number of nanokernel FIFOs can be defined. Each FIFO is a distinct
variable of type :c:type:`struct nano_fifo`, and is referenced using a
pointer to that variable. A FIFO must be initialized before it can be used to
send or receive data items.
Items can be added to a nanokernel FIFO in a non-blocking manner by any
context type (i.e. ISR, fiber, or task).
Items can be removed from a nanokernel FIFO in a non-blocking manner by any
context type; if the FIFO is empty the :c:macro:`NULL` return code
indicates that no item was removed. Items can also be removed from a
nanokernel FIFO in a blocking manner by a fiber or task; if the FIFO is empty
the thread waits for an item to be added.
Any number of threads may wait on an empty nanokernel FIFO simultaneously.
When a data item becomes available it is given to the fiber that has waited
longest, or to a waiting task if no fiber is waiting.
.. note::
A task that waits on an empty nanokernel FIFO does a busy wait. This is
not an issue for a nanokernel application's background task; however, in
a microkernel application a task that waits on a nanokernel FIFO remains
the current task. In contrast, a microkernel task that waits on a
microkernel data passing object ceases to be the current task, allowing
other tasks of equal or lower priority to do useful work.
If multiple tasks in a microkernel application wait on the same nanokernel
FIFO, higher priority tasks are given data items in preference to lower
priority tasks. However, the order in which equal priority tasks are given
data items is unpredictable.
Purpose
*******
Use a nanokernel FIFO to asynchronously transfer data items of arbitrary size
in a "first in, first out" manner.
Usage
*****
Example: Initializing a Nanokernel FIFO
=======================================
This code establishes an empty nanokernel FIFO.
.. code-block:: c
struct nano_fifo signal_fifo;
nano_fifo_init(&signal_fifo);
Example: Writing to a Nanokernel FIFO from a Fiber
==================================================
This code uses a nanokernel FIFO to send data to one or more consumer fibers.
.. code-block:: c
struct data_item_t {
void *fifo_reserved; /* 1st word reserved for use by FIFO */
...
};
struct data_item_t tx_data;
void producer_fiber(int unused1, int unused2)
{
ARG_UNUSED(unused1);
ARG_UNUSED(unused2);
while (1) {
/* create data item to send (e.g. measurement, timestamp, ...) */
tx_data = ...
/* send data to consumers */
nano_fiber_fifo_put(&signal_fifo, &tx_data);
...
}
}
Example: Reading from a Nanokernel FIFO
=======================================
This code uses a nanokernel FIFO to obtain data items from a producer fiber,
which are then processed in some manner. This design supports the distribution
of data items to multiple consumer fibers, if desired.
.. code-block:: c
void consumer_fiber(int unused1, int unused2)
{
struct data_item_t *rx_data;
ARG_UNUSED(unused1);
ARG_UNUSED(unused2);
while (1) {
rx_data = nano_fiber_fifo_get(&signal_fifo, TICKS_UNLIMITED);
/* process FIFO data */
...
}
}
APIs
****
The following APIs for a nanokernel FIFO are provided by :file:`nanokernel.h`:
:cpp:func:`nano_fifo_init()`
Initializes a FIFO.
:cpp:func:`nano_task_fifo_put()`, :cpp:func:`nano_fiber_fifo_put()`,
:cpp:func:`nano_isr_fifo_put()`, :cpp:func:`nano_fifo_put()`
Add an item to a FIFO.
:cpp:func:`nano_task_fifo_get()`, :cpp:func:`nano_fiber_fifo_get()`,
:cpp:func:`nano_isr_fifo_get()`, :cpp:func:`nano_fifo_get()`
Remove an item from a FIFO, or wait for an item for a specified
time period if it is empty.

.. _nanokernel_interrupts:
Interrupt Services
##################
Concepts
********
:abbr:`ISRs (Interrupt Service Routines)` are execution threads
that run in response to a hardware or software interrupt.
They are used to preempt the execution of the
task or fiber running at the time of the interrupt,
allowing the response to occur with very low overhead.
When an ISR completes, normal task and fiber execution resumes.
Any number of ISRs can be utilized in a Zephyr project, subject to
any hardware constraints imposed by the underlying hardware.
Each ISR has the following properties:
* The :abbr:`IRQ (Interrupt ReQuest)` signal that triggers the ISR.
* The priority level associated with the IRQ.
* The address of the function that is invoked to handle the interrupt.
* The argument value that is passed to that function.
An :abbr:`IDT (Interrupt Descriptor Table)` is used to associate a given interrupt
source with a given ISR.
Only a single ISR can be associated with a specific IRQ at any given time.
Multiple ISRs can utilize the same function to process interrupts,
allowing a single function to service a device that generates
multiple types of interrupts or to service multiple devices
(usually of the same type). The argument value passed to an ISR's function
can be used to allow the function to determine which interrupt has been
signaled.
The Zephyr kernel provides a default ISR for all unused IDT entries. This ISR
generates a fatal system error if an unexpected interrupt is signaled.
The kernel supports interrupt nesting. This allows an ISR to be preempted
in mid-execution if a higher priority interrupt is signaled. The lower
priority ISR resumes execution once the higher priority ISR has completed
its processing.
The kernel allows a task or fiber to temporarily lock out the execution
of ISRs, either individually or collectively, should the need arise.
The collective lock can be applied repeatedly; that is, the lock can
be applied when it is already in effect. The collective lock must be
unlocked an equal number of times before interrupts are again processed
by the kernel.
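The recursive behavior of the collective lock can be modeled with a small counter. This is an illustrative sketch of the semantics only, not the kernel's implementation; the ``model_`` names are invented here.

```c
#include <stdbool.h>

/* Model of the recursive collective interrupt lock: the lock may be
 * applied repeatedly, and interrupts are processed again only after a
 * matching number of unlock calls. */
static unsigned int lock_depth;

static void model_irq_lock(void)
{
    lock_depth++;
}

static void model_irq_unlock(void)
{
    if (lock_depth > 0) {
        lock_depth--;
    }
}

static bool model_irqs_enabled(void)
{
    return lock_depth == 0;
}
```

A caller that locks twice must unlock twice before interrupts are serviced again, which lets nested critical sections compose safely.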
Purpose
*******
Use an ISR to perform interrupt processing that requires a very rapid
response, and which can be done quickly and without blocking.
.. note::
Interrupt processing that is time consuming, or which involves blocking,
should be handed off to a fiber or task. See `Offloading ISR Work`_ for
a description of various techniques that can be used in a Zephyr project.
Installing an ISR
*****************
Note that :c:macro:`IRQ_CONNECT()` is not a C function; it does
some inline assembly magic behind the scenes. All its arguments must be known
at build time. Drivers that have multiple instances may need to define
per-instance config functions to configure the interrupt for that instance.
Example
-------
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses IRQ 24 */
#define MY_DEV_PRIO 2 /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_ISR_ARG DEVICE_GET(my_device)
#define MY_IRQ_FLAGS 0 /* IRQ flags. Unused on non-x86 */
void my_isr(void *arg)
{
... /* ISR code */
}
void my_isr_installer(void)
{
...
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG, MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ); /* enable IRQ */
...
}
Working with Interrupts
***********************
Offloading ISR Work
*******************
Interrupt service routines should generally be kept short
to ensure predictable system operation.
In situations where time consuming processing is required
an ISR can quickly restore the kernel's ability to respond
to other interrupts by offloading some or all of the interrupt-related
processing work to a fiber or task.
Zephyr OS provides a variety of mechanisms to allow an ISR to offload work
to a fiber or task.
1. An ISR can signal a helper fiber (or task) to do interrupt-related work
using a nanokernel object, such as a FIFO, LIFO, or semaphore.
The :c:func:`nano_isr_XXX()` APIs should be used to notify the helper fiber
(or task) that work is available for it.
See :ref:`nanokernel_fibers`.
2. An ISR can signal the microkernel server fiber to do interrupt-related
work by sending an event that has an associated event handler.
See :ref:`microkernel_events`.
3. An ISR can signal a helper task to do interrupt-related work
by sending an event that the helper task detects.
See :ref:`microkernel_events`.
4. An ISR can signal a helper task to do interrupt-related work
   by giving a semaphore that the helper task takes.
See :ref:`microkernel_semaphores`.
When an ISR offloads work to a fiber there is typically a single
context switch to that fiber when the ISR completes.
Thus, interrupt-related processing usually continues almost immediately.
Additional intermediate context switches may be required
to execute any currently executing fiber
or any higher-priority fibers that are scheduled to run.
When an ISR offloads work to a task there is typically a context switch
to the microkernel server fiber, followed by a context switch to that task.
Thus, there is usually a larger delay before the interrupt-related processing
resumes than when offloading work to a fiber.
Additional intermediate context switches may be required
to execute any currently executing fiber or any higher-priority tasks
that are scheduled to run.
APIs
****
These are the interrupt-related Application Program Interfaces.
:c:func:`irq_enable()`
Enables interrupts from a specific IRQ.
:c:func:`irq_disable()`
Disables interrupts from a specific IRQ.
:c:func:`irq_lock()`
Locks out interrupts from all sources.
:c:func:`irq_unlock()`
Removes lock on interrupts from all sources.
Macros
******
These are the macros used to install a static ISR.
:c:macro:`IRQ_CONNECT()`
Registers a static ISR with the IDT.

.. _nanokernel_event_logger:
Kernel Event Logger
###################
Definition
**********
The kernel event logger is a standardized mechanism to record events within the kernel while
providing a single interface for the user to collect the data. This mechanism is currently used
to log the following events:
* Sleep events (entering and exiting low power conditions).
* Context switch events.
* Interrupt events.
Kernel Event Logger Configuration
*********************************
Kconfig provides the ability to enable and disable the collection of events and to configure the
size of the buffer used by the event logger.
These options can be found in :file:`kernel/Kconfig`.
General kernel event logger configuration:
* :option:`CONFIG_KERNEL_EVENT_LOGGER_BUFFER_SIZE`
Default size: 128 words, 32-bit length.
Profiling points configuration:
* :option:`CONFIG_KERNEL_EVENT_LOGGER_DYNAMIC`
Allows modifying at runtime which events are recorded. When this option is
enabled, no events are recorded at boot; it adds functions for enabling and
disabling recording of kernel event logger and task monitor events at runtime.
* :option:`CONFIG_KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP`
Allows setting the timer function used to populate the kernel event logger
timestamp. This must be done at runtime by calling
:cpp:func:`sys_k_event_logger_set_timer()` and providing the callback function.
Adding a Kernel Event Logging Point
***********************************
Custom trace points can be added with the following API:
* :c:func:`sys_k_event_logger_put()`
Adds the profile of a new event with custom data.
* :cpp:func:`sys_k_event_logger_put_timed()`
Adds timestamped profile of a new event.
.. important::
The data must be in 32-bit sized blocks.
Retrieving Kernel Event Data
****************************
Applications are required to implement a fiber for accessing the recorded event messages
in both the nanokernel and microkernel systems. Developers can use the provided API to
retrieve the data, or may write their own routines using the ring buffer provided by the
event logger.
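A custom retrieval routine reads 32-bit words out of a fixed-size ring buffer. The sketch below shows the general shape of such a buffer; it is an illustration, not the event logger's actual buffer implementation, and the ``word_ring`` names are invented here.

```c
#include <stdbool.h>
#include <stdint.h>

#define RING_WORDS 8 /* capacity in 32-bit words (illustrative size) */

struct word_ring {
    uint32_t buf[RING_WORDS];
    unsigned int head;  /* next slot to write */
    unsigned int tail;  /* next slot to read */
    unsigned int count; /* words currently stored */
};

/* Enqueue one word; returns false (word dropped) when the buffer is full. */
bool ring_put(struct word_ring *r, uint32_t word)
{
    if (r->count == RING_WORDS) {
        return false;
    }
    r->buf[r->head] = word;
    r->head = (r->head + 1) % RING_WORDS;
    r->count++;
    return true;
}

/* Dequeue the oldest word; returns false when the buffer is empty. */
bool ring_get(struct word_ring *r, uint32_t *word)
{
    if (r->count == 0) {
        return false;
    }
    *word = r->buf[r->tail];
    r->tail = (r->tail + 1) % RING_WORDS;
    r->count--;
    return true;
}
```

Words come out in the order they went in, matching the FIFO retrieval behavior of the provided API functions.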
The API functions provided are:
* :c:func:`sys_k_event_logger_get()`
* :c:func:`sys_k_event_logger_get_wait()`
* :c:func:`sys_k_event_logger_get_wait_timeout()`
The above functions specify various ways to retrieve an event message and to copy it to
the provided buffer. When the buffer size is smaller than the message, the function will
return an error. All three functions retrieve messages via a FIFO method. The :literal:`wait`
and :literal:`wait_timeout` functions allow the caller to pend until a new message is
logged, or until the timeout expires.
Enabling/disabling event recording
**********************************
If :option:`CONFIG_KERNEL_EVENT_LOGGER_DYNAMIC` is enabled, the following functions
can be used to dynamically enable or disable event recording at runtime:
* :cpp:func:`sys_k_event_logger_set_mask()`
* :cpp:func:`sys_k_event_logger_get_mask()`
* :cpp:func:`sys_k_event_logger_set_monitor_mask()`
* :cpp:func:`sys_k_event_logger_get_monitor_mask()`
Each mask bit corresponds to an event ID (the mask starts at bit 1, not bit 0).
More details are provided in the function descriptions.
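The bit-1-based mask convention can be sketched as follows; the helper names are invented for this illustration and are not the kernel event logger API.

```c
#include <stdbool.h>
#include <stdint.h>

/* Event ID n maps to bit n of the mask; IDs start at 1, so bit 0 is
 * unused. Illustrative helpers, not the Zephyr API. */
static bool event_enabled(uint32_t mask, unsigned int event_id)
{
    return (mask & (1u << event_id)) != 0;
}

static uint32_t enable_event(uint32_t mask, unsigned int event_id)
{
    return mask | (1u << event_id);
}
```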
Timestamp
*********
The timestamp used by the kernel event logger is the 32-bit LSB of the board's HW
timer (for example, the Lakemont APIC timer on Quark SE). The timer period is very
small, which causes timestamp wraparound to happen quite often (e.g. every 134 s on
Quark SE); see :option:`CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC`.
This wraparound must be considered when analyzing kernel event logger data and care must be
taken when tickless idle is enabled and sleep duration can exceed maximum HW timer value.
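A single wraparound between two samples can be handled with unsigned subtraction, which is defined modulo 2^32 in C. This sketch shows the technique; the function name is invented here.

```c
#include <stdint.h>

/* Elapsed cycles between two 32-bit timestamps. Unsigned subtraction
 * wraps modulo 2^32, so the result is correct as long as less than one
 * full wrap (~134 s on Quark SE) elapses between the two samples. */
static uint32_t cycle_delta(uint32_t then, uint32_t now)
{
    return now - then;
}
```

For example, a sample taken just before the counter wraps and one taken just after still produce the true small delta, which is why analysis tools must pair timestamps rather than compare them directly.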
Timestamp used by the kernel event logger can be customized by enabling following option:
:option:`CONFIG_KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP`
In case this option is enabled, a callback function returning a 32-bit timestamp must
be provided to the kernel event logger by calling the following function at runtime:
:cpp:func:`sys_k_event_logger_set_timer()`
Message Formats
***************
Interrupt-driven Event Messaging
--------------------------------
The data of the interrupt-driven event message comes in two blocks of 32 bits:
* The first block contains the timestamp occurrence of the interrupt event.
* The second block contains the ID of the interrupt.
Example:
.. code-block:: c
uint32_t data[2];
data[0] = timestamp_event;
data[1] = interrupt_id;
Context-switch Event Messaging
------------------------------
The data of the context-switch event message comes in two blocks of 32 bits:
* The first block contains the timestamp occurrence of the context-switch event.
* The second block contains the thread id of the context involved.
Example:
.. code-block:: c
uint32_t data[2];
data[0] = timestamp_event;
data[1] = context_id;
Sleep Event Messaging
---------------------
The data of the sleep event message comes in three blocks of 32 bits:
* The first block contains the timestamp when the CPU went to sleep mode.
* The second block contains the timestamp when the CPU woke up.
* The third block contains the interrupt Id that woke the CPU up.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp_went_sleep;
data[1] = timestamp_woke_up;
data[2] = interrupt_id;
Task Monitor
------------
The task monitor tracks the activities of the task schedule server
in the microkernel and it is able to report three different types of
events related with the scheduler activities:
Task Monitor Task State Change Event
++++++++++++++++++++++++++++++++++++
The Task Monitor Task State Change Event tracks the task's status changes.
The event data is arranged as three 32 bit blocks:
* The first block contains the timestamp when the task server
changed the task status.
* The second block contains the task ID of the affected task.
* The third block contains a 32-bit number with the new status.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp;
data[1] = task_id;
data[2] = status_data;
Task Monitor Kevent Event
+++++++++++++++++++++++++
The Task Monitor Kevent Event tracks the commands requested to the
task server by the kernel. The event data is arranged as two blocks
of 32 bits each:
* The first block contains the timestamp when the task server
attended the kernel command.
* The second block contains the code of the command.
Example:
.. code-block:: c
uint32_t data[2];
data[0] = timestamp;
data[1] = event_code;
Task Monitor Command Packet Event
+++++++++++++++++++++++++++++++++
The Task Monitor Command Packet Event tracks the command packets sent
to the task server. The event data is arranged as three blocks of
32 bits each:
* The first block contains the timestamp when the task server
attended the kernel command.
* The second block contains the task identifier of the task
affected by the packet.
* The third block contains the memory vector of the routine
executed by the task server.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp;
data[1] = task_id;
data[2] = comm_handler;
Example: Retrieving Profiling Messages
======================================
.. code-block:: c
uint32_t data[3];
uint8_t data_length = SIZE32_OF(data);
uint8_t dropped_count;
uint16_t event_id;
int res;
while(1) {
/* collect the data */
res = sys_k_event_logger_get_wait(&event_id, &dropped_count, data,
&data_length);
if (dropped_count > 0) {
/* process the message dropped count */
}
if (res > 0) {
/* process the data */
switch (event_id) {
case KERNEL_EVENT_CONTEXT_SWITCH_EVENT_ID:
/* ... Process the context switch event data ... */
break;
case KERNEL_EVENT_INTERRUPT_EVENT_ID:
/* ... Process the interrupt event data ... */
break;
case KERNEL_EVENT_SLEEP_EVENT_ID:
/* ... Process the data for a sleep event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_TASK_STATE_CHANGE_EVENT_ID:
/* ... Process the data for a task monitor event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_KEVENT_EVENT_ID:
/* ... Process the data for a task monitor command event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_CMD_PACKET_EVENT_ID:
/* ... Process the data for a task monitor packet event ... */
break;
default:
printf("unrecognized event id %d\n", event_id);
}
} else {
if (res == -EMSGSIZE) {
/* ERROR - The buffer provided to collect the
* profiling events is too small.
*/
} else if (res == -EAGAIN) {
/* There is no message available in the buffer */
}
}
}
.. note::
To see an example that shows how to collect the kernel event data, check the
project :file:`samples/kernel_event_logger`.
Example: Adding a Kernel Event Logging Point
============================================
.. code-block:: c
uint32_t data[2];
if (sys_k_must_log_event(KERNEL_EVENT_LOGGER_CUSTOM_ID)) {
data[0] = custom_data_1;
data[1] = custom_data_2;
sys_k_event_logger_put(KERNEL_EVENT_LOGGER_CUSTOM_ID, data, ARRAY_SIZE(data));
}
Use the following function to register only the time of an event.
.. code-block:: c
if (sys_k_must_log_event(KERNEL_EVENT_LOGGER_CUSTOM_ID)) {
sys_k_event_logger_put_timed(KERNEL_EVENT_LOGGER_CUSTOM_ID);
}
APIs
****
The following APIs are provided by the :file:`k_event_logger.h` file:
:cpp:func:`sys_k_event_logger_register_as_collector()`
Register the current fiber as the collector fiber.
:c:func:`sys_k_event_logger_put()`
Enqueue a kernel event logger message with custom data.
:cpp:func:`sys_k_event_logger_put_timed()`
Enqueue a kernel event logger message with the current time.
:c:func:`sys_k_event_logger_get()`
De-queue a kernel event logger message.
:c:func:`sys_k_event_logger_get_wait()`
De-queue a kernel event logger message. Wait if the buffer is empty.
:c:func:`sys_k_event_logger_get_wait_timeout()`
De-queue a kernel event logger message. Wait if the buffer is empty until the timeout expires.
:cpp:func:`sys_k_must_log_event()`
Check whether an event type has to be logged.
In case KERNEL_EVENT_LOGGER_DYNAMIC is enabled:
:cpp:func:`sys_k_event_logger_set_mask()`
Set kernel event logger event mask
:cpp:func:`sys_k_event_logger_get_mask()`
Get kernel event logger event mask
:cpp:func:`sys_k_event_logger_set_monitor_mask()`
Set task monitor event mask
:cpp:func:`sys_k_event_logger_get_monitor_mask()`
Get task monitor event mask
In case KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP is enabled:
:cpp:func:`sys_k_event_logger_set_timer()`
Set kernel event logger timestamp function

.. _nanokernel_lifos:
Nanokernel LIFOs
################
Concepts
********
The nanokernel's LIFO object type is an implementation of a traditional
last in, first out queue. It is mainly intended for use by fibers.
A nanokernel LIFO allows data items of any size to be sent and received
asynchronously. The LIFO uses a linked list to hold data items that have been
sent but not yet received.
LIFO data items must be aligned on a 4-byte boundary, as the kernel reserves
the first 32 bits of each item for use as a pointer to the next data item
in the LIFO's linked list. Consequently, a data item that holds N bytes
of application data requires N+4 bytes of memory.
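The same reserved-first-word layout yields last-in, first-out order when both put and get operate on the head of the list. The sketch below is a minimal illustration, not the Zephyr implementation; the ``sketch_lifo`` names are invented for this example.

```c
#include <stddef.h>

/* The first word of each item is the "next" link, so an item carrying
 * N bytes of payload occupies N + sizeof(void *) bytes. */
struct lifo_item {
    struct lifo_item *next; /* reserved first word */
    int payload;
};

struct sketch_lifo {
    struct lifo_item *head; /* most recently added item */
};

/* Push at the head: the new item becomes the next one retrieved. */
void sketch_lifo_put(struct sketch_lifo *l, struct lifo_item *item)
{
    item->next = l->head;
    l->head = item;
}

/* Non-blocking get: pops the head, or returns NULL when empty. */
struct lifo_item *sketch_lifo_get(struct sketch_lifo *l)
{
    struct lifo_item *item = l->head;

    if (item) {
        l->head = item->next;
    }
    return item;
}
```

Compared to the FIFO sketch earlier, only the put side changes: linking at the head instead of the tail is what produces LIFO ordering.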
Any number of nanokernel LIFOs can be defined. Each LIFO is a distinct
variable of type :c:type:`struct nano_lifo`, and is referenced using a
pointer to that variable. A LIFO must be initialized before it can be used to
send or receive data items.
Items can be added to a nanokernel LIFO in a non-blocking manner by any
context type (i.e. ISR, fiber, or task).
Items can be removed from a nanokernel LIFO in a non-blocking manner by any
context type; if the LIFO is empty the :c:macro:`NULL` return code
indicates that no item was removed. Items can also be removed from a
nanokernel LIFO in a blocking manner by a fiber or task; if the LIFO is empty
the thread waits for an item to be added.
Any number of threads may wait on an empty nanokernel LIFO simultaneously.
When a data item becomes available it is given to the fiber that has waited
longest, or to a waiting task if no fiber is waiting.
.. note::
A task that waits on an empty nanokernel LIFO does a busy wait. This is
not an issue for a nanokernel application's background task; however, in
a microkernel application a task that waits on a nanokernel LIFO remains
the current task. In contrast, a microkernel task that waits on a
microkernel data passing object ceases to be the current task, allowing
other tasks of equal or lower priority to do useful work.
If multiple tasks in a microkernel application wait on the same nanokernel
LIFO, higher priority tasks are given data items in preference to lower
priority tasks. However, the order in which equal priority tasks are given
data items is unpredictable.
Purpose
*******
Use a nanokernel LIFO to asynchronously transfer data items of arbitrary size
in a "last in, first out" manner.
Usage
*****
Example: Initializing a Nanokernel LIFO
=======================================
This code establishes an empty nanokernel LIFO.
.. code-block:: c
struct nano_lifo signal_lifo;
nano_lifo_init(&signal_lifo);
Example: Writing to a Nanokernel LIFO from a Fiber
==================================================
This code uses a nanokernel LIFO to send data to a consumer fiber.
.. code-block:: c
struct data_item_t {
void *lifo_reserved; /* 1st word reserved for use by LIFO */
...
};
struct data_item_t tx_data;
void producer_fiber(int unused1, int unused2)
{
ARG_UNUSED(unused1);
ARG_UNUSED(unused2);
while (1) {
/* create data item to send */
tx_data = ...
/* send data to consumer */
nano_fiber_lifo_put(&signal_lifo, &tx_data);
...
}
}
Example: Reading from a Nanokernel LIFO
=======================================
This code uses a nanokernel LIFO to obtain data items from a producer fiber,
which are then processed in some manner.
.. code-block:: c
void consumer_fiber(int unused1, int unused2)
{
struct data_item_t *rx_data;
ARG_UNUSED(unused1);
ARG_UNUSED(unused2);
while (1) {
rx_data = nano_fiber_lifo_get(&signal_lifo, TICKS_UNLIMITED);
/* process LIFO data */
...
}
}
APIs
****
The following APIs for a nanokernel LIFO are provided by :file:`nanokernel.h`:
:cpp:func:`nano_lifo_init()`
Initializes a LIFO.
:cpp:func:`nano_task_lifo_put()`, :cpp:func:`nano_fiber_lifo_put()`,
:cpp:func:`nano_isr_lifo_put()`, :cpp:func:`nano_lifo_put()`
Add an item to a LIFO.
:cpp:func:`nano_task_lifo_get()`, :cpp:func:`nano_fiber_lifo_get()`,
:cpp:func:`nano_isr_lifo_get()`, :cpp:func:`nano_lifo_get()`
Remove an item from a LIFO, or wait for an item for a specified
time period if it is empty.

.. _nanokernel_ring_buffers:
Nanokernel Ring Buffers
#######################
Definition
**********
The ring buffer is defined in :file:`include/misc/ring_buffer.h` and
:file:`kernel/nanokernel/ring_buffer.c`. This is an array-based
circular buffer, stored in first-in-first-out order. The APIs allow
for enqueueing and retrieval of chunks of data up to 1024 bytes in size,
along with two metadata values (type ID and an app-specific integer).
Unlike nanokernel FIFOs, storage of enqueued items and their metadata
is managed in a fixed buffer and there are no preconditions on the data
enqueued (other than the size limit). Since the size annotation is only
an 8-bit value, sizes are expressed in terms of 32-bit chunks.
Internally, the ring buffer always maintains an empty 32-bit block in the
buffer to distinguish between empty and full buffers. Any given entry
in the buffer will use a 32-bit block for metadata plus any data attached.
If the size of the buffer array is a power of two, the ring buffer will
use more efficient masking instead of expensive modulo operations to
maintain itself.
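The mask optimization can be sketched with a few lines of index arithmetic.
This is an illustration written for this document, not Zephyr's actual
implementation; for a power-of-two size, a cheap AND replaces the modulo:

```c
#include <stdint.h>

#define BUF_SIZE 256u            /* power of two: 2^8 */
#define BUF_MASK (BUF_SIZE - 1u) /* 0xFF */

/* Wrap an index with a single AND; valid only for power-of-two sizes. */
static uint32_t advance_masked(uint32_t idx, uint32_t n)
{
    return (idx + n) & BUF_MASK;
}

/* The general form works for any size but needs a modulo operation. */
static uint32_t advance_modulo(uint32_t idx, uint32_t n, uint32_t size)
{
    return (idx + n) % size;
}
```

For power-of-two sizes the two forms are equivalent; the masked version avoids
a division on CPUs without a fast divider.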
Concurrency
***********
Concurrency control of ring buffers is not implemented at this level.
Depending on usage (particularly with respect to number of concurrent
readers/writers) applications may need to protect the ring buffer with
mutexes and/or use semaphores to notify consumers that there is data to
read.
For the trivial case of one producer and one consumer, concurrency
shouldn't be needed.
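The single-producer/single-consumer case works without a lock because each
side writes only its own index. The sketch below illustrates the idea in
plain, single-threaded C written for this document (not Zephyr code); a
genuinely concurrent version would additionally need atomics or memory
barriers.

```c
#include <stdint.h>

#define RING_SIZE 8u  /* power of two, so indices wrap with a mask */

struct spsc_ring {
    uint32_t buf[RING_SIZE];
    uint32_t head;  /* written only by the producer */
    uint32_t tail;  /* written only by the consumer */
};

/* Producer side: returns 0 on success, -1 when the ring is full. */
static int spsc_put(struct spsc_ring *r, uint32_t v)
{
    if (r->head - r->tail == RING_SIZE) {
        return -1;
    }
    r->buf[r->head & (RING_SIZE - 1u)] = v;
    r->head++;
    return 0;
}

/* Consumer side: returns 0 on success, -1 when the ring is empty. */
static int spsc_get(struct spsc_ring *r, uint32_t *v)
{
    if (r->head == r->tail) {
        return -1;
    }
    *v = r->buf[r->tail & (RING_SIZE - 1u)];
    r->tail++;
    return 0;
}
```

With more than one reader or writer on either side, the index updates race
and a mutex or semaphore becomes necessary, as noted above.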
Example: Initializing a Ring Buffer
===================================
There are three ways to initialize a ring buffer. The first two use macros
that define one (and an associated private buffer) at file scope.
You can declare a fast ring buffer that uses mask operations by declaring
a power-of-two sized buffer:
.. code-block:: c
/* Buffer with 2^8 or 256 elements */
SYS_RING_BUF_DECLARE_POW2(my_ring_buf, 8);
Arbitrary-sized buffers may also be declared with a different macro, but
these will always be slower due to use of modulo operations:
.. code-block:: c
#define MY_RING_BUF_SIZE 93
SYS_RING_BUF_DECLARE_SIZE(my_ring_buf, MY_RING_BUF_SIZE);
Alternatively, a ring buffer may be initialized manually. Whether the buffer
will use modulo or mask operations will be detected automatically:
.. code-block:: c
#define MY_RING_BUF_SIZE 64
struct my_struct {
struct ring_buffer rb;
uint32_t buffer[MY_RING_BUF_SIZE];
...
};
struct my_struct ms;
void init_my_struct(void)
{
sys_ring_buf_init(&ms.rb, sizeof(ms.buffer), ms.buffer);
...
}
Example: Enqueuing data
=======================
.. code-block:: c
int ret;
ret = sys_ring_buf_put(&ring_buf, TYPE_FOO, 0, &my_foo, SIZE32_OF(my_foo));
if (ret == -EMSGSIZE) {
... not enough room for the message ..
}
If the type or value fields are sufficient, the data pointer and size may be 0.
.. code-block:: c
int ret;
ret = sys_ring_buf_put(&ring_buf, TYPE_BAR, 17, NULL, 0);
if (ret == -EMSGSIZE) {
... not enough room for the message ..
}
Example: Retrieving data
========================
.. code-block:: c
int ret;
uint16_t type;
uint8_t value, size;
uint32_t data[6];
size = SIZE32_OF(data);
ret = sys_ring_buf_get(&ring_buf, &type, &value, data, &size);
if (ret == -EMSGSIZE) {
printk("Buffer is too small, need %d uint32_t\n", size);
} else if (ret == -EAGAIN) {
printk("Ring buffer is empty\n");
} else {
printk("got item of type %u value %u of size %u dwords\n",
type, value, size);
...
}
APIs
****
The following APIs for ring buffers are provided by :file:`ring_buffer.h`:
:cpp:func:`sys_ring_buf_init()`
Initializes a ring buffer.
:c:func:`SYS_RING_BUF_DECLARE_POW2()`, :c:func:`SYS_RING_BUF_DECLARE_SIZE()`
Declare and init a file-scope ring buffer.
:cpp:func:`sys_ring_buf_space_get()`
Returns the amount of free buffer storage space in 32-bit dwords.
:cpp:func:`sys_ring_buf_is_empty()`
Indicates whether a buffer is empty.
:cpp:func:`sys_ring_buf_put()`
Enqueues an item.
:cpp:func:`sys_ring_buf_get()`
De-queues an item.

.. _nanokernel_stacks:
Nanokernel Stacks
#################
Concepts
********
The nanokernel's stack object type is an implementation of a traditional
last in, first out queue for a limited number of 32-bit data values.
It is mainly intended for use by fibers.
Each stack uses an array of 32-bit words to hold its data values. The array
may be of any size, but must be aligned on a 4-byte boundary.
Any number of nanokernel stacks can be defined. Each stack is a distinct
variable of type :c:type:`struct nano_stack`, and is referenced using a pointer
to that variable. A stack must be initialized to use its array before it
can be used to send or receive data values.
Data values can be added to a stack in a non-blocking manner by any context type
(i.e. ISR, fiber, or task).
.. note::
A context must not attempt to add a data value to a stack whose array
is already full, as the resulting array overflow will lead to
unpredictable behavior.
Data values can be removed from a stack in a non-blocking manner by any context
type; if the stack is empty a special return code indicates that no data value
was removed. Data values can also be removed from a stack in a blocking manner
by a fiber or task; if the stack is empty the fiber or task waits for a data
value to be added.
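The overflow warning above can be addressed by tracking the fill level
explicitly. The sketch below is an illustration written for this document,
not the kernel's implementation: a bounded 32-bit stack whose push refuses
to overflow and whose pop reports emptiness with a return code.

```c
#include <stdbool.h>
#include <stdint.h>

#define STACK_SLOTS 4

struct stack32 {
    uint32_t data[STACK_SLOTS];
    int top;  /* number of values currently stored */
};

/* Guarded push: returns false instead of overflowing the array. */
static bool stack32_push(struct stack32 *s, uint32_t value)
{
    if (s->top == STACK_SLOTS) {
        return false;
    }
    s->data[s->top++] = value;
    return true;
}

/* Non-blocking pop: returns false when the stack is empty. */
static bool stack32_pop(struct stack32 *s, uint32_t *value)
{
    if (s->top == 0) {
        return false;
    }
    *value = s->data[--s->top];
    return true;
}
```

The nanokernel stack omits the push-side check for speed, which is why the
application must size the array for the worst-case burst.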
Only a single fiber, but any number of tasks, may wait on an empty nanokernel
stack simultaneously. When a data value becomes available it is given to the
waiting fiber, or to a waiting task if no fiber is waiting.
.. note::
The nanokernel does not allow more than one fiber to wait on a nanokernel
stack. If a second fiber starts waiting the first waiting fiber is
superseded and ends up waiting forever.
A task that waits on an empty nanokernel stack does a busy wait. This is
not an issue for a nanokernel application's background task; however, in
a microkernel application a task that waits on a nanokernel stack remains
the current task. In contrast, a microkernel task that waits on a
microkernel data passing object ceases to be the current task, allowing
other tasks of equal or lower priority to do useful work.
If multiple tasks in a microkernel application wait on the same nanokernel
stack, higher priority tasks are given data values in preference to lower
priority tasks. However, the order in which equal priority tasks are given
data values is unpredictable.
Purpose
*******
Use a nanokernel stack to store and retrieve 32-bit data values in a "last in,
first out" manner, when the maximum number of stored items is known.
Usage
*****
Example: Initializing a Nanokernel Stack
========================================
This code establishes an empty nanokernel stack capable of holding
up to 10 items.
.. code-block:: c
#define MAX_ALARMS 10
struct nano_stack alarm_stack;
uint32_t stack_area[MAX_ALARMS];
...
nano_stack_init(&alarm_stack, stack_area);
Example: Writing to a Nanokernel Stack
======================================
This code shows how an ISR can use a nanokernel stack to pass a 32-bit alarm
indication to a processing fiber.
.. code-block:: c
#define OVERHEAT_ALARM 17
void overheat_interrupt_handler(void *arg)
{
...
/* report alarm */
nano_isr_stack_push(&alarm_stack, OVERHEAT_ALARM);
...
}
Example: Reading from a Nanokernel Stack
========================================
This code shows how a fiber can use a nanokernel stack to retrieve 32-bit alarm
indications signalled by other parts of the application,
such as ISRs and other fibers. It is assumed that the fiber can handle
bursts of alarms before the stack overflows, and that the order
in which alarms are processed isn't significant.
.. code-block:: c
void alarm_handler_fiber(int arg1, int arg2)
{
uint32_t alarm_number;
while (1) {
/* wait for an alarm to be reported */
alarm_number = nano_fiber_stack_pop(&alarm_stack, TICKS_UNLIMITED);
/* process alarm indication */
...
}
}
APIs
****
The following APIs for a nanokernel stack are provided by
:file:`nanokernel.h`:
:cpp:func:`nano_stack_init()`
Initializes a stack.
:cpp:func:`nano_task_stack_push()`, :cpp:func:`nano_fiber_stack_push()`,
:cpp:func:`nano_isr_stack_push()`, :cpp:func:`nano_stack_push()`
Add an item to a stack.
:cpp:func:`nano_task_stack_pop()`, :cpp:func:`nano_fiber_stack_pop()`,
:cpp:func:`nano_isr_stack_pop()`, :cpp:func:`nano_stack_pop()`
Remove an item from a stack, or wait for an item if it is empty.

.. _nanokernel_synchronization:
Synchronization Services
########################
This section describes synchronization services provided by the nanokernel.
Currently, only a single service is provided.
.. _nanokernel_semaphores:
Nanokernel Semaphores
*********************
Concepts
========
The nanokernel's :dfn:`semaphore` object type is an implementation of a
traditional counting semaphore. It is mainly intended for use by fibers.
Any number of nanokernel semaphores can be defined. Each semaphore is a
distinct variable of type :c:type:`struct nano_sem`, and is referenced
using a pointer to that variable. A semaphore must be initialized before
it can be used.
A nanokernel semaphore's count is set to zero when the semaphore is initialized.
This count is incremented each time the semaphore is given, and is decremented
each time the semaphore is taken. However, a semaphore cannot be taken if it is
unavailable; that is, when it has a count of zero.
A nanokernel semaphore may be **given** by any context type: ISRs, fibers,
or tasks.
A nanokernel semaphore may be **taken in a non-blocking manner** by any
context type; a special return code indicates if the semaphore is unavailable.
A semaphore can also be **taken in a blocking manner** by a fiber or task;
if the semaphore is unavailable, the thread waits for it to be given.
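The counting behavior can be sketched without any blocking machinery. This is
an illustration written for this document, not the kernel's implementation;
it shows only the count logic behind give and a non-blocking take:

```c
#include <stdbool.h>

struct sem {
    int count;
};

static void sem_init(struct sem *s)
{
    s->count = 0;  /* semaphores start out unavailable */
}

static void sem_give(struct sem *s)
{
    s->count++;
}

/* Non-blocking take: fails when the semaphore is unavailable (count == 0). */
static bool sem_take(struct sem *s)
{
    if (s->count == 0) {
        return false;
    }
    s->count--;
    return true;
}
```

The real kernel object adds a wait queue so that a blocking take can pend
a fiber or task until a give arrives.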
Any number of threads may wait on an unavailable nanokernel semaphore
simultaneously. When the semaphore is signaled, it is given to the fiber
that has waited longest, or to a waiting task when no fiber is waiting.
.. note::
A task that waits on an unavailable nanokernel semaphore busy-waits.
This is not an issue for a nanokernel application's background task;
however, in a microkernel application a task that waits on a nanokernel
semaphore remains the current task. In contrast, a microkernel task that
waits on a microkernel synchronization object ceases to be the current task,
allowing other tasks of equal or lower priority to do useful work.
When multiple tasks in a microkernel application are waiting on the same nanokernel
semaphore, higher priority tasks are given the semaphore in preference to
lower priority tasks. However, the order in which equal priority tasks are given
the semaphore is unpredictable.
Purpose
=======
Use a nanokernel semaphore to control access to a set of resources by multiple
fibers.
Use a nanokernel semaphore to synchronize processing between a producing task and
fiber, or among an ISR and one or more consuming fibers.
Usage
=====
Example: Initializing a Nanokernel Semaphore
--------------------------------------------
This code initializes a nanokernel semaphore, setting its count to zero.
.. code-block:: c
struct nano_sem input_sem;
nano_sem_init(&input_sem);
Example: Giving a Nanokernel Semaphore from an ISR
--------------------------------------------------
This code uses a nanokernel semaphore to indicate that a unit of data
is available for processing by a consumer fiber.
.. code-block:: c
void input_data_interrupt_handler(void *arg)
{
/* notify fiber that data is available */
nano_isr_sem_give(&input_sem);
...
}
Example: Taking a Nanokernel Semaphore with a Conditional Time-out
------------------------------------------------------------------
This code waits up to 500 ticks for a nanokernel semaphore to be given,
and gives warning if it is not obtained in that time.
.. code-block:: c
void consumer_fiber(void)
{
...
if (nano_fiber_sem_take(&input_sem, 500) != 1) {
printk("Input data not available!");
} else {
/* fetch available data */
...
}
...
}
APIs
====
The following APIs for a nanokernel semaphore are provided
by :file:`nanokernel.h`:
:cpp:func:`nano_sem_init()`
Initialize a semaphore.
:cpp:func:`nano_task_sem_give()`, :cpp:func:`nano_fiber_sem_give()`,
:cpp:func:`nano_isr_sem_give()`, :cpp:func:`nano_sem_give()`
Signal a semaphore.
:cpp:func:`nano_task_sem_take()`, :cpp:func:`nano_fiber_sem_take()`,
:cpp:func:`nano_isr_sem_take()`, :cpp:func:`nano_sem_take()`
Wait on a semaphore for a specified time period.

.. _nanokernel_tasks:
Task Services
#############
Concepts
********
A :dfn:`task` is a preemptible thread of execution that implements a portion of
an application's processing. It is normally used to perform processing that is
too lengthy or too complex to be performed by a fiber or an ISR.
A nanokernel application can define a single application task, known as the
*background task*, which can execute only when no fiber or ISR needs to
execute. The entry point function for the background task is :code:`main()`,
and it must be supplied by the application.
.. note::
The background task is very different from the tasks used by a microkernel
application; for more information see :ref:`microkernel_tasks`.
Task Lifecycle
==============
The kernel automatically starts the background task during system
initialization.
Once the background task is started, it executes forever. If the task attempts
to terminate by returning from :code:`main()`, the kernel puts the task into
a permanent idling state since the background task must always be available
to execute.
Task Scheduling
===============
The nanokernel's scheduler executes the background task only when no fiber or
ISR needs to execute; fiber and ISR executions always take precedence.
The kernel automatically saves the background task's CPU register values when
prompted for a context switch to a fiber or ISR. These values are restored
when the background task later resumes execution.
Usage
*****
Defining the Background Task
============================
The application must supply a function of the following form:
.. code-block:: c
void main(void)
{
/* background task processing */
...
/* (optional) enter permanent idling state */
return;
}
This function is used as the background task's entry point function. If a
nanokernel application does not need to perform any task-level processing,
:code:`main()` can simply do an immediate return.
The :option:`CONFIG_MAIN_STACK_SIZE` configuration option specifies
the size, in bytes, of the memory region used for the background
task's stack and for other execution context information.
APIs
****
The nanokernel provides the following API for manipulating the background task.
:cpp:func:`task_sleep()`
Put the background task to sleep for a specified time period.

.. _nanokernel_timers:
Timer Services
##############
Concepts
********
The nanokernel's :dfn:`timer` object type uses the kernel's system clock to
monitor the passage of time, as measured in ticks. It is mainly intended for use
by fibers.
A *nanokernel timer* allows a fiber or task to determine whether or not a
specified time limit has been reached while the thread itself is busy performing
other work. A thread can use more than one timer when it needs to monitor multiple
time intervals simultaneously.
A nanokernel timer points to a *user data structure* that is supplied by the
thread that uses it; this pointer is returned when the timer expires. The user
data structure must be at least 4 bytes long and aligned on a 4-byte boundary,
as the kernel reserves the first 32 bits of this area for its own use. Any
remaining bytes of this area can be used to hold data that is helpful to the
thread that uses the timer.
Any number of nanokernel timers can be defined. Each timer is a distinct
variable of type :c:type:`struct nano_timer`, and is referenced using a pointer
to that variable. A timer must be initialized with its user data structure
before it can be used.
A nanokernel timer is started by specifying a *duration*, which is the number
of ticks the timer counts before it expires.
.. note::
Care must be taken when specifying the duration of a nanokernel timer,
since the first tick measured by the timer after it is started will be
less than a full tick interval. For example, when the system clock period
is 10 milliseconds, starting a timer that expires after 1 tick will result
in the timer expiring anywhere from a fraction of a millisecond
later to just slightly less than 10 milliseconds later. To ensure that
a timer doesn't expire for at least ``N`` ticks it is necessary to specify
a duration of ``N+1`` ticks.
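The rule from the note reduces to a one-line calculation. The helper name
below is hypothetical, written for this document only:

```c
#include <stdint.h>

/* To guarantee a delay of at least min_ticks, request one extra tick,
 * because the first tick after starting is a partial interval. */
static int32_t min_wait_duration(int32_t min_ticks)
{
    return min_ticks + 1;
}
```

For example, guaranteeing at least 1 tick (up to 10 ms at a 10 ms tick
period) requires a duration of 2 ticks.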
Once started, a nanokernel timer can be tested in either a non-blocking or
blocking manner to allow a thread to determine if the timer has expired.
If the timer has expired, the kernel returns the pointer to the user data
structure. If the timer has not expired, the kernel either returns
:c:macro:`NULL` (for a non-blocking test), or it waits for the timer to expire
(for a blocking test).
.. note::
The nanokernel does not allow more than one thread to wait on a nanokernel
timer at any given time. If a second thread starts waiting, only the first
waiting thread wakes up when the timer expires. The second thread continues
waiting.
A task that waits on a nanokernel timer does a busy wait. This is
not an issue for a nanokernel application's background task; however, in
a microkernel application, a task that waits on a nanokernel timer remains
the *current task* and prevents other tasks of equal or lower priority
from doing useful work.
A nanokernel timer can be cancelled after it has been started. Cancelling
a timer while it is still running causes the timer to expire immediately,
thereby unblocking any thread waiting on the timer. Cancelling a timer
that has already expired has no effect on the timer.
A nanokernel timer can be reused once it has expired, but must **not** be
restarted while it is still running. If desired, a timer can be re-initialized
with a different user data structure before it is started again.
Purpose
*******
Use a nanokernel timer to determine whether or not a specified number
of system clock ticks have elapsed while a fiber or task is busy performing
other work.
.. note::
If a fiber or task has no other work to perform while waiting
for time to pass, it can simply call :cpp:func:`fiber_sleep()`
or :cpp:func:`task_sleep()`, respectively.
.. note::
The kernel provides additional APIs that allow a fiber or task to monitor
the system clock, as well as the higher precision hardware clock,
without using a nanokernel timer.
Usage
*****
Example: Initializing a Nanokernel Timer
========================================
This code initializes a nanokernel timer.
.. code-block:: c
struct nano_timer my_timer;
uint32_t data_area[3] = { 0, 1111, 2222 };
nano_timer_init(&my_timer, data_area);
Example: Starting a Nanokernel Timer
====================================
This code uses the above nanokernel timer to limit the amount of time a fiber
spends gathering data before processing it.
.. code-block:: c
/* set timer to expire in 10 ticks */
nano_fiber_timer_start(&my_timer, 10);
/* gather data until timer expires */
do {
...
} while (nano_fiber_timer_test(&my_timer, TICKS_NONE) == NULL);
/* process the data */
...
Example: Cancelling a Nanokernel Timer
======================================
This code illustrates how an active nanokernel timer can be stopped prematurely.
.. code-block:: c
struct nano_timer my_timer;
uint32_t dummy;
...
/* set timer to expire in 10 ticks */
nano_timer_init(&my_timer, &dummy);
nano_fiber_timer_start(&my_timer, 10);
/* do work while waiting for an input signal to arrive */
...
/* now have input signal, so stop the timer if it is still running */
nano_fiber_timer_stop(&my_timer);
/* check to see if the timer expired before it was stopped */
if (nano_fiber_timer_test(&my_timer, TICKS_NONE) != NULL) {
printf("Warning: Input signal took too long to arrive!");
}
APIs
****
The following APIs for a nanokernel timer are provided by :file:`nanokernel.h`:
:cpp:func:`nano_timer_init()`
Initialize a timer.
:cpp:func:`nano_task_timer_start()`, :cpp:func:`nano_fiber_timer_start()`,
:cpp:func:`nano_isr_timer_start()`, :cpp:func:`nano_timer_start()`
Start a timer.
:cpp:func:`nano_task_timer_test()`, :cpp:func:`nano_fiber_timer_test()`,
:cpp:func:`nano_isr_timer_test()`, :cpp:func:`nano_timer_test()`
Wait or test for timer expiration.
:cpp:func:`nano_task_timer_stop()`, :cpp:func:`nano_fiber_timer_stop()`,
:cpp:func:`nano_isr_timer_stop()`, :cpp:func:`nano_timer_stop()`
Force timer expiration, if not already expired.
:cpp:func:`nano_timer_ticks_remain()`
Return timer ticks before timer expiration.

.. _kernel_fundamentals:
Kernel Fundamentals
###################
This section provides a high-level overview of the concepts and capabilities
of the Zephyr kernel.
Organization
************
The central elements of the Zephyr kernel are its *microkernel* and underlying
*nanokernel*. The kernel also contains a variety of auxiliary subsystems,
including a library of device drivers and networking software.
Applications can be developed using both the microkernel and the nanokernel,
or using the nanokernel only.
The nanokernel is a high-performance, multi-threaded execution environment
with a basic set of kernel features. The nanokernel is ideal for systems
with sparse memory (the kernel itself requires as little as 2 KB!) or only
simple multi-threading requirements (such as a set of interrupt
handlers and a single background task). Examples of such systems include:
embedded sensor hubs, environmental sensors, simple LED wearables, and
store inventory tags.
The microkernel supplements the capabilities of the nanokernel to provide
a richer set of kernel features. The microkernel is suitable for systems
with heftier memory (50 to 900 KB), multiple communication devices
(like Wi-Fi and Bluetooth Low Energy), and multiple data processing tasks.
Examples of such systems include: fitness wearables, smart watches, and
IoT wireless gateways.
Related sections:
* :ref:`common`
* :ref:`nanokernel`
* :ref:`microkernel`
Multi-threading
***************
The Zephyr kernel supports multi-threaded processing for three types
of execution contexts.
* A **task context** is a preemptible thread, normally used to perform
processing that is lengthy or complex. Task scheduling is priority-based,
so that the execution of a higher priority task preempts the execution
of a lower priority task. The kernel also supports an optional round-robin
time slicing capability so that equal priority tasks can execute in turn,
without the risk of any one task monopolizing the CPU.
* A **fiber context** is a lightweight and non-preemptible thread, typically
used for device drivers and performance critical work. Fiber scheduling is
priority-based, so that a higher priority fiber is scheduled for execution
before a lower priority fiber; however, once a fiber is scheduled it remains
scheduled until it performs an operation that blocks its own execution.
Fiber execution takes precedence over task execution, so tasks execute only
when no fiber can be scheduled.
* The **interrupt context** is a special kernel context used to execute
:abbr:`ISRs (Interrupt Service Routines)`. The interrupt context takes
precedence over all other contexts, so tasks and fibers execute only
when no ISR needs to run. (See below for more on interrupt handling.)
The Zephyr microkernel does not limit the number of tasks or fibers used
in an application; however, an application that uses only the nanokernel
is limited to a single task.
Related sections:
* :ref:`Nanokernel Fiber Services <nanokernel_fibers>`
* :ref:`Microkernel Task Services <microkernel_tasks>`
Interrupts
**********
The Zephyr kernel supports the handling of hardware interrupts and software
interrupts by interrupt handlers, also known as ISRs. Interrupt handling takes
precedence over task and fiber processing, so that an ISR preempts the currently
scheduled task or fiber whenever it needs to run. The kernel also supports nested
interrupt handling, allowing a higher priority ISR to interrupt the execution of
a lower priority ISR, should the need arise.
The nanokernel supplies ISRs for a few interrupt sources (IRQs), such as the
hardware timer device and system console device. The ISRs for all other IRQs
are supplied by either device drivers or application code. Each ISR can
be registered with the kernel at compile time, but can also be registered
dynamically once the kernel is up and running. Zephyr supports ISRs that
are written entirely in C, but also permits the use of assembly language.
In situations where an ISR cannot complete the processing of an interrupt in a
timely manner by itself, the kernel's synchronization and data passing mechanisms
can hand off the remaining processing to a fiber or task.
Related sections:
* :ref:`Nanokernel Interrupt Services <nanokernel_interrupts>`
Clocks and Timers
*****************
Kernel clocking is based on time units called :dfn:`ticks` which have a
configurable duration. A 64-bit *system clock* counts the number of ticks
that have elapsed since the kernel started executing.
Zephyr also supports a higher-resolution *hardware clock*, which can be used
to measure durations requiring sub-tick interval precision.
The nanokernel allows a fiber or thread to perform time-based processing
based on the system clock. This can be done either by using a nanokernel API
that supports a *timeout* argument, or by using a *timer* object that can
be set to expire after a specified number of ticks.
The microkernel also allows tasks to perform time-based processing using
timeouts and timers. Microkernel timers have additional capabilities
not provided by nanokernel timers, such as a periodic expiration mode.
Related sections:
* :ref:`common_kernel_clocks`
* :ref:`Nanokernel Timer Services <nanokernel_timers>`
* :ref:`Microkernel Timers Services <microkernel_timers>`
Synchronization
***************
The Zephyr kernel provides four types of objects that allow different
contexts to synchronize their execution.
The microkernel provides the object types listed below. These types
are intended for tasks, with limited ability to be used by fibers and ISRs.
* A :dfn:`semaphore` is a counting semaphore, which indicates how many units
of a particular resource are available.
* An :dfn:`event` is a binary semaphore, which simply indicates if a particular
resource is available or not.
* A :dfn:`mutex` is a reentrant mutex with priority inversion protection. It is
similar to a binary semaphore, but contains additional logic to ensure that
only the owner of the associated resource can release it and to expedite the
execution of a lower priority thread that holds a resource needed by a
higher priority thread.
The nanokernel provides the object type listed below. This type
is intended for fibers, with only limited ability to be used by tasks and ISRs.
* A :dfn:`nanokernel semaphore` is a counting semaphore that indicates
how many units of a particular resource are available.
Each type has specific capabilities and limitations that affect suitability
of use in a given situation. For more details, see the documentation for each
specific object type.
Related sections:
* :ref:`Microkernel Synchronization Services <microkernel_synchronization>`
* :ref:`Nanokernel Synchronization Services <nanokernel_synchronization>`
Data Passing
************
The Zephyr kernel provides six types of objects that allow different
contexts to exchange data.
The microkernel provides the object types listed below. These types are
designed to be used by tasks, and cannot be used by fibers and ISRs.
* A :dfn:`microkernel FIFO` is a queuing mechanism that allows tasks to exchange
fixed-size data items in an asynchronous :abbr:`FIFO (First In, First Out)`
manner.
* A :dfn:`mailbox` is a queuing mechanism that allows tasks to exchange
variable-size data items in a synchronous, "first in, first out" manner.
Mailboxes also support asynchronous exchanges, and allow tasks to exchange
messages both anonymously and non-anonymously using the same mailbox.
* A :dfn:`pipe` is a queuing mechanism that allows a task to send a stream
of bytes to another task. Both asynchronous and synchronous exchanges
can be supported by a pipe.
The nanokernel provides the object types listed below. These types are
primarily designed to be used by fibers, and have only a limited ability
to be used by tasks and ISRs.
* A :dfn:`nanokernel FIFO` is a queuing mechanism that allows contexts to exchange
variable-size data items in an asynchronous, first-in, first-out manner.
* A :dfn:`nanokernel LIFO` is a queuing mechanism that allows contexts to exchange
variable-size data items in an asynchronous, last-in, first-out manner.
* A :dfn:`nanokernel stack` is a queuing mechanism that allows contexts to exchange
32-bit data items in an asynchronous, last-in, first-out manner.
Each of these types has specific capabilities and limitations that affect
suitability for use in a given situation. For more details, see the
documentation for each specific object type.
Related sections:
* :ref:`Microkernel Data Passing Services <microkernel_data>`
* :ref:`Nanokernel Data Passing Services <nanokernel_data>`
Dynamic Memory Allocation
*************************
The Zephyr kernel requires all system resources to be defined at compile-time,
and therefore provides only limited support for dynamic memory allocation.
This support can be used in place of C standard library calls like
:c:func:`malloc()` and :c:func:`free()`, albeit with certain differences.
The microkernel provides two types of objects that allow tasks to dynamically
allocate memory blocks. These object types cannot be used by fibers or ISRs.
* A :dfn:`memory map` is a memory region that supports the dynamic allocation and
release of memory blocks of a single fixed size. An application can have
multiple memory maps, whose block size and block capacity can be configured
individually.
* A :dfn:`memory pool` is a memory region that supports the dynamic allocation and
release of memory blocks of multiple fixed sizes. This allows more efficient
use of available memory when an application requires blocks of different
sizes. An application can have multiple memory pools, whose block sizes
and block capacity can be configured individually.
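A minimal sketch of the memory-map idea, a region carved into fixed-size blocks
threaded on a free list, looks like this in plain C; the ``demo_map`` names are
assumptions for illustration, not the Zephyr API:

```c
#include <stddef.h>

/* Hypothetical memory map: NUM_BLOCKS blocks of BLOCK_SIZE bytes each,
 * chained through a singly linked free list. Not the Zephyr API. */
#define BLOCK_SIZE  32
#define NUM_BLOCKS  4

struct demo_map {
    unsigned char region[NUM_BLOCKS][BLOCK_SIZE];
    void *free_list;            /* head of the list of free blocks */
};

static void demo_map_init(struct demo_map *m)
{
    m->free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        /* store the previous list head in the first bytes of each block */
        *(void **)m->region[i] = m->free_list;
        m->free_list = m->region[i];
    }
}

static void *demo_map_alloc(struct demo_map *m)
{
    void *block = m->free_list;
    if (block != NULL) {
        m->free_list = *(void **)block;  /* unlink the head block */
    }
    return block;                        /* NULL when no blocks remain */
}

static void demo_map_free(struct demo_map *m, void *block)
{
    *(void **)block = m->free_list;      /* push back on the free list */
    m->free_list = block;
}
```

Because every block is the same size, allocation and release are constant-time
pointer operations; a memory pool adds block splitting and merging on top of
this idea to serve multiple block sizes.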
The nanokernel does not provide any support for dynamic memory allocation.
For additional information see:
* :ref:`Microkernel Memory Maps <microkernel_memory_maps>`
* :ref:`Microkernel Pools <microkernel_memory_pools>`
Public and Private Microkernel Objects
**************************************
Microkernel objects, such as semaphores, mailboxes, or tasks,
can usually be defined as a public object or a private object.
* A :dfn:`public object` is one that is available for general use by all parts
of the application. Any code that includes :file:`zephyr.h` can interact
with the object by referencing the object's name.
* A :dfn:`private object` is one that is intended for use only by a specific
part of the application, such as a single device driver or subsystem.
The object's name is visible only to code within the file where the object
is defined, hiding it from general use unless the code which defined the
object takes additional steps to share the name with other files.
Aside from the way they are defined, and the resulting visibility of
the object's name, a public object and a private object of the same type
operate in exactly the same manner using the same set of APIs.
In most cases, the decision to make a given microkernel object a public
object or a private object is simply a matter of convenience. For example,
when defining a server-type subsystem that handles requests from multiple
clients, it usually makes sense to define public objects.
.. note::
Nanokernel object types can only be defined as private objects. This means
a nanokernel object must be defined using a global variable to allow it to
be accessed by code outside the file in which the object is defined.
.. _microkernel_server:
Microkernel Server
******************
The microkernel performs most operations involving microkernel objects
using a special *microkernel server* fiber, called :c:func:`_k_server`.
When a task invokes an API associated with a microkernel object type,
such as :c:func:`task_fifo_put()`, the associated operation is not
carried out directly. Instead, the following sequence of steps typically
occurs:
#. The task creates a *command packet*, which contains the input parameters
needed to carry out the desired operation.
#. The task queues the command packet on the microkernel server's
*command stack*. The kernel then preempts the task and schedules
the microkernel server.
#. The microkernel server dequeues the command packet from its command
stack and performs the desired operation. All output parameters for the
operation, such as the return code, are saved in the command packet.
#. When the operation is complete the microkernel server attempts
to fetch a command packet from its now empty command stack
and becomes blocked. The kernel then schedules the requesting task.
#. The task processes the command packet's output parameters to determine
the results of the operation.
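The command-packet handshake in the steps above can be reduced to a plain-C
sketch. Here the server is an ordinary function called in line rather than a
separate fiber with a command stack, and every name is hypothetical rather
than the kernel's actual internals:

```c
/* Hypothetical command-packet handshake; not the Zephyr internals. */
enum cmd_op { CMD_FIFO_PUT, CMD_FIFO_GET };

struct cmd_packet {
    enum cmd_op op;        /* input: requested operation */
    int arg;               /* input: operation parameter */
    int ret;               /* output: result code, filled in by the server */
};

static int fifo_depth;     /* state owned exclusively by the "server" */

/* Step 3: the server dequeues a packet, performs the operation, and
 * writes all output parameters back into the packet. */
static void server_process(struct cmd_packet *pkt)
{
    switch (pkt->op) {
    case CMD_FIFO_PUT:
        fifo_depth += 1;
        pkt->ret = 0;
        break;
    case CMD_FIFO_GET:
        pkt->ret = (fifo_depth > 0) ? (fifo_depth--, 0) : -1;
        break;
    }
}

/* Steps 1, 2, and 5 as seen from the requesting task: build the packet,
 * hand it to the server, then read the output parameters. */
static int task_fifo_request(enum cmd_op op, int arg)
{
    struct cmd_packet pkt = { .op = op, .arg = arg, .ret = 0 };
    server_process(&pkt);  /* stands in for queueing plus context switch */
    return pkt.ret;
}
```

Because only ``server_process()`` ever touches ``fifo_depth``, the task needs
no lock around the shared state, which is the serialization property the real
microkernel server provides.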
The actual sequence of steps may vary from the above guideline in some
instances. For example, if the operation causes a higher-priority task
to become runnable, the requesting task is not scheduled for execution by
the kernel until *after* the higher priority task is first scheduled.
In addition, a few operations involving microkernel objects do not require
the use of a command packet at all.
While this indirect execution approach may seem somewhat inefficient,
it actually has a number of important benefits:
* All operations performed by the microkernel server are inherently free
from race conditions; operations are processed serially by a single fiber
that cannot be preempted by tasks or other fibers. This means the
microkernel server can manipulate any of the microkernel objects in the
system during any operation without having to take additional steps
to prevent interference by other contexts.
* Microkernel operations have minimal impact on interrupt latency;
interrupts are never locked for a significant period to prevent race
conditions.
* The microkernel server can easily decompose complex operations into two or
more simpler operations by creating additional command packets and queueing
them on the command stack.
* The overall memory footprint of the system is reduced; a task using microkernel
objects only needs to provide stack space for the first step of the above sequence,
rather than for all steps required to perform the operation.
For additional information see:
* :ref:`Microkernel Server Fiber <microkernel_server_fiber>`
Standard C Library
******************
The Zephyr kernel currently provides only the minimal subset of the
standard C library required to meet the kernel's own needs, primarily
in the areas of string manipulation and display.
Applications that require a more extensive C library can either submit
contributions that enhance the existing library or substitute
a replacement library.
C++ Support for Applications
****************************
The Zephyr kernel supports applications written in both C and C++. However, to
use C++ in an application, you must configure your kernel to include C++
support and the build system must select the correct compiler.
The build system selects the C++ compiler based on the file's suffix.
Files identified with either a **cxx** or a **cpp** suffix, such as
:file:`myCplusplusApp.cpp`, are compiled using the C++ compiler.
The Zephyr kernel currently provides only a subset of C++ functionality. The
following features are not supported:
* Dynamic object management with the **new** and **delete** operators
* :abbr:`RTTI (run-time type information)`
* Exceptions
* Static global object destruction
While not an exhaustive list, support for the following functionality is
included:
* Inheritance
* Virtual functions
* Virtual tables
* Static global object constructors
Static global object constructors are initialized after the drivers are
initialized but before the application :c:func:`main()` function. Therefore,
use of C++ is restricted to application code.
.. note::
Do not use C++ for kernel, driver, or system initialization code.
@@ -1,12 +0,0 @@
.. _overview:
Overview
########
This section provides a high-level overview of the Zephyr kernel ecosystem.
.. toctree::
:maxdepth: 1
kernel_fundamentals.rst
source_tree.rst
@@ -1,52 +0,0 @@
.. _source_tree:
Source Tree Structure
#####################
The Zephyr source tree provides the following top-level directories,
each of which may have one or more additional levels of subdirectories
which are not described here.
:file:`arch`
Architecture-specific nanokernel and board code. Each supported
architecture has its own subdirectory, which contains additional
subdirectories for the following areas:
* architecture-specific nanokernel source files
* architecture-specific nanokernel include files for private APIs
* board-specific code
:file:`boards`
Board related code and configuration files.
:file:`doc`
Zephyr documentation-related material and tools.
:file:`drivers`
Device driver code.
:file:`include`
Include files for all public APIs, except those defined under :file:`lib`.
:file:`kernel`
Microkernel code, and architecture-independent nanokernel code.
:file:`lib`
Library code, including the minimal standard C library.
:file:`misc`
Miscellaneous code.
:file:`net`
Networking code, including the Bluetooth stack and networking stacks.
:file:`samples`
Sample applications for the microkernel, nanokernel, Bluetooth stack,
and networking stacks.
:file:`tests`
Test code and benchmarks for the various kernel features.
:file:`scripts`
Various programs and other files used to build and test Zephyr
applications.
@@ -5,8 +5,7 @@ Memory Pools
A :dfn:`memory pool` is a kernel object that allows memory blocks
to be dynamically allocated from a designated memory region.
The memory blocks in a memory pool can be of any size,
thereby reducing the amount of wasted memory when an application
needs to allocate storage for data structures of different sizes.
The memory pool uses a "buddy memory allocation" algorithm
@@ -178,9 +178,6 @@ Two crucial concepts when writing an architecture port are the following:
When talking about "the task" in this document, it refers to the task the
nanokernel is currently aware of.
A context switch can happen in several circumstances:
* When a thread executes a blocking operation, such as taking a semaphore that