unified/doc: Kernel primer for unified kernel

Work by: Allan Stephens

Change-Id: I1f936cd6e7d592969f65330a6d204729ab0f32db
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This commit is contained in:
Benjamin Walsh 2016-09-02 15:54:16 -04:00
commit e135a273ec
38 changed files with 4992 additions and 0 deletions

@@ -32,6 +32,7 @@ Sections
getting_started/getting_started.rst
board/board.rst
kernel/kernel.rst
kernel_v2/kernel.rst
drivers/drivers.rst
subsystems/subsystems.rst
api/api.rst

@@ -0,0 +1,18 @@
.. _data_passing_v2:
Data Passing
############
This section describes kernel services for passing data
between different threads, or between an ISR and a thread.
.. toctree::
:maxdepth: 2
fifos.rst
lifos.rst
stacks.rst
message_queues.rst
ring_buffers.rst
mailboxes.rst
pipes.rst

@@ -0,0 +1,151 @@
.. _fifos_v2:
Fifos
#####
A :dfn:`fifo` is a kernel object that implements a traditional
first in, first out (FIFO) queue, allowing threads and ISRs
to add and remove data items of any size.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of fifos can be defined. Each fifo is referenced
by its memory address.
A fifo has the following key properties:
* A **queue** of data items that have been added but not yet removed.
The queue is implemented as a simple linked list.
A fifo must be initialized before it can be used. This sets its queue to empty.
Fifo data items must be aligned on a 4-byte boundary, as the kernel reserves
the first 32 bits of an item for use as a pointer to the next data item in
the queue. Consequently, a data item that holds N bytes of application data
requires N+4 bytes of memory.
A data item may be **added** to a fifo by a thread or an ISR.
The item is given directly to a waiting thread, if one exists;
otherwise the item is added to the fifo's queue.
There is no limit to the number of items that may be queued.
A data item may be **removed** from a fifo by a thread. If the fifo's queue
is empty a thread may choose to wait for a data item to be given.
Any number of threads may wait on an empty fifo simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a fifo; however,
the ISR must not attempt to wait if the fifo is empty.
Implementation
**************
Defining a Fifo
===============
A fifo is defined using a variable of type :c:type:`struct k_fifo`.
It must then be initialized by calling :cpp:func:`k_fifo_init()`.
The following code defines and initializes an empty fifo.
.. code-block:: c
struct k_fifo my_fifo;
k_fifo_init(&my_fifo);
Alternatively, an empty fifo can be defined and initialized at compile time
by calling :c:macro:`K_FIFO_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_FIFO_DEFINE(my_fifo);
Writing to a Fifo
=================
A data item is added to a fifo by calling :cpp:func:`k_fifo_put()`.
The following code builds on the example above, and uses the fifo
to send data to one or more consumer threads.
.. code-block:: c
struct data_item_t {
void *fifo_reserved; /* 1st word reserved for use by fifo */
...
};
struct data_item_t tx_data;
void producer_thread(int unused1, int unused2, int unused3)
{
while (1) {
/* create data item to send */
tx_data = ...
/* send data to consumers */
k_fifo_put(&my_fifo, &tx_data);
...
}
}
.. note::
A fifo also allows multiple data items to be added in a single operation,
using :cpp:func:`k_fifo_put_list()` or :cpp:func:`k_fifo_put_slist()`.
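As a sketch only (the signatures shown here are assumptions based on the API
names listed at the end of this section), :cpp:func:`k_fifo_put_list()` might
be used to chain several items together through their reserved first word and
queue them all in one operation:
.. code-block:: c
struct data_item_t tx_items[3];
/* chain the items via their reserved first word; list must end in NULL */
tx_items[0].fifo_reserved = &tx_items[1];
tx_items[1].fifo_reserved = &tx_items[2];
tx_items[2].fifo_reserved = NULL;
/* add the entire list of items to the fifo in a single operation */
k_fifo_put_list(&my_fifo, &tx_items[0], &tx_items[2]);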
Reading from a Fifo
===================
A data item is removed from a fifo by calling :cpp:func:`k_fifo_get()`.
The following code builds on the example above, and uses the fifo
to obtain data items from a producer thread,
which are then processed in some manner.
.. code-block:: c
void consumer_thread(int unused1, int unused2, int unused3)
{
struct data_item_t *rx_data;
while (1) {
rx_data = k_fifo_get(&my_fifo, K_FOREVER);
/* process fifo data item */
...
}
}
Suggested Uses
**************
Use a fifo to asynchronously transfer data items of arbitrary size
in a "first in, first out" manner.
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following fifo APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_fifo_init()`
* :cpp:func:`k_fifo_put()`
* :cpp:func:`k_fifo_put_list()`
* :cpp:func:`k_fifo_put_slist()`
* :cpp:func:`k_fifo_get()`

@@ -0,0 +1,146 @@
.. _lifos_v2:
Lifos
#####
A :dfn:`lifo` is a kernel object that implements a traditional
last in, first out (LIFO) queue, allowing threads and ISRs
to add and remove data items of any size.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of lifos can be defined. Each lifo is referenced
by its memory address.
A lifo has the following key properties:
* A **queue** of data items that have been added but not yet removed.
The queue is implemented as a simple linked list.
A lifo must be initialized before it can be used. This sets its queue to empty.
Lifo data items must be aligned on a 4-byte boundary, as the kernel reserves
the first 32 bits of an item for use as a pointer to the next data item in
the queue. Consequently, a data item that holds N bytes of application data
requires N+4 bytes of memory.
A data item may be **added** to a lifo by a thread or an ISR.
The item is given directly to a waiting thread, if one exists;
otherwise the item is added to the lifo's queue.
There is no limit to the number of items that may be queued.
A data item may be **removed** from a lifo by a thread. If the lifo's queue
is empty a thread may choose to wait for a data item to be given.
Any number of threads may wait on an empty lifo simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a lifo; however,
the ISR must not attempt to wait if the lifo is empty.
Implementation
**************
Defining a Lifo
===============
A lifo is defined using a variable of type :c:type:`struct k_lifo`.
It must then be initialized by calling :cpp:func:`k_lifo_init()`.
The following defines and initializes an empty lifo.
.. code-block:: c
struct k_lifo my_lifo;
k_lifo_init(&my_lifo);
Alternatively, an empty lifo can be defined and initialized at compile time
by calling :c:macro:`K_LIFO_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_LIFO_DEFINE(my_lifo);
Writing to a Lifo
=================
A data item is added to a lifo by calling :cpp:func:`k_lifo_put()`.
The following code builds on the example above, and uses the lifo
to send data to one or more consumer threads.
.. code-block:: c
struct data_item_t {
void *lifo_reserved; /* 1st word reserved for use by lifo */
...
};
struct data_item_t tx_data;
void producer_thread(int unused1, int unused2, int unused3)
{
while (1) {
/* create data item to send */
tx_data = ...
/* send data to consumers */
k_lifo_put(&my_lifo, &tx_data);
...
}
}
Reading from a Lifo
===================
A data item is removed from a lifo by calling :cpp:func:`k_lifo_get()`.
The following code builds on the example above, and uses the lifo
to obtain data items from a producer thread,
which are then processed in some manner.
.. code-block:: c
void consumer_thread(int unused1, int unused2, int unused3)
{
struct data_item_t *rx_data;
while (1) {
rx_data = k_lifo_get(&my_lifo, K_FOREVER);
/* process lifo data item */
...
}
}
Suggested Uses
**************
Use a lifo to asynchronously transfer data items of arbitrary size
in a "last in, first out" manner.
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following lifo APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_lifo_init()`
* :cpp:func:`k_lifo_put()`
* :cpp:func:`k_lifo_get()`

@@ -0,0 +1,633 @@
.. _mailboxes_v2:
Mailboxes
#########
A :dfn:`mailbox` is a kernel object that provides enhanced message queue
capabilities, going beyond those of a message queue object.
A mailbox allows threads to send and receive messages of any size
synchronously or asynchronously.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of mailboxes can be defined. Each mailbox is referenced
by its memory address.
A mailbox has the following key properties:
* A **send queue** of messages that have been sent but not yet received.
* A **receive queue** of threads that are waiting to receive a message.
A mailbox must be initialized before it can be used. This sets both of its
queues to empty.
A mailbox allows threads, but not ISRs, to exchange messages.
A thread that sends a message is known as the **sending thread**,
while a thread that receives the message is known as the **receiving thread**.
Each message may be received by only one thread (i.e., point-to-multipoint and
broadcast messaging are not supported).
Messages exchanged using a mailbox are handled non-anonymously,
allowing both threads participating in an exchange to know
(and even specify) the identity of the other thread.
Message Format
==============
A **message descriptor** is a data structure that specifies where a message's
data is located, and how the message is to be handled by the mailbox.
Both the sending thread and the receiving thread supply a message descriptor
when accessing a mailbox. The mailbox uses the message descriptors to perform
a message exchange between compatible sending and receiving threads.
The mailbox also updates certain message descriptor fields during the exchange,
allowing both threads to know what has occurred.
A mailbox message contains zero or more bytes of **message data**.
The size and format of the message data is application-defined, and can vary
from one message to the next. There are two forms of message data:
* A **message buffer** is an area of memory provided by the thread
that sends or receives the message. An array or structure variable
can often be used for this purpose.
* A **message block** is an area of memory allocated from a memory pool.
A message may *not* have both a message buffer and a message block.
A message that has neither form of message data is called an **empty message**.
.. note::
A message whose message buffer or memory block exists, but contains
zero bytes of actual data, is *not* an empty message.
Message Lifecycle
=================
The life cycle of a message is straightforward. A message is created when
it is given to a mailbox by the sending thread. The message is then owned
by the mailbox until it is given to a receiving thread. The receiving thread
may retrieve the message data when it receives the message from the mailbox,
or it may perform data retrieval during a second, subsequent mailbox operation.
Only when data retrieval has occurred is the message deleted by the mailbox.
Thread Compatibility
====================
A sending thread can specify the address of the thread to which the message
is sent, or it can send the message to any thread by specifying :c:macro:`K_ANY`.
Likewise, a receiving thread can specify the address of the thread from which
it wishes to receive a message, or it can receive a message from any thread
by specifying :c:macro:`K_ANY`.
A message is exchanged only when the requirements of both the sending thread
and receiving thread are satisfied; such threads are said to be **compatible**.
For example, if thread A sends a message to thread B (and only thread B)
it will be received by thread B if thread B tries to receive a message
from thread A or if thread B tries to receive from any thread.
The exchange will not occur if thread B tries to receive a message
from thread C. The message can never be received by thread C,
even if it tries to receive a message from thread A (or from any thread).
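For illustration, the receive-side restriction is expressed through the
message descriptor. This sketch assumes a mailbox ``my_mailbox`` (defined as
shown in the Implementation section below) and a thread identifier
``thread_a_id`` that the application is assumed to have available:
.. code-block:: c
void thread_b(void)
{
struct k_mbox_msg recv_msg;
char buffer[100];
/* accept a message only if it was sent by thread A */
recv_msg.info = 100;
recv_msg.size = 100;
recv_msg.rx_source_thread = thread_a_id;
k_mbox_get(&my_mailbox, &recv_msg, buffer, K_FOREVER);
}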
Message Flow Control
====================
Mailbox messages can be exchanged **synchronously** or **asynchronously**.
In a synchronous exchange, the sending thread blocks until the message
has been fully processed by the receiving thread. In an asynchronous exchange,
the sending thread does not wait until the message has been received
by another thread before continuing; this allows the sending thread to do
other work (such as gather data that will be used in the next message)
*before* the message is given to a receiving thread and fully processed.
The technique used for a given message exchange is determined
by the sending thread.
The synchronous exchange technique provides an implicit form of flow control,
preventing a sending thread from generating messages faster than they can be
consumed by receiving threads. The asynchronous exchange technique provides an
explicit form of flow control, which allows a sending thread to determine
if a previously sent message still exists before sending a subsequent message.
Implementation
**************
Defining a Mailbox
==================
A mailbox is defined using a variable of type :c:type:`struct k_mbox`.
It must then be initialized by calling :cpp:func:`k_mbox_init()`.
The following code defines and initializes an empty mailbox.
.. code-block:: c
struct k_mbox my_mailbox;
k_mbox_init(&my_mailbox);
Alternatively, a mailbox can be defined and initialized at compile time
by calling :c:macro:`K_MBOX_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_MBOX_DEFINE(my_mailbox);
Message Descriptors
===================
A message descriptor is a structure of type :c:type:`struct k_mbox_msg`.
Only the fields listed below should be used; any other fields are for
internal mailbox use only.
*info*
A 32-bit value that is exchanged by the message sender and receiver,
and whose meaning is defined by the application. This exchange is
bi-directional, allowing the sender to pass a value to the receiver
during any message exchange, and allowing the receiver to pass a value
to the sender during a synchronous message exchange.
*size*
The message data size, in bytes. Set it to zero when sending an empty
message, or when sending a message buffer or message block with no
actual data. When receiving a message, set it to the maximum amount
of data desired, or to zero if the message data is not wanted.
The mailbox updates this field with the actual number of data bytes
exchanged once the message is received.
*tx_data*
A pointer to the sending thread's message buffer. Set it to :c:macro:`NULL`
when sending a memory block, or when sending an empty message.
Leave this field uninitialized when receiving a message.
*tx_block*
The descriptor for the sending thread's memory block. Set tx_block.pool_id
to :c:macro:`NULL` when sending an empty message. Leave this field
uninitialized when sending a message buffer, or when receiving a message.
*tx_target_thread*
The address of the desired receiving thread. Set it to :c:macro:`K_ANY`
to allow any thread to receive the message. Leave this field uninitialized
when receiving a message. The mailbox updates this field with
the actual receiver's address once the message is received.
*rx_source_thread*
The address of the desired sending thread. Set it to :c:macro:`K_ANY`
to receive a message sent by any thread. Leave this field uninitialized
when sending a message. The mailbox updates this field
with the actual sender's address once the message is received.
Sending a Message
=================
A thread sends a message by first creating its message data, if any.
A message buffer is typically used when the data volume is small,
and the cost of copying the data is less than the cost of allocating
and freeing a message block.
Next, the sending thread creates a message descriptor that characterizes
the message to be sent, as described in the previous section.
Finally, the sending thread calls a mailbox send API to initiate the
message exchange. The message is immediately given to a compatible receiving
thread, if one is currently waiting. Otherwise, the message is added
to the mailbox's send queue.
Any number of messages may exist simultaneously on a send queue.
The messages in the send queue are sorted according to the priority
of the sending thread. Messages of equal priority are sorted so that
the oldest message can be received first.
For a synchronous send operation, the operation normally completes when a
receiving thread has both received the message and retrieved the message data.
If the message is not received before the waiting period specified by the
sending thread is reached, the message is removed from the mailbox's send queue
and the send operation fails. When a send operation completes successfully
the sending thread can examine the message descriptor to determine
which thread received the message, how much data was exchanged,
and the application-defined info value supplied by the receiving thread.
.. note::
A synchronous send operation may block the sending thread indefinitely,
even when the thread specifies a maximum waiting period.
The waiting period only limits how long the mailbox waits
before the message is received by another thread. Once a message is received
there is *no* limit to the time the receiving thread may take to retrieve
the message data and unblock the sending thread.
For an asynchronous send operation, the operation always completes immediately.
This allows the sending thread to continue processing regardless of whether the
message is given to a receiving thread immediately or added to the send queue.
The sending thread may optionally specify a semaphore that the mailbox gives
when the message is deleted by the mailbox, for example, when the message
has been received and its data retrieved by a receiving thread.
The use of a semaphore allows the sending thread to easily implement
a flow control mechanism that ensures that the mailbox holds no more than
an application-specified number of messages from a sending thread
(or set of sending threads) at any point in time.
.. note::
A thread that sends a message asynchronously has no way to determine
which thread received the message, how much data was exchanged, or the
application-defined info value supplied by the receiving thread.
Sending an Empty Message
------------------------
This code uses a mailbox to synchronously pass 4-byte random values
to any consuming thread that wants one. The message "info" field is
large enough to carry the information being exchanged, so the data
portion of the message isn't used.
.. code-block:: c
void producer_thread(void)
{
struct k_mbox_msg send_msg;
while (1) {
/* generate random value to send */
uint32_t random_value = sys_rand32_get();
/* prepare to send empty message */
send_msg.info = random_value;
send_msg.size = 0;
send_msg.tx_data = NULL;
send_msg.tx_block.pool_id = NULL;
send_msg.tx_target_thread = K_ANY;
/* send message and wait until a consumer receives it */
k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);
}
}
Sending Data Using a Message Buffer
-----------------------------------
This code uses a mailbox to synchronously pass variable-sized requests
from a producing thread to any consuming thread that wants them.
The message "info" field is used to exchange information about
the maximum size message buffer that each thread can handle.
.. code-block:: c
void producer_thread(void)
{
char buffer[100];
int buffer_bytes_used;
struct k_mbox_msg send_msg;
while (1) {
/* generate data to send */
...
buffer_bytes_used = ... ;
memcpy(buffer, source, buffer_bytes_used);
/* prepare to send message */
send_msg.info = buffer_bytes_used;
send_msg.size = buffer_bytes_used;
send_msg.tx_data = buffer;
send_msg.tx_target_thread = K_ANY;
/* send message and wait until a consumer receives it */
k_mbox_put(&my_mailbox, &send_msg, K_FOREVER);
/* info, size, and tx_target_thread fields have been updated */
/* verify that message data was fully received */
if (send_msg.size < buffer_bytes_used) {
printf("some message data dropped during transfer!");
printf("receiver only had room for %d bytes", send_msg.info);
}
}
}
Sending Data Using a Message Block
----------------------------------
This code uses a mailbox to send asynchronous messages. A semaphore is used
to hold off the sending of a new message until the previous message
has been consumed, so that a backlog of messages doesn't build up
when the consuming thread is unable to keep up.
The message data is stored in a memory block obtained from ``TXPOOL``,
thereby eliminating unneeded data copying when exchanging large messages.
.. code-block:: c
/* define a semaphore, indicating that no message has been sent */
K_SEM_DEFINE(my_sem, 1, 1);
void producer_thread(void)
{
struct k_mbox_msg send_msg;
volatile char *hw_buffer;
while (1) {
/* allocate a memory block to hold the message data */
k_mem_pool_alloc(&send_msg.tx_block, TXPOOL, 4096, K_FOREVER);
/* keep overwriting the hardware-generated data in the block */
/* until the previous message has been received by the consumer */
do {
memcpy(send_msg.tx_block.pointer_to_data, hw_buffer, 4096);
} while (k_sem_take(&my_sem, K_NO_WAIT) != 0);
/* finish preparing to send message */
send_msg.size = 4096;
send_msg.tx_data = NULL;
send_msg.tx_target_thread = K_ANY;
/* send message containing most current data and loop around */
k_mbox_async_put(&my_mailbox, &send_msg, &my_sem);
}
}
Receiving a Message
===================
A thread receives a message by first creating a message descriptor that
characterizes the message it wants to receive. It then calls one of the
mailbox receive APIs. The mailbox searches its send queue and takes the message
from the first compatible thread it finds. If no compatible thread exists,
the receiving thread may choose to wait for one. If no compatible thread
appears before the waiting period specified by the receiving thread is reached,
the receive operation fails.
Once a receive operation completes successfully the receiving thread
can examine the message descriptor to determine which thread sent the message,
how much data was exchanged,
and the application-defined info value supplied by the sending thread.
Any number of receiving threads may wait simultaneously on a mailbox's
receive queue. The threads are sorted according to their priority;
threads of equal priority are sorted so that the one that started waiting
first can receive a message first.
.. note::
Receiving threads do not always receive messages in a first in, first out
(FIFO) order, due to the thread compatibility constraints specified by the
message descriptors. For example, if thread A waits to receive a message
only from thread X and then thread B waits to receive a message from
thread Y, an incoming message from thread Y to any thread will be given
to thread B and thread A will continue to wait.
The receiving thread controls both the quantity of data it retrieves from an
incoming message and where the data ends up. The thread may choose to take
all of the data in the message, to take only the initial part of the data,
or to take no data at all. Similarly, the thread may choose to have the data
copied into a message buffer of its choice or to have it placed in a message
block. A message buffer is typically used when the volume of data
involved is small, and the cost of copying the data is less than the cost
of allocating and freeing a memory pool block.
The following sections outline various approaches a receiving thread may use
when retrieving message data.
Retrieving Data at Receive Time
-------------------------------
The most straightforward way for a thread to retrieve message data is to
specify a message buffer when the message is received. The thread indicates
both the location of the message buffer (which must not be :c:macro:`NULL`)
and its size.
The mailbox copies the message's data to the message buffer as part of the
receive operation. If the message buffer is not big enough to contain all of the
message's data, any uncopied data is lost. If the message is not big enough
to fill all of the buffer with data, the unused portion of the message buffer is
left unchanged. In all cases the mailbox updates the receiving thread's
message descriptor to indicate how many data bytes were copied (if any).
The immediate data retrieval technique is best suited for small messages
where the maximum size of a message is known in advance.
.. note::
This technique can be used when the message data is actually located
in a memory block supplied by the sending thread. The mailbox copies
the data into the message buffer specified by the receiving thread, then
frees the message block back to its memory pool. This allows
a receiving thread to retrieve message data without having to know
whether the data was sent using a message buffer or a message block.
The following code uses a mailbox to process variable-sized requests from any
producing thread, using the immediate data retrieval technique. The message
"info" field is used to exchange information about the maximum size
message buffer that each thread can handle.
.. code-block:: c
void consumer_thread(void)
{
struct k_mbox_msg recv_msg;
char buffer[100];
int i;
int total;
while (1) {
/* prepare to receive message */
recv_msg.info = 100;
recv_msg.size = 100;
recv_msg.rx_source_thread = K_ANY;
/* get a data item, waiting as long as needed */
k_mbox_get(&my_mailbox, &recv_msg, buffer, K_FOREVER);
/* info, size, and rx_source_thread fields have been updated */
/* verify that message data was fully received */
if (recv_msg.info != recv_msg.size) {
printf("some message data dropped during transfer!");
printf("sender tried to send %d bytes", recv_msg.info);
}
/* compute sum of all message bytes (from 0 to 100 of them) */
total = 0;
for (i = 0; i < recv_msg.size; i++) {
total += buffer[i];
}
}
}
Retrieving Data Later Using a Message Buffer
--------------------------------------------
A receiving thread may choose to defer message data retrieval at the time
the message is received, so that it can retrieve the data into a message buffer
at a later time.
The thread does this by specifying a message buffer location of :c:macro:`NULL`
and a size indicating the maximum amount of data it is willing to retrieve
later.
The mailbox does not copy any message data as part of the receive operation.
However, the mailbox still updates the receiving thread's message descriptor
to indicate how many data bytes are available for retrieval.
The receiving thread must then respond as follows:
* If the message descriptor size is zero, then either the sender's message
contained no data or the receiving thread did not want to receive any data.
The receiving thread does not need to take any further action, since
the mailbox has already completed data retrieval and deleted the message.
* If the message descriptor size is non-zero and the receiving thread still
wants to retrieve the data, the thread must call :c:func:`k_mbox_data_get()`
and supply a message buffer large enough to hold the data. The mailbox copies
the data into the message buffer and deletes the message.
* If the message descriptor size is non-zero and the receiving thread does *not*
want to retrieve the data, the thread must call :c:func:`k_mbox_data_get()`
and specify a message buffer of :c:macro:`NULL`. The mailbox deletes
the message without copying the data.
The subsequent data retrieval technique is suitable for applications where
immediate retrieval of message data is undesirable. For example, it can be
used when memory limitations make it impractical for the receiving thread to
always supply a message buffer capable of holding the largest possible
incoming message.
.. note::
This technique can be used when the message data is actually located
in a memory block supplied by the sending thread. The mailbox copies
the data into the message buffer specified by the receiving thread, then
frees the message block back to its memory pool. This allows
a receiving thread to retrieve message data without having to know
whether the data was sent using a message buffer or a message block.
The following code uses a mailbox's deferred data retrieval mechanism
to get message data from a producing thread only if the message meets
certain criteria, thereby eliminating unneeded data copying. The message
"info" field supplied by the sender is used to classify the message.
.. code-block:: c
void consumer_thread(void)
{
struct k_mbox_msg recv_msg;
char buffer[10000];
while (1) {
/* prepare to receive message */
recv_msg.size = 10000;
recv_msg.rx_source_thread = K_ANY;
/* get message, but not its data */
k_mbox_get(&my_mailbox, &recv_msg, NULL, K_FOREVER);
/* get message data for only certain types of messages */
if (is_message_type_ok(recv_msg.info)) {
/* retrieve message data and delete the message */
k_mbox_data_get(&recv_msg, buffer);
/* process data in "buffer" */
...
} else {
/* ignore message data and delete the message */
k_mbox_data_get(&recv_msg, NULL);
}
}
}
Retrieving Data Later Using a Message Block
-------------------------------------------
A receiving thread may choose to retrieve message data into a memory block,
rather than a message buffer. This is done in much the same way as retrieving
data subsequently into a message buffer --- the receiving thread first
receives the message without its data, then retrieves the data by calling
:c:func:`k_mbox_data_block_get()`. The mailbox fills in the block descriptor
supplied by the receiving thread, allowing the thread to access the data.
The mailbox also deletes the received message, since data retrieval
has been completed. The receiving thread is then responsible for freeing
the message block back to the memory pool when the data is no longer needed.
This technique is best suited for applications where the message data has
been sent using a memory block.
.. note::
This technique can be used when the message data is located in a message
buffer supplied by the sending thread. The mailbox automatically allocates
a memory block and copies the message data into it. However, this is much
less efficient than simply retrieving the data into a message buffer
supplied by the receiving thread. In addition, the receiving thread
must be designed to handle cases where the data retrieval operation fails
because the mailbox cannot allocate a suitable message block from the memory
pool. If such cases are possible, the receiving thread must either try
retrieving the data at a later time or instruct the mailbox to delete
the message without retrieving the data.
The following code uses a mailbox to receive messages sent using a memory block,
thereby eliminating unneeded data copying when processing a large message.
(The messages may be sent synchronously or asynchronously.)
.. code-block:: c
void consumer_thread(void)
{
struct k_mbox_msg recv_msg;
struct k_mem_block recv_block;
int total;
char *data_ptr;
int i;
while (1) {
/* prepare to receive message */
recv_msg.size = 10000;
recv_msg.rx_source_thread = K_ANY;
/* get message, but not its data */
k_mbox_get(&my_mailbox, &recv_msg, NULL, K_FOREVER);
/* get message data as a memory block and discard message */
k_mbox_data_block_get(&recv_msg, RXPOOL, &recv_block, K_FOREVER);
/* compute sum of all message bytes in memory block */
total = 0;
data_ptr = (char *)(recv_block.pointer_to_data);
for (i = 0; i < recv_msg.size; i++) {
total += *data_ptr++;
}
/* release memory block containing data */
k_mem_pool_free(&recv_block);
}
}
.. note::
An incoming message that was sent using a message buffer is also processed
correctly by this algorithm, since the mailbox automatically creates
a memory block containing the message data using ``RXPOOL``. However,
the performance benefit of using the memory block approach is lost.
Suggested Uses
**************
Use a mailbox to transfer data items between threads whenever the capabilities
of a message queue are insufficient.
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_NUM_MBOX_ASYNC_MSGS`
APIs
****
The following APIs for a mailbox are provided by :file:`kernel.h`:
* :cpp:func:`k_mbox_put()`
* :cpp:func:`k_mbox_async_put()`
* :cpp:func:`k_mbox_get()`
* :cpp:func:`k_mbox_data_get()`
* :cpp:func:`k_mbox_data_block_get()`

@@ -0,0 +1,175 @@
.. _message_queues_v2:
Message Queues
##############
A :dfn:`message queue` is a kernel object that implements a simple
message queue, allowing threads and ISRs to asynchronously send and receive
fixed-size data items.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of message queues can be defined. Each message queue is referenced
by its memory address.
A message queue has the following key properties:
* A **queue** of data items that have been sent but not yet received.
The queue is implemented using a ring buffer.
* The **data item size**, measured in bytes, of each data item.
* A **maximum quantity** of data items that can be queued in the ring buffer.
A message queue must be initialized before it can be used.
This sets its queue to empty.
A data item can be **sent** to a message queue by a thread or an ISR.
The data item pointed at by the sending thread is copied to a waiting thread,
if one exists; otherwise the item is copied to the message queue's ring buffer,
if space is available. In either case, the size of the data area being sent
*must* equal the message queue's data item size.
If a thread attempts to send a data item when the ring buffer is full,
the sending thread may choose to wait for space to become available.
Any number of sending threads may wait simultaneously when the ring buffer
is full; when space becomes available
it is given to the highest priority sending thread that has waited the longest.
A data item can be **received** from a message queue by a thread.
The data item is copied to the area specified by the receiving thread;
the size of the receiving area *must* equal the message queue's data item size.
If a thread attempts to receive a data item when the ring buffer is empty,
the receiving thread may choose to wait for a data item to be sent.
Any number of receiving threads may wait simultaneously when the ring buffer
is empty; when a data item becomes available
it is given to the highest priority receiving thread that has waited the longest.
.. note::
The kernel does allow an ISR to receive an item from a message queue; however,
the ISR must not attempt to wait if the message queue is empty.
Implementation
**************
Defining a Message Queue
========================
A message queue is defined using a variable of type :c:type:`struct k_msgq`.
It must then be initialized by calling :cpp:func:`k_msgq_init()`.
The following code defines and initializes an empty message queue
that is capable of holding 10 items.
.. code-block:: c
struct data_item_type {
...
};
char my_msgq_buffer[10 * sizeof(struct data_item_type)];
struct k_msgq my_msgq;
k_msgq_init(&my_msgq, 10, sizeof(struct data_item_type), my_msgq_buffer);
Alternatively, a message queue can be defined and initialized at compile time
by calling :c:macro:`K_MSGQ_DEFINE()`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the message queue and its buffer.
.. code-block:: c
K_MSGQ_DEFINE(my_msgq, 10, sizeof(struct data_item_type));
Writing to a Message Queue
==========================
A data item is added to a message queue by calling :cpp:func:`k_msgq_put()`.
The following code builds on the example above, and uses the message queue
to pass data items from a producing thread to one or more consuming threads.
If the message queue fills up because the consumers can't keep up, the
producing thread throws away all existing data so the newer data can be saved.
.. code-block:: c
void producer_thread(void)
{
struct data_item_type data;
while (1) {
/* create data item to send (e.g. measurement, timestamp, ...) */
data = ...
/* send data to consumers */
while (k_msgq_put(&my_msgq, &data, K_NO_WAIT) != 0) {
/* message queue is full: purge old data & try again */
k_msgq_purge(&my_msgq);
}
/* data item was successfully added to message queue */
}
}
Reading from a Message Queue
============================
A data item is taken from a message queue by calling :cpp:func:`k_msgq_get()`.
The following code builds on the example above, and uses the message queue
to process data items generated by one or more producing threads.
.. code-block:: c
void consumer_thread(void)
{
struct data_item_type data;
while (1) {
/* get a data item */
k_msgq_get(&my_msgq, &data, K_FOREVER);
/* process data item */
...
}
}
Suggested Uses
**************
Use a message queue to transfer small data items between threads
in an asynchronous manner.
.. note::
A message queue can be used to transfer large data items, if desired.
However, it is often preferable to send pointers to large data items
to avoid copying the data. The kernel's memory map and memory pool object
types can be helpful for data transfers of this sort.
A synchronous transfer can be achieved by using the kernel's mailbox
object type.
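To make that pattern concrete, the following sketch pairs a message queue of
pointers with the memory map APIs described later in this document. The map
name ``my_map``, its block contents, and the thread names are illustrative
assumptions; only the 4-byte pointer travels through the queue, while the
bulk data stays in the map's blocks:
.. code-block:: c
/* queue holding up to 10 pointers to large data blocks */
K_MSGQ_DEFINE(ptr_msgq, 10, sizeof(char *));
void pointer_producer(void)
{
char *block_ptr;
while (1) {
/* allocate a large block and fill it with data */
k_mem_map_alloc(&my_map, &block_ptr, K_FOREVER);
... /* write data into the block */
/* send only the pointer through the queue */
k_msgq_put(&ptr_msgq, &block_ptr, K_FOREVER);
}
}
void pointer_consumer(void)
{
char *block_ptr;
while (1) {
k_msgq_get(&ptr_msgq, &block_ptr, K_FOREVER);
... /* process the data in the block */
/* release the block once the data has been consumed */
k_mem_map_free(&my_map, &block_ptr);
}
}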
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following message queue APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_msgq_init()`
* :cpp:func:`k_msgq_put()`
* :cpp:func:`k_msgq_get()`
* :cpp:func:`k_msgq_purge()`
* :cpp:func:`k_msgq_num_used_get()`

@@ -0,0 +1,185 @@
.. _pipes_v2:
Pipes
#####
A :dfn:`pipe` is a kernel object that allows a thread to send a byte stream
to another thread. Pipes can be used to transfer chunks of data in whole
or in part, and either synchronously or asynchronously.
.. contents::
:local:
:depth: 2
Concepts
********
The pipe can be configured with a ring buffer which holds data that has been
sent but not yet received; alternatively, the pipe may have no ring buffer.
Any number of pipes can be defined. Each pipe is referenced by its memory
address.
A pipe has the following key property:
* A **size** that indicates the size of the pipe's ring buffer. Note that a
size of zero defines a pipe with no ring buffer.
A pipe must be initialized before it can be used. The pipe is initially empty.
Data can be synchronously **sent** either in whole or in part to a pipe by a
thread. If the specified minimum number of bytes can not be immediately
satisfied, then the operation will either fail immediately or attempt to send
as many bytes as possible and then pend in the hope that the send can be
completed later. Accepted data is either copied to the pipe's ring buffer
or directly to the waiting reader(s).
Data can be asynchronously **sent** in whole using a memory block to a pipe by
a thread. Once the pipe has accepted all the bytes in the memory block, it will
free the memory block and may give a semaphore if one was specified.
Data can be synchronously **received** from a pipe by a thread. If the specified
minimum number of bytes cannot be immediately satisfied, then the operation
will either fail immediately or attempt to receive as many bytes as possible
and then pend in the hope that the receive can be completed later. Accepted
data is either copied from the pipe's ring buffer or directly from the
waiting sender(s).
.. note::
The kernel does *not* allow an ISR to send or receive data to/from a pipe,
even if the ISR does not attempt to wait for space/data.
Implementation
**************
Defining a Pipe
===============
A pipe is defined using a variable of type :c:type:`struct k_pipe` and an
optional character buffer of type :c:type:`unsigned char`. It must then be
initialized by calling :c:func:`k_pipe_init()`.
The following code defines and initializes an empty pipe that has a ring
buffer capable of holding 100 bytes.
.. code-block:: c
unsigned char my_ring_buffer[100];
struct k_pipe my_pipe;
k_pipe_init(&my_pipe, my_ring_buffer, sizeof(my_ring_buffer));
Alternatively, a pipe can be defined and initialized at compile time by
calling :c:macro:`K_PIPE_DEFINE()`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the pipe and its ring buffer.
.. code-block:: c
K_PIPE_DEFINE(my_pipe, 100);
Writing to a Pipe
=================
Data is added to a pipe by calling :c:func:`k_pipe_put()`.
The following code builds on the example above, and uses the pipe to pass
data from a producing thread to one or more consuming threads. If the pipe's
ring buffer fills up because the consumers can't keep up, the producing thread
waits for a specified amount of time.
.. code-block:: c
struct message_header {
...
};
void producer_thread(void)
{
unsigned char *data;
size_t total_size;
size_t bytes_written;
int rc;
...
while (1) {
/* Craft message to send in the pipe */
data = ...;
total_size = ...;
/* send data to the consumers */
rc = k_pipe_put(&my_pipe, data, total_size, &bytes_written,
sizeof(struct message_header), K_NO_WAIT);
if (rc < 0) {
/* Incomplete message header sent */
...
} else if (bytes_written < total_size) {
/* Some of the data was sent */
...
} else {
/* All data sent */
...
}
}
}
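A data chunk can also be written asynchronously by handing the pipe a memory
block, as described under Concepts. The following is only a sketch: this
primer does not document the exact signature of :c:func:`k_pipe_block_put()`,
and the memory pool ``TXPOOL``, the block-descriptor fields, and the
semaphore usage shown here are assumptions:
.. code-block:: c
/* semaphore is given once the pipe has accepted and freed the block */
K_SEM_DEFINE(block_sem, 0, 1);
void async_producer_thread(void)
{
struct k_mem_block block;
unsigned char *source_data = ...;
/* allocate a block from an assumed memory pool and fill it */
k_mem_pool_alloc(&block, TXPOOL, 1000, K_FOREVER);
memcpy(block.pointer_to_data, source_data, 1000);
/* send the entire block through the pipe without waiting */
k_pipe_block_put(&my_pipe, &block, 1000, &block_sem);
}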
Reading from a Pipe
===================
Data is read from the pipe by calling :c:func:`k_pipe_get()`.
The following code builds on the example above, and uses the pipe to
process data items generated by one or more producing threads.
.. code-block:: c
void consumer_thread(void)
{
unsigned char buffer[120];
size_t bytes_read;
int rc;
struct message_header *header = (struct message_header *)buffer;
while (1) {
rc = k_pipe_get(&my_pipe, buffer, sizeof(buffer), &bytes_read,
sizeof(struct message_header), 100);
if ((rc < 0) || (bytes_read < sizeof(struct message_header))) {
/* Incomplete message header received */
...
} else if (header->num_data_bytes + sizeof(struct message_header) > bytes_read) {
/* Only some data was received */
...
} else {
/* All data was received */
...
}
}
}
Suggested uses
**************
Use a pipe to send streams of data between threads.
.. note::
A pipe can be used to transfer long streams of data if desired. However
it is often preferable to send pointers to large data items to avoid
copying the data. The kernel's memory map and memory pool object types
can be helpful for data transfers of this sort.
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_NUM_PIPE_ASYNC_MSGS`
APIs
****
The following pipe APIs are provided by :file:`kernel.h`:
* :c:func:`k_pipe_init()`
* :c:func:`k_pipe_put()`
* :c:func:`k_pipe_get()`
* :c:func:`k_pipe_block_put()`

@@ -0,0 +1,143 @@
.. _ring_buffers_v2:
Ring Buffers [TBD]
##################
Definition
**********
The ring buffer is defined in :file:`include/misc/ring_buffer.h` and
:file:`kernel/nanokernel/ring_buffer.c`. This is an array-based
circular buffer, stored in first-in-first-out order. The APIs allow
for enqueueing and retrieval of chunks of data up to 1024 bytes in size,
along with two metadata values (type ID and an app-specific integer).
Unlike nanokernel FIFOs, storage of enqueued items and their metadata
is managed in a fixed buffer and there are no preconditions on the data
enqueued (other than the size limit). Since the size annotation is only
an 8-bit value, sizes are expressed in terms of 32-bit chunks.
Internally, the ring buffer always maintains an empty 32-bit block in the
buffer to distinguish between empty and full buffers. Any given entry
in the buffer will use a 32-bit block for metadata plus any data attached.
If the size of the buffer array is a power of two, the ring buffer will
use more efficient masking instead of expensive modulo operations to
maintain itself.
Concurrency
***********
Concurrency control of ring buffers is not implemented at this level.
Depending on usage (particularly with respect to number of concurrent
readers/writers) applications may need to protect the ring buffer with
mutexes and/or use semaphores to notify consumers that there is data to
read.
For the trivial case of one producer and one consumer, concurrency
shouldn't be needed.
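For example, a single producer and single consumer might pair the ring buffer
with a semaphore that counts items available to read. This is an illustrative
sketch only; the semaphore, its limit, and the payload contents are
assumptions, not part of the ring buffer API:
.. code-block:: c
/* counts items put but not yet retrieved (limit is an assumption) */
K_SEM_DEFINE(items_available, 0, 100);
void producer(void)
{
uint32_t data[2] = { 0x12345678, 0x9abcdef0 };
if (sys_ring_buf_put(&my_ring_buf, TYPE_FOO, 0, data, SIZE32_OF(data)) == 0) {
/* wake the consumer: one more item is available */
k_sem_give(&items_available);
}
}
void consumer(void)
{
uint16_t type;
uint8_t value;
uint32_t data[6];
uint8_t size = SIZE32_OF(data);
/* block until the producer signals that an item exists */
k_sem_take(&items_available, K_FOREVER);
sys_ring_buf_get(&my_ring_buf, &type, &value, data, &size);
}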
Example: Initializing a Ring Buffer
===================================
There are three ways to initialize a ring buffer. The first two use macros
that define a ring buffer (and an associated private buffer) at file scope.
You can declare a fast ring buffer that uses mask operations by declaring
a power-of-two sized buffer:
.. code-block:: c
/* Buffer with 2^8 or 256 elements */
SYS_RING_BUF_DECLARE_POW2(my_ring_buf, 8);
Arbitrary-sized buffers may also be declared with a different macro, but
these will always be slower due to use of modulo operations:
.. code-block:: c
#define MY_RING_BUF_SIZE 93
SYS_RING_BUF_DECLARE_SIZE(my_ring_buf, MY_RING_BUF_SIZE);
Alternatively, a ring buffer may be initialized manually. Whether the buffer
will use modulo or mask operations will be detected automatically:
.. code-block:: c
#define MY_RING_BUF_SIZE 64
struct my_struct {
struct ring_buffer rb;
uint32_t buffer[MY_RING_BUF_SIZE];
...
};
struct my_struct ms;
void init_my_struct(void)
{
sys_ring_buf_init(&ms.rb, SIZE32_OF(ms.buffer), ms.buffer); /* size in 32-bit chunks */
...
}
Example: Enqueuing data
=======================
.. code-block:: c
int ret;
ret = sys_ring_buf_put(&ring_buf, TYPE_FOO, 0, (uint32_t *)&my_foo, SIZE32_OF(my_foo));
if (ret == -EMSGSIZE) {
... not enough room for the message ..
}
If the type or value fields are sufficient, the data pointer and size may be 0.
.. code-block:: c
int ret;
ret = sys_ring_buf_put(&ring_buf, TYPE_BAR, 17, NULL, 0);
if (ret == -EMSGSIZE) {
... not enough room for the message ..
}
Example: Retrieving data
========================
.. code-block:: c
int ret;
uint16_t type;
uint8_t value;
uint32_t data[6];
uint8_t size = SIZE32_OF(data);
ret = sys_ring_buf_get(&ring_buf, &type, &value, data, &size);
if (ret == -EMSGSIZE) {
printk("Buffer is too small, need %d uint32_t\n", size);
} else if (ret == -EAGAIN) {
printk("Ring buffer is empty\n");
} else {
printk("got item of type %u value %u of size %u dwords\n",
type, value, size);
...
}
APIs
****
The following APIs for ring buffers are provided by :file:`ring_buffer.h`:
:cpp:func:`sys_ring_buf_init()`
Initializes a ring buffer.
:c:func:`SYS_RING_BUF_DECLARE_POW2()`, :c:func:`SYS_RING_BUF_DECLARE_SIZE()`
Declare and init a file-scope ring buffer.
:cpp:func:`sys_ring_buf_space_get()`
Returns the amount of free buffer storage space in 32-bit dwords.
:cpp:func:`sys_ring_buf_is_empty()`
Indicates whether a buffer is empty.
:cpp:func:`sys_ring_buf_put()`
Enqueues an item.
:cpp:func:`sys_ring_buf_get()`
De-queues an item.

@@ -0,0 +1,141 @@
.. _stacks_v2:
Stacks
######
A :dfn:`stack` is a kernel object that implements a traditional
last in, first out (LIFO) queue, allowing threads and ISRs
to add and remove a limited number of 32-bit data values.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of stacks can be defined. Each stack is referenced
by its memory address.
A stack has the following key properties:
* A **queue** of 32-bit data values that have been added but not yet removed.
The queue is implemented using an array of 32-bit integers,
and must be aligned on a 4-byte boundary.
* A **maximum quantity** of data values that can be queued in the array.
A stack must be initialized before it can be used. This sets its queue to empty.
A data value can be **added** to a stack by a thread or an ISR.
The value is given directly to a waiting thread, if one exists;
otherwise the value is added to the stack's queue.
The kernel does *not* detect attempts to add a data value to a stack
that has already reached its maximum quantity of queued values.
.. note::
Adding a data value to a stack that is already full will result in
array overflow, and lead to unpredictable behavior.
A data value may be **removed** from a stack by a thread.
If the stack's queue is empty, a thread may choose to wait for a data value
to be given.
Any number of threads may wait on an empty stack simultaneously.
When a data item is added, it is given to the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to remove an item from a stack; however,
the ISR must not attempt to wait if the stack is empty.
Implementation
**************
Defining a Stack
================
A stack is defined using a variable of type :c:type:`struct k_stack`.
It must then be initialized by calling :cpp:func:`k_stack_init()`.
The following code defines and initializes an empty stack capable of holding
up to ten 32-bit data values.
.. code-block:: c
#define MAX_ITEMS 10
uint32_t my_stack_array[MAX_ITEMS];
struct k_stack my_stack;
k_stack_init(&my_stack, my_stack_array, MAX_ITEMS);
Alternatively, a stack can be defined and initialized at compile time
by calling :c:macro:`K_STACK_DEFINE()`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the stack and its array of data values.
.. code-block:: c
K_STACK_DEFINE(my_stack, MAX_ITEMS);
Pushing to a Stack
==================
A data item is added to a stack by calling :cpp:func:`k_stack_push()`.
The following code builds on the example above, and shows how a thread
can create a pool of data structures by saving their memory addresses
in a stack.
.. code-block:: c
/* define array of data structures */
struct my_buffer_type {
int field1;
...
};
struct my_buffer_type my_buffers[MAX_ITEMS];
/* save address of each data structure in a stack */
for (int i = 0; i < MAX_ITEMS; i++) {
k_stack_push(&my_stack, (uint32_t)&my_buffers[i]);
}
Popping from a Stack
====================
A data item is taken from a stack by calling :cpp:func:`k_stack_pop()`.
The following code builds on the example above, and shows how a thread
can dynamically allocate an unused data structure.
When the data structure is no longer required, the thread must push
its address back on the stack to allow the data structure to be reused.
.. code-block:: c
struct my_buffer_type *new_buffer;
k_stack_pop(&my_stack, (uint32_t *)&new_buffer, K_FOREVER);
new_buffer->field1 = ...
Suggested Uses
**************
Use a stack to store and retrieve 32-bit data values in a "last in,
first out" manner, when the maximum number of stored items is known.
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following stack APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_stack_init()`
* :cpp:func:`k_stack_push()`
* :cpp:func:`k_stack_pop()`

@@ -0,0 +1,135 @@
.. _interrupts_v2:
Interrupts [TBD]
################
Concepts
********
:abbr:`ISRs (Interrupt Service Routines)` are functions
that execute in response to a hardware or software interrupt.
They are used to preempt the execution of the current thread,
allowing the response to occur with very low overhead.
Thread execution resumes only once all ISR work has been completed.
Any number of ISRs can be utilized by an application, subject to
any hardware constraints imposed by the underlying hardware.
Each ISR has the following key properties:
* The **:abbr:`IRQ (Interrupt ReQuest)` signal** that triggers the ISR.
* The **priority level** associated with the IRQ.
* The **address of the function** that is invoked to handle the interrupt.
* The **argument value** that is passed to that function.
An :abbr:`IDT (Interrupt Descriptor Table)` is used to associate
a given interrupt source with a given ISR.
Only a single ISR can be associated with a specific IRQ at any given time.
Multiple ISRs can utilize the same function to process interrupts,
allowing a single function to service a device that generates
multiple types of interrupts or to service multiple devices
(usually of the same type). The argument value passed to an ISR's function
can be used to allow the function to determine which interrupt has been
signaled.
The kernel provides a default ISR for all unused IDT entries. This ISR
generates a fatal system error if an unexpected interrupt is signaled.
The kernel supports interrupt nesting. This allows an ISR to be preempted
in mid-execution if a higher priority interrupt is signaled. The lower
priority ISR resumes execution once the higher priority ISR has completed
its processing.
The kernel allows a thread to temporarily lock out the execution
of ISRs, either individually or collectively, should the need arise.
The collective lock can be applied repeatedly; that is, the lock can
be applied when it is already in effect. The collective lock must be
unlocked an equal number of times before interrupts are again processed
by the kernel.
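For example, a thread can use the collective lock to guard a short critical
section. The key returned by :cpp:func:`irq_lock()` records the previous lock
state and must be passed back to :cpp:func:`irq_unlock()`, which is what
allows the lock to nest:
.. code-block:: c
void update_shared_data(void)
{
unsigned int key;
/* prevent all ISRs from running while the shared data is touched */
key = irq_lock();
... /* access data shared with an ISR */
/* restore the previous interrupt locking state */
irq_unlock(key);
}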
Examples
********
Installing an ISR
=================
Note that :c:macro:`IRQ_CONNECT()` is not a C function; it performs some
inline assembly magic behind the scenes. All of its arguments must be known
at build time. Drivers that have multiple instances may need to define
per-instance config functions to configure the interrupt for each instance.
The following code illustrates how to install an ISR:
.. code-block:: c
#define MY_DEV_IRQ 24 /* device uses IRQ 24 */
#define MY_DEV_PRIO 2 /* device uses interrupt priority 2 */
/* argument passed to my_isr(), in this case a pointer to the device */
#define MY_ISR_ARG DEVICE_GET(my_device)
#define MY_IRQ_FLAGS 0 /* IRQ flags. Unused on non-x86 */
void my_isr(void *arg)
{
... /* ISR code */
}
void my_isr_installer(void)
{
...
IRQ_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, my_isr, MY_ISR_ARG, MY_IRQ_FLAGS);
irq_enable(MY_DEV_IRQ); /* enable IRQ */
...
}
Offloading ISR Work
*******************
Interrupt service routines should generally be kept short
to ensure predictable system operation.
In situations where time consuming processing is required
an ISR can quickly restore the kernel's ability to respond
to other interrupts by offloading some or all of the interrupt-related
processing work to a thread.
The kernel provides a variety of mechanisms to allow an ISR to offload work
to a thread.
1. An ISR can signal a helper thread to do interrupt-related work
using a kernel object, such as a fifo, lifo, or semaphore.
2. An ISR can signal the kernel's system workqueue to do interrupt-related
work by sending an event that has an associated event handler.
When an ISR offloads work to a thread there is typically a single
context switch to that thread when the ISR completes.
Thus, interrupt-related processing usually continues almost immediately.
Additional intermediate context switches may be required
to execute a currently executing cooperative thread
or any higher-priority threads that are ready to run.
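As a sketch of the first technique (the semaphore and thread shown here are
illustrative, not a fixed pattern), an ISR can give a semaphore that a helper
thread pends on:
.. code-block:: c
K_SEM_DEFINE(my_isr_sem, 0, 1);
void my_isr(void *arg)
{
/* do only the time-critical work here */
...
/* signal the helper thread to do the remaining work */
k_sem_give(&my_isr_sem);
}
void helper_thread(void)
{
while (1) {
k_sem_take(&my_isr_sem, K_FOREVER);
/* perform the deferred interrupt-related processing */
...
}
}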
Suggested Uses
**************
Use an ISR to perform interrupt processing that requires a very rapid
response, and which can be done quickly and without blocking.
.. note::
Interrupt processing that is time consuming, or which involves blocking,
should be handed off to a thread. See `Offloading ISR Work`_ for
a description of various techniques that can be used in an application.
Configuration Options
*********************
[TBD]
APIs
****
These are the interrupt-related Application Program Interfaces.
* :cpp:func:`irq_enable()`
* :cpp:func:`irq_disable()`
* :cpp:func:`irq_lock()`
* :cpp:func:`irq_unlock()`
* :cpp:func:`k_am_in_isr()`

doc/kernel_v2/kernel.rst

@@ -0,0 +1,21 @@
.. _kernel_v2:
Zephyr Kernel Primer (version 2)
################################
This document provides a general introduction to the Zephyr kernel's
key capabilities and services. Additional details can be found by consulting
the :ref:`api` and :ref:`apps_kernel_conf` documentation, and by examining
the code in the Zephyr source tree.
.. toctree::
:maxdepth: 2
overview/overview.rst
threads/threads.rst
interrupts/interrupts.rst
timing/timing.rst
memory/memory.rst
synchronization/synchronization.rst
data_passing/data_passing.rst
other/other.rst

@@ -0,0 +1,136 @@
.. _memory_maps_v2:
Memory Maps
###########
A :dfn:`memory map` is a kernel object that allows fixed-size memory blocks
to be dynamically allocated from a designated memory region.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of memory maps can be defined. Each memory map is referenced
by its memory address.
A memory map has the following key properties:
* A **buffer** that provides the memory for the memory map's blocks.
* The **block size** of each block, measured in bytes.
* The **number of blocks** available for allocation.
The number of blocks and block size values must be greater than zero.
The block size must be at least 4 bytes, to allow the kernel
to maintain a linked list of unallocated blocks.
A thread that needs to use a memory block simply allocates it from a memory
map. When the thread finishes with a memory block,
it must release the block back to the memory map so the block can be reused.
If all the blocks are currently in use, a thread can optionally wait
for one to become available.
Any number of threads may wait on an empty memory map simultaneously;
when a memory block becomes available, it is given to the highest-priority
thread that has waited the longest.
The kernel manages memory blocks in an efficient and deterministic
manner that eliminates the risk of memory fragmentation problems which can
arise when using variable-size blocks.
Unlike a heap, more than one memory map can be defined, if needed. This
allows one memory map to provide smaller blocks and another to provide
larger-sized blocks. Alternatively, a memory pool object may be used.
Implementation
**************
Defining a Memory Map
=====================
A memory map is defined using a variable of type :c:type:`struct k_mem_map`.
It must then be initialized by calling :cpp:func:`k_mem_map_init()`.
The following code defines and initializes a memory map that has 6 blocks
of 400 bytes each.
.. code-block:: c
struct k_mem_map my_map;
char my_map_buffer[6 * 400];
k_mem_map_init(&my_map, 6, 400, my_map_buffer);
Alternatively, a memory map can be defined and initialized at compile time
by calling :c:macro:`K_MEM_MAP_DEFINE()`.
The following code has the same effect as the code segment above. Observe
that the macro defines both the memory map and its buffer.
.. code-block:: c
K_MEM_MAP_DEFINE(my_map, 6, 400);
Allocating a Memory Block
=========================
A memory block is allocated by calling :cpp:func:`k_mem_map_alloc()`.
The following code builds on the example above, and waits up to 100 milliseconds
for a memory block to become available,
and gives a warning if it is not obtained in that time.
.. code-block:: c
char *block_ptr;
if (k_mem_map_alloc(&my_map, &block_ptr, 100) == 0) {
    /* utilize memory block */
} else {
    printf("Memory allocation time-out\n");
}
Releasing a Memory Block
========================
A memory block is released by calling :cpp:func:`k_mem_map_free()`.
The following code builds on the example above, and allocates a memory block,
then releases it once it is no longer needed.
.. code-block:: c
char *block_ptr;
k_mem_map_alloc(&my_map, &block_ptr, K_FOREVER);
... /* use memory block pointed at by block_ptr */
k_mem_map_free(&my_map, &block_ptr);
Suggested Uses
**************
Use a memory map to allocate and free memory in fixed-size blocks.
Use memory map blocks when sending large amounts of data from one thread
to another.
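A minimal sketch of the latter technique follows; it assumes ``my_map`` from
the examples above, plus a fifo defined as described in the Data Passing
section, and it relies on the first word of each block being available for
the fifo's use.

.. code-block:: c

struct k_fifo my_fifo; /* assumed to be defined and initialized elsewhere */

void producer_thread(int unused1, int unused2, int unused3)
{
    char *block_ptr;

    while (1) {
        /* allocate a block; its first word is reserved for the fifo */
        k_mem_map_alloc(&my_map, &block_ptr, K_FOREVER);
        ... /* fill remainder of block with data */
        /* hand the block itself to the consumer; no data is copied */
        k_fifo_put(&my_fifo, block_ptr);
    }
}

void consumer_thread(int unused1, int unused2, int unused3)
{
    char *block_ptr;

    while (1) {
        block_ptr = k_fifo_get(&my_fifo, K_FOREVER);
        ... /* process data in block */
        k_mem_map_free(&my_map, &block_ptr);
    }
}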
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following memory map APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_mem_map_init()`
* :cpp:func:`k_mem_map_alloc()`
* :cpp:func:`k_mem_map_free()`
* :cpp:func:`k_mem_map_num_used_get()`
View file

@@ -0,0 +1,13 @@
.. _memory_v2:
Memory Allocation
#################
This section describes kernel services that allow threads to dynamically
allocate memory.
.. toctree::
:maxdepth: 2
maps.rst
pools.rst
View file

@@ -0,0 +1,181 @@
.. _memory_pools_v2:
Memory Pools [TBD]
##################
A :dfn:`memory pool` is a kernel object that allows variable-size memory blocks
to be dynamically allocated from a designated memory region.
.. contents::
:local:
:depth: 2
Concepts
********
Unlike :ref:`memory map <memory_maps_v2>` objects, which support
memory blocks of only a *single* size, a memory pool can support memory blocks
of *various* sizes. The memory pool does this by subdividing blocks into smaller
chunks, where possible, to more closely match the actual needs of a requesting
thread.
Any number of memory pools can be defined. Each memory pool is referenced
by its memory address.
A memory pool has the following key properties:
* A **minimum** and **maximum** block size, measured in bytes.
* The **number of maximum-size memory blocks** initially available.
The number of blocks and block size values must be greater than zero.
The block size must be defined as a multiple of the word size.
A thread that needs to use a memory block simply allocates it from a memory
pool. Following a successful allocation, the :c:data:`pointer_to_data` field
of the block descriptor supplied by the thread indicates the starting address
of the memory block. When the thread is finished with a memory block,
it must release the block back to the memory pool so the block can be reused.
If a block of the desired size is unavailable, a thread can optionally wait
for one to become available.
Any number of threads may wait on a memory pool simultaneously;
when a suitable memory block becomes available, it is given to
the highest-priority thread that has waited the longest.
When a request for memory is sufficiently smaller than an available
memory pool block, the memory pool will automatically split the block into
4 smaller blocks. The resulting smaller blocks can also be split repeatedly,
until a block just larger than the needed size is available, or the minimum
block size, as specified in the MDEF, is reached.
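For example, assuming a pool whose maximum block size is 8192 bytes and whose
minimum block size is 32 bytes, a request for 256 bytes would split an
8192-byte block into four 2048-byte blocks, split one of those into four
512-byte blocks, and then satisfy the request with a 512-byte block; splitting
any further would yield 128-byte blocks, which are too small.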
If the memory pool cannot find an available block that is at least
the requested size, it will attempt to create one by merging adjacent
free blocks. If a suitable block can't be created, the request fails.
Although a memory pool uses efficient algorithms to manage its blocks,
the splitting of available blocks and merging of free blocks takes time
and increases the overhead of block allocation. The larger the allowable
number of splits, the larger the overhead. However, the minimum and maximum
block-size parameters specified for a pool can be used to control the amount
of splitting, and thus the amount of overhead.
Unlike a heap, more than one memory pool can be defined, if needed. For
example, different applications can utilize different memory pools; this
can help prevent one application from hijacking resources to allocate all
of the available blocks.
Implementation
**************
Defining a Memory Pool
======================
The following parameters must be defined:
*name*
This specifies a unique name for the memory pool.
*min_block_size*
This specifies the minimum memory block size in bytes.
It should be a multiple of the processor's word size.
*max_block_size*
This specifies the maximum memory block size in bytes.
It should be a power-of-4 multiple of *min_block_size*; that is,
max_block_size = min_block_size * 4^n, where n >= 0. For example, in the
pool defined below, 8192 = 32 * 4^4.
*num_max*
This specifies the number of maximum size memory blocks
available at startup.
Public Memory Pool
------------------
Define the memory pool in the application's MDEF with the following
syntax:
.. code-block:: console
POOL name min_block_size max_block_size num_max
For example, the file :file:`projName.mdef` defines two memory pools
as follows:
.. code-block:: console
% POOL NAME MIN MAX NMAX
% =======================================
POOL MY_POOL 32 8192 1
POOL SECOND_POOL_ID 64 1024 5
A public memory pool can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
.. note::
Private memory pools are not supported by the Zephyr kernel.
Allocating a Memory Block
=========================
A memory block is allocated by calling :cpp:func:`k_mem_pool_alloc()`.
The following code waits up to 100 milliseconds for a 256 byte memory block
to become available, then fills it with zeroes. A warning is issued
if a suitable block is not obtained.
.. code-block:: c
struct k_mem_block block;
if (k_mem_pool_alloc(&my_pool, 256, &block, 100) == 0) {
    memset(block.pointer_to_data, 0, 256);
    ...
} else {
    printf("Memory allocation time-out\n");
}
Freeing a Memory Block
======================
A memory block is released by calling :cpp:func:`k_mem_pool_free()`.
The following code allocates a memory block, then releases it once
it is no longer needed.
.. code-block:: c
struct k_mem_block block;
k_mem_pool_alloc(&my_pool, size, &block, K_FOREVER);
/* use memory block */
k_mem_pool_free(&block);
Manually Defragmenting a Memory Pool
====================================
This code instructs the memory pool to concatenate any unused memory blocks
that can be merged. Doing a full defragmentation of the entire memory pool
before allocating a number of memory blocks may be more efficient than doing
an implicit partial defragmentation of the memory pool each time a memory
block allocation occurs.
.. code-block:: c
k_mem_pool_defragment(&my_pool);
Suggested Uses
**************
Use a memory pool to allocate memory in variable-size blocks.
Use memory pool blocks when sending large amounts of data from one thread
to another, to avoid unnecessary copying of the data.
APIs
****
The following memory pool APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_mem_pool_alloc()`
* :cpp:func:`k_mem_pool_free()`
* :cpp:func:`k_mem_pool_defragment()`
View file

@@ -0,0 +1,103 @@
.. _atomic_v2:
Atomic Services
###############
An :dfn:`atomic variable` is a 32-bit variable that can be read and modified
by threads and ISRs in an uninterruptible manner.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of atomic variables can be defined.
Using the kernel's atomic APIs to manipulate an atomic variable
guarantees that the desired operation occurs correctly,
even if higher priority contexts also manipulate the same variable.
The kernel also supports the atomic manipulation of a single bit
in an array of atomic variables.
Implementation
**************
Defining an Atomic Variable
===========================
An atomic variable is defined using a variable of type :c:type:`atomic_t`.
By default an atomic variable is initialized to zero. However, it can be given
a different value using :c:macro:`ATOMIC_INIT()`:
.. code-block:: c
atomic_t flags = ATOMIC_INIT(0xFF);
Manipulating an Atomic Variable
===============================
An atomic variable is manipulated using the APIs listed at the end of
this section.
The following code shows how an atomic variable can be used to keep track
of the number of times a function has been invoked. Since the count is
incremented atomically, there is no risk that it will become corrupted
in mid-increment if a thread calling the function is interrupted
by a higher priority context that also calls the routine.
.. code-block:: c
atomic_t call_count;
int call_counting_routine(void)
{
/* increment invocation counter */
atomic_inc(&call_count);
/* do rest of routine's processing */
...
}
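The following sketch shows one way to manipulate a set of flag bits spread
across an array of atomic variables; it assumes the :c:macro:`ATOMIC_DEFINE()`
helper macro is available in :file:`atomic.h` for defining such an array.

.. code-block:: c

#include <atomic.h>

/* array of 64 application-defined flag bits */
ATOMIC_DEFINE(my_flags, 64);

void set_flag(int flag_id)
{
    /* atomically set a single bit in the array */
    atomic_set_bit(my_flags, flag_id);
}

int test_flag(int flag_id)
{
    /* atomically read a single bit in the array */
    return atomic_test_bit(my_flags, flag_id);
}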
Suggested Uses
**************
Use an atomic variable to implement critical section processing that only
requires the manipulation of a single 32-bit value.
Use multiple atomic variables to implement critical section processing
on a set of flag bits in a bit array longer than 32 bits.
.. note::
Using atomic variables is typically far more efficient than using
other techniques to implement critical sections such as using a mutex
or locking interrupts.
APIs
****
The following atomic operation APIs are provided by :file:`atomic.h`:
* :cpp:func:`atomic_get()`
* :cpp:func:`atomic_set()`
* :cpp:func:`atomic_clear()`
* :cpp:func:`atomic_add()`
* :cpp:func:`atomic_sub()`
* :cpp:func:`atomic_inc()`
* :cpp:func:`atomic_dec()`
* :cpp:func:`atomic_and()`
* :cpp:func:`atomic_or()`
* :cpp:func:`atomic_xor()`
* :cpp:func:`atomic_nand()`
* :cpp:func:`atomic_cas()`
* :cpp:func:`atomic_set_bit()`
* :cpp:func:`atomic_clear_bit()`
* :cpp:func:`atomic_test_bit()`
* :cpp:func:`atomic_test_and_set_bit()`
* :cpp:func:`atomic_test_and_clear_bit()`
View file

@@ -0,0 +1,12 @@
.. _c_library_v2:
Standard C Library
##################
The kernel currently provides only the minimal subset of the
standard C library required to meet the kernel's own needs,
primarily in the areas of string manipulation and display.
Applications that require a more extensive C library can either submit
contributions that enhance the existing library or substitute
a replacement library.
View file

@@ -0,0 +1,35 @@
.. _cxx_support_v2:
C++ Support for Applications
############################
The kernel supports applications written in both C and C++. However, to
use C++ in an application you must configure the kernel to include C++
support and the build system must select the correct compiler.
The build system selects the C++ compiler based on the suffix of the files.
Files identified with either a **cxx** or a **cpp** suffix are compiled using
the C++ compiler. For example, :file:`myCplusplusApp.cpp` is compiled using C++.
The kernel currently provides only a subset of C++ functionality. The
following features are *not* supported:
* Dynamic object management with the **new** and **delete** operators
* :abbr:`RTTI (run-time type information)`
* Exceptions
* Static global object destruction
While not an exhaustive list, support for the following functionality is
included:
* Inheritance
* Virtual functions
* Virtual tables
* Static global object constructors
Static global object constructors are executed after the drivers are
initialized, but before the application's :c:func:`main()` function runs.
Therefore, use of C++ is restricted to application code.
.. note::
Do not use C++ for kernel, driver, or system initialization code.
View file

@@ -0,0 +1,364 @@
.. _event_logger_v2:
Kernel Event Logger [TBD]
#########################
Definition
**********
The kernel event logger is a standardized mechanism to record events within the
kernel, while providing a single interface for the user to collect the data. This
mechanism is currently used to log the following events:
* Sleep events (entering and exiting low power conditions).
* Context switch events.
* Interrupt events.
Kernel Event Logger Configuration
*********************************
Kconfig provides the ability to enable and disable the collection of events
and to configure the size of the buffer used by the event logger.
These options can be found in :file:`kernel/Kconfig`.
General kernel event logger configuration:
* :option:`CONFIG_KERNEL_EVENT_LOGGER_BUFFER_SIZE`
Default size: 128 words, 32-bit length.
Profiling points configuration:
* :option:`CONFIG_KERNEL_EVENT_LOGGER_DYNAMIC`
Allows the set of recorded events to be modified at runtime. When this option
is enabled, no events are recorded at boot. The option also adds functions for
enabling and disabling the recording of kernel event logger and task monitor
events.
* :option:`CONFIG_KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP`
Allows the timer function used to populate the kernel event logger timestamp
to be customized. The function must be registered at runtime by calling
:cpp:func:`sys_k_event_logger_set_timer()` and providing the callback.
Adding a Kernel Event Logging Point
***********************************
Custom trace points can be added with the following API:
* :c:func:`sys_k_event_logger_put()`
Adds the profile of a new event with custom data.
* :cpp:func:`sys_k_event_logger_put_timed()`
Adds timestamped profile of a new event.
.. important::
The data must be in 32-bit sized blocks.
Retrieving Kernel Event Data
****************************
Applications must implement a collector thread to access the recorded event
messages. Developers can use the provided API to
retrieve the data, or may write their own routines using the ring buffer provided by the
event logger.
The API functions provided are:
* :c:func:`sys_k_event_logger_get()`
* :c:func:`sys_k_event_logger_get_wait()`
* :c:func:`sys_k_event_logger_get_wait_timeout()`
The above functions specify various ways to retrieve an event message and to copy it to
the provided buffer. When the buffer size is smaller than the message, the function will
return an error. All three functions retrieve messages via a FIFO method. The :literal:`wait`
and :literal:`wait_timeout` functions allow the caller to pend until a new message is
logged, or until the timeout expires.
Enabling/disabling event recording
**********************************
If :option:`CONFIG_KERNEL_EVENT_LOGGER_DYNAMIC` is enabled, the following
functions can be used to dynamically enable or disable event recording at runtime:
* :cpp:func:`sys_k_event_logger_set_mask()`
* :cpp:func:`sys_k_event_logger_get_mask()`
* :cpp:func:`sys_k_event_logger_set_monitor_mask()`
* :cpp:func:`sys_k_event_logger_get_monitor_mask()`
Each mask bit corresponds to an event ID (the mask starts at bit 1, not bit 0).
More details are provided in the function descriptions.
Timestamp
*********
The timestamp used by the kernel event logger is the 32 least-significant bits
of the platform's hardware timer (for example, the Lakemont APIC timer on
Quark SE). The period of this timer is very small, so timestamp wraparound
happens quite often (e.g. every 134 s on Quark SE);
see :option:`CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC`
This wraparound must be considered when analyzing kernel event logger data and care must be
taken when tickless idle is enabled and sleep duration can exceed maximum HW timer value.
The timestamp used by the kernel event logger can be customized by enabling the following option:
:option:`CONFIG_KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP`
When this option is enabled, a callback function returning a 32-bit timestamp must
be provided to the kernel event logger by calling the following function at runtime:
:cpp:func:`sys_k_event_logger_set_timer()`
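A minimal sketch of such a registration follows; the callback name is
illustrative, and the use of :cpp:func:`k_cycle_get_32()` as the time source
is an assumption, since any monotonically increasing 32-bit counter can serve.

.. code-block:: c

uint32_t my_timestamp_get(void)
{
    /* assumption: hardware cycle counter serves as the 32-bit timestamp */
    return k_cycle_get_32();
}

void timestamp_setup(void)
{
    /* register the custom timestamp function */
    sys_k_event_logger_set_timer(my_timestamp_get);
}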
Message Formats
***************
Interrupt-driven Event Messaging
--------------------------------
The data of the interrupt-driven event message comes in two 32-bit blocks:
* The first block contains the timestamp occurrence of the interrupt event.
* The second block contains the Id of the interrupt.
Example:
.. code-block:: c
uint32_t data[2];
data[0] = timestamp_event;
data[1] = interrupt_id;
Context-switch Event Messaging
------------------------------
The data of the context-switch event message comes in two 32-bit blocks:
* The first block contains the timestamp occurrence of the context-switch event.
* The second block contains the thread id of the context involved.
Example:
.. code-block:: c
uint32_t data[2];
data[0] = timestamp_event;
data[1] = context_id;
Sleep Event Messaging
---------------------
The data of the sleep event message comes in three 32-bit blocks:
* The first block contains the timestamp when the CPU went to sleep mode.
* The second block contains the timestamp when the CPU woke up.
* The third block contains the interrupt Id that woke the CPU up.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp_went_sleep;
data[1] = timestamp_woke_up;
data[2] = interrupt_id;
Task Monitor
------------
The task monitor tracks the activities of the task scheduling server
in the microkernel, and is able to report three different types of
events related to the scheduler's activities:
Task Monitor Task State Change Event
++++++++++++++++++++++++++++++++++++
The Task Monitor Task State Change Event tracks the task's status changes.
The event data is arranged as three 32-bit blocks:
* The first block contains the timestamp when the task server
changed the task status.
* The second block contains the task ID of the affected task.
* The third block contains a 32-bit number with the new status.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp;
data[1] = task_id;
data[2] = status_data;
Task Monitor Kevent Event
+++++++++++++++++++++++++
The Task Monitor Kevent Event tracks the commands requested of the
task server by the kernel. The event data is arranged as two blocks
of 32 bits each:
* The first block contains the timestamp when the task server
attended the kernel command.
* The second block contains the code of the command.
.. code-block:: c
uint32_t data[2];
data[0] = timestamp;
data[1] = event_code;
Task Monitor Command Packet Event
+++++++++++++++++++++++++++++++++
The Task Monitor Command Packet Event tracks the command packets sent
to the task server. The event data is arranged as three blocks of
32 bits each:
* The first block contains the timestamp when the task server
attended the kernel command.
* The second block contains the task identifier of the task
affected by the packet.
* The third block contains the memory address of the routine
executed by the task server.
Example:
.. code-block:: c
uint32_t data[3];
data[0] = timestamp;
data[1] = task_id;
data[2] = comm_handler;
Example: Retrieving Profiling Messages
======================================
.. code-block:: c
uint32_t data[3];
uint8_t data_length = SIZE32_OF(data);
uint8_t dropped_count;
uint16_t event_id;
int res;

while (1) {
/* collect the data */
res = sys_k_event_logger_get_wait(&event_id, &dropped_count, data,
&data_length);
if (dropped_count > 0) {
/* process the message dropped count */
}
if (res > 0) {
/* process the data */
switch (event_id) {
case KERNEL_EVENT_CONTEXT_SWITCH_EVENT_ID:
/* ... Process the context switch event data ... */
break;
case KERNEL_EVENT_INTERRUPT_EVENT_ID:
/* ... Process the interrupt event data ... */
break;
case KERNEL_EVENT_SLEEP_EVENT_ID:
/* ... Process the data for a sleep event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_TASK_STATE_CHANGE_EVENT_ID:
/* ... Process the data for a task monitor event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_KEVENT_EVENT_ID:
/* ... Process the data for a task monitor command event ... */
break;
case KERNEL_EVENT_LOGGER_TASK_MON_CMD_PACKET_EVENT_ID:
/* ... Process the data for a task monitor packet event ... */
break;
default:
printf("unrecognized event id %d\n", event_id);
}
} else {
if (res == -EMSGSIZE) {
/* ERROR - The buffer provided to collect the
* profiling events is too small.
*/
} else if (res == -EAGAIN) {
/* There is no message available in the buffer */
}
}
}
.. note::
To see an example that shows how to collect the kernel event data, check the
project :file:`samples/kernel_event_logger`.
Example: Adding a Kernel Event Logging Point
============================================
.. code-block:: c
uint32_t data[2];
if (sys_k_must_log_event(KERNEL_EVENT_LOGGER_CUSTOM_ID)) {
data[0] = custom_data_1;
data[1] = custom_data_2;
sys_k_event_logger_put(KERNEL_EVENT_LOGGER_CUSTOM_ID, data, ARRAY_SIZE(data));
}
Use the following function to register only the time of an event.
.. code-block:: c
if (sys_k_must_log_event(KERNEL_EVENT_LOGGER_CUSTOM_ID)) {
sys_k_event_logger_put_timed(KERNEL_EVENT_LOGGER_CUSTOM_ID);
}
APIs
****
The following APIs are provided by the :file:`k_event_logger.h` file:
:cpp:func:`sys_k_event_logger_register_as_collector()`
Register the current fiber as the collector fiber.
:c:func:`sys_k_event_logger_put()`
Enqueue a kernel event logger message with custom data.
:cpp:func:`sys_k_event_logger_put_timed()`
Enqueue a kernel event logger message with the current time.
:c:func:`sys_k_event_logger_get()`
De-queue a kernel event logger message.
:c:func:`sys_k_event_logger_get_wait()`
De-queue a kernel event logger message. Wait if the buffer is empty.
:c:func:`sys_k_event_logger_get_wait_timeout()`
De-queue a kernel event logger message. Wait if the buffer is empty until the timeout expires.
:cpp:func:`sys_k_must_log_event()`
Check if an event type has to be logged or not
When :option:`CONFIG_KERNEL_EVENT_LOGGER_DYNAMIC` is enabled:
:cpp:func:`sys_k_event_logger_set_mask()`
Set kernel event logger event mask
:cpp:func:`sys_k_event_logger_get_mask()`
Get kernel event logger event mask
:cpp:func:`sys_k_event_logger_set_monitor_mask()`
Set task monitor event mask
:cpp:func:`sys_k_event_logger_get_monitor_mask()`
Get task monitor event mask
When :option:`CONFIG_KERNEL_EVENT_LOGGER_CUSTOM_TIMESTAMP` is enabled:
:cpp:func:`sys_k_event_logger_set_timer()`
Set kernel event logger timestamp function
View file

@@ -0,0 +1,180 @@
.. _float_v2:
Floating Point Services
#######################
The kernel allows threads to use floating point registers on board
configurations that support these registers.
.. note::
Floating point services are currently available only for boards
based on the ARM Cortex-M4 or the Intel x86 architectures. The
services provided are architecture specific.
The kernel does not support the use of floating point registers by ISRs.
.. contents::
:local:
:depth: 2
Concepts
********
The kernel can be configured to provide only the floating point services
required by an application. Three modes of operation are supported,
which are described below. In addition, the kernel's support for the SSE
registers can be included or omitted, as desired.
No FP registers mode
====================
This mode is used when the application has no threads that use floating point
registers. It is the kernel's default floating point services mode.
If a thread uses any floating point register,
the kernel generates a fatal error condition and aborts the thread.
Unshared FP registers mode
==========================
This mode is used when the application has only a single thread
that uses floating point registers.
The kernel initializes the floating point registers so they can be used
by any thread. The floating point registers are left unchanged
whenever a context switch occurs.
.. note::
Incorrect operation may result if two or more threads use
floating point registers, as the kernel does not attempt to detect
(or prevent) multiple threads from using these registers.
Shared FP registers mode
========================
This mode is used when the application has two or more threads that use
floating point registers. Depending upon the underlying CPU architecture,
the kernel supports one or more of the following thread sub-classes:
* non-user: A thread that cannot use any floating point registers
* FPU user: A thread that can use the standard floating point registers
* SSE user: A thread that can use both the standard floating point registers
and SSE registers
The kernel initializes the floating point registers so they can be used
by any thread, then saves and restores these registers during
context switches to ensure the computations performed by each FPU user
or SSE user are not impacted by the computations performed by the other users.
On the ARM Cortex-M4 architecture the kernel treats *all* threads
as FPU users when shared FP registers mode is enabled. This means that the
floating point registers are saved and restored during a context switch, even
when the associated threads are not using them. Each thread must provide
an extra 132 bytes of stack space where these register values can be saved.
On the x86 architecture the kernel treats each thread as a non-user,
FPU user or SSE user on a case-by-case basis. A "lazy save" algorithm is used
during context switching which updates the floating point registers only when
it is absolutely necessary. For example, the registers are *not* saved when
switching from an FPU user to a non-user thread, and then back to the original
FPU user. The following table indicates the amount of additional stack space a
thread must provide so the registers can be saved properly.
=========== =============== ==========================
Thread type FP register use Extra stack space required
=========== =============== ==========================
cooperative any 0 bytes
preemptive none 0 bytes
preemptive FPU 108 bytes
preemptive SSE 464 bytes
=========== =============== ==========================
The x86 kernel automatically detects that a given thread is using
the floating point registers the first time the thread accesses them.
The thread is tagged as an SSE user if the kernel has been configured
to support the SSE registers, or as an FPU user if the SSE registers are
not supported. If this would result in a thread that is an FPU user being
tagged as an SSE user, or if the application wants to avoid the exception
handling overhead involved in auto-tagging threads, it is possible to
pre-tag a thread using one of the techniques listed below.
* A statically-spawned x86 thread can be pre-tagged by passing the
:c:macro:`USE_FP` or :c:macro:`USE_SSE` option to
:c:macro:`K_THREAD_DEFINE()`.
* A dynamically-spawned x86 thread can be pre-tagged by passing the
:c:macro:`USE_FP` or :c:macro:`USE_SSE` option to :c:func:`k_thread_spawn()`.
* An already-spawned x86 thread can pre-tag itself once it has started
by passing the :c:macro:`USE_FP` or :c:macro:`USE_SSE` option to
:c:func:`k_float_enable()`.
If an x86 thread uses the floating point registers infrequently it can call
:c:func:`k_float_disable()` to remove its tagging as an FPU user or SSE user.
This eliminates the need for the kernel to take steps to preserve
the contents of the floating point registers during context switches
when there is no need to do so.
When the thread again needs to use the floating point registers it can re-tag
itself as an FPU user or SSE user by calling :c:func:`k_float_enable()`.
Implementation
**************
Performing Floating Point Arithmetic
====================================
No special coding is required for a thread to use floating point arithmetic
if the kernel is properly configured.
The following code shows how a routine can use floating point arithmetic
to avoid overflow issues when computing the average of a series of integer
values.
.. code-block:: c
int average(int *values, int num_values)
{
double sum;
int i;
sum = 0.0;
for (i = 0; i < num_values; i++) {
sum += *values;
values++;
}
return (int)((sum / num_values) + 0.5);
}
Suggested Uses
**************
Use the kernel floating point services when an application needs to
perform floating point operations.
Configuration Options
*********************
To configure unshared FP registers mode, enable the :option:`CONFIG_FLOAT`
configuration option and leave the :option:`CONFIG_FP_SHARING` configuration
option disabled.
To configure shared FP registers mode, enable both the :option:`CONFIG_FLOAT`
configuration option and the :option:`CONFIG_FP_SHARING` configuration option.
Also, ensure that any thread that uses the floating point registers has
sufficient added stack space for saving floating point register values
during context switches, as described above.
Use the :option:`CONFIG_SSE` configuration option to enable support for
SSEx instructions (x86 only).
APIs
****
The following floating point APIs (x86 only) are provided by :file:`kernel.h`:
* :cpp:func:`k_float_enable()`
* :cpp:func:`k_float_disable()`
View file

@@ -0,0 +1,15 @@
.. _other_v2:
Other Services
##############
This section describes other services provided by the kernel.
.. toctree::
:maxdepth: 1
atomic.rst
float.rst
event_logger.rst
c_library.rst
cxx_support.rst
View file

@@ -0,0 +1,128 @@
.. _changes_v2:
Changes from Version 1 Kernel
#############################
The version 2 kernel incorporates numerous changes
that improve ease of use for developers.
Some of the major benefits of these changes are:
* elimination of separate microkernel and nanokernel build types,
* elimination of the MDEF in microkernel-based applications,
* simplifying and streamlining the kernel API,
* easing restrictions on the use of kernel objects,
* reducing memory footprint by merging duplicated services, and
* improving performance by reducing context switching.
.. note::
To ease the transition of existing applications and other Zephyr subsystems
to the new kernel model, the revised kernel will continue to support
the version 1 "legacy" APIs and MDEF for a limited period of time,
after which support will be removed.
The changes introduced by the version 2 kernel are too numerous to fully
describe here; readers are advised to consult the individual sections of the
Kernel Primer to familiarize themselves with the way the version 2 kernel
operates. However, the most significant changes are summarized below.
Application Design
******************
The microkernel and nanokernel portions of Zephyr have been merged into
a single entity, which is simply referred to as "the kernel". Consequently,
there is now only a single way to design and build Zephyr applications.
The MDEF has been eliminated. All kernel objects are now defined directly
in code.
Multi-threading
***************
The task and fiber context types have been merged into a single type,
known as a "thread". Setting a thread's priority to a negative priority
makes it a "cooperative thread", which operates in a fiber-like manner;
setting it to a non-negative priority makes it a "preemptive thread",
which operates in a task-like manner.
It is now possible to pass up to 3 arguments to a thread's entry point.
(The version 1 kernel allowed 2 arguments to be passed to a fiber
and allowed no arguments to be passed to a task.)
The kernel now spawns both a "main thread" and an "idle thread" during
startup. (The version 1 kernel spawned only a single thread.)
The kernel's main thread performs system initialization and then invokes
:cpp:func:`main()`. If no :cpp:func:`main()` is defined by the application,
the main thread terminates.
System initialization code can now perform blocking operations,
during which time the kernel's idle thread executes.
Kernel APIs
***********
Kernel APIs now use a **k_** or **K_** prefix. There are no longer distinct
APIs for invoking a service from a task, a fiber, or an ISR.
Kernel APIs now return 0 to indicate success and a non-zero error code
to indicate the reason for failure. (The version 1 kernel supported only
two error codes, rather than an unlimited number of them.)
Kernel APIs now specify timeout intervals in milliseconds, rather than
in system clock ticks. (This change makes things more intuitive for most
developers. However, the kernel still implements timeouts using the
tick-based system clock.)
Kernel objects can now be used by both task-like threads and fiber-like
threads. (The version 1 kernel did not permit fibers to use microkernel
objects, and could result in undesirable busy-waiting when nanokernel
objects were used by tasks.)
Kernel objects now typically allow multiple threads to wait on a given
object. (The version 1 kernel restricted waiting on certain types of
kernel object to a single thread.)
Kernel object APIs now always execute in the context of the invoking thread.
(The version 1 kernel required microkernel object APIs to context switch
the thread to the microkernel server fiber, followed by another context
switch back to the invoking thread.)
Clocks and Timers
*****************
The nanokernel timer and microkernel timer object types have been merged
into a single type.
Synchronization
***************
The nanokernel semaphore and microkernel semaphore object types have been
merged into a single type. The new type can now be used as a binary semaphore,
as well as a counting semaphore.
The microkernel event object type is now presented as a relative of Unix-style
signalling. Due to improvements to the implementation of semaphores, events
are now less efficient to use for basic synchronization than semaphores;
consequently, events should now be reserved for scenarios requiring the use
of a callback function.
Data Passing
************
The microkernel FIFO object type has been renamed to "message queue",
to avoid confusion with the nanokernel FIFO object type.
The microkernel mailbox object type no longer supports the explicit message
priority concept. Messages are now implicitly ordered based on the priority
of the sending thread.
The microkernel mailbox object type now supports the sending of asynchronous
messages using a message buffer. (The version 1 kernel only supported
asynchronous messages using a message block.)
Task Groups
***********
There is no k_thread_group_xxx() equivalent to the legacy task_group_xxx()
APIs, as task groups are being phased out. Use of the legacy task_group_xxx()
APIs is limited to statically defined threads.
View file

@@ -0,0 +1,17 @@
.. _glossary_v2:
Glossary of Terms [TBD]
#######################
API (Application Program Interface)
A defined set of routines and protocols for building software inputs
and output mechanisms.
IDT (Interrupt Descriptor Table)
[TBD]
ISR (Interrupt Service Routine)
[TBD]
XIP (eXecute In Place)
[TBD]
View file

@@ -0,0 +1,29 @@
.. _overview_v2:
Overview
########
The Zephyr kernel lies at the heart of every Zephyr application. It provides
a low footprint, high performance, multi-threaded execution environment
with a rich set of available features. The rest of the Zephyr ecosystem,
including device drivers, networking stack, and application-specific code,
uses the kernel's features to create a complete application.
The configurable nature of the kernel allows you to incorporate only those
features needed by your application, making it ideal for systems with limited
amounts of memory (as little as 2 KB!) or with simple multi-threading
requirements (such as a set of interrupt handlers and a single background task).
Examples of such systems include: embedded sensor hubs, environmental sensors,
simple LED wearables, and store inventory tags.
Applications requiring more memory (50 to 900 KB), multiple communication
devices (like WiFi and Bluetooth Low Energy), and complex multi-threading,
can also be developed using the Zephyr kernel. Examples of such systems
include: fitness wearables, smart watches, and IoT wireless gateways.
.. toctree::
:maxdepth: 1
source_tree.rst
glossary.rst
changes.rst
View file

@@ -0,0 +1,67 @@
.. _source_tree_v2:
Source Tree Structure
#####################
Understanding the Zephyr source tree can be helpful in locating the code
associated with a particular Zephyr feature.
The Zephyr source tree provides the following top-level directories,
each of which may have one or more additional levels of subdirectories
which are not described here.
:file:`arch`
Architecture-specific kernel and system-on-chip (SoC) code.
Each supported architecture (for example, x86 and ARM)
has its own subdirectory,
which contains additional subdirectories for the following areas:
* architecture-specific kernel source files
* architecture-specific kernel include files for private APIs
* SoC-specific code
:file:`boards`
Board related code and configuration files.
:file:`doc`
Zephyr documentation source files and tools.
:file:`drivers`
Device driver code.
:file:`ext`
Externally created code that has been integrated into Zephyr
from other sources, such as hardware interface code supplied by
manufacturers and cryptographic library code.
:file:`fs`
File system code.
:file:`include`
Include files for all public APIs, except those defined under :file:`lib`.
:file:`kernel`
Architecture-independent kernel code.
:file:`lib`
Library code, including the minimal standard C library.
:file:`misc`
Miscellaneous code that doesn't belong to any of the other top-level
directories.
:file:`net`
Networking code, including the Bluetooth stack and networking stacks.
:file:`samples`
Sample applications that demonstrate the use of Zephyr features.
:file:`scripts`
Various programs and other files used to build and test Zephyr
applications.
:file:`tests`
Test code and benchmarks for Zephyr features.
:file:`usb`
USB device stack code.
View file

@@ -0,0 +1,233 @@
.. _events_v2:
Events
######
An :dfn:`event` is a kernel object that allows an application to perform
asynchronous signalling when a condition of interest occurs.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of events can be defined. Each event is referenced by
its memory address.
An event has the following key properties:
* An **event handler**, which specifies the action to be taken
when the event is signalled.
* An **event pending flag**, which is set if the event is signalled
and an event handler function does not consume the signal.
An event must be initialized before it can be used. This establishes
its event handler and clears the event pending flag.
Event Lifecycle
===============
An ISR or a thread signals an event by **sending** the event
when a condition of interest occurs that cannot be handled by the detector.
Each time an event is sent, the kernel examines its event handler
to determine what action to take.
* :c:macro:`K_EVT_IGNORE` causes the event to be ignored.
* :c:macro:`K_EVT_DEFAULT` causes the event pending flag to be set.
* Any other value is assumed to be the address of an event handler function,
and is invoked by the system workqueue thread. If the function returns
zero, the signal is deemed to have been consumed; otherwise, the event
pending flag is set.
The kernel ensures that the event handler function is executed once
for each time an event is sent, even if the event is sent multiple times
in rapid succession.
An event whose event pending flag becomes set remains pending until
the event is accepted by a thread. This clears the event pending flag.
A thread accepts a pending event by **receiving** the event.
If the event's pending flag is currently clear, the thread may choose
to wait for the event to become pending.
Any number of threads may wait for a pending event simultaneously;
when the event is pended it is accepted by the highest priority thread
that has waited longest.
.. note::
A thread that accepts an event cannot directly determine how many times
the event pending flag was set since the event was last accepted.
SHOULD WE ALLOW THE EVENT INITIALIZATION ROUTINE TO ACCEPT THE MAXIMUM
NUMBER OF TIMES THE EVENT CAN PEND? IT'S A TRIVIAL CHANGE ...
Comparison to Unix-style Signals
================================
Zephyr events are somewhat akin to Unix-style signals, but have a number of
significant differences. The most notable of these are listed below:
* A Zephyr event cannot be blocked --- it is always delivered to its event
handler immediately.
* A Zephyr event pends *after* it has been delivered to its event handler,
and only if an event handler function does not consume the event.
* Zephyr has no pre-defined events or actions. All events are application
defined, and all have a default action that pends the event.
Implementation
**************
Defining an Event
=================
An event is defined using a variable of type :c:type:`struct k_event`.
It must then be initialized by calling :cpp:func:`k_event_init()`.
The following code defines and initializes an event.
.. code-block:: c
struct k_event my_event;
extern int my_event_handler(struct k_event *event);
k_event_init(&my_event, my_event_handler);
Alternatively, an event can be defined and initialized at compile time
by calling :c:macro:`K_EVENT_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_EVENT_DEFINE(my_event, my_event_handler);
Signaling an Event
==================
An event is signalled by calling :cpp:func:`k_event_send()`.
The following code builds on the example above, and uses the event
in an ISR to signal that a key press has occurred.
.. code-block:: c
void keypress_interrupt_handler(void *arg)
{
...
k_event_send(&my_event);
...
}
Handling an Event
=================
An event handler function is used when a signalled event should not be ignored
or immediately pended. It has the following form:
.. code-block:: c
int <function_name>(struct k_event *event)
{
/* catch the event signal; return zero if the signal is consumed, */
/* or non-zero to let the event pend */
...
}
The following code builds on the example above, and uses an event handler
function to do processing that is too complex to be performed by the ISR.
.. code-block:: c
int my_event_handler(struct k_event *event_id_is_unused)
{
/* determine what key was pressed */
char c = get_keypress();
/* do complex processing of the keystroke */
...
/* signalled event has been consumed */
return 0;
}
Accepting an Event
==================
A pending event is accepted by a thread by calling :cpp:func:`k_event_recv()`.
The following code is an alternative to the example above,
and uses a dedicated thread to do very complex processing
of key presses that would otherwise monopolize the system workqueue.
The event handler function is used to filter out unwanted key press
notifications, allowing the dedicated thread to wake up and process
key presses only when needed.
.. code-block:: c
int my_event_handler(struct k_event *event_id_is_unused)
{
/* determine what key was pressed */
char c = get_keypress();
/* signal thread only if key pressed was a digit */
if ((c >= '0') && (c <= '9')) {
/* save key press information */
...
/* signalled event should be pended */
return 1;
} else {
/* signalled event has been consumed */
return 0;
}
}
void keypress_thread(int unused1, int unused2, int unused3)
{
/* consume key presses */
while (1) {
/* wait for a key press event to pend */
k_event_recv(&my_event, K_FOREVER);
/* process saved key press, which must be a digit */
...
}
}
Suggested Uses
**************
Use an event to allow the kernel's system workqueue to handle an event,
rather than defining an application thread to handle it.
Use an event to allow the kernel's system workqueue to pre-process an event,
prior to letting an application thread handle it.
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following APIs for an event are provided by :file:`kernel.h`:
:cpp:func:`k_event_handler_set()`
Register an event handler function for an event.
:cpp:func:`k_event_send()`
Signal an event.
:cpp:func:`k_event_recv()`
Catch an event signal.
View file

@@ -0,0 +1,171 @@
.. _mutexes_v2:
Mutexes
#######
A :dfn:`mutex` is a kernel object that implements a traditional
reentrant mutex. A mutex allows multiple threads to safely share
an associated hardware or software resource by ensuring mutually exclusive
access to the resource.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of mutexes can be defined. Each mutex is referenced by its memory
address.
A mutex has the following key properties:
* A **lock count** that indicates the number of times the mutex has been locked
by the thread that has locked it. A count of zero indicates that the mutex
is unlocked.
* An **owning thread** that identifies the thread that has locked the mutex,
when it is locked.
A mutex must be initialized before it can be used. This sets its lock count
to zero.
A thread that needs to use a shared resource must first gain exclusive rights
to access it by **locking** the associated mutex. If the mutex is already locked
by another thread, the requesting thread may choose to wait for the mutex
to be unlocked.
After locking a mutex, the thread may safely use the associated resource
for as long as needed; however, it is considered good practice to hold the lock
for as short a time as possible to avoid negatively impacting other threads
that want to use the resource. When the thread no longer needs the resource
it must **unlock** the mutex to allow other threads to use the resource.
Any number of threads may wait on a locked mutex simultaneously.
When the mutex becomes unlocked it is then locked by the highest-priority
thread that has waited the longest.
.. note::
Mutex objects are *not* designed for use by ISRs.
Reentrant Locking
=================
A thread is permitted to lock a mutex it has already locked.
This allows the thread to access the associated resource at a point
in its execution when the mutex may or may not already be locked.
A mutex that is repeatedly locked by a thread must be unlocked an equal number
of times before the mutex becomes fully unlocked so it can be claimed
by another thread.
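For example, the following sketch (which assumes the mutex ``my_mutex``
defined in the Implementation section below) locks the mutex twice,
and so must also unlock it twice.

.. code-block:: c

void nested_routine(void)
{
    /* mutex is already owned by this thread; lock count becomes 2 */
    k_mutex_lock(&my_mutex, K_FOREVER);
    ...
    /* lock count drops back to 1; mutex remains owned */
    k_mutex_unlock(&my_mutex);
}

void main_routine(void)
{
    /* lock count becomes 1 */
    k_mutex_lock(&my_mutex, K_FOREVER);
    nested_routine();
    /* lock count becomes 0; mutex is fully unlocked */
    k_mutex_unlock(&my_mutex);
}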
Priority Inheritance
====================
The thread that has locked a mutex is eligible for :dfn:`priority inheritance`.
This means the kernel will *temporarily* elevate the thread's priority
if a higher priority thread begins waiting on the mutex. This allows the owning
thread to complete its work and release the mutex more rapidly by executing
at the same priority as the waiting thread. Once the mutex has been unlocked,
the unlocking thread resets its priority to the level it had before locking
that mutex.
.. note::
The :option:`CONFIG_PRIORITY_CEILING` configuration option limits
how high the kernel can raise a thread's priority due to priority
inheritance. The default value of 0 permits unlimited elevation.
When two or more threads wait on a mutex held by a lower priority thread, the
kernel adjusts the owning thread's priority each time a thread begins waiting
(or gives up waiting). When the mutex is eventually unlocked, the unlocking
thread's priority correctly reverts to its original non-elevated priority.
The kernel does *not* fully support priority inheritance when a thread holds
two or more mutexes simultaneously. This situation can result in the thread's
priority not reverting to its original non-elevated priority when all mutexes
have been released. It is recommended that a thread lock only a single mutex
at a time when multiple mutexes are shared between threads of different
priorities.
Implementation
**************
Defining a Mutex
================
A mutex is defined using a variable of type :c:type:`struct k_mutex`.
It must then be initialized by calling :cpp:func:`k_mutex_init()`.
The following code defines and initializes a mutex.
.. code-block:: c
struct k_mutex my_mutex;
k_mutex_init(&my_mutex);
Alternatively, a mutex can be defined and initialized at compile time
by calling :c:macro:`K_MUTEX_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_MUTEX_DEFINE(my_mutex);
Locking a Mutex
===============
A mutex is locked by calling :cpp:func:`k_mutex_lock()`.
The following code builds on the example above, and waits indefinitely
for the mutex to become available if it is already locked by another thread.
.. code-block:: c
k_mutex_lock(&my_mutex, K_FOREVER);
The following code waits up to 100 milliseconds for the mutex to become
available, and gives a warning if the mutex does not become available in that time.
.. code-block:: c
if (k_mutex_lock(&my_mutex, 100) == 0) {
/* mutex successfully locked */
} else {
printf("Cannot lock XYZ display\n");
}
Unlocking a Mutex
=================
A mutex is unlocked by calling :cpp:func:`k_mutex_unlock()`.
The following code builds on the example above,
and unlocks the mutex that was previously locked by the thread.
.. code-block:: c
k_mutex_unlock(&my_mutex);
Suggested Uses
**************
Use a mutex to provide exclusive access to a resource, such as a physical
device.
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_PRIORITY_CEILING`
APIs
****
The following mutex APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_mutex_init()`
* :cpp:func:`k_mutex_lock()`
* :cpp:func:`k_mutex_unlock()`
View file

@@ -0,0 +1,229 @@
.. _semaphore_groups_v2:
Semaphore Groups [TBD]
######################
Concepts
********
The microkernel's :dfn:`semaphore` objects are an implementation of traditional
counting semaphores.
Any number of semaphores can be defined in a microkernel system. Each semaphore
has a **name** that uniquely identifies it.
A semaphore starts off with a count of zero. This count is incremented each
time the semaphore is given, and is decremented each time the semaphore is taken.
However, a semaphore cannot be taken when it has a count of zero; this makes
it unavailable.
Semaphores may be given by tasks, fibers, or ISRs.
Semaphores may be taken by tasks only. A task that attempts to take an unavailable
semaphore may wait for the semaphore to be given. Any number of tasks may wait on
an unavailable semaphore simultaneously; and when the semaphore becomes available,
it is given to the highest priority task that has waited the longest.
The kernel allows a task to give multiple semaphores in a single operation using a
*semaphore group*. The task specifies the members of a semaphore group with an array
of semaphore names, terminated by the symbol :c:macro:`ENDLIST`. This technique
allows the task to give the semaphores more efficiently than giving them individually.
A task can also use a semaphore group to take a single semaphore from a set
of semaphores in a single operation. This technique allows the task to
monitor multiple synchronization sources at the same time, similar to the way
:c:func:`select()` can be used to read input from a set of file descriptors
in a POSIX-compliant operating system. The kernel does *not* define the order
in which semaphores are taken when more than one semaphore in a semaphore group
is available; the semaphore that is taken by the task may not be the one
that was given first.
There is no limit on the number of semaphore groups used by a task, or
on the number of semaphores belonging to any given semaphore group. Semaphore
groups may also be shared by multiple tasks, if desired.
Purpose
*******
Use a semaphore to control access to a set of resources by multiple tasks.
Use a semaphore to synchronize processing between a producing task, fiber,
or ISR and one or more consuming tasks.
Use a semaphore group to allow a task to signal or to monitor multiple
semaphores simultaneously.
Usage
*****
Defining a Semaphore
====================
The following parameters must be defined:
*name*
This specifies a unique name for the semaphore.
Public Semaphore
----------------
Define the semaphore in the application's MDEF with the following syntax:
.. code-block:: console
SEMA name
For example, the file :file:`projName.mdef` defines two semaphores as follows:
.. code-block:: console
% SEMA NAME
% ================
SEMA INPUT_DATA
SEMA WORK_DONE
A public semaphore can be referenced by name from any source file that
includes the file :file:`zephyr.h`.
Private Semaphore
-----------------
Define the semaphore in a source file using the following syntax:
.. code-block:: c
DEFINE_SEMAPHORE(name);
For example, the following code defines a private semaphore named ``PRIV_SEM``.
.. code-block:: c
DEFINE_SEMAPHORE(PRIV_SEM);
To reference this semaphore from a different source file, use the following syntax:
.. code-block:: c
extern const ksem_t PRIV_SEM;
Example: Giving a Semaphore from a Task
=======================================
This code uses a semaphore to indicate that a unit of data
is available for processing by a consumer task.
.. code-block:: c
void producer_task(void)
{
/* save data item in a buffer */
...
/* notify task that an additional data item is available */
task_sem_give(INPUT_DATA);
...
}
Example: Taking a Semaphore with a Conditional Time-out
=======================================================
This code waits up to 500 ticks for a semaphore to be given,
and gives a warning if it is not obtained in that time.
.. code-block:: c
void consumer_task(void)
{
...
if (task_sem_take(INPUT_DATA, 500) == RC_TIME) {
printf("Input data not available!");
} else {
/* extract saved data item from buffer and process it */
...
}
...
}
Example: Monitoring Multiple Semaphores at Once
===============================================
This code waits on two semaphores simultaneously, and then takes
action depending on which one was given.
.. code-block:: c
ksem_t my_sem_group[3] = { INPUT_DATA, WORK_DONE, ENDLIST };
void consumer_task(void)
{
ksem_t sem_id;
...
sem_id = task_sem_group_take(my_sem_group, TICKS_UNLIMITED);
if (sem_id == WORK_DONE) {
printf("Shutting down!");
return;
} else {
/* process input data */
...
}
...
}
Example: Giving Multiple Semaphores at Once
===========================================
This code uses a semaphore group to allow a controlling task to signal
the semaphores used by four other tasks in a single operation.
.. code-block:: c
ksem_t my_sem_group[5] = { SEM1, SEM2, SEM3, SEM4, ENDLIST };
void control_task(void)
{
...
task_sem_group_give(my_sem_group);
...
}
APIs
****
All of the following APIs are provided by :file:`microkernel.h`:
APIs for an individual semaphore
================================
:cpp:func:`isr_sem_give()`
Give a semaphore (from an ISR).
:cpp:func:`fiber_sem_give()`
Give a semaphore (from a fiber).
:cpp:func:`task_sem_give()`
Give a semaphore.
:cpp:func:`task_sem_take()`
Take a semaphore, with time limited waiting.
:cpp:func:`task_sem_reset()`
Set the semaphore count to zero.
:cpp:func:`task_sem_count_get()`
Read the count for a semaphore.
APIs for semaphore groups
=========================
:cpp:func:`task_sem_group_give()`
Give each semaphore in a group.
:cpp:func:`task_sem_group_take()`
Wait up to a specified time period for a semaphore from a group.
:cpp:func:`task_sem_group_reset()`
Set the count to zero for each semaphore in a group.
View file

@@ -0,0 +1,137 @@
.. _semaphores_v2:
Semaphores
##########
A :dfn:`semaphore` is a kernel object that implements a traditional
counting semaphore.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of semaphores can be defined. Each semaphore is referenced
by its memory address.
A semaphore has the following key properties:
* A **count** that indicates the number of times the semaphore can be taken.
A count of zero indicates that the semaphore is unavailable.
* A **limit** that indicates the maximum value the semaphore's count
can reach.
A semaphore must be initialized before it can be used. Its count must be set
to a non-negative value that is less than or equal to its limit.
A semaphore may be **given** by a thread or an ISR. Giving the semaphore
increments its count, unless the count is already equal to the limit.
A semaphore may be **taken** by a thread. Taking the semaphore
decrements its count, unless the semaphore is unavailable (i.e. at zero).
When a semaphore is unavailable a thread may choose to wait for it to be given.
Any number of threads may wait on an unavailable semaphore simultaneously.
When the semaphore is given, it is taken by the highest priority thread
that has waited longest.
.. note::
The kernel does allow an ISR to take a semaphore, however the ISR must
not attempt to wait if the semaphore is unavailable.
Implementation
**************
Defining a Semaphore
====================
A semaphore is defined using a variable of type :c:type:`struct k_sem`.
It must then be initialized by calling :cpp:func:`k_sem_init()`.
The following code defines a semaphore, then configures it as a binary
semaphore by setting its count to 0 and its limit to 1.
.. code-block:: c
struct k_sem my_sem;
k_sem_init(&my_sem, 0, 1);
Alternatively, a semaphore can be defined and initialized at compile time
by calling :c:macro:`K_SEM_DEFINE()`.
The following code has the same effect as the code segment above.
.. code-block:: c
K_SEM_DEFINE(my_sem, 0, 1);
Giving a Semaphore
==================
A semaphore is given by calling :cpp:func:`k_sem_give()`.
The following code builds on the example above, and gives the semaphore to
indicate that a unit of data is available for processing by a consumer thread.
.. code-block:: c
void input_data_interrupt_handler(void *arg)
{
/* notify thread that data is available */
k_sem_give(&my_sem);
...
}
Taking a Semaphore
==================
A semaphore is taken by calling :cpp:func:`k_sem_take()`.
The following code builds on the example above, and waits up to 50 milliseconds
for the semaphore to be given.
A warning is issued if the semaphore is not obtained in time.
.. code-block:: c
void consumer_thread(void)
{
...
if (k_sem_take(&my_sem, 50) != 0) {
printk("Input data not available!");
} else {
/* fetch available data */
...
}
...
}
Suggested Uses
**************
Use a semaphore to control access to a set of resources by multiple threads.
Use a semaphore to synchronize processing between producing and consuming
threads or ISRs.
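For example, a counting semaphore can manage a pool of identical resources.
The following minimal sketch assumes a hypothetical pool of 4 buffers,
and assumes :c:macro:`K_FOREVER` specifies an unlimited wait.

.. code-block:: c

    #define NUM_BUFFERS 4    /* hypothetical pool size */

    /* initial count equals the limit, so all buffers start out available */
    K_SEM_DEFINE(buffer_sem, NUM_BUFFERS, NUM_BUFFERS);

    void buffer_user_thread(void)
    {
        /* claim a buffer, waiting if none is currently available */
        k_sem_take(&buffer_sem, K_FOREVER);

        /* ... use the buffer ... */

        /* return the buffer to the pool */
        k_sem_give(&buffer_sem);
    }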
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following semaphore APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_sem_init()`
* :cpp:func:`k_sem_give()`
* :cpp:func:`k_sem_take()`
* :cpp:func:`k_sem_reset()`
* :cpp:func:`k_sem_count_get()`

View file

@ -0,0 +1,15 @@
.. _synchronization_v2:
Synchronization
###############
This section describes kernel services for synchronizing the operation
of different threads, or the operation of an ISR and a thread.
.. toctree::
:maxdepth: 2
semaphores.rst
semaphore_groups.rst
mutexes.rst
events.rst

View file

@ -0,0 +1,83 @@
.. _custom_data_v2:
Custom Data
###########
A thread's :dfn:`custom data` is a 32-bit, thread-specific value
that may be used by an application for any purpose.
.. contents::
:local:
:depth: 2
Concepts
********
Every thread has a 32-bit custom data area.
The custom data is accessible only by the thread itself,
and may be used by the application for any purpose it chooses.
The default custom data for a thread is zero.
.. note::
Custom data support is not available to ISRs because they operate
within a single shared kernel interrupt handling context.
Implementation
**************
Using Custom Data
=================
By default, thread custom data support is disabled. The configuration option
:option:`CONFIG_THREAD_CUSTOM_DATA` can be used to enable support.
The :cpp:func:`k_thread_custom_data_set()` and
:cpp:func:`k_thread_custom_data_get()` functions are used to write and read
a thread's custom data, respectively. A thread can only access its own
custom data, and not that of another thread.
The following code uses the custom data feature to record the number of times
each thread calls a specific routine.
.. note::
Obviously, only a single routine can use this technique,
since it monopolizes the use of the custom data feature.
.. code-block:: c
int call_tracking_routine(void)
{
uint32_t call_count;
if (k_am_in_isr()) {
/* ignore any call made by an ISR */
} else {
call_count = (uint32_t)k_thread_custom_data_get();
call_count++;
k_thread_custom_data_set((void *)call_count);
}
/* do rest of routine's processing */
...
}
Suggested Uses
**************
Use thread custom data to allow a routine to access thread-specific information,
by using the custom data as a pointer to a data structure owned by the thread.
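A minimal sketch of this technique follows; the :c:type:`struct thread_info`
type, its field, and the routine names are hypothetical. Each thread passes
its own data structure to its entry point function.

.. code-block:: c

    struct thread_info {
        uint32_t error_count;   /* hypothetical per-thread statistic */
    };

    void my_thread_entry(void *my_info, void *unused2, void *unused3)
    {
        /* make this thread's own data structure accessible to shared code */
        k_thread_custom_data_set(my_info);
        ...
    }

    void record_error(void)
    {
        struct thread_info *info = k_thread_custom_data_get();

        info->error_count++;
    }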
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_THREAD_CUSTOM_DATA`
APIs
****
The following thread custom data APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_thread_custom_data_set()`
* :cpp:func:`k_thread_custom_data_get()`

View file

@ -0,0 +1,229 @@
.. _lifecycle_v2:
Lifecycle
#########
A :dfn:`thread` is a kernel object that is used for application processing
that is too lengthy or too complex to be performed by an ISR.
.. contents::
:local:
:depth: 2
Concepts
********
Any number of threads can be defined by an application. Each thread is
referenced by its memory address.
A thread has the following key properties:
* A **thread region**, which is the area of memory used for the thread's
  data structure and its stack. The **size** of the thread region can be
  tailored to meet the specific needs of the thread.
* An **entry point function**, which is invoked when the thread is started.
Up to 3 **argument values** can be passed to this function.
* An **abort function**, which is invoked when the thread is completely
finished executing. (See "Thread Aborting".)
* A **scheduling priority**, which instructs the kernel's scheduler how to
allocate CPU time to the thread. (See "Thread Scheduling".)
* A **start delay**, which specifies how long the kernel should wait before
starting the thread.
Thread Spawning
===============
A thread must be spawned before it can be used. The kernel initializes
both the thread data structure portion and the stack portion of
the thread's thread region.
Specifying a start delay of :c:macro:`K_NO_WAIT` instructs the kernel
to start thread execution immediately. Alternatively, the kernel can be
instructed to delay execution of the thread by specifying a timeout
value -- for example, to allow device hardware used by the thread
to become available.
The kernel allows a delayed start to be cancelled before the thread begins
executing. A cancellation request has no effect if the thread has already
started. A thread whose delayed start was successfully cancelled must be
re-spawned before it can be used.
Thread Termination
==================
Once a thread is started it typically executes forever. However, a thread may
synchronously end its execution by returning from its entry point function.
This is known as **termination**.
A thread that terminates is responsible for releasing any shared resources
it may own (such as mutexes and dynamically allocated memory)
prior to returning, since the kernel does *not* reclaim them automatically.
.. note::
The kernel does not currently make any claims regarding an application's
ability to respawn a thread that terminates.
Thread Aborting
===============
A thread may asynchronously end its execution by **aborting**. The kernel
automatically aborts a thread if the thread triggers a fatal error condition,
such as dereferencing a null pointer.
A thread can also be aborted by another thread (or by itself)
by calling :c:func:`k_thread_abort()`. However, it is typically preferable
to signal a thread to terminate itself gracefully, rather than aborting it.
As with thread termination, the kernel does not reclaim shared resources
owned by an aborted thread.
.. note::
The kernel does not currently make any claims regarding an application's
ability to respawn a thread that aborts.
Abort Handler
=============
A thread's **abort handler** is automatically invoked when the thread
terminates or aborts. The abort handler is a function that takes no arguments
and returns ``void``.
If the thread's abort handler is ``NULL``, no action is taken;
otherwise, the abort handler is executed using the kernel's system workqueue.
The abort handler can be used to record information about the thread
or to assist in reclaiming resources owned by the thread.
Thread Suspension
=================
A thread can be prevented from executing for an indefinite period of time
if it becomes **suspended**. The function :c:func:`k_thread_suspend()`
can be used to suspend any thread, including the calling thread.
Suspending a thread that is already suspended has no additional effect.
Once suspended, a thread cannot be scheduled until another thread calls
:c:func:`k_thread_resume()` to remove the suspension.
.. note::
A thread can prevent itself from executing for a specified period of time
using :c:func:`k_sleep()`. However, this is different from suspending
a thread since a sleeping thread becomes executable automatically when the
time limit is reached.
Implementation
**************
Spawning a Thread
=================
A thread is spawned by defining its thread region and then calling
:cpp:func:`k_thread_spawn()`. The thread region is an array of bytes
whose size must equal :c:func:`sizeof(struct k_thread)` plus the size
of the thread's stack. The thread region must be defined using the
:c:macro:`__stack` attribute to ensure it is properly aligned.
The thread spawning function returns the thread's memory address,
which can be saved for later reference. Alternatively, the address of
the thread can be obtained by casting the address of the thread region
to type :c:type:`struct k_thread *`.
The following code spawns a thread that starts immediately.
.. code-block:: c
#define MY_THREAD_SIZE 500
#define MY_PRIORITY 5
extern void my_entry_point(void *, void *, void *);
char __noinit __stack my_thread_area[MY_THREAD_SIZE];
struct k_thread *my_thread_ptr;
my_thread_ptr = k_thread_spawn(my_thread_area, MY_THREAD_SIZE,
my_entry_point, 0, 0, 0,
NULL, MY_PRIORITY, K_NO_WAIT);
Alternatively, a thread can be spawned at compile time by calling
:c:macro:`K_THREAD_DEFINE()`. Observe that the macro defines the thread
region automatically, as well as a variable containing the thread's address.
The following code has the same effect as the code segment above.
.. code-block:: c
K_THREAD_DEFINE(my_thread_ptr, my_thread_area, MY_THREAD_SIZE,
my_entry_point, 0, 0, 0,
NULL, MY_PRIORITY, K_NO_WAIT);
.. note::
NEED TO FIGURE OUT HOW WE'RE GOING TO HANDLE THE FLOATING POINT OPTIONS!
Terminating a Thread
====================
A thread terminates itself by returning from its entry point function.
The following code illustrates the ways a thread can terminate.
.. code-block:: c
void my_entry_point(void *unused1, void *unused2, void *unused3)
{
while (1) {
...
if (<some condition>) {
return; /* thread terminates from mid-entry point function */
}
...
}
/* thread terminates at end of entry point function */
}
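Aborting a Thread
=================

The following minimal sketch shows one thread forcibly ending another,
using the ``my_thread_ptr`` value saved by the spawning example above;
the routine name is hypothetical. As noted earlier, signaling a thread
to terminate itself gracefully is usually preferable.

.. code-block:: c

    void watchdog_routine(void)
    {
        ...
        /* forcibly end execution of the misbehaving thread */
        k_thread_abort(my_thread_ptr);
        ...
    }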
Suggested Uses
**************
Use threads to handle processing that cannot be handled in an ISR.
Use separate threads to handle logically distinct processing operations
that can execute in parallel.
Configuration Options
*********************
Related configuration options:
* None.
APIs
****
The following thread APIs are provided by :file:`kernel.h`:
:cpp:func:`k_thread_spawn()`, :cpp:func:`k_thread_spawn_config()`
Spawn a new thread.
:cpp:func:`k_thread_spawn_cancel()`
[NON-EXISTENT] Cancel spawning of a new thread, if not already started.
:cpp:func:`thread_entry_set()`
[NON-EXISTENT] Sets a thread's entry point.
:cpp:func:`thread_suspend()`
[NON-EXISTENT] Suspend execution of a thread.
:cpp:func:`thread_resume()`
[NON-EXISTENT] Resume execution of a thread.
:cpp:func:`k_thread_abort()`
Abort execution of a thread.
:cpp:func:`thread_abort_handler_set()`
Install a thread's abort handler.

View file

@ -0,0 +1,224 @@
.. _scheduling_v2:
Scheduling
##########
The kernel's priority-based scheduler allows an application's threads
to share the CPU.
.. contents::
:local:
:depth: 2
Concepts
********
The scheduler determines which thread is allowed to execute
at any point in time; this thread is known as the **current thread**.
Whenever the scheduler changes the identity of the current thread,
or when execution of the current thread is supplanted by an ISR,
the kernel first saves the current thread's CPU register values.
These register values get restored when the thread later resumes execution.
Thread States
=============
A thread that has no factors that prevent its execution is deemed
to be **ready**, and is eligible to be selected as the current thread.
A thread that has one or more factors that prevent its execution
is deemed to be **unready**, and cannot be selected as the current thread.
The following factors make a thread unready:
* The thread has not been started.
* The thread is waiting for a kernel object to complete an operation.
(For example, the thread is taking a semaphore that is unavailable.)
* The thread is waiting for a timeout to occur.
* The thread has been suspended.
* The thread has terminated or aborted.
Thread Priorities
=================
A thread's priority is an integer value, and can be either negative or
non-negative.
Numerically lower priorities take precedence over numerically higher values.
For example, the scheduler gives thread A, of priority 4, *higher* priority
than thread B, of priority 7; likewise thread C, of priority -2, has higher
priority than both thread A and thread B.
The scheduler distinguishes between two classes of threads,
based on each thread's priority.
* A :dfn:`cooperative thread` has a negative priority value.
Once it becomes the current thread, a cooperative thread remains
the current thread until it performs an action that makes it unready.
* A :dfn:`preemptible thread` has a non-negative priority value.
Once it becomes the current thread, a preemptible thread may be supplanted
at any time if a cooperative thread, or a preemptible thread of higher
or equal priority, becomes ready.
A thread's initial priority value can be altered up or down after the thread
has been started. Thus it is possible for a preemptible thread to become
a cooperative thread, and vice versa, by changing its priority.
The kernel supports a virtually unlimited number of thread priority levels.
The configuration options :option:`CONFIG_NUM_COOP_PRIORITIES` and
:option:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
levels for each class of thread, resulting in the following usable priority
ranges:
* cooperative threads: (-:option:`CONFIG_NUM_COOP_PRIORITIES`) to -1
* preemptive threads: 0 to (:option:`CONFIG_NUM_PREEMPT_PRIORITIES` - 1)
For example, configuring 5 cooperative priorities and 10 preemptive priorities
results in the ranges -5 to -1 and 0 to 9, respectively.
Scheduling Algorithm
====================
The kernel's scheduler selects the highest priority ready thread
to be the current thread. When multiple ready threads of the same priority
exist, the scheduler chooses the one that has been waiting longest.
.. note::
Execution of ISRs takes precedence over thread execution,
so the execution of the current thread may be supplanted by an ISR
at any time unless interrupts have been masked. This applies to both
cooperative threads and preemptive threads.
Cooperative Time Slicing
========================
Once a cooperative thread becomes the current thread, it remains
the current thread until it performs an action that makes it unready.
Consequently, if a cooperative thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.
To overcome such problems, a cooperative thread can voluntarily relinquish
the CPU from time to time to permit other threads to execute.
A thread can relinquish the CPU in two ways (see the sketch after this list):
* Calling :cpp:func:`k_yield()` puts the thread at the back of the scheduler's
prioritized list of ready threads, and then invokes the scheduler.
All ready threads whose priority is higher than or equal to that of the
yielding thread are then allowed to execute before the yielding thread is
rescheduled. If no such ready threads exist, the scheduler immediately
reschedules the yielding thread without context switching.
* Calling :cpp:func:`k_sleep()` makes the thread unready for a specified
time period. Ready threads of *all* priorities are then allowed to execute;
however, there is no guarantee that threads whose priority is lower
than that of the sleeping thread will actually be scheduled before
the sleeping thread becomes ready once again.
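For example, a cooperative thread performing a lengthy computation might
yield periodically. This is a minimal sketch; ``do_next_chunk()`` is a
hypothetical helper that performs one bounded piece of the work.

.. code-block:: c

    void number_cruncher(void *unused1, void *unused2, void *unused3)
    {
        while (1) {
            /* perform one bounded piece of the computation */
            do_next_chunk();

            /* let ready threads of equal or higher priority execute */
            k_yield();
        }
    }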
Preemptive Time Slicing
=======================
Once a preemptive thread becomes the current thread, it remains
the current thread until a higher priority thread becomes ready,
or until the thread performs an action that makes it unready.
Consequently, if a preemptive thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of equal priority.
To overcome such problems, a preemptive thread can perform cooperative
time slicing (as described above), or the scheduler's time slicing capability
can be used to allow other threads of the same priority to execute.
The scheduler divides time into a series of **time slices**, where slices
are measured in system clock ticks. The time slice size is configurable,
and can also be changed while the application is running.
At the end of every time slice, the scheduler checks to see if the current
thread is preemptible and, if so, implicitly invokes :c:func:`k_yield()`
on behalf of the thread. This gives other ready threads of the same priority
the opportunity to execute before the current thread is scheduled again.
If no threads of equal priority are ready, the current thread remains
the current thread.
Threads with a priority higher than a specified limit are exempt from preemptive
time slicing, and are never preempted by a thread of equal priority.
This allows an application to use preemptive time slicing
only when dealing with lower priority threads that are less time-sensitive.
.. note::
The kernel's time slicing algorithm does *not* ensure that a set
of equal-priority threads receive an equitable amount of CPU time,
since it does not measure the amount of time a thread actually gets to
execute. For example, a thread may become the current thread just before
the end of a time slice and then immediately have to yield the CPU.
However, the algorithm *does* ensure that a thread never executes
for longer than a single time slice without being required to yield.
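Time slicing might be enabled as shown below. This is a sketch only; it
assumes :cpp:func:`k_sched_time_slice_set()` takes the slice duration
followed by the priority level above which threads are exempt from
time slicing.

.. code-block:: c

    /* time slice among equal-priority threads whose priority value is 3
     * or numerically larger; threads with numerically smaller (i.e. more
     * important) priorities are never time sliced
     */
    k_sched_time_slice_set(10, 3);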
Scheduler Locking
=================
A preemptible thread that does not wish to be preempted while performing
a critical operation can instruct the scheduler to temporarily treat it
as a cooperative thread by calling :cpp:func:`k_sched_lock()`. This prevents
other threads from interfering while the critical operation is being performed.
Once the critical operation is complete the preemptible thread must call
:cpp:func:`k_sched_unlock()` to restore its normal, preemptible status.
If a thread calls :cpp:func:`k_sched_lock()` and subsequently performs an
action that makes it unready, the scheduler will switch the locking thread out
and allow other threads to execute. When the locking thread again
becomes the current thread, its non-preemptible status is maintained.
.. note::
Locking out the scheduler is a more efficient way for a preemptible thread
to inhibit preemption than changing its priority level to a negative value.
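A minimal sketch of this pattern follows; ``update_shared_state()`` is a
hypothetical critical operation.

.. code-block:: c

    void preemptible_thread(void)
    {
        ...
        k_sched_lock();

        /* other threads cannot preempt during this operation */
        update_shared_state();

        k_sched_unlock();
        ...
    }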
Busy Waiting
============
A thread can call :cpp:func:`k_busy_wait()` to perform a ``busy wait``
that delays its processing for a specified time period
*without* relinquishing the CPU to another ready thread.
A busy wait is typically used when the required delay is too short
to warrant having the scheduler context switch to another thread
and then back again.
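For instance, a short hardware settling delay might be handled as follows;
this is a sketch, assuming :cpp:func:`k_busy_wait()` takes its delay in
microseconds.

.. code-block:: c

    /* wait 10 microseconds without relinquishing the CPU */
    k_busy_wait(10);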
Suggested Uses
**************
Use cooperative threads for device drivers and other performance-critical work.
Use cooperative threads to implement mutual exclusion without the need
for a kernel object, such as a mutex.
Use preemptive threads to give priority to time-sensitive processing
over less time-sensitive processing.
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_NUM_COOP_PRIORITIES`
* :option:`CONFIG_NUM_PREEMPT_PRIORITIES`
* :option:`CONFIG_TIMESLICE_SIZE`
* :option:`CONFIG_TIMESLICE_PRIORITY`
APIs
****
The following thread scheduling-related APIs are provided by :file:`kernel.h`:
* :cpp:func:`k_current_get()`
* :cpp:func:`thread_priority_get()` [NON-EXISTENT]
* :cpp:func:`thread_priority_set()` [NON-EXISTENT]
* :cpp:func:`k_yield()`
* :cpp:func:`k_sleep()`
* :cpp:func:`k_wakeup()`
* :cpp:func:`k_busy_wait()`
* :cpp:func:`k_sched_time_slice_set()`
* :cpp:func:`k_workload_get()` [NON-EXISTENT]
* :cpp:func:`k_workload_time_slice_set()` [NON-EXISTENT]

View file

@ -0,0 +1,82 @@
.. _system_threads_v2:
System Threads
##############
A :dfn:`system thread` is a thread that is spawned by the kernel itself
to perform essential work. An application can sometimes utilize a system
thread to perform work, rather than spawning an additional thread.
.. contents::
:local:
:depth: 2
Concepts
********
The kernel spawns the following system threads:
**Main thread**
This thread performs kernel initialization, then calls the application's
:cpp:func:`main()` function. If the application does not supply a
:cpp:func:`main()` function, the main thread terminates once initialization
is complete.
By default, the main thread uses the highest configured preemptible thread
priority (i.e. 0). If the kernel is not configured to support preemptible
threads, the main thread uses the lowest configured cooperative thread
priority (i.e. -1).
**Idle thread**
This thread executes when there is no other work for the system to do.
If possible, the idle thread activates the board's power management support
to save power; otherwise, the idle thread simply performs a "do nothing"
loop.
The idle thread always uses the lowest configured thread priority.
If this makes it a cooperative thread, the idle thread repeatedly
yields the CPU to allow the application's other threads to run when
they need to.
.. note::
Additional system threads may also be spawned, depending on the kernel
and board configuration options specified by the application.
Implementation
**************
Offloading Work to the System Workqueue
=======================================
An ISR whose interrupt-related processing takes too long can hand the
remaining work off to the system workqueue, so that the work is completed
in a thread context instead. The following sketch illustrates the idea;
it assumes the ``k_work`` API (:c:type:`struct k_work`,
:cpp:func:`k_work_init()`, and :cpp:func:`k_work_submit()`), and the
device-specific names are hypothetical.
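.. code-block:: c

    struct k_work my_work;    /* work item processed by the system workqueue */

    void my_work_handler(struct k_work *work)
    {
        /* lengthy processing, now done in a thread context */
        ...
    }

    void my_isr(void *arg)
    {
        ...
        /* defer the lengthy processing to the system workqueue thread */
        k_work_submit(&my_work);
    }

    void my_init(void)
    {
        k_work_init(&my_work, my_work_handler);
        ...
    }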
Suggested Uses
**************
Use the main thread to perform thread-based processing in an application
that only requires a single thread, rather than defining an additional
application-specific thread.
Use the system workqueue to defer complex interrupt-related processing
from an ISR to a cooperative thread. This allows the interrupt-related
processing to be done promptly without compromising the system's ability
to respond to subsequent interrupts, and does not require the application
to define an additional thread to do the processing.
Configuration Options
*********************
Related configuration options:
* :option:`CONFIG_MAIN_THREAD_PRIORITY`
* :option:`CONFIG_MAIN_STACK_SIZE`
APIs
****
[Add workqueue APIs?]

View file

@ -0,0 +1,16 @@
.. _threads_v2:
Threads
#######
This section describes kernel services for creating, scheduling, and deleting
independently executable threads of instructions.
.. toctree::
:maxdepth: 1
lifecycle.rst
scheduling.rst
custom_data.rst
system_threads.rst

View file

@ -0,0 +1,140 @@
.. _clocks_v2:
Kernel Clocks
#############
Concepts
********
The kernel supports two distinct clocks.
* A 64-bit **system clock**, which is the foundation for the kernel's
time-based services. This clock is a counter measured in **ticks**,
and increments at a frequency determined by the application.
The kernel allows this clock to be accessed directly by reading its
tick count. It can also be accessed indirectly by using a kernel
timer or timeout capability.
* A 32-bit **hardware clock**, which is used as the source of the ticks
for the system clock. This clock is a counter measured in unspecified
units (called **cycles**), and increments at a frequency determined by
the hardware.
The kernel allows this clock to be accessed directly by reading its
cycle count.
The kernel also provides a number of variables that can be used
to convert the time units used by the clocks into standard time units
(such as seconds, milliseconds, or nanoseconds), and to convert between
the two types of clock time units.
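For example, the conversion variables listed at the end of this section
can be used to express a tick count in standard time units (a minimal
sketch):

.. code-block:: c

    int64_t ticks_spent;
    int64_t usec_spent;

    ...

    /* convert elapsed ticks to microseconds */
    usec_spent = ticks_spent * sys_clock_us_per_tick;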
Suggested Use
*************
Use the system clock for time-based processing that does not require
high precision, such as implementing time limits or time delays.
Use the hardware clock for time-based processing that requires higher
precision than the system clock can provide, such as fine-grained
time measurements.
.. note::
The high frequency of the hardware clock, combined with its 32-bit size,
means that counter rollover must be taken into account when taking
high-precision measurements over an extended period of time.
Configuration
*************
Use the :option:`CONFIG_SYS_CLOCK_TICKS_PER_SEC` configuration option
to specify how many ticks occur every second. Setting this value
to zero disables all system clock and hardware clock capabilities.
.. note::
Making the system clock frequency value larger allows the system clock
to provide finer-grained timing, but also increases the amount of work
the kernel has to do to process ticks (since they occur more frequently).
Examples
********
Measuring Time with Normal Precision
====================================
This code uses the system clock to determine how many ticks have elapsed
between two points in time.
.. code-block:: c
int64_t time_stamp;
int64_t ticks_spent;
/* capture initial time stamp */
time_stamp = sys_tick_get();
/* do work for some (extended) period of time */
...
/* compute how long the work took & update time stamp */
ticks_spent = sys_tick_delta(&time_stamp);
Measuring Time with High Precision
==================================
This code uses the hardware clock to determine how many ticks have elapsed
between two points in time.
.. code-block:: c
uint32_t start_time;
uint32_t stop_time;
uint32_t cycles_spent;
uint32_t nanoseconds_spent;
/* capture initial time stamp */
start_time = sys_cycle_get_32();
/* do work for some (short) period of time */
...
/* capture final time stamp */
stop_time = sys_cycle_get_32();
/* compute how long the work took (assumes no counter rollover) */
cycles_spent = stop_time - start_time;
nanoseconds_spent = SYS_CLOCK_HW_CYCLES_TO_NS(cycles_spent);
APIs
****
The following kernel clock APIs are provided by :file:`kernel.h`:
:cpp:func:`sys_tick_get()`, :cpp:func:`sys_tick_get_32()`
Read the system clock.
:cpp:func:`sys_tick_delta()`, :cpp:func:`sys_tick_delta_32()`
Compute the elapsed time since an earlier system clock reading.
:cpp:func:`sys_cycle_get_32()`
Read hardware clock.
The following kernel clock variables are provided by :file:`kernel.h`:
:c:data:`sys_clock_ticks_per_sec`
The number of system clock ticks in a single second.
:c:data:`sys_clock_hw_cycles_per_sec`
The number of hardware clock cycles in a single second.
:c:data:`sys_clock_us_per_tick`
The number of microseconds in a single system clock tick.
:c:data:`sys_clock_hw_cycles_per_tick`
The number of hardware clock cycles in a single system clock tick.

View file

@ -0,0 +1,211 @@
.. _microkernel_timers_v2:
Timer Services
##############
Concepts
********
A :dfn:`microkernel timer` allows a task to determine whether or not a
specified time limit has been reached while the task is busy performing
other work. The timer uses the kernel's system clock, measured in
ticks, to monitor the passage of time.
Any number of microkernel timers can be defined in a microkernel system.
Each timer has a unique identifier, which allows it to be distinguished
from other timers.
A task that wants to use a timer must first allocate an unused timer
from the set of microkernel timers. A task can allocate more than one timer
when it needs to monitor multiple time intervals simultaneously.
A timer is started by specifying:
* A :dfn:`duration` is the number of ticks the timer counts before it
expires for the first time.
* A :dfn:`period` is the number of ticks the timer counts before it expires
each time thereafter.
* The :dfn:`microkernel semaphore identifier` specifies the semaphore
  that the timer gives each time it expires.
The semaphore's state can be examined by the task any time the task needs to
determine whether or not the given time limit has been reached.
When the timer's period is set to zero, the timer stops automatically
after reaching the duration and giving the semaphore. When the period is set to
any number of ticks other than zero, the timer restarts automatically with
a new duration that is equal to its period. When this new duration has elapsed,
the timer gives the semaphore again and restarts. For example, a timer can be
set to expire after 5 ticks, and to then re-expire every 20 ticks thereafter,
resulting in the semaphore being given 3 times after 45 ticks have elapsed.
.. note::
Care must be taken when specifying the duration of a microkernel timer.
The first tick measured by the timer after it is started will be
less than a full-tick interval. For example, when the system clock period
is 10 milliseconds, starting a timer that expires after 1 tick will result
in the semaphore being given anywhere from a fraction of a millisecond
later to just slightly less than 10 milliseconds later. To ensure that a
timer doesn't expire for at least ``N`` ticks, it is necessary to specify
a duration of ``N+1`` ticks. This adjustment is not required when specifying
the period of a timer, which always corresponds to full-tick intervals.
A running microkernel timer can be cancelled or restarted by a task prior to
its expiration. Cancelling a timer that has already expired does not affect
the state of the associated semaphore. Likewise, restarting a timer that has
already expired is equivalent to stopping the timer and starting it afresh.
When a task no longer needs a timer it should free the timer. This makes
the timer available for reallocation.
Purpose
*******
Use a microkernel timer to determine whether or not a specified number of
system clock ticks have elapsed while the task is busy performing other work.
.. note::
If a task has no other work to perform while waiting for time to pass
it can simply call :cpp:func:`task_sleep()`.
.. note::
The microkernel provides additional APIs that allow a task to monitor
both the system clock and the higher-precision hardware clock, without
using a microkernel timer.
Usage
*****
Configuring Microkernel Timers
==============================
Set the :option:`CONFIG_NUM_TIMER_PACKETS` configuration option to
specify the number of timer-related command packets available in the
application. This value should be **equal to** or **greater than** the
sum of the following quantities:
* The number of microkernel timers.
* The number of tasks.
.. note::
Unlike most other microkernel object types, microkernel timers are defined
as a group using a configuration option, rather than as individual public
objects in an MDEF or private objects in a source file.
Example: Allocating a Microkernel Timer
=======================================
This code allocates an unused timer.
.. code-block:: c
ktimer_t timer_id;
timer_id = task_timer_alloc();
Example: Starting a One Shot Microkernel Timer
==============================================
This code uses a timer to limit the amount of time a task spends on gathering
data. It works by monitoring the status of a microkernel semaphore that is set
when the timer expires. Since the timer is started with a period of zero, it
stops automatically once it expires.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire in 10 ticks */
task_timer_start(timer_id, 10, 0, my_sem);
/* gather data until timer expires */
do {
...
} while (task_sem_take(my_sem, TICKS_NONE) != RC_OK);
/* process the new data */
...
Example: Starting a Periodic Microkernel Timer
==============================================
This code is similar to the previous example, except that the timer
automatically restarts every time it expires. This approach eliminates
the overhead of having the task explicitly issue a request to
reactivate the timer.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire every 10 ticks */
task_timer_start(timer_id, 10, 10, my_sem);
while (1) {
/* gather data until timer expires */
do {
...
} while (task_sem_take(my_sem, TICKS_NONE) != RC_OK);
/* process the new data, then loop around to get more */
...
}
Example: Cancelling a Microkernel Timer
=======================================
This code illustrates how an active timer can be stopped prematurely.
.. code-block:: c
ktimer_t timer_id;
ksem_t my_sem;
...
/* set timer to expire in 10 ticks */
task_timer_start(timer_id, 10, 0, my_sem);
/* do work while waiting for input to arrive */
...
/* now have input, so stop the timer if it is still running */
task_timer_stop(timer_id);
/* check to see if the timer expired before it was stopped */
if (task_sem_take(my_sem, TICKS_NONE) == RC_OK) {
printf("Warning: Input took too long to arrive!");
}
Example: Freeing a Microkernel Timer
====================================
This code allows a task to relinquish a previously-allocated timer
so it can be used by other tasks.
.. code-block:: c
task_timer_free(timer_id);
APIs
****
The following microkernel timer APIs are provided by :file:`microkernel.h`:
:cpp:func:`task_timer_alloc()`
Allocates an unused timer.
:cpp:func:`task_timer_start()`
Starts a timer.
:cpp:func:`task_timer_restart()`
Restarts a timer.
:cpp:func:`task_timer_stop()`
Cancels a timer.
:cpp:func:`task_timer_free()`
Marks timer as unused.

View file

@ -0,0 +1,179 @@
.. _nanokernel_timers_v2:
Timer Services
##############
Concepts
********
The nanokernel's :dfn:`timer` object type uses the kernel's system clock to
monitor the passage of time, as measured in ticks. It is mainly intended for use
by fibers.
A *nanokernel timer* allows a fiber or task to determine whether or not a
specified time limit has been reached while the thread itself is busy performing
other work. A thread can use more than one timer when it needs to monitor multiple
time intervals simultaneously.
A nanokernel timer points to a *user data structure* that is supplied by the
thread that uses it; this pointer is returned when the timer expires. The user
data structure must be at least 4 bytes long and aligned on a 4-byte boundary,
as the kernel reserves the first 32 bits of this area for its own use. Any
remaining bytes of this area can be used to hold data that is helpful to the
thread that uses the timer.
Any number of nanokernel timers can be defined. Each timer is a distinct
variable of type :c:type:`struct nano_timer`, and is referenced using a pointer
to that variable. A timer must be initialized with its user data structure
before it can be used.
A nanokernel timer is started by specifying a *duration*, which is the number
of ticks the timer counts before it expires.
.. note::
Care must be taken when specifying the duration of a nanokernel timer,
since the first tick measured by the timer after it is started will be
less than a full tick interval. For example, when the system clock period
is 10 milliseconds, starting a timer that expires after 1 tick will result
in the timer expiring anywhere from a fraction of a millisecond
later to just slightly less than 10 milliseconds later. To ensure that
a timer doesn't expire for at least ``N`` ticks it is necessary to specify
a duration of ``N+1`` ticks.
Once started, a nanokernel timer can be tested in either a non-blocking or
blocking manner to allow a thread to determine if the timer has expired.
If the timer has expired, the kernel returns the pointer to the user data
structure. If the timer has not expired, the kernel either returns
:c:macro:`NULL` (for a non-blocking test), or it waits for the timer to expire
(for a blocking test).
.. note::
The nanokernel does not allow more than one thread to wait on a nanokernel
timer at any given time. If a second thread starts waiting, only the first
waiting thread wakes up when the timer expires. The second thread continues
waiting.
A task that waits on a nanokernel timer does a ``busy wait``. This is
not an issue for a nanokernel application's background task; however, in
a microkernel application, a task that waits on a nanokernel timer remains
the *current task* and prevents other tasks of equal or lower priority
from doing useful work.
A nanokernel timer can be cancelled after it has been started. Cancelling
a timer while it is still running causes the timer to expire immediately,
thereby unblocking any thread waiting on the timer. Cancelling a timer
that has already expired has no effect on the timer.
A nanokernel timer can be reused once it has expired, but must **not** be
restarted while it is still running. If desired, a timer can be re-initialized
with a different user data structure before it is started again.
Purpose
*******
Use a nanokernel timer to determine whether or not a specified number
of system clock ticks have elapsed while a fiber or task is busy performing
other work.
.. note::
If a fiber or task has no other work to perform while waiting
for time to pass, it can simply call :cpp:func:`fiber_sleep()`
or :cpp:func:`task_sleep()`, respectively.
.. note::
The kernel provides additional APIs that allow a fiber or task to monitor
the system clock, as well as the higher precision hardware clock,
without using a nanokernel timer.
Usage
*****
Example: Initializing a Nanokernel Timer
========================================
This code initializes a nanokernel timer.
.. code-block:: c
struct nano_timer my_timer;
uint32_t data_area[3] = { 0, 1111, 2222 };
nano_timer_init(&my_timer, data_area);
Example: Starting a Nanokernel Timer
====================================
This code uses the above nanokernel timer to limit the amount of time a fiber
spends gathering data before processing it.
.. code-block:: c
/* set timer to expire in 10 ticks */
nano_fiber_timer_start(&my_timer, 10);
/* gather data until timer expires */
do {
...
} while (nano_fiber_timer_test(&my_timer, TICKS_NONE) == NULL);
/* process the data */
...
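Alternatively, the fiber can block until the timer expires by using a
blocking test. The following sketch assumes :c:macro:`TICKS_UNLIMITED`
specifies an unlimited wait.

.. code-block:: c

    /* set timer to expire in 10 ticks */
    nano_fiber_timer_start(&my_timer, 10);

    /* do work that should finish before the timer expires */
    ...

    /* block until the full 10 ticks have elapsed */
    nano_fiber_timer_test(&my_timer, TICKS_UNLIMITED);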
Example: Cancelling a Nanokernel Timer
======================================
This code illustrates how an active nanokernel timer can be stopped prematurely.
.. code-block:: c
struct nano_timer my_timer;
uint32_t dummy;
...
/* set timer to expire in 10 ticks */
nano_timer_init(&my_timer, &dummy);
nano_fiber_timer_start(&my_timer, 10);
/* do work while waiting for an input signal to arrive */
...
/* now have input signal, so stop the timer if it is still running */
nano_fiber_timer_stop(&my_timer);
/* check to see if the timer expired before it was stopped */
if (nano_fiber_timer_test(&my_timer, TICKS_NONE) != NULL) {
printf("Warning: Input signal took too long to arrive!");
}
APIs
****
APIs for a nanokernel timer provided by :file:`nanokernel.h`
============================================================
:cpp:func:`nano_timer_init()`
Initialize a timer.
:cpp:func:`nano_task_timer_start()`, :cpp:func:`nano_fiber_timer_start()`,
:cpp:func:`nano_isr_timer_start()`, :cpp:func:`nano_timer_start()`
Start a timer.
:cpp:func:`nano_task_timer_test()`, :cpp:func:`nano_fiber_timer_test()`,
:cpp:func:`nano_isr_timer_test()`, :cpp:func:`nano_timer_test()`
Wait or test for timer expiration.
:cpp:func:`nano_task_timer_stop()`, :cpp:func:`nano_fiber_timer_stop()`,
:cpp:func:`nano_isr_timer_stop()`, :cpp:func:`nano_timer_stop()`
Force timer expiration, if not already expired.
:cpp:func:`nano_timer_ticks_remain()`
Return timer ticks before timer expiration.

View file

@ -0,0 +1,14 @@
.. _timing_v2:
Timing [TBD]
############
This section describes the timing-related services available
in the kernel.
.. toctree::
:maxdepth: 2
clocks.rst
nanokernel_timers.rst
microkernel_timers.rst