docs: kernel: thread documentation enhancements and cleanup

Moving all thread docs into one page was too much. Split the section,
remove redundant and unnecessary sections, and move some thread-related
documentation (thread states and priorities) from the scheduling page to
the threads page.

Add a new figure for thread states.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Commit d803a534c0 by Anas Nashif, 2019-09-19 20:01:27 -04:00
7 changed files with 481 additions and 470 deletions


@@ -32,6 +32,8 @@ synchronization.
threads/index.rst
scheduling/index.rst
threads/system_threads.rst
threads/workqueue.rst
other/interrupts.rst
other/polling.rst
synchronization/semaphores.rst


@@ -17,70 +17,6 @@ or when execution of the current thread is supplanted by an ISR,
the kernel first saves the current thread's CPU register values.
These register values get restored when the thread later resumes execution.
Thread States
=============
A thread that has no factors that prevent its execution is deemed
to be **ready**, and is eligible to be selected as the current thread.
A thread that has one or more factors that prevent its execution
is deemed to be **unready**, and cannot be selected as the current thread.
The following factors make a thread unready:
* The thread has not been started.
* The thread is waiting on for a kernel object to complete an operation.
(For example, the thread is taking a semaphore that is unavailable.)
* The thread is waiting for a timeout to occur.
* The thread has been suspended.
* The thread has terminated or aborted.
Thread Priorities
=================
A thread's priority is an integer value, and can be either negative or
non-negative.
Numerically lower priorities takes precedence over numerically higher values.
For example, the scheduler gives thread A of priority 4 *higher* priority
over thread B of priority 7; likewise thread C of priority -2 has higher
priority than both thread A and thread B.
The scheduler distinguishes between two classes of threads,
based on each thread's priority.
* A :dfn:`cooperative thread` has a negative priority value.
Once it becomes the current thread, a cooperative thread remains
the current thread until it performs an action that makes it unready.
.. image:: cooperative.svg
:align: center
* A :dfn:`preemptible thread` has a non-negative priority value.
Once it becomes the current thread, a preemptible thread may be supplanted
at any time if a cooperative thread, or a preemptible thread of higher
or equal priority, becomes ready.
.. image:: preemptive.svg
:align: center
A thread's initial priority value can be altered up or down after the thread
has been started. Thus it possible for a preemptible thread to become
a cooperative thread, and vice versa, by changing its priority.
The kernel supports a virtually unlimited number of thread priority levels.
The configuration options :option:`CONFIG_NUM_COOP_PRIORITIES` and
:option:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
levels for each class of thread, resulting the following usable priority
ranges:
* cooperative threads: (-:option:`CONFIG_NUM_COOP_PRIORITIES`) to -1
* preemptive threads: 0 to (:option:`CONFIG_NUM_PREEMPT_PRIORITIES` - 1)
.. image:: priorities.svg
:align: center
For example, configuring 5 cooperative priorities and 10 preemptive priorities
results in the ranges -5 to -1 and 0 to 9, respectively.
Scheduling Algorithm
====================
@@ -104,6 +40,10 @@ Consequently, if a cooperative thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.
.. image:: cooperative.svg
:align: center
To overcome such problems, a cooperative thread can voluntarily relinquish
the CPU from time to time to permit other threads to execute.
A thread can relinquish the CPU in two ways:
@@ -131,6 +71,10 @@ Consequently, if a preemptive thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of equal priority.
.. image:: preemptive.svg
:align: center
To overcome such problems, a preemptive thread can perform cooperative
time slicing (as described above), or the scheduler's time slicing capability
can be used to allow other threads of the same priority to execute.


@@ -1,26 +1,18 @@
.. _threads_v2:
Threads
^^^^^^^
#######
.. contents::
:local:
:depth: 2
This section describes kernel services for creating, scheduling, and deleting
independently executable threads of instructions.
.. contents::
:local:
:depth: 1
.. _lifecycle_v2:
Lifecycle
#########
A :dfn:`thread` is a kernel object that is used for application processing
that is too lengthy or too complex to be performed by an ISR.
Concepts
********
Any number of threads can be defined by an application. Each thread is
referenced by a :dfn:`thread id` that is assigned when the thread is spawned.
@@ -53,6 +45,11 @@ A thread has the following key properties:
peripherals. User mode threads have a reduced set of privileges.
This depends on the :option:`CONFIG_USERSPACE` option. See :ref:`usermode`.
.. _lifecycle_v2:
Lifecycle
***********
.. _spawning_thread:
Thread Creation
@@ -74,7 +71,7 @@ started. A thread whose delayed start was successfully canceled must be
re-spawned before it can be used.
Thread Termination
==================
===================
Once a thread is started it typically executes forever. However, a thread may
synchronously end its execution by returning from its entry point function.
@@ -107,7 +104,7 @@ owned by an aborted thread.
ability to respawn a thread that aborts.
Thread Suspension
=================
==================
A thread can be prevented from executing for an indefinite period of time
if it becomes **suspended**. The function :cpp:func:`k_thread_suspend()`
@@ -123,10 +120,79 @@ Once suspended, a thread cannot be scheduled until another thread calls
a thread since a sleeping thread becomes executable automatically when the
time limit is reached.
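As a rough sketch (assuming a thread id ``my_tid`` obtained from
:cpp:func:`k_thread_create()`, as in the spawning example later on this page),
suspending and later resuming a thread looks like this:
.. code-block:: c
/* Sketch only: my_tid is a k_tid_t returned by k_thread_create(). */
k_thread_suspend(my_tid);   /* thread can no longer be scheduled */

/* ... later, possibly from another thread ... */
k_thread_resume(my_tid);    /* thread becomes eligible for scheduling again */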
.. _thread_states:
Thread States
*************
A thread that has no factors that prevent its execution is deemed
to be **ready**, and is eligible to be selected as the current thread.
A thread that has one or more factors that prevent its execution
is deemed to be **unready**, and cannot be selected as the current thread.
The following factors make a thread unready:
* The thread has not been started.
* The thread is waiting for a kernel object to complete an operation.
(For example, the thread is taking a semaphore that is unavailable.)
* The thread is waiting for a timeout to occur.
* The thread has been suspended.
* The thread has terminated or aborted.
.. image:: thread_states.svg
:align: center
.. _thread_priorities:
Thread Priorities
******************
A thread's priority is an integer value, and can be either negative or
non-negative.
Numerically lower priorities take precedence over numerically higher values.
For example, the scheduler gives thread A of priority 4 *higher* priority
than thread B of priority 7; likewise thread C of priority -2 has higher
priority than both thread A and thread B.
The scheduler distinguishes between two classes of threads,
based on each thread's priority.
* A :dfn:`cooperative thread` has a negative priority value.
Once it becomes the current thread, a cooperative thread remains
the current thread until it performs an action that makes it unready.
* A :dfn:`preemptible thread` has a non-negative priority value.
Once it becomes the current thread, a preemptible thread may be supplanted
at any time if a cooperative thread, or a preemptible thread of higher
or equal priority, becomes ready.
A thread's initial priority value can be altered up or down after the thread
has been started. Thus it is possible for a preemptible thread to become
a cooperative thread, and vice versa, by changing its priority.
The kernel supports a virtually unlimited number of thread priority levels.
The configuration options :option:`CONFIG_NUM_COOP_PRIORITIES` and
:option:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
levels for each class of thread, resulting in the following usable priority
ranges:
* cooperative threads: (-:option:`CONFIG_NUM_COOP_PRIORITIES`) to -1
* preemptive threads: 0 to (:option:`CONFIG_NUM_PREEMPT_PRIORITIES` - 1)
.. image:: priorities.svg
:align: center
For example, configuring 5 cooperative priorities and 10 preemptive priorities
results in the ranges -5 to -1 and 0 to 9, respectively.
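As an illustrative sketch (not part of the original page), the following
declares one thread in each priority class at compile time, reusing the
priority values from the example above; the entry functions and the 1024-byte
stacks are placeholders:
.. code-block:: c
/* Sketch only: coop_entry() and preempt_entry() are hypothetical entry
 * functions defined elsewhere.
 */

/* Priority -2: a cooperative thread; it runs until it makes itself unready. */
K_THREAD_DEFINE(thread_c, 1024, coop_entry, NULL, NULL, NULL,
                -2, 0, K_NO_WAIT);

/* Priority 7: a preemptible thread; it may be supplanted by cooperative
 * threads or by preemptible threads of higher or equal priority.
 */
K_THREAD_DEFINE(thread_b, 1024, preempt_entry, NULL, NULL, NULL,
                7, 0, K_NO_WAIT);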
.. _thread_options_v2:
Thread Options
==============
***************
The kernel supports a small set of :dfn:`thread options` that allow a thread
to receive special treatment under specific circumstances. The set of options
@@ -172,6 +238,56 @@ The following thread options are supported.
kernel object permissions that the parent thread had, except the parent
thread object. See :ref:`usermode`.
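For any of the options above, the value is passed through the options
argument when the thread is spawned. A hedged sketch (reusing the placeholder
names from the spawning example later on this page) of creating a user-mode
thread that inherits its parent's object permissions:
.. code-block:: c
/* Sketch only: my_thread_data, my_stack_area, my_entry_point and MY_PRIORITY
 * are the same placeholders used in the spawning example below.
 */
k_tid_t tid = k_thread_create(&my_thread_data, my_stack_area,
                              K_THREAD_STACK_SIZEOF(my_stack_area),
                              my_entry_point,
                              NULL, NULL, NULL,
                              MY_PRIORITY, K_USER | K_INHERIT_PERMS,
                              K_NO_WAIT);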
.. _custom_data_v2:
Thread Custom Data
******************
Every thread has a 32-bit :dfn:`custom data` area, accessible only by
the thread itself, and may be used by the application for any purpose
it chooses. The default custom data value for a thread is zero.
.. note::
Custom data support is not available to ISRs because they operate
within a single shared kernel interrupt handling context.
By default, thread custom data support is disabled. The configuration option
:option:`CONFIG_THREAD_CUSTOM_DATA` can be used to enable support.
The :cpp:func:`k_thread_custom_data_set()` and
:cpp:func:`k_thread_custom_data_get()` functions are used to write and read
a thread's custom data, respectively. A thread can only access its own
custom data, and not that of another thread.
The following code uses the custom data feature to record the number of times
each thread calls a specific routine.
.. note::
Obviously, only a single routine can use this technique,
since it monopolizes the use of the custom data feature.
.. code-block:: c
int call_tracking_routine(void)
{
u32_t call_count;
if (k_is_in_isr()) {
/* ignore any call made by an ISR */
} else {
call_count = (u32_t)k_thread_custom_data_get();
call_count++;
k_thread_custom_data_set((void *)call_count);
}
/* do rest of routine's processing */
...
}
Use thread custom data to allow a routine to access thread-specific information,
by using the custom data as a pointer to a data structure owned by the thread.
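A hedged sketch of that pattern (the structure and helper functions below are
invented for illustration and are not kernel APIs):
.. code-block:: c
/* Sketch only: per-thread bookkeeping reached through the custom data slot. */
struct my_thread_info {
    u32_t error_count;
    char tag[8];
};

/* Called by a thread to publish its own info structure. */
void my_thread_info_attach(struct my_thread_info *info)
{
    k_thread_custom_data_set(info);
}

/* Called later by the same thread, e.g. from a shared library routine. */
struct my_thread_info *my_thread_info_get(void)
{
    return k_thread_custom_data_get();
}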
Implementation
**************
@@ -204,7 +320,7 @@ The following code spawns a thread that starts immediately.
NULL, NULL, NULL,
MY_PRIORITY, 0, K_NO_WAIT);
Alternatively, a thread can be spawned at compile time by calling
Alternatively, a thread can be declared at compile time by calling
:c:macro:`K_THREAD_DEFINE`. Observe that the macro defines
the stack area, control block, and thread id variables automatically.
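A sketch of the equivalent compile-time declaration, mirroring the runtime
example above (the arguments are the stack size, the entry point and its three
arguments, the priority, the options, and the start delay):
.. code-block:: c
K_THREAD_DEFINE(my_tid, MY_STACK_SIZE,
                my_entry_point, NULL, NULL, NULL,
                MY_PRIORITY, 0, K_NO_WAIT);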
@@ -289,399 +405,12 @@ Use threads to handle processing that cannot be handled in an ISR.
Use separate threads to handle logically distinct processing operations
that can execute in parallel.
.. _custom_data_v2:
Custom Data
###########
A thread's :dfn:`custom data` is a 32-bit, thread-specific value that may be
used by an application for any purpose.
Concepts
********
Every thread has a 32-bit custom data area.
The custom data is accessible only by the thread itself, and may be used by the
application for any purpose it chooses.
The default custom data for a thread is zero.
.. note::
Custom data support is not available to ISRs because they operate
within a single shared kernel interrupt handling context.
Implementation
**************
Using Custom Data
=================
By default, thread custom data support is disabled. The configuration option
:option:`CONFIG_THREAD_CUSTOM_DATA` can be used to enable support.
The :cpp:func:`k_thread_custom_data_set()` and
:cpp:func:`k_thread_custom_data_get()` functions are used to write and read
a thread's custom data, respectively. A thread can only access its own
custom data, and not that of another thread.
The following code uses the custom data feature to record the number of times
each thread calls a specific routine.
.. note::
Obviously, only a single routine can use this technique,
since it monopolizes the use of the custom data feature.
.. code-block:: c
int call_tracking_routine(void)
{
u32_t call_count;
if (k_is_in_isr()) {
/* ignore any call made by an ISR */
} else {
call_count = (u32_t)k_thread_custom_data_get();
call_count++;
k_thread_custom_data_set((void *)call_count);
}
/* do rest of routine's processing */
...
}
Suggested Uses
**************
Use thread custom data to allow a routine to access thread-specific information,
by using the custom data as a pointer to a data structure owned by the thread.
.. _system_threads_v2:
System Threads
##############
A :dfn:`system thread` is a thread that the kernel spawns automatically
during system initialization.
Concepts
********
The kernel spawns the following system threads.
**Main thread**
This thread performs kernel initialization, then calls the application's
:cpp:func:`main()` function (if one is defined).
By default, the main thread uses the highest configured preemptible thread
priority (i.e. 0). If the kernel is not configured to support preemptible
threads, the main thread uses the lowest configured cooperative thread
priority (i.e. -1).
The main thread is an essential thread while it is performing kernel
initialization or executing the application's :cpp:func:`main()` function;
this means a fatal system error is raised if the thread aborts. If
:cpp:func:`main()` is not defined, or if it executes and then does a normal
return, the main thread terminates normally and no error is raised.
**Idle thread**
This thread executes when there is no other work for the system to do.
If possible, the idle thread activates the board's power management support
to save power; otherwise, the idle thread simply performs a "do nothing"
loop. The idle thread remains in existence as long as the system is running
and never terminates.
The idle thread always uses the lowest configured thread priority.
If this makes it a cooperative thread, the idle thread repeatedly
yields the CPU to allow the application's other threads to run when
they need to.
The idle thread is an essential thread, which means a fatal system error
is raised if the thread aborts.
Additional system threads may also be spawned, depending on the kernel
and board configuration options specified by the application. For example,
enabling the system workqueue spawns a system thread
that services the work items submitted to it. (See :ref:`workqueues_v2`.)
Implementation
**************
Writing a main() function
=========================
An application-supplied :cpp:func:`main()` function begins executing once
kernel initialization is complete. The kernel does not pass any arguments
to the function.
The following code outlines a trivial :cpp:func:`main()` function.
The function used by a real application can be as complex as needed.
.. code-block:: c
void main(void)
{
/* initialize a semaphore */
...
/* register an ISR that gives the semaphore */
...
/* monitor the semaphore forever */
while (1) {
/* wait for the semaphore to be given by the ISR */
...
/* do whatever processing is now needed */
...
}
}
Suggested Uses
**************
Use the main thread to perform thread-based processing in an application
that only requires a single thread, rather than defining an additional
application-specific thread.
.. _workqueues_v2:
Workqueue Threads
#################
A :dfn:`workqueue` is a kernel object that uses a dedicated thread to process
work items in a first in, first out manner. Each work item is processed by
calling the function specified by the work item. A workqueue is typically
used by an ISR or a high-priority thread to offload non-urgent processing
to a lower-priority thread so it does not impact time-sensitive processing.
Concepts
********
Any number of workqueues can be defined. Each workqueue is referenced by its
memory address.
A workqueue has the following key properties:
* A **queue** of work items that have been added, but not yet processed.
* A **thread** that processes the work items in the queue. The priority of the
thread is configurable, allowing it to be either cooperative or preemptive
as required.
A workqueue must be initialized before it can be used. This sets its queue
to empty and spawns the workqueue's thread.
Work Item Lifecycle
===================
Any number of **work items** can be defined. Each work item is referenced
by its memory address.
A work item has the following key properties:
* A **handler function**, which is the function executed by the workqueue's
thread when the work item is processed. This function accepts a single
argument, which is the address of the work item itself.
* A **pending flag**, which is used by the kernel to signify that the
work item is currently a member of a workqueue's queue.
* A **queue link**, which is used by the kernel to link a pending work
item to the next pending work item in a workqueue's queue.
A work item must be initialized before it can be used. This records the work
item's handler function and marks it as not pending.
A work item may be **submitted** to a workqueue by an ISR or a thread.
Submitting a work item appends the work item to the workqueue's queue.
Once the workqueue's thread has processed all of the preceding work items
in its queue the thread will remove a pending work item from its queue and
invoke the work item's handler function. Depending on the scheduling priority
of the workqueue's thread, and the work required by other items in the queue,
a pending work item may be processed quickly or it may remain in the queue
for an extended period of time.
A handler function can utilize any kernel API available to threads. However,
operations that are potentially blocking (e.g. taking a semaphore) must be
used with care, since the workqueue cannot process subsequent work items in
its queue until the handler function finishes executing.
The single argument that is passed to a handler function can be ignored if
it is not required. If the handler function requires additional information
about the work it is to perform, the work item can be embedded in a larger
data structure. The handler function can then use the argument value to compute
the address of the enclosing data structure, and thereby obtain access to the
additional information it needs.
A work item is typically initialized once and then submitted to a specific
workqueue whenever work needs to be performed. If an ISR or a thread attempts
to submit a work item that is already pending, the work item is not affected;
the work item remains in its current place in the workqueue's queue, and
the work is only performed once.
A handler function is permitted to re-submit its work item argument
to the workqueue, since the work item is no longer pending at that time.
This allows the handler to execute work in stages, without unduly delaying
the processing of other work items in the workqueue's queue.
.. important::
A pending work item *must not* be altered until the item has been processed
by the workqueue thread. This means a work item must not be re-initialized
while it is pending. Furthermore, any additional information the work item's
handler function needs to perform its work must not be altered until
the handler function has finished executing.
Delayed Work
============
An ISR or a thread may need to schedule a work item that is to be processed
only after a specified period of time, rather than immediately. This can be
done by submitting a **delayed work item** to a workqueue, rather than a
standard work item.
A delayed work item is a standard work item that has the following added
properties:
* A **delay** specifying the time interval to wait before the work item
is actually submitted to a workqueue's queue.
* A **workqueue indicator** that identifies the workqueue the work item
is to be submitted to.
A delayed work item is initialized and submitted to a workqueue in a similar
manner to a standard work item, although different kernel APIs are used.
When the submit request is made the kernel initiates a timeout mechanism
that is triggered after the specified delay has elapsed. Once the timeout
occurs the kernel submits the delayed work item to the specified workqueue,
where it remains pending until it is processed in the standard manner.
An ISR or a thread may **cancel** a delayed work item it has submitted,
providing the work item's timeout is still counting down. The work item's
timeout is aborted and the specified work is not performed.
Attempting to cancel a delayed work item once its timeout has expired has
no effect on the work item; the work item remains pending in the workqueue's
queue, unless the work item has already been removed and processed by the
workqueue's thread. Consequently, once a work item's timeout has expired
the work item is always processed by the workqueue and cannot be canceled.
System Workqueue
================
The kernel defines a workqueue known as the *system workqueue*, which is
available to any application or kernel code that requires workqueue support.
The system workqueue is optional, and only exists if the application makes
use of it.
.. important::
Additional workqueues should only be defined when it is not possible
to submit new work items to the system workqueue, since each new workqueue
incurs a significant cost in memory footprint. A new workqueue can be
justified if it is not possible for its work items to co-exist with
existing system workqueue work items without an unacceptable impact;
for example, if the new work items perform blocking operations that
would delay other system workqueue processing to an unacceptable degree.
Implementation
**************
Defining a Workqueue
====================
A workqueue is defined using a variable of type :c:type:`struct k_work_q`.
The workqueue is initialized by defining the stack area used by its thread
and then calling :cpp:func:`k_work_q_start()`. The stack area must be defined
using :c:macro:`K_THREAD_STACK_DEFINE` to ensure it is properly set up in
memory.
The following code defines and initializes a workqueue.
.. code-block:: c
#define MY_STACK_SIZE 512
#define MY_PRIORITY 5
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_work_q my_work_q;
k_work_q_start(&my_work_q, my_stack_area,
K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY);
Submitting a Work Item
======================
A work item is defined using a variable of type :c:type:`struct k_work`.
It must then be initialized by calling :cpp:func:`k_work_init()`.
An initialized work item can be submitted to the system workqueue by
calling :cpp:func:`k_work_submit()`, or to a specified workqueue by
calling :cpp:func:`k_work_submit_to_queue()`.
The following code demonstrates how an ISR can offload the printing
of error messages to the system workqueue. Note that if the ISR attempts
to resubmit the work item while it is still pending, the work item is left
unchanged and the associated error message will not be printed.
.. code-block:: c
struct device_info {
struct k_work work;
char name[16]
} my_device;
void my_isr(void *arg)
{
...
if (error detected) {
k_work_submit(&my_device.work);
}
...
}
void print_error(struct k_work *item)
{
struct device_info *the_device =
CONTAINER_OF(item, struct device_info, work);
printk("Got error on device %s\n", the_device->name);
}
/* initialize name info for a device */
strcpy(my_device.name, "FOO_dev");
/* initialize work item for printing device's error messages */
k_work_init(&my_device.work, print_error);
/* install my_isr() as interrupt handler for the device (not shown) */
...
Submitting a Delayed Work Item
==============================
A delayed work item is defined using a variable of type
:c:type:`struct k_delayed_work`. It must then be initialized by calling
:cpp:func:`k_delayed_work_init()`.
An initialized delayed work item can be submitted to the system workqueue by
calling :cpp:func:`k_delayed_work_submit()`, or to a specified workqueue by
calling :cpp:func:`k_delayed_work_submit_to_queue()`. A delayed work item
that has been submitted but not yet consumed by its workqueue can be canceled
by calling :cpp:func:`k_delayed_work_cancel()`.
Suggested Uses
**************
Use the system workqueue to defer complex interrupt-related processing
from an ISR to a cooperative thread. This allows the interrupt-related
processing to be done promptly without compromising the system's ability
to respond to subsequent interrupts, and does not require the application
to define an additional thread to do the processing.
Configuration Options
#####################
**********************
Related configuration options:
* :option:`CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE`
* :option:`CONFIG_SYSTEM_WORKQUEUE_PRIORITY`
* :option:`CONFIG_MAIN_THREAD_PRIORITY`
* :option:`CONFIG_MAIN_STACK_SIZE`
* :option:`CONFIG_IDLE_STACK_SIZE`
@@ -696,7 +425,7 @@ Related configuration options:
API Reference
#############
**************
.. doxygengroup:: thread_apis
:project: Zephyr

(image file: 14 KiB before, 14 KiB after)


@@ -0,0 +1,88 @@
.. _system_threads_v2:
System Threads
##############
.. contents::
:local:
:depth: 2
A :dfn:`system thread` is a thread that the kernel spawns automatically
during system initialization.
The kernel spawns the following system threads:
**Main thread**
This thread performs kernel initialization, then calls the application's
:cpp:func:`main()` function (if one is defined).
By default, the main thread uses the highest configured preemptible thread
priority (i.e. 0). If the kernel is not configured to support preemptible
threads, the main thread uses the lowest configured cooperative thread
priority (i.e. -1).
The main thread is an essential thread while it is performing kernel
initialization or executing the application's :cpp:func:`main()` function;
this means a fatal system error is raised if the thread aborts. If
:cpp:func:`main()` is not defined, or if it executes and then does a normal
return, the main thread terminates normally and no error is raised.
**Idle thread**
This thread executes when there is no other work for the system to do.
If possible, the idle thread activates the board's power management support
to save power; otherwise, the idle thread simply performs a "do nothing"
loop. The idle thread remains in existence as long as the system is running
and never terminates.
The idle thread always uses the lowest configured thread priority.
If this makes it a cooperative thread, the idle thread repeatedly
yields the CPU to allow the application's other threads to run when
they need to.
The idle thread is an essential thread, which means a fatal system error
is raised if the thread aborts.
Additional system threads may also be spawned, depending on the kernel
and board configuration options specified by the application. For example,
enabling the system workqueue spawns a system thread
that services the work items submitted to it. (See :ref:`workqueues_v2`.)
Implementation
**************
Writing a main() function
=========================
An application-supplied :cpp:func:`main()` function begins executing once
kernel initialization is complete. The kernel does not pass any arguments
to the function.
The following code outlines a trivial :cpp:func:`main()` function.
The function used by a real application can be as complex as needed.
.. code-block:: c
void main(void)
{
/* initialize a semaphore */
...
/* register an ISR that gives the semaphore */
...
/* monitor the semaphore forever */
while (1) {
/* wait for the semaphore to be given by the ISR */
...
/* do whatever processing is now needed */
...
}
}
Suggested Uses
**************
Use the main thread to perform thread-based processing in an application
that only requires a single thread, rather than defining an additional
application-specific thread.

(new image file: 15 KiB; content diff suppressed because its lines are too long)

@@ -0,0 +1,245 @@
.. _workqueues_v2:
Workqueue Threads
#################
.. contents::
:local:
:depth: 1
A :dfn:`workqueue` is a kernel object that uses a dedicated thread to process
work items in a first in, first out manner. Each work item is processed by
calling the function specified by the work item. A workqueue is typically
used by an ISR or a high-priority thread to offload non-urgent processing
to a lower-priority thread so it does not impact time-sensitive processing.
Any number of workqueues can be defined. Each workqueue is referenced by its
memory address.
A workqueue has the following key properties:
* A **queue** of work items that have been added, but not yet processed.
* A **thread** that processes the work items in the queue. The priority of the
thread is configurable, allowing it to be either cooperative or preemptive
as required.
A workqueue must be initialized before it can be used. This sets its queue
to empty and spawns the workqueue's thread.
Work Item Lifecycle
********************
Any number of **work items** can be defined. Each work item is referenced
by its memory address.
A work item has the following key properties:
* A **handler function**, which is the function executed by the workqueue's
thread when the work item is processed. This function accepts a single
argument, which is the address of the work item itself.
* A **pending flag**, which is used by the kernel to signify that the
work item is currently a member of a workqueue's queue.
* A **queue link**, which is used by the kernel to link a pending work
item to the next pending work item in a workqueue's queue.
A work item must be initialized before it can be used. This records the work
item's handler function and marks it as not pending.
A work item may be **submitted** to a workqueue by an ISR or a thread.
Submitting a work item appends the work item to the workqueue's queue.
Once the workqueue's thread has processed all of the preceding work items
in its queue the thread will remove a pending work item from its queue and
invoke the work item's handler function. Depending on the scheduling priority
of the workqueue's thread, and the work required by other items in the queue,
a pending work item may be processed quickly or it may remain in the queue
for an extended period of time.
A handler function can utilize any kernel API available to threads. However,
operations that are potentially blocking (e.g. taking a semaphore) must be
used with care, since the workqueue cannot process subsequent work items in
its queue until the handler function finishes executing.
The single argument that is passed to a handler function can be ignored if
it is not required. If the handler function requires additional information
about the work it is to perform, the work item can be embedded in a larger
data structure. The handler function can then use the argument value to compute
the address of the enclosing data structure, and thereby obtain access to the
additional information it needs.
A work item is typically initialized once and then submitted to a specific
workqueue whenever work needs to be performed. If an ISR or a thread attempts
to submit a work item that is already pending, the work item is not affected;
the work item remains in its current place in the workqueue's queue, and
the work is only performed once.
A handler function is permitted to re-submit its work item argument
to the workqueue, since the work item is no longer pending at that time.
This allows the handler to execute work in stages, without unduly delaying
the processing of other work items in the workqueue's queue.
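As a rough sketch of that staged approach (the enclosing structure and the
stage logic are invented for illustration):
.. code-block:: c
struct staged_work {
    struct k_work work;
    int stage;
};

void staged_handler(struct k_work *item)
{
    struct staged_work *sw = CONTAINER_OF(item, struct staged_work, work);

    if (sw->stage == 0) {
        /* ... do the first part of the job ... */
        sw->stage = 1;
        k_work_submit(&sw->work);  /* allowed: the item is not pending here */
    } else {
        /* ... finish the job ... */
        sw->stage = 0;
    }
}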
.. important::
A pending work item *must not* be altered until the item has been processed
by the workqueue thread. This means a work item must not be re-initialized
while it is pending. Furthermore, any additional information the work item's
handler function needs to perform its work must not be altered until
the handler function has finished executing.
Delayed Work
************
An ISR or a thread may need to schedule a work item that is to be processed
only after a specified period of time, rather than immediately. This can be
done by submitting a **delayed work item** to a workqueue, rather than a
standard work item.
A delayed work item is a standard work item that has the following added
properties:
* A **delay** specifying the time interval to wait before the work item
is actually submitted to a workqueue's queue.
* A **workqueue indicator** that identifies the workqueue the work item
is to be submitted to.
A delayed work item is initialized and submitted to a workqueue in a similar
manner to a standard work item, although different kernel APIs are used.
When the submit request is made the kernel initiates a timeout mechanism
that is triggered after the specified delay has elapsed. Once the timeout
occurs the kernel submits the delayed work item to the specified workqueue,
where it remains pending until it is processed in the standard manner.
An ISR or a thread may **cancel** a delayed work item it has submitted,
providing the work item's timeout is still counting down. The work item's
timeout is aborted and the specified work is not performed.
Attempting to cancel a delayed work item once its timeout has expired has
no effect on the work item; the work item remains pending in the workqueue's
queue, unless the work item has already been removed and processed by the
workqueue's thread. Consequently, once a work item's timeout has expired
the work item is always processed by the workqueue and cannot be canceled.
System Workqueue
*****************
The kernel defines a workqueue known as the *system workqueue*, which is
available to any application or kernel code that requires workqueue support.
The system workqueue is optional, and only exists if the application makes
use of it.
.. important::
Additional workqueues should only be defined when it is not possible
to submit new work items to the system workqueue, since each new workqueue
incurs a significant cost in memory footprint. A new workqueue can be
justified if it is not possible for its work items to co-exist with
existing system workqueue work items without an unacceptable impact;
for example, if the new work items perform blocking operations that
would delay other system workqueue processing to an unacceptable degree.
Implementation
**************
Defining a Workqueue
====================
A workqueue is defined using a variable of type :c:type:`struct k_work_q`.
The workqueue is initialized by defining the stack area used by its thread
and then calling :cpp:func:`k_work_q_start()`. The stack area must be defined
using :c:macro:`K_THREAD_STACK_DEFINE` to ensure it is properly set up in
memory.
The following code defines and initializes a workqueue.
.. code-block:: c
#define MY_STACK_SIZE 512
#define MY_PRIORITY 5
K_THREAD_STACK_DEFINE(my_stack_area, MY_STACK_SIZE);
struct k_work_q my_work_q;
k_work_q_start(&my_work_q, my_stack_area,
K_THREAD_STACK_SIZEOF(my_stack_area), MY_PRIORITY);
Submitting a Work Item
======================
A work item is defined using a variable of type :c:type:`struct k_work`.
It must then be initialized by calling :cpp:func:`k_work_init()`.
An initialized work item can be submitted to the system workqueue by
calling :cpp:func:`k_work_submit()`, or to a specified workqueue by
calling :cpp:func:`k_work_submit_to_queue()`.
The following code demonstrates how an ISR can offload the printing
of error messages to the system workqueue. Note that if the ISR attempts
to resubmit the work item while it is still pending, the work item is left
unchanged and the associated error message will not be printed.
.. code-block:: c
struct device_info {
struct k_work work;
char name[16];
} my_device;
void my_isr(void *arg)
{
...
if (error detected) {
k_work_submit(&my_device.work);
}
...
}
void print_error(struct k_work *item)
{
struct device_info *the_device =
CONTAINER_OF(item, struct device_info, work);
printk("Got error on device %s\n", the_device->name);
}
/* initialize name info for a device */
strcpy(my_device.name, "FOO_dev");
/* initialize work item for printing device's error messages */
k_work_init(&my_device.work, print_error);
/* install my_isr() as interrupt handler for the device (not shown) */
...
Submitting a Delayed Work Item
==============================
A delayed work item is defined using a variable of type
:c:type:`struct k_delayed_work`. It must then be initialized by calling
:cpp:func:`k_delayed_work_init()`.
An initialized delayed work item can be submitted to the system workqueue by
calling :cpp:func:`k_delayed_work_submit()`, or to a specified workqueue by
calling :cpp:func:`k_delayed_work_submit_to_queue()`. A delayed work item
that has been submitted but not yet consumed by its workqueue can be canceled
by calling :cpp:func:`k_delayed_work_cancel()`.
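A minimal sketch (not part of the original page) of initializing, submitting,
and canceling a delayed work item on the system workqueue; the handler and the
500 ms delay are placeholders:
.. code-block:: c
struct k_delayed_work my_delayed_work;

void my_deferred_handler(struct k_work *item)
{
    printk("delayed work executed\n");
}

/* initialize the delayed work item */
k_delayed_work_init(&my_delayed_work, my_deferred_handler);

/* from a thread or an ISR: run the handler roughly 500 ms from now */
k_delayed_work_submit(&my_delayed_work, K_MSEC(500));

/* optionally, before the delay expires, abort the timeout */
k_delayed_work_cancel(&my_delayed_work);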
Suggested Uses
**************
Use the system workqueue to defer complex interrupt-related processing
from an ISR to a cooperative thread. This allows the interrupt-related
processing to be done promptly without compromising the system's ability
to respond to subsequent interrupts, and does not require the application
to define an additional thread to do the processing.
Configuration Options
**********************
Related configuration options:
* :option:`CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE`
* :option:`CONFIG_SYSTEM_WORKQUEUE_PRIORITY`