From a6dcf333a1b865b4838e6484838592949d4292c3 Mon Sep 17 00:00:00 2001
From: Lauren Murphy
Date: Thu, 30 Sep 2021 23:23:46 -0500
Subject: [PATCH] doc: misc fixes

Makes miscellaneous fixes to kernel and usermode documentation, such as
fixing broken links and adding clarifying wording.

Signed-off-by: Lauren Murphy
---
 doc/reference/kernel/data_passing/message_queues.rst |  9 +++++----
 doc/reference/kernel/data_passing/pipes.rst          |  3 +--
 doc/reference/kernel/data_passing/stacks.rst         |  8 ++++----
 doc/reference/kernel/other/float.rst                 |  2 +-
 doc/reference/kernel/synchronization/semaphores.rst  |  7 +++++++
 doc/reference/kernel/threads/index.rst               | 11 ++++++++---
 doc/reference/usermode/memory_domain.rst             |  8 ++++----
 7 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/doc/reference/kernel/data_passing/message_queues.rst b/doc/reference/kernel/data_passing/message_queues.rst
index 7f7e450d59e..5e8d43eb4f2 100644
--- a/doc/reference/kernel/data_passing/message_queues.rst
+++ b/doc/reference/kernel/data_passing/message_queues.rst
@@ -202,10 +202,11 @@ in an asynchronous manner.
 .. note::
     A message queue can be used to transfer large data items, if desired.
     However, this can increase interrupt latency as interrupts are locked
-    while a data item is written or read. It is usually preferable to transfer
-    large data items by exchanging a pointer to the data item, rather than the
-    data item itself. The kernel's memory map and memory pool object types
-    can be helpful for data transfers of this sort.
+    while a data item is written or read. The time to write or read a data item
+    increases linearly with its size since the item is copied in its entirety
+    to or from the buffer in memory. For this reason, it is usually preferable
+    to transfer large data items by exchanging a pointer to the data item,
+    rather than the data item itself.
 
 A synchronous transfer can be achieved by using the kernel's mailbox object
 type.
diff --git a/doc/reference/kernel/data_passing/pipes.rst b/doc/reference/kernel/data_passing/pipes.rst
index ff78eab2b0c..802ad41644f 100644
--- a/doc/reference/kernel/data_passing/pipes.rst
+++ b/doc/reference/kernel/data_passing/pipes.rst
@@ -164,8 +164,7 @@ Use a pipe to send streams of data between threads.
 .. note::
     A pipe can be used to transfer long streams of data if desired. However
     it is often preferable to send pointers to large data items to avoid
-    copying the data. The kernel's memory map and memory pool object types
-    can be helpful for data transfers of this sort.
+    copying the data.
 
 Configuration Options
 *********************
diff --git a/doc/reference/kernel/data_passing/stacks.rst b/doc/reference/kernel/data_passing/stacks.rst
index 89c0ff5dc39..71c7783a632 100644
--- a/doc/reference/kernel/data_passing/stacks.rst
+++ b/doc/reference/kernel/data_passing/stacks.rst
@@ -32,12 +32,12 @@ A stack must be initialized before it can be used. This sets its queue to empty.
 A data value can be **added** to a stack by a thread or an ISR.
 The value is given directly to a waiting thread, if one exists;
 otherwise the value is added to the LIFO's queue.
-The kernel does *not* detect attempts to add a data value to a stack
-that has already reached its maximum quantity of queued values.
 
 .. note::
-    Adding a data value to a stack that is already full will result in
-    array overflow, and lead to unpredictable behavior.
+    If :kconfig:`CONFIG_NO_RUNTIME_CHECKS` is enabled, the kernel will *not* detect
+    and prevent attempts to add a data value to a stack that has already reached
+    its maximum quantity of queued values. Adding a data value to a stack that is
+    already full will result in array overflow, and lead to unpredictable behavior.
 
 A data value may be **removed** from a stack by a thread.
 If the stack's queue is empty a thread may choose to wait for it to be given.
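
The pointer-exchange approach recommended in the message queue and pipe notes
above can be sketched as follows; this is illustrative only, and the names
(``big_item_q``, ``struct large_item``, the thread entry points) are hypothetical:

.. code-block:: c

    #include <zephyr.h>

    /* Hypothetical large payload; only a pointer to it is queued. */
    struct large_item {
        uint8_t data[1024];
    };

    /* Each entry holds one pointer, so the copy performed while interrupts
     * are locked stays a few bytes regardless of the payload size. */
    K_MSGQ_DEFINE(big_item_q, sizeof(struct large_item *), 8, 4);

    static struct large_item item;

    void producer_thread(void)
    {
        struct large_item *ptr = &item;

        /* Fill item.data[] ... then hand over just the pointer. */
        k_msgq_put(&big_item_q, &ptr, K_FOREVER);
    }

    void consumer_thread(void)
    {
        struct large_item *ptr;

        k_msgq_get(&big_item_q, &ptr, K_FOREVER);
        /* Process ptr->data[] without it ever being copied. */
    }
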
diff --git a/doc/reference/kernel/other/float.rst b/doc/reference/kernel/other/float.rst
index 3ee0a243b83..ed8f2112c3b 100644
--- a/doc/reference/kernel/other/float.rst
+++ b/doc/reference/kernel/other/float.rst
@@ -113,7 +113,7 @@ an extra 72 bytes of stack space where the callee-saved FP context can be saved.
 
 `Lazy Stacking
-`_
+`_
 is currently enabled in Zephyr applications on ARM Cortex-M architecture,
 minimizing interrupt latency, when the floating point context is active.
 
diff --git a/doc/reference/kernel/synchronization/semaphores.rst b/doc/reference/kernel/synchronization/semaphores.rst
index b2e1f704ef0..29eed511485 100644
--- a/doc/reference/kernel/synchronization/semaphores.rst
+++ b/doc/reference/kernel/synchronization/semaphores.rst
@@ -37,6 +37,13 @@ Any number of threads may wait on an unavailable semaphore simultaneously.
 When the semaphore is given, it is taken by the highest priority thread
 that has waited longest.
 
+.. note::
+    You may initialize a "full" semaphore (count equal to limit) to limit the number
+    of threads able to execute the critical section at the same time. You may also
+    initialize an empty semaphore (count equal to 0, with a limit greater than 0)
+    to create a gate through which no waiting thread may pass until the semaphore
+    is incremented. All standard use cases of the common semaphore are supported.
+
 .. note::
     The kernel does allow an ISR to take a semaphore, however the ISR must
     not attempt to wait if the semaphore is unavailable.
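
The semaphore note above distinguishes a "full" semaphore from an empty one; a
minimal sketch of both uses, with made-up names (``resource_sem``, ``start_gate``)
and counts chosen only for illustration:

.. code-block:: c

    #include <zephyr.h>

    /* "Full" semaphore: count equals limit, so at most three threads
     * may hold a slot in the protected section at the same time. */
    K_SEM_DEFINE(resource_sem, 3, 3);

    /* Empty semaphore: count 0 with a non-zero limit, acting as a gate. */
    K_SEM_DEFINE(start_gate, 0, 1);

    void worker_thread(void)
    {
        /* Blocks here until some other thread opens the gate. */
        k_sem_take(&start_gate, K_FOREVER);

        k_sem_take(&resource_sem, K_FOREVER);
        /* ... at most three workers execute this section concurrently ... */
        k_sem_give(&resource_sem);
    }

    void control_thread(void)
    {
        /* Open the gate: one waiting worker may pass per give. */
        k_sem_give(&start_gate);
    }
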
diff --git a/doc/reference/kernel/threads/index.rst b/doc/reference/kernel/threads/index.rst
index 8947feab944..7bf0bfd5f0e 100644
--- a/doc/reference/kernel/threads/index.rst
+++ b/doc/reference/kernel/threads/index.rst
@@ -245,6 +245,10 @@ A thread's initial priority value can be altered up or down after the thread
 has been started. Thus it is possible for a preemptible thread to become
 a cooperative thread, and vice versa, by changing its priority.
 
+.. note::
+    The scheduler does not make heuristic decisions to re-prioritize threads.
+    Thread priorities are set and changed only at the application's request.
+
 The kernel supports a virtually unlimited number of thread priority levels.
 The configuration options :kconfig:`CONFIG_NUM_COOP_PRIORITIES` and
 :kconfig:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
@@ -269,9 +273,10 @@ When enabled (see :kconfig:`CONFIG_NUM_METAIRQ_PRIORITIES`), there is a special
 subclass of cooperative priorities at the highest (numerically
 lowest) end of the priority space: meta-IRQ threads. These are
 scheduled according to their normal priority, but also have the special ability
-to preempt all other threads (and other meta-irq threads) at lower
+to preempt all other threads (and other meta-IRQ threads) at lower
 priorities, even if those threads are cooperative and/or have taken a
-scheduler lock.
+scheduler lock. Meta-IRQ threads are still threads, however,
+and can still be interrupted by any hardware interrupt.
 
 This behavior makes the act of unblocking a meta-IRQ thread (by any
 means, e.g. creating it, calling k_sem_give(), etc.) into the
@@ -284,7 +289,7 @@ run before the current CPU returns into application code.
 
 Unlike similar features in other OSes, meta-IRQ threads are true
 threads and run on their own stack (which must be allocated normally),
-not the per-CPU interrupt stack. Design work to enable the use of the
+not the per-CPU interrupt stack. Design work to enable the use of the
 IRQ stack on supported architectures is pending.
 
 Note that because this breaks the promise made to cooperative
diff --git a/doc/reference/usermode/memory_domain.rst b/doc/reference/usermode/memory_domain.rst
index 6bdafee7a13..0d31909ff6e 100644
--- a/doc/reference/usermode/memory_domain.rst
+++ b/doc/reference/usermode/memory_domain.rst
@@ -4,9 +4,9 @@ Memory Protection Design
 ########################
 
 Zephyr's memory protection design is geared towards microcontrollers with MPU
-(Memory Protection Unit) hardware. We do support some architectures which have
-a paged MMU (Memory Management Unit), but in that case the MMU is used like
-an MPU with an identity page table.
+(Memory Protection Unit) hardware. We do support some architectures, such as x86,
+which have a paged MMU (Memory Management Unit), but in that case the MMU is
+used like an MPU with an identity page table.
 
 All of the discussion below will be using MPU terminology; systems with MMUs
 can be considered to have an MPU with an unlimited number of programmable
@@ -46,7 +46,7 @@ text/ro-data, this is sufficient for the boot time configuration.
 
 Hardware Stack Overflow
 ***********************
-``CONFIG_HW_STACK_PROTECTION`` is an optional feature which detects stack
+:kconfig:`CONFIG_HW_STACK_PROTECTION` is an optional feature which detects stack
 buffer overflows when the system is running in supervisor mode. This catches
 issues when the entire stack buffer has overflowed, and not individual stack
 frames, use compiler-assisted :kconfig:`CONFIG_STACK_CANARIES`
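
The threads/index.rst note above stresses that priorities change only when the
application asks for it. A minimal sketch of such an explicit change, assuming
the default priority ranges (the helper names are hypothetical):

.. code-block:: c

    #include <zephyr.h>

    /* Negative priorities are cooperative, non-negative ones are preemptible,
     * so this turns the calling preemptible thread into a cooperative one. */
    void become_cooperative(void)
    {
        k_thread_priority_set(k_current_get(), -1);
    }

    /* And back again: a non-negative priority makes the thread preemptible. */
    void become_preemptible(void)
    {
        k_thread_priority_set(k_current_get(), 7);
    }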