doc: misc fixes

Makes miscellaneous fixes to kernel and usermode documentation,
such as fixing broken links and adding clarifying wording.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Lauren Murphy 2021-09-30 23:23:46 -05:00 committed by Anas Nashif
commit a6dcf333a1
7 changed files with 30 additions and 18 deletions


@@ -202,10 +202,11 @@ in an asynchronous manner.
 .. note::
     A message queue can be used to transfer large data items, if desired.
     However, this can increase interrupt latency as interrupts are locked
-    while a data item is written or read. It is usually preferable to transfer
-    large data items by exchanging a pointer to the data item, rather than the
-    data item itself. The kernel's memory map and memory pool object types
-    can be helpful for data transfers of this sort.
+    while a data item is written or read. The time to write or read a data item
+    increases linearly with its size since the item is copied in its entirety
+    to or from the buffer in memory. For this reason, it is usually preferable
+    to transfer large data items by exchanging a pointer to the data item,
+    rather than the data item itself.

 A synchronous transfer can be achieved by using the kernel's mailbox
 object type.
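
For context, a minimal sketch (not part of this commit) of the pointer-exchange pattern the revised note recommends; the producer/consumer names and payload layout are illustrative assumptions:

#include <zephyr.h>

/* Pass pointer-sized messages through the queue instead of copying whole payloads. */
struct payload {
	uint32_t len;
	uint8_t data[256];
};

static struct payload pool[4];          /* illustrative static storage */
K_MSGQ_DEFINE(ptr_msgq, sizeof(struct payload *), 4, sizeof(struct payload *));

void producer(void)
{
	struct payload *p = &pool[0];

	p->len = 1;
	p->data[0] = 0xAA;

	/* Only the pointer (a few bytes) is copied into the queue's ring buffer. */
	if (k_msgq_put(&ptr_msgq, &p, K_NO_WAIT) != 0) {
		/* Queue full: the payload was not handed off. */
	}
}

void consumer(void)
{
	struct payload *p;

	if (k_msgq_get(&ptr_msgq, &p, K_FOREVER) == 0) {
		/* Use p->data[0 .. p->len - 1] here; no bulk copy took place. */
	}
}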


@@ -164,8 +164,7 @@ Use a pipe to send streams of data between threads.
 .. note::
     A pipe can be used to transfer long streams of data if desired. However
     it is often preferable to send pointers to large data items to avoid
-    copying the data. The kernel's memory map and memory pool object types
-    can be helpful for data transfers of this sort.
+    copying the data.

 Configuration Options
 *********************
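
As with message queues, the pipe note's advice can be illustrated by writing only a pointer into the pipe; the sketch below is illustrative only (names, sizes, and the use of a pipe for pointer hand-off are assumptions, not from this commit):

#include <zephyr.h>

/* A small pipe: large payloads stay in place, only their addresses travel. */
K_PIPE_DEFINE(ptr_pipe, 8 * sizeof(void *), 4);

static uint8_t big_block[1024];         /* illustrative data item */

void sender(void)
{
	uint8_t *msg = big_block;
	size_t written;

	/* Write the pointer itself (a handful of bytes) into the pipe. */
	if (k_pipe_put(&ptr_pipe, &msg, sizeof(msg), &written,
		       sizeof(msg), K_NO_WAIT) != 0) {
		/* Not enough free space in the pipe buffer right now. */
	}
}

void receiver(void)
{
	uint8_t *msg;
	size_t read;

	/* Block until a complete pointer has been read back out. */
	if (k_pipe_get(&ptr_pipe, &msg, sizeof(msg), &read,
		       sizeof(msg), K_FOREVER) == 0) {
		/* msg now refers to big_block; the 1024 bytes were never copied. */
	}
}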


@@ -32,12 +32,12 @@ A stack must be initialized before it can be used. This sets its queue to empty.
 A data value can be **added** to a stack by a thread or an ISR.
 The value is given directly to a waiting thread, if one exists;
 otherwise the value is added to the LIFO's queue.
-The kernel does *not* detect attempts to add a data value to a stack
-that has already reached its maximum quantity of queued values.

 .. note::
-    Adding a data value to a stack that is already full will result in
-    array overflow, and lead to unpredictable behavior.
+    If :kconfig:`CONFIG_NO_RUNTIME_CHECKS` is enabled, the kernel will *not* detect
+    and prevent attempts to add a data value to a stack that has already reached
+    its maximum quantity of queued values. Adding a data value to a stack that is
+    already full will result in array overflow, and lead to unpredictable behavior.

 A data value may be **removed** from a stack by a thread.
 If the stack's queue is empty a thread may choose to wait for it to be given.
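
A small sketch (not part of this commit) of what the reworded note implies in practice, assuming the documented behavior that ``k_stack_push()`` returns ``-ENOMEM`` on a full stack when runtime checks are compiled in:

#include <zephyr.h>

#define MAX_ITEMS 10

/* Statically define and initialize a stack able to hold MAX_ITEMS values. */
K_STACK_DEFINE(data_stack, MAX_ITEMS);

void push_value(stack_data_t value)
{
	/* With runtime checks enabled (the default), pushing onto a full stack
	 * fails with -ENOMEM instead of silently overflowing the array.
	 * With CONFIG_NO_RUNTIME_CHECKS=y this check is compiled out.
	 */
	int err = k_stack_push(&data_stack, value);

	if (err == -ENOMEM) {
		/* Stack already holds MAX_ITEMS values; drop or retry later. */
	}
}

void pop_value(void)
{
	stack_data_t value;

	/* Wait up to 100 ms for a value to become available. */
	if (k_stack_pop(&data_stack, &value, K_MSEC(100)) == 0) {
		/* use value */
	}
}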


@@ -113,7 +113,7 @@ an extra 72 bytes of stack space where the callee-saved FP context can
 be saved.

 `Lazy Stacking
-<http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0298a/DAFGGBJD.html>`_
+<https://developer.arm.com/documentation/dai0298/a>`_
 is currently enabled in Zephyr applications on ARM Cortex-M
 architecture, minimizing interrupt latency, when the floating
 point context is active.


@@ -37,6 +37,13 @@ Any number of threads may wait on an unavailable semaphore simultaneously.
 When the semaphore is given, it is taken by the highest priority thread
 that has waited longest.

+.. note::
+    You may initialize a "full" semaphore (count equal to limit) to limit the number
+    of threads able to execute the critical section at the same time. You may also
+    initialize an empty semaphore (count equal to 0, with a limit greater than 0)
+    to create a gate through which no waiting thread may pass until the semaphore
+    is incremented. All standard use cases of the common semaphore are supported.
+
 .. note::
     The kernel does allow an ISR to take a semaphore, however the ISR must
     not attempt to wait if the semaphore is unavailable.
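
The two initialization patterns described in the new note, sketched with illustrative names (not taken from this commit):

#include <zephyr.h>

/* "Full" semaphore: at most 3 threads inside the resource at once. */
K_SEM_DEFINE(resource_sem, 3, 3);

/* "Empty" semaphore used as a gate: count 0, limit 1. */
K_SEM_DEFINE(start_gate, 0, 1);

void worker(void)
{
	/* Blocks once three workers already hold the semaphore. */
	k_sem_take(&resource_sem, K_FOREVER);
	/* ... access the shared resource ... */
	k_sem_give(&resource_sem);
}

void waiter(void)
{
	/* No thread passes this point until someone opens the gate. */
	k_sem_take(&start_gate, K_FOREVER);
}

void opener(void)
{
	/* Increment the gate semaphore, releasing one waiting thread. */
	k_sem_give(&start_gate);
}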


@@ -245,6 +245,10 @@ A thread's initial priority value can be altered up or down after the thread
 has been started. Thus it is possible for a preemptible thread to become
 a cooperative thread, and vice versa, by changing its priority.

+.. note::
+    The scheduler does not make heuristic decisions to re-prioritize threads.
+    Thread priorities are set and changed only at the application's request.
+
 The kernel supports a virtually unlimited number of thread priority levels.
 The configuration options :kconfig:`CONFIG_NUM_COOP_PRIORITIES` and
 :kconfig:`CONFIG_NUM_PREEMPT_PRIORITIES` specify the number of priority
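
For reference, a hypothetical sketch (not part of this commit) of re-prioritizing a started thread, as the surrounding text describes; the priority levels chosen are arbitrary:

#include <zephyr.h>

void reprioritize(k_tid_t tid)
{
	/* Preemptible priorities are >= 0; K_PRIO_PREEMPT(5) is preemptible level 5. */
	k_thread_priority_set(tid, K_PRIO_PREEMPT(5));

	/* Cooperative priorities are negative; this makes the same thread
	 * cooperative (it now runs until it yields, sleeps, or blocks).
	 */
	k_thread_priority_set(tid, K_PRIO_COOP(2));

	/* A thread may also re-prioritize itself. */
	k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(0));
}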
@@ -269,9 +273,10 @@ When enabled (see :kconfig:`CONFIG_NUM_METAIRQ_PRIORITIES`), there is a special
 subclass of cooperative priorities at the highest (numerically lowest)
 end of the priority space: meta-IRQ threads. These are scheduled
 according to their normal priority, but also have the special ability
-to preempt all other threads (and other meta-irq threads) at lower
+to preempt all other threads (and other meta-IRQ threads) at lower
 priorities, even if those threads are cooperative and/or have taken a
-scheduler lock.
+scheduler lock. Meta-IRQ threads are still threads, however,
+and can still be interrupted by any hardware interrupt.

 This behavior makes the act of unblocking a meta-IRQ thread (by any
 means, e.g. creating it, calling k_sem_give(), etc.) into the
@@ -284,7 +289,7 @@ run before the current CPU returns into application code.

 Unlike similar features in other OSes, meta-IRQ threads are true
 threads and run on their own stack (which must be allocated normally),
-not the per-CPU interrupt stack. Design work to enable the use of the
+not the per-CPU interrupt stack. Design work to enable the use of the
 IRQ stack on supported architectures is pending.

 Note that because this breaks the promise made to cooperative
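
A minimal sketch (not part of this commit) of designating a meta-IRQ thread, assuming CONFIG_NUM_METAIRQ_PRIORITIES=1 in prj.conf so the numerically lowest cooperative priority is reserved for meta-IRQ use; the entry function is a placeholder:

#include <zephyr.h>

#define METAIRQ_STACK_SIZE 1024

static void metairq_entry(void *a, void *b, void *c)
{
	for (;;) {
		/* Placeholder: block on an IRQ-fed object (semaphore, queue, ...)
		 * here, then run the deferred handling; when woken, this thread
		 * preempts even cooperative threads holding the scheduler lock.
		 */
		k_sleep(K_FOREVER);
	}
}

/* K_HIGHEST_THREAD_PRIO falls in the meta-IRQ range under the assumed config. */
K_THREAD_DEFINE(metairq_tid, METAIRQ_STACK_SIZE, metairq_entry,
		NULL, NULL, NULL, K_HIGHEST_THREAD_PRIO, 0, 0);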


@@ -4,9 +4,9 @@ Memory Protection Design
 ########################

 Zephyr's memory protection design is geared towards microcontrollers with MPU
-(Memory Protection Unit) hardware. We do support some architectures which have
-a paged MMU (Memory Management Unit), but in that case the MMU is used like
-an MPU with an identity page table.
+(Memory Protection Unit) hardware. We do support some architectures, such as x86,
+which have a paged MMU (Memory Management Unit), but in that case the MMU is
+used like an MPU with an identity page table.

 All of the discussion below will be using MPU terminology; systems with MMUs
 can be considered to have an MPU with an unlimited number of programmable
@@ -46,7 +46,7 @@ text/ro-data, this is sufficient for the boot time configuration.
 Hardware Stack Overflow
 ***********************

-``CONFIG_HW_STACK_PROTECTION`` is an optional feature which detects stack
+:kconfig:`CONFIG_HW_STACK_PROTECTION` is an optional feature which detects stack
 buffer overflows when the system is running in supervisor mode. This
 catches issues when the entire stack buffer has overflowed, and not
 individual stack frames, use compiler-assisted :kconfig:`CONFIG_STACK_CANARIES`
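
To make the feature concrete, a deliberately stack-hungry sketch (illustrative only, not part of this commit); assuming CONFIG_HW_STACK_PROTECTION=y and MPU guard support on the target, overflowing the thread's stack buffer raises a fatal error (for example K_ERR_STACK_CHK_FAIL) instead of silently corrupting adjacent memory:

#include <zephyr.h>

static void deep_recursion(uint32_t depth)
{
	volatile uint8_t filler[128];   /* burn stack space on every call */

	if (depth > 0) {
		deep_recursion(depth - 1);
	}
	filler[0] = (uint8_t)depth;     /* keep the frame live across the call */
}

void stack_hog(void)
{
	/* Deep enough to run past the end of the calling thread's stack buffer. */
	deep_recursion(1000);
}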