kernel: fix typo

Use a code spell-checking tool to scan for and correct spelling errors
in all files within the `kernel` directory.
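
For example, a pass with codespell (one common spell-checking tool; the exact
tool and options used are not recorded in this commit, so this invocation is
illustrative only) could look like:

    codespell --write-changes kernel/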

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
Pisit Sawangvonganan 2024-07-06 01:12:07 +07:00 committed by Alberto Escolar
commit 5ed3cd4bc9
7 changed files with 7 additions and 7 deletions


@@ -276,7 +276,7 @@ choice DYNAMIC_THREAD_PREFER
 help
 If both CONFIG_DYNAMIC_THREAD_ALLOC=y and
 CONFIG_DYNAMIC_THREAD_POOL_SIZE > 0, then the user may
-specify the order in which allocation is attmpted.
+specify the order in which allocation is attempted.

 config DYNAMIC_THREAD_PREFER_ALLOC
 bool "Prefer heap-based allocation"


@@ -96,7 +96,7 @@ config IPI_OPTIMIZE
 O(N) in the number of CPUs, and in exchange reduces the number of
 interrupts delivered. Which to choose is going to depend on
 application behavior. If the architecture also supports directing
-IPIs to specific CPUs then this has the potential to signficantly
+IPIs to specific CPUs then this has the potential to significantly
 reduce the number of IPIs (and consequently ISRs) processed by the
 system as the number of CPUs increases. If not, the only benefit
 would be to not issue any IPIs if the newly readied thread is of


@@ -169,7 +169,7 @@ static inline int z_vrfy_k_thread_stack_free(k_thread_stack_t *stack)
 /* The thread stack object must not be in initialized state.
  *
  * Thread stack objects are initialized when the thread is created
- * and de-initialized whent the thread is destroyed. Since we can't
+ * and de-initialized when the thread is destroyed. Since we can't
  * free a stack that is in use, we have to check that the caller
  * has access to the object but that it is not in use anymore.
  */


@@ -277,7 +277,7 @@ int z_sched_waitq_walk(_wait_q_t *wait_q,
  *
  * This function assumes local interrupts are masked (so that the
  * current CPU pointer and current thread are safe to modify), but
- * requires no other synchronizaton. Architecture layers don't need
+ * requires no other synchronization. Architecture layers don't need
  * to do anything more.
  */
 void z_sched_usage_stop(void);


@@ -615,7 +615,7 @@ void *k_mem_map_phys_guard(uintptr_t phys, size_t size, uint32_t flags, bool is_
 dst += CONFIG_MMU_PAGE_SIZE;
 if (is_anon) {
-/* Mapping from annoymous memory */
+/* Mapping from anonymous memory */
 VIRT_FOREACH(dst, size, pos) {
 ret = map_anon_page(pos, flags);


@@ -685,7 +685,7 @@ int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
 /* We do a "lock swap" prior to calling z_swap(), such that
  * the caller's lock gets released as desired. But we ensure
  * that we hold the scheduler lock and leave local interrupts
- * masked until we reach the context swich. z_swap() itself
+ * masked until we reach the context switch. z_swap() itself
  * has similar code; the duplication is because it's a legacy
  * API that doesn't expect to be called with scheduler lock
  * held.


@@ -213,7 +213,7 @@ void sys_clock_announce(int32_t ticks)
 /* We release the lock around the callbacks below, so on SMP
  * systems someone might be already running the loop. Don't
- * race (which will cause paralllel execution of "sequential"
+ * race (which will cause parallel execution of "sequential"
  * timeouts and confuse apps), just increment the tick count
  * and return.
  */