cleanup: rename fiber/task -> thread

We still have many places talking about tasks and fibers; replace those
with thread terminology.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Authored by Anas Nashif on 2017-10-29 07:10:22 -04:00, committed by Anas Nashif
commit 780324b8ed
41 changed files with 100 additions and 94 deletions


@@ -169,10 +169,10 @@ config FLOAT
 prompt "Floating point registers"
 default n
 help
- This option allows tasks and fibers to use the floating point registers.
- By default, only a single task or fiber may use the registers.
- Disabling this option means that any task or fiber that uses a
+ This option allows threads to use the floating point registers.
+ By default, only a single thread may use the registers.
+ Disabling this option means that any thread that uses a
 floating point register will get a fatal exception.
 config FP_SHARING
@@ -181,7 +181,7 @@ config FP_SHARING
 depends on FLOAT
 default n
 help
- This option allows multiple tasks and fibers to use the floating point
+ This option allows multiple threads to use the floating point
 registers.
 endmenu
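
For reference (not part of the diff above): a minimal sketch of what the updated help text describes, a preemptible thread touching the FP registers. It assumes CONFIG_FLOAT=y in the application configuration, plus CONFIG_FP_SHARING=y once more than one thread uses the registers; the thread name, stack size and priority are illustrative only.

    #include <kernel.h>

    /* Illustrative FP-using thread; without CONFIG_FLOAT=y this would hit
     * the fatal exception mentioned in the help text above.
     */
    static void fp_worker(void *p1, void *p2, void *p3)
    {
        volatile float acc = 0.0f;

        for (int i = 0; i < 1000; i++) {
            acc += (float)i * 0.5f;
        }
    }

    K_THREAD_DEFINE(fp_tid, 1024, fp_worker, NULL, NULL, NULL,
                    K_PRIO_PREEMPT(7), 0, K_NO_WAIT);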


@@ -120,8 +120,8 @@ registers (to avoid stack accesses). It is possible to register a FIRQ
 handler that operates outside of the kernel, but care must be taken to only
 use instructions that only use the banked registers.
- The kernel is able to handle transitions to and from FIRQ, RIRQ and threads
- (fibers/task). The contexts are saved 'lazily': the minimum amount of work is
+ The kernel is able to handle transitions to and from FIRQ, RIRQ and threads.
+ The contexts are saved 'lazily': the minimum amount of work is
 done upfront, and the rest is done when needed:
 o RIRQ
@@ -129,7 +129,7 @@ o RIRQ
 All needed regisers to run C code in the ISR are saved automatically
 on the outgoing thread's stack: loop, status32, pc, and the caller-
 saved GPRs. That stack frame layout is pre-determined. If returning
- to a fiber, the stack is popped and no registers have to be saved by
+ to a thread, the stack is popped and no registers have to be saved by
 the kernel. If a context switch is required, the callee-saved GPRs
 are then saved in the thread control structure (TCS).
@@ -151,7 +151,7 @@ o FIRQ
 During early initialization, the sp in the 2nd register bank is made to
 refer to _firq_stack. This allows for the FIRQ handler to use its own stack.
 GPRs are banked, loop registers are saved in unused callee saved regs upon
- interrupt entry. If returning to a fiber, loop registers are restored and the
+ interrupt entry. If returning to a thread, loop registers are restored and the
 CPU switches back to bank 0 for the GPRs. If a context switch is
 needed, at this point only are all the registers saved. First, a
 stack frame with the same layout as the automatic RIRQ one is created


@@ -21,8 +21,8 @@
 #define _kernel_arch_thread__h_
 /*
- * Reason a thread has relinquished control: fibers can only be in the NONE
- * or COOP state, tasks can be one in the four.
+ * Reason a thread has relinquished control: threads can only be in the NONE
+ * or COOP state, threads can be one in the four.
 */
 #define _CAUSE_NONE 0
 #define _CAUSE_COOP 1


@@ -73,10 +73,10 @@ config FLOAT
 prompt "Floating point registers"
 default n
 help
- This option allows tasks and fibers to use the floating point registers.
- By default, only a single task or fiber may use the registers.
- Disabling this option means that any task or fiber that uses a
+ This option allows threads to use the floating point registers.
+ By default, only a single thread may use the registers.
+ Disabling this option means that any thread that uses a
 floating point register will get a fatal exception.
 config FP_SHARING
@@ -85,7 +85,7 @@ config FP_SHARING
 depends on FLOAT
 default n
 help
- This option allows multiple tasks and fibers to use the floating point
+ This option allows multiple threads to use the floating point
 registers.
 choice


@@ -73,7 +73,7 @@ static inline void enable_floating_point(void)
 * Although automatic state preservation is enabled, the processor
 * does not automatically save the volatile FP registers until they
 * have first been touched. Perform a dummy move operation so that
- * the stack frames are created as expected before any task or fiber
+ * the stack frames are created as expected before any thread
 * context switching can occur.
 */
 __asm__ volatile(


@@ -9,7 +9,7 @@
 * @brief ARM Cortex-M interrupt initialization
 *
 * The ARM Cortex-M architecture provides its own k_thread_abort() to deal with
- * different CPU modes (handler vs thread) when a fiber aborts. When its entry
+ * different CPU modes (handler vs thread) when a thread aborts. When its entry
 * point returns or when it aborts itself, the CPU is in thread mode and must
 * call _Swap() (which triggers a service call), but when in handler mode, the
 * CPU must exit handler mode to cause the context switch, and thus must queue


@@ -102,7 +102,7 @@ FUNC_NORETURN void _NanoFatalErrorHandler(unsigned int reason,
 /*
- * Error was fatal to a kernel task or a fiber; invoke the system
+ * Error was fatal to a kernel task or a thread; invoke the system
 * fatal error handling policy defined for the platform.
 */


@@ -329,7 +329,7 @@ CROHandlingDone:
 movl %eax, _kernel_offset_to_current(%edi)
- /* recover task/fiber stack pointer from k_thread */
+ /* recover thread stack pointer from k_thread */
 movl _thread_offset_to_esp(%eax), %esp
@@ -404,7 +404,7 @@ time_read_not_needed:
 * jumps to _thread_entry().
 *
 * GDB normally stops unwinding a stack when it detects that it has
- * reached a function called main(). Kernel tasks, however, do not have
+ * reached a function called main(). Kernel threads, however, do not have
 * a main() function, and there does not appear to be a simple way of stopping
 * the unwinding of the stack.
 *


@@ -70,7 +70,7 @@ static inline void _FpAccessDisable(void)
 * @brief Save non-integer context information
 *
 * This routine saves the system's "live" non-integer context into the
- * specified area. If the specified task or fiber supports SSE then
+ * specified area. If the specified thread supports SSE then
 * x87/MMX/SSEx thread info is saved, otherwise only x87/MMX thread is saved.
 * Function is invoked by _FpCtxSave(struct tcs *tcs)
 *
@@ -90,7 +90,7 @@ static inline void _do_fp_regs_save(void *preemp_float_reg)
 * @brief Save non-integer context information
 *
 * This routine saves the system's "live" non-integer context into the
- * specified area. If the specified task or fiber supports SSE then
+ * specified area. If the specified thread supports SSE then
 * x87/MMX/SSEx thread info is saved, otherwise only x87/MMX thread is saved.
 * Function is invoked by _FpCtxSave(struct tcs *tcs)
 *


@@ -45,14 +45,14 @@ static inline void kernel_arch_init(void)
 /**
 *
- * @brief Set the return value for the specified fiber (inline)
+ * @brief Set the return value for the specified thread (inline)
 *
- * @param fiber pointer to fiber
+ * @param thread pointer to thread
 * @param value value to set as return value
 *
 * The register used to store the return value from a function call invocation
- * is set to <value>. It is assumed that the specified <fiber> is pending, and
- * thus the fibers context is stored in its TCS.
+ * is set to @a value. It is assumed that the specified @a thread is pending, and
+ * thus the threads context is stored in its TCS.
 *
 * @return N/A
 */


@@ -258,13 +258,13 @@ struct _thread_arch {
 /*
 * The location of all floating point related structures/fields MUST be
 * located at the end of struct tcs. This way only the
- * fibers/tasks that actually utilize non-integer capabilities need to
+ * threads that actually utilize non-integer capabilities need to
 * account for the increased memory required for storing FP state when
 * sizing stacks.
 *
 * Given that stacks "grow down" on IA-32, and the TCS is located
 * at the start of a thread's "workspace" memory, the stacks of
- * fibers/tasks that do not utilize floating point instruction can
+ * threads that do not utilize floating point instruction can
 * effectively consume the memory occupied by the 'tCoopFloatReg' and
 * 'tPreempFloatReg' structures without ill effect.
 */


@@ -9,7 +9,7 @@
 * @brief Stack frame created by swap (IA-32)
 *
 * This file details the stack frame generated by _Swap() when it saves a task
- * or fiber's context. This is specific to the IA-32 processor architecture.
+ * or thread's context. This is specific to the IA-32 processor architecture.
 *
 * NOTE: _Swap() does not use this file as it uses the push instruction to
 * save a context. Changes to the file will not automatically be picked up by


@@ -40,7 +40,7 @@ static ALWAYS_INLINE void kernel_arch_init(void)
 {
 _kernel.nested = 0;
 #if XCHAL_CP_NUM > 0
- /* Initialize co-processor management for tasks.
+ /* Initialize co-processor management for threads.
 * Leave CPENABLE alone.
 */
 _xt_coproc_init();


@@ -23,7 +23,7 @@ extern "C" {
 * be allocated for saving coprocessor state and/or C library state information
 * (if thread safety is enabled for the C library). The sizes are in bytes.
 *
- * Stack sizes for individual tasks should be derived from these minima based
+ * Stack sizes for individual threads should be derived from these minima based
 * on the maximum call depth of the task and the maximum level of interrupt
 * nesting. A minimum stack size is defined by XT_STACK_MIN_SIZE. This minimum
 * is based on the requirement for a task that calls nothing else but can be
@@ -46,7 +46,7 @@ extern "C" {
 *
 * XT_USE_THREAD_SAFE_CLIB -- Define this to a nonzero value to enable
 * thread-safe use of the C library. This will require extra stack space to be
- * allocated for tasks that use the C library reentrant functions. See below
+ * allocated for threads that use the C library reentrant functions. See below
 * for more information.
 *
 * NOTE: The Xtensa toolchain supports multiple C libraries and not all of them


@@ -309,7 +309,7 @@ security violations and limit their impact:
 [PAUL09]_.
 - **Least privilege** describes an access model in which each user,
- program, thread, and fiber shall have the smallest possible
+ program and thread shall have the smallest possible
 subset of permissions in the system required to perform their
 task. This positive security model aims to minimize the attack
 surface of the system.


@@ -66,7 +66,7 @@ This is the name used to identify the event-based idling mechanism of the
 Zephyr RTOS kernel scheduler. The kernel scheduler can run in two modes. During
 normal operation, when at least one thread is active, it sets up the system
 timer in periodic mode and runs in an interval-based scheduling mode. The
- interval-based mode allows it to time slice between tasks. Many times, the
+ interval-based mode allows it to time slice between threads. Many times, the
 threads would be waiting on semaphores, timeouts or for events. When there
 are no threads running, it is inefficient for the kernel scheduler to run
 in interval-based mode. This is because, in this mode the timer would trigger


@@ -99,10 +99,10 @@ type and the channel on which the trigger must be configured.
 Because most sensors are connected via SPI or I2C busses, it is not possible
 to communicate with them from the interrupt execution context. The
- execution of the trigger handler is deferred to a fiber, so that data
- fetching operations are possible. A driver can spawn its own fiber to fetch
+ execution of the trigger handler is deferred to a thread, so that data
+ fetching operations are possible. A driver can spawn its own thread to fetch
 data, thus ensuring minimum latency. Alternatively, multiple sensor drivers
- can share a system-wide fiber. The shared fiber approach increases the
+ can share a system-wide thread. The shared thread approach increases the
 latency of handling interrupts but uses less memory. You can configure which
 approach to follow for each driver. Most drivers can entirely disable
 triggers resulting in a smaller footprint.


@@ -212,10 +212,10 @@ static void _gpio_sch_manage_callback(struct device *dev)
 {
 struct gpio_sch_data *gpio = dev->driver_data;
- /* Start the fiber only when relevant */
+ /* Start the thread only when relevant */
 if (!sys_slist_is_empty(&gpio->callbacks) && gpio->cb_enabled) {
 if (!gpio->poll) {
- SYS_LOG_DBG("Starting SCH GPIO polling fiber");
+ SYS_LOG_DBG("Starting SCH GPIO polling thread");
 gpio->poll = 1;
 k_thread_create(&gpio->polling_thread,
 gpio->polling_stack,
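
For orientation (not part of the commit), a rough sketch of the k_thread_create() pattern used above to start such a polling thread, under the 2017-era API; the names poll_thread, poll_stack and poll_entry are made up for illustration.

    #include <kernel.h>

    #define POLL_STACK_SIZE 512

    static K_THREAD_STACK_DEFINE(poll_stack, POLL_STACK_SIZE);
    static struct k_thread poll_thread;

    /* Poll some device state periodically (illustrative only). */
    static void poll_entry(void *p1, void *p2, void *p3)
    {
        while (1) {
            /* ... check callback/pin state here ... */
            k_sleep(10);
        }
    }

    static void start_polling(void)
    {
        k_thread_create(&poll_thread, poll_stack,
                        K_THREAD_STACK_SIZEOF(poll_stack),
                        poll_entry, NULL, NULL, NULL,
                        K_PRIO_COOP(1), 0, K_NO_WAIT);
    }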


@@ -446,7 +446,7 @@ void _timer_idle_exit(void)
 /*
 * Ensure the timer will expire at the end of the next tick in case
- * the ISR makes any tasks and/or fibers ready to run.
+ * the ISR makes any threads ready to run.
 */
 timer0_limit_register_set(cycles_per_tick - 1);
 timer0_count_register_set(current_count % cycles_per_tick);


@@ -53,7 +53,7 @@
 * previous factor.
 *
 * 5. Tickless idle may be prematurely aborted due to a non-timer interrupt.
- * Its handler may make a task or fiber ready to run, so any elapsed ticks
+ * Its handler may make a thread ready to run, so any elapsed ticks
 * must be accounted for and the timer must also expire at the end of the
 * next logical tick so _timer_int_handler() can put it back in periodic mode.
 * This can only be distinguished from the previous factor by the execution of
@@ -569,8 +569,8 @@ void _timer_idle_exit(void)
 * Either a non-timer interrupt occurred, or we straddled a tick when
 * entering tickless idle. It is impossible to determine which occurred
 * at this point. Regardless of the cause, ensure that the timer will
- * expire at the end of the next tick in case the ISR makes any tasks
- * and/or fibers ready to run.
+ * expire at the end of the next tick in case the ISR makes any threads
+ * ready to run.
 *
 * NOTE #1: In the case of a straddled tick, the '_sys_idle_elapsed_ticks'
 * calculation below may result in either 0 or 1. If 1, then this may


@@ -73,8 +73,8 @@ extern void _irq_spurious(void *unused);
 *
 * @brief Disable all interrupts on the local CPU
 *
- * This routine disables interrupts. It can be called from either interrupt,
- * task or fiber level. This routine returns an architecture-dependent
+ * This routine disables interrupts. It can be called from either interrupt or
+ * thread level. This routine returns an architecture-dependent
 * lock-out key representing the "interrupt disable state" prior to the call;
 * this key can be passed to irq_unlock() to re-enable interrupts.
 *
@@ -92,7 +92,7 @@ extern void _irq_spurious(void *unused);
 * thread executes, or while the system is idle.
 *
 * The "interrupt disable state" is an attribute of a thread. Thus, if a
- * fiber or task disables interrupts and subsequently invokes a kernel
+ * thread disables interrupts and subsequently invokes a kernel
 * routine that causes the calling thread to block, the interrupt
 * disable state will be restored when the thread is later rescheduled
 * for execution.
@@ -117,7 +117,7 @@ static ALWAYS_INLINE unsigned int _arch_irq_lock(void)
 * is an architecture-dependent lock-out key that is returned by a previous
 * invocation of irq_lock().
 *
- * This routine can be called from either interrupt, task or fiber level.
+ * This routine can be called from either interrupt or thread level.
 *
 * @return N/A
 */
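
A short sketch of the irq_lock()/irq_unlock() usage these comments describe (the shared counter is purely illustrative, not from this commit):

    #include <kernel.h>
    #include <irq.h>

    static int shared_counter;

    void bump_counter(void)
    {
        /* The returned key is the "interrupt disable state" lock-out key
         * discussed above; interrupts stay off until it is handed back.
         */
        unsigned int key = irq_lock();

        shared_counter++;

        irq_unlock(key);
    }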


@@ -77,8 +77,8 @@ static ALWAYS_INLINE unsigned int find_lsb_set(u32_t op)
 *
 * @brief Disable all interrupts on the CPU
 *
- * This routine disables interrupts. It can be called from either interrupt,
- * task or fiber level. This routine returns an architecture-dependent
+ * This routine disables interrupts. It can be called from either interrupt or
+ * thread level. This routine returns an architecture-dependent
 * lock-out key representing the "interrupt disable state" prior to the call;
 * this key can be passed to irq_unlock() to re-enable interrupts.
 *
@@ -96,7 +96,7 @@ static ALWAYS_INLINE unsigned int find_lsb_set(u32_t op)
 * thread executes, or while the system is idle.
 *
 * The "interrupt disable state" is an attribute of a thread. Thus, if a
- * fiber or task disables interrupts and subsequently invokes a kernel
+ * thread disables interrupts and subsequently invokes a kernel
 * routine that causes the calling thread to block, the interrupt
 * disable state will be restored when the thread is later rescheduled
 * for execution.
@@ -150,7 +150,7 @@ static ALWAYS_INLINE unsigned int _arch_irq_lock(void)
 * architecture-dependent lock-out key that is returned by a previous
 * invocation of irq_lock().
 *
- * This routine can be called from either interrupt, task or fiber level.
+ * This routine can be called from either interrupt or thread level.
 *
 * @param key architecture-dependent lock-out key
 *


@@ -394,8 +394,8 @@ typedef struct nanoIsf {
 /**
 * @brief Disable all interrupts on the CPU (inline)
 *
- * This routine disables interrupts. It can be called from either interrupt,
- * task or fiber level. This routine returns an architecture-dependent
+ * This routine disables interrupts. It can be called from either interrupt
+ * or thread level. This routine returns an architecture-dependent
 * lock-out key representing the "interrupt disable state" prior to the call;
 * this key can be passed to irq_unlock() to re-enable interrupts.
 *
@@ -413,7 +413,7 @@ typedef struct nanoIsf {
 * thread executes, or while the system is idle.
 *
 * The "interrupt disable state" is an attribute of a thread. Thus, if a
- * fiber or task disables interrupts and subsequently invokes a kernel
+ * thread disables interrupts and subsequently invokes a kernel
 * routine that causes the calling thread to block, the interrupt
 * disable state will be restored when the thread is later rescheduled
 * for execution.
@@ -441,7 +441,7 @@ static ALWAYS_INLINE unsigned int _arch_irq_lock(void)
 * is an architecture-dependent lock-out key that is returned by a previous
 * invocation of irq_lock().
 *
- * This routine can be called from either interrupt, task or fiber level.
+ * This routine can be called from either interrupt or thread level.
 *
 * @return N/A
 *


@@ -75,7 +75,7 @@ void sys_event_logger_put(struct event_logger *logger, u16_t event_id,
 * than the message size the function returns -EMSGSIZE. Otherwise, it returns
 * the number of 32-bit words copied. The function retrieves messages in
 * FIFO order. If there is no message in the buffer the function returns
- * immediately. It can only be called from a fiber.
+ * immediately. It can only be called from a thread.
 *
 * @param logger Pointer to the event logger used.
 * @param event_id Pointer to the id of the fetched event.
@@ -101,7 +101,7 @@ int sys_event_logger_get(struct event_logger *logger, u16_t *event_id,
 * returns the number of 32-bit words copied.
 *
 * The function retrieves messages in FIFO order. The caller pends if there is
- * no message available in the buffer. It can only be called from a fiber.
+ * no message available in the buffer. It can only be called from a thread.
 *
 * @param logger Pointer to the event logger used.
 * @param event_id Pointer to the ID of the fetched event.
@@ -128,7 +128,7 @@ int sys_event_logger_get_wait(struct event_logger *logger, u16_t *event_id,
 * number of dwords copied. The function retrieves messages in FIFO order.
 * If no message is available in the buffer, the caller pends until a
 * new message is added or the timeout expires. This routine can only be
- * called from a fiber.
+ * called from a thread.
 *
 * @param logger Pointer to the event logger used.
 * @param event_id Pointer to the ID of the event fetched.


@@ -287,8 +287,8 @@ static inline int _impl_sensor_attr_set(struct device *dev,
 /**
 * @brief Activate a sensor's trigger and set the trigger handler
 *
- * The handler will be called from a fiber, so I2C or SPI operations are
- * safe. However, the fiber's stack is limited and defined by the
+ * The handler will be called from a thread, so I2C or SPI operations are
+ * safe. However, the thread's stack is limited and defined by the
 * driver. It is currently up to the caller to ensure that the handler
 * does not overflow the stack.
 *
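
To make the renamed wording concrete, a sketch of registering such a trigger handler; the device, channel and handler names are assumptions, not taken from this commit. The handler runs in the driver's own thread or the shared system-wide one, so bus access is allowed from it.

    #include <kernel.h>
    #include <sensor.h>

    /* Called from the trigger thread, so I2C/SPI access is safe here. */
    static void accel_drdy_handler(struct device *dev,
                                   struct sensor_trigger *trig)
    {
        struct sensor_value accel[3];

        sensor_sample_fetch(dev);
        sensor_channel_get(dev, SENSOR_CHAN_ACCEL_XYZ, accel);
    }

    void setup_trigger(struct device *accel_dev)
    {
        struct sensor_trigger trig = {
            .type = SENSOR_TRIG_DATA_READY,
            .chan = SENSOR_CHAN_ACCEL_XYZ,
        };

        sensor_trigger_set(accel_dev, &trig, accel_drdy_handler);
    }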


@@ -110,12 +110,12 @@ struct _kernel {
 * for a thread to only "own" the XMM registers.
 */
- /* thread (fiber or task) that owns the FP regs */
+ /* thread that owns the FP regs */
 struct k_thread *current_fp;
 #endif
 #if defined(CONFIG_THREAD_MONITOR)
- struct k_thread *threads; /* singly linked list of ALL fiber+tasks */
+ struct k_thread *threads; /* singly linked list of ALL threads */
 #endif
 #if defined(CONFIG_USERSPACE)


@@ -153,7 +153,7 @@ void _arch_user_mode_enter(k_thread_entry_t user_entry, void *p1, void *p2,
 extern FUNC_NORETURN void _arch_syscall_oops(void *ssf);
 #endif /* CONFIG_USERSPACE */
- /* set and clear essential fiber/task flag */
+ /* set and clear essential thread flag */
 extern void _thread_essential_set(void);
 extern void _thread_essential_clear(void);


@@ -33,14 +33,14 @@ static inline void _init_timeout(struct _timeout *t, _timeout_func_t func)
 t->delta_ticks_from_prev = _INACTIVE;
 /*
- * Must be initialized here so that the _fiber_wakeup family of APIs can
- * verify the fiber is not on a wait queue before aborting a timeout.
+ * Must be initialized here so that k_wakeup can
+ * verify the thread is not on a wait queue before aborting a timeout.
 */
 t->wait_q = NULL;
 /*
 * Must be initialized here, so the _handle_one_timeout()
- * routine can check if there is a fiber waiting on this timeout
+ * routine can check if there is a thread waiting on this timeout
 */
 t->thread = NULL;


@@ -84,7 +84,7 @@
 #define STACK_SIZE 768
 /*
- * There are multiple tasks doing printfs and they may conflict.
+ * There are multiple threads doing printfs and they may conflict.
 * Therefore use puts() instead of printf().
 */
 #if defined(CONFIG_STDOUT_CONSOLE)
@@ -226,7 +226,10 @@ static void init_objects(void)
 static void start_threads(void)
 {
- /* create two fibers (prios -2/-1) and four tasks: (prios 0-3) */
+ /*
+  * create two coop. threads (prios -2/-1) and four preemptive threads
+  * : (prios 0-3)
+  */
 for (int i = 0; i < NUM_PHIL; i++) {
 int prio = new_prio(i);


@@ -108,7 +108,7 @@ void _sys_k_event_logger_context_switch(void)
 * The mechanism we use to log the kernel events uses a sync semaphore
 * to inform that there are available events to be collected. The
 * context switch event can be triggered from a task. When we signal a
- * semaphore from a task and a fiber is waiting for that semaphore, a
+ * semaphore from a thread is waiting for that semaphore, a
 * context switch is generated immediately. Due to the fact that we
 * register the context switch event while the context switch is being
 * processed, a new context switch can be generated before the kernel


@@ -26,7 +26,7 @@
 #define STACKSIZE 512
 #endif
- /* stack used by the fibers */
+ /* stack used by the threads */
 static K_THREAD_STACK_DEFINE(thread_one_stack, STACKSIZE);
 static K_THREAD_STACK_DEFINE(thread_two_stack, STACKSIZE);
 static struct k_thread thread_one_data;


@@ -116,7 +116,7 @@ void main(void)
 #endif
 #ifdef CONFIG_OBJECTS_THREAD
- /* start a trivial fiber */
+ /* start a trivial thread */
 k_thread_create(&objects_thread, pStack, THREAD_STACK_SIZE,
 thread_entry, MESSAGE, (void *)func_array,
 NULL, 10, 0, K_NO_WAIT);


@@ -9,25 +9,25 @@ APIs tested in this test set
 ============================
 k_thread_create
- - start a helper fiber to help with k_yield() tests
- - start a fiber to test fiber related functionality
+ - start a helper thread to help with k_yield() tests
+ - start a thread to test thread related functionality
 k_yield
- - Called by a higher priority fiber when there is another fiber
- - Called by an equal priority fiber when there is another fiber
- - Called by a lower priority fiber when there is another fiber
+ - Called by a higher priority thread when there is another thread
+ - Called by an equal priority thread when there is another thread
+ - Called by a lower priority thread when there is another thread
 k_current_get
 - Called from an ISR (interrupted a task)
- - Called from an ISR (interrupted a fiber)
+ - Called from an ISR (interrupted a thread)
 - Called from a task
- - Called from a fiber
+ - Called from a thread
 k_is_in_isr
 - Called from an ISR that interrupted a task
- - Called from an ISR that interrupted a fiber
+ - Called from an ISR that interrupted a thread
 - Called from a task
- - Called from a fiber
+ - Called from a thread
 k_cpu_idle
 - CPU to be woken up by tick timer. Thus, after each call, the tick count
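
As a side note, a tiny sketch of the k_current_get()/k_is_in_isr() pair listed above (illustrative only, not taken from the test itself; the NULL-on-ISR policy is an assumption of this sketch):

    #include <kernel.h>

    /* Return the current thread ID, or NULL when called from an ISR. */
    k_tid_t current_thread_or_null(void)
    {
        if (k_is_in_isr()) {
            return NULL;
        }

        return k_current_get();
    }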


@@ -31,7 +31,7 @@ struct result result[N_THREADS];
 struct k_fifo fifo;
- static void errno_fiber(int n, int my_errno)
+ static void errno_thread(int n, int my_errno)
 {
 errno = my_errno;
@@ -61,7 +61,7 @@ void test_thread_context(void)
 /**TESTPOINT: thread- threads stacks are separate */
 for (int ii = 0; ii < N_THREADS; ii++) {
 k_thread_create(&threads[ii], stacks[ii], STACK_SIZE,
- (k_thread_entry_t) errno_fiber,
+ (k_thread_entry_t) errno_thread,
 (void *) ii, (void *) errno_values[ii], NULL,
 K_PRIO_PREEMPT(ii + 5), 0, K_NO_WAIT);
 }


@@ -109,7 +109,7 @@ struct fp_register_set {
 /*
 * The following constants define the initial byte value used by the background
- * task, and the fiber when loading up the floating point registers.
+ * task, and the thread when loading up the floating point registers.
 */
 #define MAIN_FLOAT_REG_CHECK_BYTE ((unsigned char)0xe5)


@@ -24,8 +24,8 @@
 * this test should be enhanced to ensure that the architectures' _Swap()
 * routine doesn't context switch more registers that it needs to (which would
 * represent a performance issue). For example, on the IA-32, the test should
- * issue a fiber_fp_disable() from main(), and then indicate that only x87 FPU
- * registers will be utilized (fiber_fp_enable()). The fiber should continue
+ * issue a k_fp_disable() from main(), and then indicate that only x87 FPU
+ * registers will be utilized (k_fp_enable()). The thread should continue
 * to load ALL non-integer registers, but main() should validate that only the
 * x87 FPU registers are being saved/restored.
 */


@@ -21,7 +21,7 @@
 #define TOTAL_TEST_NUMBER 2
- /* 1 IPM console fiber if enabled */
+ /* 1 IPM console thread if enabled */
 #if defined(CONFIG_IPM_CONSOLE_RECEIVER) && defined(CONFIG_PRINTK)
 #define IPM_THREAD 1
 #else


@@ -376,7 +376,10 @@ void task_monitor(void)
 offload1.sem = &sync_test_sem;
 k_work_submit_to_queue(&offload_work_q, &offload1.work_item);
- /* Two fibers and two tasks should be waiting on the LIFO */
+ /*
+  * Two cooperative threads and two preemptive threads should
+  * be waiting on the LIFO
+  */
 /* Add data to the LIFO */
 k_lifo_put(&lifo, &lifo_test_data[0]);


@@ -108,7 +108,7 @@ static void test_thread(int arg1, int arg2)
 }
 TC_PRINT("Testing: test thread sleep + helper thread wakeup test\n");
- k_sem_give(&helper_thread_sem); /* Activate helper fiber */
+ k_sem_give(&helper_thread_sem); /* Activate helper thread */
 align_to_tick_boundary();
 start_tick = k_uptime_get_32();
@@ -124,7 +124,7 @@ static void test_thread(int arg1, int arg2)
 }
 TC_PRINT("Testing: test thread sleep + isr offload wakeup test\n");
- k_sem_give(&helper_thread_sem); /* Activate helper fiber */
+ k_sem_give(&helper_thread_sem); /* Activate helper thread */
 align_to_tick_boundary();
 start_tick = k_uptime_get_32();
@@ -170,11 +170,11 @@ static void helper_thread(int arg1, int arg2)
 k_sem_take(&helper_thread_sem, K_FOREVER);
- /* Wake the test fiber */
+ /* Wake the test thread */
 k_wakeup(test_thread_id);
 k_sem_take(&helper_thread_sem, K_FOREVER);
- /* Wake the test fiber from an ISR */
+ /* Wake the test thread from an ISR */
 irq_offload(irq_offload_isr, (void *)test_thread_id);
 }
@@ -210,7 +210,7 @@ void main(void)
 /* Wait for test_thread to activate us */
 k_sem_take(&task_sem, K_FOREVER);
- /* Wake the test fiber */
+ /* Wake the test thread */
 k_wakeup(test_thread_id);
 if (test_failure) {
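
A compact sketch of the sleep/wakeup interplay exercised above (thread IDs and the timeout are illustrative; the test's real helpers differ):

    #include <kernel.h>

    static k_tid_t sleeper_tid;   /* assumed to be set at thread creation */

    static void sleeper(void *p1, void *p2, void *p3)
    {
        /* Sleeps up to one second unless another thread wakes us early. */
        k_sleep(1000);
    }

    static void waker(void *p1, void *p2, void *p3)
    {
        /* Cut the sleep short from thread context (the test also does this
         * from an ISR via irq_offload()).
         */
        k_wakeup(sleeper_tid);
    }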


@@ -212,7 +212,7 @@ void main(void)
 MY_PRIORITY, 0, K_NO_WAIT);
 /*
- * The fiber/task should not run past where the spurious interrupt is
+ * The thread should not run past where the spurious interrupt is
 * generated. Therefore spur_handler_aborted_thread should remain at 1.
 */
 if (spur_handler_aborted_thread == 0) {


@@ -314,7 +314,7 @@ static void coop_delayed_work_resubmit(int arg1, int arg2)
 }
 }
- static int test_delayed_resubmit_fiber(void)
+ static int test_delayed_resubmit_thread(void)
 {
 TC_PRINT("Starting delayed resubmit from coop thread test\n");
@@ -380,7 +380,7 @@ void main(void)
 reset_results();
- if (test_delayed_resubmit_fiber() != TC_PASS) {
+ if (test_delayed_resubmit_thread() != TC_PASS) {
 goto end;
 }