zephyr/arch/arc/core/thread.c


/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief New thread creation for ARCv2
*
* Core thread related primitives for the ARCv2 processor architecture.
*/
#include <kernel.h>
#include <ksched.h>
#include <offsets_short.h>
#include <wait_q.h>
#ifdef CONFIG_USERSPACE
#include <arch/arc/v2/mpu/arc_core_mpu.h>
#endif
/* initial stack frame */
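/*
 * Registers that arch_new_thread() pre-loads at the top of a new
 * thread's stack; they are consumed by the thread entry wrapper when
 * the thread is switched in for the first time.
 */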
struct init_stack_frame {
uint32_t pc;
#ifdef CONFIG_ARC_HAS_SECURE
uint32_t sec_stat;
#endif
uint32_t status32;
uint32_t r3;
uint32_t r2;
uint32_t r1;
uint32_t r0;
};
/*
* @brief Initialize a new thread from its stack space
*
* The thread control structure is placed at the lowest address of the
* stack. An initial context, to be "restored" by __return_from_coop(),
* is placed at the other end of the stack; that space becomes reusable
* stack space once it is no longer needed.
*
* The initial context is a basic stack frame that contains the arguments
* for z_thread_entry(), a return address that points at the thread entry
* wrapper, and the status register.
*
* <options> is checked for K_USER to decide whether the thread will run
* in user mode.
*
* @param thread pointer to the thread control structure
* @param stack pointer to the aligned stack memory
* @param stackSize the stack size in bytes
* @param pEntry thread entry point routine
* @param parameter1 first param to entry point
* @param parameter2 second param to entry point
* @param parameter3 third param to entry point
* @param priority thread priority
* @param options thread options: K_ESSENTIAL, K_USER
*
* @return N/A
*/
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
char *stackEnd;
char *priv_stack_end;
struct init_stack_frame *pInitCtx;
#ifdef CONFIG_USERSPACE
size_t stackAdjSize;
size_t offset = 0;
stackAdjSize = Z_ARC_MPU_SIZE_ALIGN(stackSize);
stackEnd = pStackMem + stackAdjSize;
#ifdef CONFIG_STACK_POINTER_RANDOM
offset = stackAdjSize - stackSize;
#endif
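/*
 * Rough stack layout for a K_USER thread (without CONFIG_GEN_PRIV_STACKS,
 * where the privileged stack instead lives in a separately generated
 * region located via z_priv_stack_find()):
 *
 *   pStackMem ... stackEnd | guard | priv_stack_start ... priv stack top
 *
 * The word just below the top of the privileged stack is reserved to
 * hold the thread's initial user stack pointer, and the initial stack
 * frame is carved out directly below that word.
 */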
if (options & K_USER) {
#ifdef CONFIG_GEN_PRIV_STACKS
thread->arch.priv_stack_start =
(uint32_t)z_priv_stack_find(thread->stack_obj);
#else
thread->arch.priv_stack_start =
(uint32_t)(stackEnd + STACK_GUARD_SIZE);
#endif
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(
thread->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE);
/* reserve 4 bytes for the start of user sp */
priv_stack_end -= 4;
(*(uint32_t *)priv_stack_end) = Z_STACK_PTR_ALIGN(
(uint32_t)stackEnd - offset);
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
thread->userspace_local_data =
(struct _thread_userspace_local_data *)
Z_STACK_PTR_ALIGN(stackEnd -
sizeof(*thread->userspace_local_data) - offset);
/* update the start of user sp */
(*(uint32_t *)priv_stack_end) =
(uint32_t) thread->userspace_local_data;
#endif
} else {
pStackMem += STACK_GUARD_SIZE;
stackEnd += STACK_GUARD_SIZE;
thread->arch.priv_stack_start = 0;
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(stackEnd
- sizeof(*thread->userspace_local_data) - offset);
thread->userspace_local_data =
(struct _thread_userspace_local_data *)priv_stack_end;
#else
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(stackEnd - offset);
#endif
}
z_new_thread_init(thread, pStackMem, stackAdjSize);
/* carve the thread entry struct from the "base" of
the privileged stack */
pInitCtx = (struct init_stack_frame *)(
priv_stack_end - sizeof(struct init_stack_frame));
/* fill init context */
pInitCtx->status32 = 0U;
if (options & K_USER) {
pInitCtx->pc = ((uint32_t)z_user_thread_entry_wrapper);
} else {
pInitCtx->pc = ((uint32_t)z_thread_entry_wrapper);
}
/*
* enable the US bit; US reads as zero in user mode. This allows user
* mode to execute sleep instructions, which opens a form of
* denial-of-service attack by putting the processor in sleep mode.
* However, since the interrupt level/mask cannot be set from user
* space, that is no worse than executing a loop without yielding.
*/
pInitCtx->status32 |= _ARC_V2_STATUS32_US;
#else /* For no USERSPACE feature */
pStackMem += STACK_GUARD_SIZE;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize);
priv_stack_end = stackEnd;
pInitCtx = (struct init_stack_frame *)(
Z_STACK_PTR_ALIGN(priv_stack_end) -
sizeof(struct init_stack_frame));
pInitCtx->status32 = 0U;
pInitCtx->pc = ((uint32_t)z_thread_entry_wrapper);
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
pInitCtx->sec_stat = z_arc_v2_aux_reg_read(_ARC_V2_SEC_STAT);
#endif
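/*
 * r0-r3 in the initial frame become the arguments to z_thread_entry():
 * the entry point and its three parameters.
 */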
pInitCtx->r0 = (uint32_t)pEntry;
pInitCtx->r1 = (uint32_t)parameter1;
pInitCtx->r2 = (uint32_t)parameter2;
pInitCtx->r3 = (uint32_t)parameter3;
/* stack check configuration */
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
pInitCtx->sec_stat |= _ARC_V2_SEC_STAT_SSC;
#else
pInitCtx->status32 |= _ARC_V2_STATUS32_SC;
#endif
#ifdef CONFIG_USERSPACE
if (options & K_USER) {
thread->arch.u_stack_top = (uint32_t)pStackMem;
thread->arch.u_stack_base = (uint32_t)stackEnd;
thread->arch.k_stack_top =
(uint32_t)(thread->arch.priv_stack_start);
thread->arch.k_stack_base = (uint32_t)
(thread->arch.priv_stack_start + CONFIG_PRIVILEGED_STACK_SIZE);
} else {
thread->arch.k_stack_top = (uint32_t)pStackMem;
thread->arch.k_stack_base = (uint32_t)stackEnd;
thread->arch.u_stack_top = 0;
thread->arch.u_stack_base = 0;
}
#else
thread->arch.k_stack_top = (uint32_t) pStackMem;
thread->arch.k_stack_base = (uint32_t) stackEnd;
#endif
#endif
#ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
pInitCtx->status32 |= _ARC_V2_STATUS32_AD;
#endif
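/* the switch handle for an ARC thread is the thread struct itself */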
thread->switch_handle = thread;
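/*
 * mark the thread as if it had cooperatively relinquished the CPU so
 * the first switch to it takes the cooperative restore path
 */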
thread->arch.relinquish_cause = _CAUSE_COOP;
thread->callee_saved.sp =
(uint32_t)pInitCtx - ___callee_saved_stack_t_SIZEOF;
/* initial values in all other regs/k_thread entries are irrelevant */
}
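/* report the outgoing thread and return the switch handle of the next
 * thread to run, as selected by the scheduler
 */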
void *z_arch_get_next_switch_handle(struct k_thread **old_thread)
{
*old_thread = _current;
return z_get_next_switch_handle(*old_thread);
}
#ifdef CONFIG_USERSPACE
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
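/*
 * The thread's original stack becomes its user stack. Locate the
 * privileged stack, update the stack-checking bounds, configure the
 * MPU for the thread, then drop to user mode; z_arc_userspace_enter()
 * does not return.
 */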
_current->stack_info.start = (uint32_t)_current->stack_obj;
#ifdef CONFIG_GEN_PRIV_STACKS
_current->arch.priv_stack_start =
(uint32_t)z_priv_stack_find(_current->stack_obj);
#else
_current->arch.priv_stack_start =
(uint32_t)(_current->stack_info.start +
_current->stack_info.size + STACK_GUARD_SIZE);
#endif
#ifdef CONFIG_ARC_STACK_CHECKING
_current->arch.k_stack_top = _current->arch.priv_stack_start;
_current->arch.k_stack_base = _current->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE;
_current->arch.u_stack_top = _current->stack_info.start;
_current->arch.u_stack_base = _current->stack_info.start +
_current->stack_info.size;
#endif
/* possible optimization: no need to load mem domain anymore */
/* need to lock the CPU here? */
configure_mpu_thread(_current);
z_arc_userspace_enter(user_entry, p1, p2, p3,
(uint32_t)_current->stack_obj,
_current->stack_info.size, _current);
CODE_UNREACHABLE;
}
#endif
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
int arch_float_disable(struct k_thread *thread)
{
unsigned int key;
/* Ensure a preemptive context switch does not occur */
key = irq_lock();
/* Disable all floating point capabilities for the thread */
thread->base.user_options &= ~K_FP_REGS;
irq_unlock(key);
return 0;
}
int arch_float_enable(struct k_thread *thread)
{
unsigned int key;
/* Ensure a preemptive context switch does not occur */
key = irq_lock();
/* Enable all floating point capabilities for the thread */
thread->base.user_options |= K_FP_REGS;
irq_unlock(key);
return 0;
}
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */