All of these should be no-ops for the following reasons:
1. User threads cannot configure memory domains, only supervisor
threads.
2. The scope of memory domains is user thread memory access,
supervisor threads can access the entire memory map.
Hence it's never required to reprogram the MPU on the current CPU
when a memory domain API is called.
This does not address issue #27785 if a user thread in the domain
is running on some other CPU.
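As a rough illustration of the resulting pattern (a sketch only; the
hook name and signature follow the memory domain arch interface of
this era and may not match exactly):

  #include <kernel.h>

  /* Sketch: the arch hook can be a no-op because the caller is always
   * a supervisor thread, and the MPU for user threads in the domain
   * is programmed when such a thread is switched in, not here.
   */
  void arch_mem_domain_partition_add(struct k_mem_domain *domain,
                                     uint32_t partition_id)
  {
          ARG_UNUSED(domain);
          ARG_UNUSED(partition_id);
          /* no MPU reprogramming needed on the current CPU */
  }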
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All of these should be no-ops for the following reasons:
1. User threads cannot configure memory domains, only supervisor
threads.
2. The scope of memory domains is user thread memory access,
supervisor threads can access the entire memory map.
Hence it's never required to reprogram the MPU when a memory domain
API is called.
Fixes a problem where an assertion would fail if a supervisor thread
added a partition and then immediately removed it, and possibly
other problems.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* add toolchain abstraction for coverage
* add select HAS_COVERAGE_SUPPORT to kconfig
* port gcov linker code to CMake for ARC
Signed-off-by: Jingru Wang <jingru@synopsys.com>
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.
- Paging code now uses an array of paging level characteristics and
walks tables using for loops, as opposed to having different
functions for every paging level and lots of #ifdefs (see the
sketch after this list). The code is now more concise and adding
new paging modes should be trivial.
- We now support 32-bit, PAE, and IA-32e page tables.
- The page tables created by gen_mmu.py are now installed at early
boot. There are no longer separate "flat" page tables. These tables
are mutable at any time.
- The x86_mmu code now has a private header. Many definitions that did
not need to be in public scope have been moved out of mmustructs.h
and either placed in the C file or in the private header.
- Improvements to dumping page table information, with the physical
mapping and flags all shown
- arch_mem_map() implemented
- x86 userspace/memory domain code ported to use the new
infrastructure.
- add logic for physical -> virtual instruction pointer transition,
including cleaning up identity mappings after this takes place.
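As an illustration of the table-walking approach, here is a simplified
sketch (not the actual Zephyr code; struct paging_level,
page_entry_get and the level values are invented for this example, and
tables are assumed to be identity-mapped):

  #include <stdint.h>
  #include <stddef.h>

  /* One descriptor per paging level; the values below describe a
   * PAE-style 3-level layout purely as an example.
   */
  struct paging_level {
          unsigned int entries; /* entries per table at this level */
          unsigned int shift;   /* address bits consumed below this level */
  };

  static const struct paging_level levels[] = {
          { .entries = 4,   .shift = 30 }, /* page dir pointer table */
          { .entries = 512, .shift = 21 }, /* page directory */
          { .entries = 512, .shift = 12 }, /* page table */
  };

  #define NUM_LEVELS (sizeof(levels) / sizeof(levels[0]))

  /* Walk from the top-level table down to the leaf entry for 'virt'
   * with a single loop, instead of one function per paging level.
   */
  static uint64_t *page_entry_get(uint64_t *top_table, uintptr_t virt)
  {
          uint64_t *table = top_table;

          for (size_t level = 0; level < NUM_LEVELS - 1; level++) {
                  size_t index = (virt >> levels[level].shift) &
                                 (levels[level].entries - 1);

                  /* mask off flag bits and follow the physical address
                   * stored in the entry (identity-mapped here)
                   */
                  table = (uint64_t *)(uintptr_t)
                          (table[index] & 0x000FFFFFFFFFF000ULL);
          }

          return &table[(virt >> levels[NUM_LEVELS - 1].shift) &
                        (levels[NUM_LEVELS - 1].entries - 1)];
  }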
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The address was being truncated because we were using
32-bit registers. CONFIG_MMU is always enabled on 64-bit, so
remove the #ifdef.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need to produce a binary set of page tables wired together
by physical address. Add build system logic to use the script
to produce them.
Some logic for running build scripts that produce artifacts has been
moved out of IA32 into common CMake code.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This produces a set of page tables with system RAM
mapped for read/write/execute access by supervisor
mode, such that they may be installed in the CPU
in the earliest boot stages and remain mutable at runtime.
These tables optionally support a dual physical/virtual
mapping of RAM to help boot virtual memory systems.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The x86 ports are linked at their physical address and
the arch_mem_map() implementation currently requires
virtual = physical. This will be removed later.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If CONFIG_MMU is active, choose whether to separate text,
rodata, and ram into their own page-aligned regions so that
they can have different MMU permissions applied.
If disabled, all RAM pages will have RWX permission to
supervisor mode, but some memory may be saved due to lack
of page alignment padding between these regions.
This used to always happen. This patch adds the Kconfig option; the
linker script changes will come in a subsequent patch.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds a very primitive coredump mechanism under subsys/debug
where, during a fatal error, register and memory content can be
dumped to a coredump backend. One such backend, utilizing the log
module for output, is included. Once the coredump log is converted
to a binary file, it can be used with the ELF output file as
inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via the target remote command of
GDB and will serve register and memory content. This allows
using GDB to examine stack and memory where the fatal error
occurred.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Use CONFIG_TRACING_ISR to exclude tracing of ISRs, just like other
architectures.
Also, z_sys_trace_isr_exit was not defined (it was renamed some time
ago and this was forgotten).
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Move tracing switched_in and switched_out to the architecture code and
remove duplication. This changes swap tracing for x86 and xtensa.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Make explicit which registers are going to be touched / modified when
using z_arm64_enter_exc and z_arm64_exit_exc.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The default implementation is the same as this custom
one now, as the assertion that the context switch occurs
at the end of the ISR is true for all arches.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If a thread is running, an ISR fires, and the ISR
itself calls k_thread_abort() on the thread, the ISR
was being unexpectedly terminated.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
With the current identity mapping scheme a new test requires
some more memory to be set aside here.
In production this parameter gets tuned per-board, and
the pending paging code overhaul in #27001 significantly
relaxes this as driver I/O mappings are no longer sparse.
Fixes a runtime failure in tests/kernel/device on
qemu_x86_64 that somehow slipped past CI.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unify how XIP is configured across architectures: use imply instead of
setting defaults per architecture, imply XIP on the riscv arch, and
remove XIP configuration from individual defconfig files to match
other architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This commit adds the support for HW Stack Protection when
building Zephyr without support for multi-threading. The
single MPU guard (if the feature is enabled) is set to
guard the Main stack area. The stack fail check is also
updated.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
For the case of building Zephyr with no-multithreading
support (CONFIG_MULTITHREADING=n) we introduce a
custom (ARCH-specific) function to switch to main()
from cstart(). This is required, since the Cortex-M
initialization code is temporarily using the interrupt
stack and main() should be using the z_main_stack,
instead. The function performs the PSP switching,
the PSPLIM setting (for ARMv8-M), FPU initialization
and static memory region initialization, to mimic
what the normal (CONFIG_MULTITHREADING=y) case does.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We extract the common code for both multithreading and
non-multithreading cases into a common static function
which will get called in Cortex-M architecture initialization.
This commit does not introduce behavioral changes.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This patch is simply adding the guard area (if applicable) to
the calculations for the size of the interrupt stack in reset.S
for the ARM Cortex-M architecture. If it exists, the guard area is
always reserved in addition to CONFIG_ISR_STACK_SIZE, since the
interrupt stack is defined using K_KERNEL_STACK_DEFINE.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Include directories for ${ARCH} are not specified correctly.
In several places in Zephyr, the include directories are specified as:
${ZEPHYR_BASE}/arch/${ARCH}/include
the correct line is:
${ARCH_DIR}/${ARCH}/include
to correctly support out of tree archs.
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
This set of functions seems to exist just for historical
reasons, stemming from Kbuild. They are non-obvious and prone to errors,
so remove them in favor of the `_ifdef()` ones with an explicit
`CONFIG_` condition.
Script used:
git grep -l _if_kconfig | xargs sed -E -i
"s/_if_kconfig\(\s*(\w*)/_ifdef(CONFIG_\U\1\E \1/g"
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.
Two new arch defines are introduced:
- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN
New public declaration macros:
- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF
If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.
Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
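For example (a usage sketch; the symbol names and sizes are arbitrary):

  #include <kernel.h>

  /* A stack for a thread that runs purely in supervisor mode; no
   * user-mode MPU/MMU sizing or alignment constraints apply.
   */
  K_KERNEL_STACK_DEFINE(worker_stack, 1024);

  /* An array of such stacks, e.g. for per-CPU interrupt handling or
   * a pool of kernel-only worker threads.
   */
  K_KERNEL_STACK_ARRAY_DEFINE(pool_stacks, 4, 1024);

  /* K_KERNEL_STACK_SIZEOF() reports the usable buffer size, excluding
   * any reserved space described by K_KERNEL_STACK_RESERVED.
   */
  static const size_t worker_stack_size =
          K_KERNEL_STACK_SIZEOF(worker_stack);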
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.
Based on #24467 authored by Flavio Ceolin.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations;
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.
thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.
thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.
CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.
thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.
Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.
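A sketch of what an arch implementation can now look like (the frame
layout and field names are illustrative; only the overall shape
follows the updated interface described above):

  #include <kernel.h>

  /* Illustrative initial frame; real arches define their own layout */
  struct init_stack_frame {
          k_thread_entry_t entry;
          void *p1;
          void *p2;
          void *p3;
  };

  void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                       char *stack_ptr, k_thread_entry_t entry,
                       void *p1, void *p2, void *p3)
  {
          struct init_stack_frame *iframe;

          ARG_UNUSED(stack);
          ARG_UNUSED(thread); /* a real port stores iframe via thread */

          /* stack_ptr already has alignment, TLS and any random offset
           * applied by the core kernel; just place the initial frame
           * at that boundary. No size or alignment math is needed.
           */
          iframe = (struct init_stack_frame *)stack_ptr - 1;
          iframe->entry = entry;
          iframe->p1 = p1;
          iframe->p2 = p2;
          iframe->p3 = p3;

          /* the arch then records iframe as the thread's initial stack
           * pointer in its arch-specific thread fields
           */
  }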
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This interface is documented already in
kernel/include/kernel_arch_interface.h
Other architectural notes were left in place except where
they were incorrect (like the thread struct
being in the low stack addresses).
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
In CPUs with VTOR we are free to place the relay vector table
section anywhere inside the ROM_START section (as long as we respect
alignment requirements). This PR moves the relay table towards
the end of ROM_START. This leaves sufficient area for placing
some SoC-specific sections inside ROM_START that need to start
at a fixed address.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The printf function didn't have enough specifiers for the
number of arguments in the command line (Coverity warning).
Fixes #26985
Fixes #26986
Signed-off-by: David Leach <david.leach@nxp.com>
Rewrite 'exit_tickless_idle' macro to make code more readable.
No functional changes intended.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
The NOP instruction is available via a builtin for ARC, so get rid of all
ASM inlines with NOP/NOP_S instructions.
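A sketch of the change described, assuming the ARC GCC builtin (the
surrounding helper names are illustrative):

  /* Before: hand-written inline assembly */
  static inline void arc_nop_asm(void)
  {
          __asm__ volatile("nop_s");
  }

  /* After: let the compiler emit the instruction via its builtin */
  static inline void arc_nop(void)
  {
          __builtin_arc_nop();
  }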
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
_vector_table and __vector_relay_table symbols were exported with GTEXT
(i.e. as functions). That resulted in bit[0] being incorrectly set in
the addresses they represent (for functions, this bit being set to 1
specifies execution in Thumb state).
This commit corrects this by switching to exporting these objects as
objects, i.e. with GDATA.
Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
MISRA-C directive 4.10 requires that a file being included must
protect itself against being included more than once. So add
include guards to the offset files, even though they are C
source files.
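The added guards follow the usual pattern, applied to the offsets C
sources as well (the guard macro name below is illustrative):

  #ifndef ZEPHYR_OFFSETS_C_GUARD_
  #define ZEPHYR_OFFSETS_C_GUARD_

  /* ... GEN_OFFSET_SYM() and related offset definitions ... */

  #endif /* ZEPHYR_OFFSETS_C_GUARD_ */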
Signed-off-by: Daniel Leung <daniel.leung@intel.com>