Add MMU support for ARMv8-A. We support the 4kB translation granule.
Regions to be mapped with specific attributes are required to be
at least 4kB aligned and can be provided through the platform file
(soc.c).
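As an illustration, a platform could describe such regions in its
soc.c with a region table along these lines (the macro, attribute
names and addresses below are assumptions used only to sketch the
pattern, not the exact API):

    /* Sketch only: names, attributes and addresses are illustrative. */
    static const struct arm_mmu_region mmu_regions[] = {
        /* Interrupt controller registers, device memory, 4kB-aligned */
        MMU_REGION_FLAT_ENTRY("GIC", 0x2f000000, KB(64),
                              MT_DEVICE_nGnRnE | MT_P_RW_U_NA),
        /* UART registers */
        MMU_REGION_FLAT_ENTRY("UART", 0x1c090000, KB(4),
                              MT_DEVICE_nGnRnE | MT_P_RW_U_NA),
    };

    const struct arm_mmu_config mmu_config = {
        .num_regions = ARRAY_SIZE(mmu_regions),
        .mmu_regions = mmu_regions,
    };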
Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
We lock IRQs around the write to RNR and the immediately following
reads of RBAR and RASR in the ARMv7-M MPU driver. We do this for the
functions invoked directly or indirectly by arch_buffer_validate().
This locking guarantees that
- arch_buffer_validate() calls by ISRs may safely preempt each
other
- arch_buffer_validate() calls by threads may safely preempt
each other (i.e. via being context-switched out and back in).
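A sketch of the access pattern being protected (CMSIS register names;
the helper name is hypothetical):

    static inline uint32_t mpu_region_rasr_get(uint32_t index)
    {
        unsigned int key = arch_irq_lock();
        uint32_t rasr;

        MPU->RNR = index;   /* select the region... */
        rasr = MPU->RASR;   /* ...and read its attributes, un-preempted */

        arch_irq_unlock(key);
        return rasr;
    }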
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When entering user mode, and before privileges are dropped, the
thread switches back to using its default (user) stack. To prevent
stack limit checking from falsely detecting a stack overflow, the
PSPLIM and PSP register updates need to be done with the PendSV IRQ
locked. This is because the context switch (done in the PendSV IRQ)
reprograms the stack pointer limit register based on the current PSP
of the thread. This commit enforces PendSV locking and unlocking
while reprogramming PSP and PSPLIM when switching to the user stack
in z_arm_userspace_enter().
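The ordering enforced here, sketched with CMSIS intrinsics (the real
code is assembly; the PendSV masking is simplified to a plain IRQ
lock, and the helper plus its user_stack_ptr argument are
hypothetical):

    static void switch_to_user_stack(uint32_t user_stack_ptr)
    {
        unsigned int key = arch_irq_lock();  /* PendSV cannot run in here */

        /* Guard the user stack and move PSP onto it as one unit, so
         * PendSV never observes an inconsistent PSP/PSPLIM pair. */
        __set_PSPLIM((uint32_t)_current->stack_info.start);
        __set_PSP(user_stack_ptr);

        arch_irq_unlock(key);
    }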
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Modifying the PSP via an MSR instruction is not subject to stack
limit checking, so we can remove the code block at the beginning of
z_arm_userspace_enter() that clears PSPLIM. We add a comment where
the PSP is set to the privileged stack to stress that clearing
PSPLIM is not required and that the operation is always safe.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When returning from a system call, the thread switches back to using
its default (user) stack. To prevent stack limit checking from
falsely detecting a stack overflow, the PSPLIM and PSP register
updates need to be done with the PendSV IRQ locked. This is because
the context switch (done in the PendSV IRQ) reprograms the stack
pointer limit register based on the current PSP of the thread. This
commit enforces PendSV locking and unlocking while reprogramming PSP
and PSPLIM when returning from a system call.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
In this commit we remove the PSPLIM clearing when entering
z_arm_do_syscall(), since we want PSPLIM to keep guarding the user
thread stack until the thread has switched to its privileged stack
to execute the system call.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The thread will be in privileged mode after returning from SVCall.
It will use the default (user) stack before switching to the
privileged stack to execute the system call. We need to protect the
user stack against stack overflows until this stack transition. We
update the note in z_arm_do_syscall(), stating clearly that it
executes with stack protection when building with stack limit
checking support (ARMv8-M only).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When configuring the built-in stack guard (by setting the PSPLIM
register) during thread context switch, we shall only set PSPLIM to
"guard" the thread's privileged stack area when the thread is
actually using it (i.e. PSP points into this stack).
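A sketch of the guard condition (simplified; the helper name mirrors
configure_builtin_stack_guard(), the priv_stack_start field follows
the ARM thread arch struct, and the bounds check is deliberately
reduced to its essence):

    static void stack_guard_sketch(struct k_thread *thread)
    {
        uint32_t guard = thread->arch.priv_stack_start;

        if ((guard != 0U) && (__get_PSP() >= guard)) {
            /* PSP is on the privileged stack: guard that stack. */
            __set_PSPLIM(guard);
        }
        /* Otherwise the thread still runs on its user stack and the
         * privileged-stack guard must not be programmed yet. */
    }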
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We do not need to have the PSPLIM clearing directly inside
the PendSV handler and outside the function that configures
it, configure_builtin_stack_guard(), since the latter is also
invoked inside the PendSV handler. This commit moves the
PSPLIM clearing inside configure_builtin_stack_guard(). The
patch does not introduce any behavioral change in the
stack limit checking mechanism for Cortex-M.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We add the mechanism to generate offset #defines for the thread
stack_info.start member, so it can be used directly in ASM.
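For illustration, the offsets C file would gain an entry like the
following (GEN_OFFSET_SYM is the existing Zephyr convention; the
assembly alias shown in the comment is an assumption):

    GEN_OFFSET_SYM(_thread_stack_info_t, start);

    /* Assembly can then load the member via the generated constant,
     * e.g. something along the lines of:
     *     ldr r3, [r2, #_thread_offset_to_stack_info_start]
     */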
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
We introduce a macro to define the IRQ priority level for
PendSV, and use it in arch/arm/include/aarch32/exc.h
to set the PendSV IRQ level. The commit does not change
the behavior of PendSV interrupt.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This commit adds some documentation for the exception
priority scheme for 32-bit ARM architecture variants.
In addition, we document that the SVCall priority level for
ARMv6-M is implicitly set to the highest (by leaving it at its
default).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If the IO APIC is in logical destination mode, local APICs compare
their logical APIC ID, defined in the LDR (Logical Destination
Register), with the destination code sent with the interrupt to
determine whether or not to accept the incoming interrupt.
This patch programs the LDR in xAPIC mode to support IO APIC logical
mode.
The local APIC ID from the local APIC ID register can't be used as
the 'logical APIC ID', because LAPIC IDs may not be consecutive
numbers, which would make it impossible for the LDR to encode 8 IDs
within 8 bits.
This patch chooses 0 for the BSP, and for the APs, cpu_number, which
is the index into x86_cpuboot[] and is ultimately assigned in
z_smp_init().
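The idea, sketched with Zephyr's local APIC accessors (the flat-model
encoding of one LDR bit per CPU, and the helper itself, are
assumptions made for illustration):

    #include <drivers/interrupt_controller/loapic.h>
    #include <sys/util.h>

    /* Hypothetical helper: give this CPU a unique logical APIC ID bit
     * in LDR[31:24], derived from cpu_number (0 on the BSP). */
    static void ldr_program(uint8_t cpu_number)
    {
        x86_write_loapic(LOAPIC_LDR, BIT(cpu_number) << 24);
    }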
Signed-off-by: Zide Chen <zide.chen@intel.com>
* In the COOP_SCHED case, i.e. when PREEMPT_ENABLED is not enabled,
the idle thread will block other threads, which is not correct.
* Remove the check of PREEMPT_ENABLED in the epilogue of IRQ and
exception handling. Let the scheduler (should_preempt()) decide
whether the thread should be preempted.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Same deal as in commit eddd98f811 ("kconfig: Replace some single-symbol
'if's with 'depends on'"), for the remaining cases outside defconfig
files. See that commit for an explanation.
Will do the defconfigs separately in case there are any complaints
there.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
* Remove the IRQ lock/unlock, which is not needed because of
the protection of offload_sem in irq_offload().
* Simplify the assembly code related to irq_offload and remove
the thread switch logic.
* The old code could do a thread switch in the epilogue of
irq_offload handling with interrupts locked; this is not correct
and may cause the irq_offload related code to crash.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
This commit fixes incorrect Cortex-R interrupt lock, unlock and state
check function implementations.
The issues can be summarised as follows:
1. The current implementation of 'z_arch_irq_lock' returns the value
of CPSR as the IRQ key and, since CPSR contains many other state
bits, this caused 'z_arch_irq_unlocked' to return false even when
IRQs are unlocked. This problem is fixed by isolating only the I-bit
of CPSR and returning this value as the IRQ key, such that it
returns a non-zero value when interrupts are disabled.
2. The current implementation of 'z_arch_irq_unlock' directly updates
the value of CPSR control field with the IRQ key and this can cause
other state bits in CPSR to be corrupted. This problem is fixed by
conditionally enabling interrupts using the CPSIE instruction when
the value of the IRQ key is zero (see the sketch after this list).
3. The current implementation of 'z_arch_is_in_isr' checks the value
of CPSR MODE field and returns true if its value is IRQ or FIQ.
While this does not normally cause an issue, the function can return
false when IRQ offloading is used because the offload function
executes in SVC mode. This problem is fixed by adding a check for
SVC mode.
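A sketch of the fixed behaviour described in points 1 and 2 (AArch32
inline assembly; simplified relative to the actual implementation,
and the helper names are illustrative):

    static inline unsigned int irq_lock_sketch(void)
    {
        unsigned int key;

        __asm__ volatile(
            "mrs %0, cpsr\n\t"
            "and %0, %0, #0x80\n\t"  /* keep only the I-bit as the key */
            "cpsid i"                /* then mask IRQs */
            : "=r" (key) : : "memory");

        return key;
    }

    static inline void irq_unlock_sketch(unsigned int key)
    {
        if (key != 0U) {
            return;  /* IRQs were already masked at lock time */
        }
        __asm__ volatile("cpsie i" : : : "memory");
    }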
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
The callee-saved registers have been separated out and will not
be saved/restored if exception debugging is shut off.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The context switch implementation forgot to save the current flag
state of the old thread, so on resume the flags would be restored to
whatever value they had at the last interrupt preemption or thread
initialization. In practice this guaranteed that the interrupt enable
bit would always be wrong, because obviously new threads and preempted
ones have interrupts enabled, while arch_switch() is always called
with them masked. This opened up a race between exit from
arch_switch() and the final exit path in z_swap().
The other state bits weren't relevant -- the oddball ones aren't used
by Zephyr, and as arch_switch() on this architecture is a function
call the compiler would have spilled the (caller-save) comparison
result flags anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Use of the _current_cpu pointer cannot be done safely in a preemptible
context. If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.
Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).
Note that the resulting _current expression now requires locking and
is going to be somewhat slower. Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.
Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.
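Illustrative only (not the kernel's actual macro): the rule the
assert enforces is that per-CPU data may be read only when the
caller cannot migrate, i.e. from an ISR or while a lock is held.
The helper name and its boolean parameter below are hypothetical.

    #include <kernel.h>

    static inline struct _cpu *checked_curr_cpu(bool caller_holds_lock)
    {
        __ASSERT(caller_holds_lock || k_is_in_isr(),
                 "per-CPU data accessed from a preemptible context");
        return arch_curr_cpu();
    }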
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing stack_analyze APIs had some problems:
1. Not properly namespaced
2. Accepted the stack object as a parameter, yet the stack object
does not contain the necessary information to get the associated
buffer region; the thread object is needed for this
3. Caused a crash on certain platforms that do not allow inspection
of unused stack space for the currently running thread
4. No user mode access
5. Separately passed in thread name
We deprecate these functions and add a new API
k_thread_stack_space_get() which addresses all of these issues.
A helper API, log_stack_usage(), is also added, which resembles
STACK_ANALYZE() in functionality.
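A usage sketch of the new API (the reporting function here is just an
example):

    #include <kernel.h>
    #include <sys/printk.h>

    void report_stack_headroom(void)
    {
        size_t unused;

        /* How much of the current thread's stack was never touched? */
        if (k_thread_stack_space_get(k_current_get(), &unused) == 0) {
            printk("unused stack: %zu bytes\n", unused);
        }
    }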
Fixes: #17852
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This reverts commit 9987c2e2f9
which spills SoC configs into architecture files and is not
exactly desirable. So revert it.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
All SoCs must now 'select' one of the CONFIG_<arch> symbols. Add an
ARCH_IS_SET helper symbol that's selected by the arch symbols and
checked in CMake, printing a warning otherwise.
Might save people some time until they're used to the new scheme.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
All board defconfig files currently set the architecture in addition to
the board and the SoC, by setting e.g. CONFIG_ARM=y. This spams up
defconfig files.
CONFIG_<arch> symbols currently being set in configuration files also
means that they are configurable (can be changed in menuconfig and in
configuration files), even though changing the architecture won't work,
since other things get set from -DBOARD=<board>. Many boards also allow
changing the architecture symbols independently from the SoC symbols,
which doesn't make sense.
Get rid of all assignments to CONFIG_<arch> symbols and clean up the
relationships between symbols and the configuration interface, like
this:
1. Remove the choice with the CONFIG_<arch> symbols in arch/Kconfig and
turn the CONFIG_<arch> symbols into invisible
(promptless/nonconfigurable) symbols instead.
Getting rid of the choice allows the symbols to be 'select'ed (choice
symbols don't support 'select').
2. Select the right CONFIG_<arch> symbol from the SOC_SERIES_* symbols.
This makes sense since you know the architecture if you know the SoC.
Put the select on the SOC_* symbol instead for boards that don't have
a SOC_SERIES_*.
3. Remove all assignments to CONFIG_<arch> symbols. The assignments
would generate errors now, since the symbols are promptless.
The change was done by grepping for assignments to CONFIG_<arch>
symbols, finding the SOC_SERIES_* (or SOC_*) symbol being set in the
same defconfig file, and putting a 'select' on it instead.
See
https://github.com/ulfalizer/zephyr/commits/hide-arch-syms-unsquashed
for a split-up version of this commit, which will make it easier to see
how stuff was done. This needs to go in as one commit though.
This change is safer than it might seem re. outstanding PRs, because any
assignment to CONFIG_<arch> symbols generates an error now, making
outdated stuff easy to catch.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
CUSTOM_SECTION_ALIGN is already defined within an 'if ARM_MPU', so it
does not need a 'depends on ARM_MPU'.
Flagged by https://github.com/zephyrproject-rtos/ci-tools/pull/128.
Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
Add a TRACING_ISR Kconfig option to help high-latency backends work
well. Currently the ISR tracing hook functions are placed at the
beginning and end of the ISR wrapper. When an ISR is needed in the
tracing path (especially by the tracing backend), the tracing buffer
can easily be exhausted if the async tracing method is enabled.
Tracing all the ISRs also increases system latency. So add
TRACING_ISR to enable/disable ISR tracing here. Later, a filtering
mechanism based on IRQ number will be added.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
If USERSPACE is configured, we need to record the user/kernel mode
of the interrupted thread, because the switch between aux_sec_k_sp
and aux_user_sp depends on the U bit of aux_irq_act.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Use the BOOTLOADER definition to separate bootloader code. This
allows using the same reset-vector.S file both when building the
bootloader and when CONFIG_XTENSA_RESET_VECTOR is enabled.
Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
There was a bug where double-dispatch of a single thread on multiple
SMP CPUs was possible. This can be mind-bending to diagnose, so when
CONFIG_ASSERT is enabled add an extra instruction to __resume (the
shared code path for both interrupt return and context switch) that
poisons the shared RIP of the now-running thread with a recognizable
invalid value.
Now attempts to run the thread again will crash instantly with a
discoverable cookie in their instruction pointer, and this will remain
true until it gets a new RIP at the next interrupt or switch.
This is under CONFIG_ASSERT because it meets the same design goals of
"a cheap test for impossible situations", not because it's part of the
assertion framework.
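A conceptual sketch in C of the poisoning step (the real change is in
the assembly __resume path; the saved-context field name and the
poison value are hypothetical):

    #ifdef CONFIG_ASSERT
        /* Clobber the saved instruction pointer of the thread we just
         * resumed: dispatching it a second time now faults immediately
         * with a recognizable cookie instead of silently running twice. */
        thread->callee_saved.rip = 0xDEADC0DEDEADC0DEULL;
    #endif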
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Enable the shared IRQ for the UART line and enable the remaining tasks
that depend on a separate declaration of the TX/RX/Err/... IRQs.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The cmsis_rtos tests are failing because the stack size used by CMSIS is
too small. Customize the stack size for the aarch64 architecture and
re-enable the tests.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
ARMv8-A SoCs enter EL3 after reset. Add a new config option
(CONFIG_SWITCH_TO_EL1) to switch from EL3 to EL1 at boot and default it
to 'y'.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
While QEMU's Cortex-A53 emulation by default only emulates a CPU in
EL1, other QEMU forks (for example the QEMU released by Xilinx) and
real hardware start in EL3.
To support all the ELn cases we introduce a macro to identify the
Exception Level at run-time and take the correct actions.
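The check itself, sketched in C with inline assembly (the real code
is an assembler macro; the function name is illustrative):
CurrentEL[3:2] holds the current Exception Level.

    static inline unsigned int current_el(void)
    {
        unsigned long el;

        __asm__ volatile("mrs %0, CurrentEL" : "=r" (el));
        return (unsigned int)((el >> 2) & 0x3UL);
    }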
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
To be able to pass the unit test we need to add a set of defines for the
ARM64 architecture. Fix this.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
To successfully compile the kernel for the ARM64 architecture, we
have to tweak the compiler-related files so that the AArch64 GCC
compiler can be used.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Introduce the basic ARM64 architecture support.
A new CONFIG_ARM64 symbol is introduced for the new architecture and new
cmake / Kconfig files are added to switch between ARM and ARM64.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Dynamic MPU regions are used in build configurations with user mode
or MPU-based stack-overflow guards. If these features are disabled,
we skip calling the ARM function that re-programs the MPU peripheral
during context switch. We also skip doing this when jumping to the
main thread (although this brings limited performance gain, as it is
only called once in the boot cycle).
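The guard boils down to something like the following (the Kconfig
symbols are the upstream ones; the exact name of the re-programming
routine is taken as the ARM MPU one and may differ):

    #if defined(CONFIG_USERSPACE) || defined(CONFIG_MPU_STACK_GUARD)
        /* Only re-program dynamic MPU regions when a feature that
         * actually needs them is enabled. */
        z_arm_configure_dynamic_mpu_regions(thread);
    #endif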
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Users are reportedly not able to understand how to debug the following
error message from gen_isr_tables:
gen_isr_tables.py: multiple registrations at table_index 8 for irq 8
Debugging these kinds of issues is difficult, so we need to give
users as much information as possible.
To make it clearer that it could be an abuse of the 'IRQ_CONNECT'
API that is causing the issue, we add this to the error message.
Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
In zephyr_linker_sources().
This is done since the point of the location is to place things at given
offsets. This can only be done consistently if the linker code is placed
into the _first_ section.
All uses of TEXT_START are replaced with ROM_START.
ROM_START is only supported in some arches, as some arches have several
custom sections before text. These don't currently have ROM_START or
TEXT_START available, but that could be added with a bit of refactoring
in their linker script.
No SORT_KEYs are changed.
This also fixes an error introduced when TEXT_START was added, where
TEXT_SECTION_OFFSET was applied to riscv's common linker.ld instead of
to openisa_rv32m1's specific linker.ld.
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>