Instead of implementing a custom power off API (pm_system_off),
implement the sys_poweroff hook, and indicate power off is supported by
selecting HAS_POWEROFF. Note that according to the PSCI specification
(DEN0022E), the SYSTEM_OFF operation does not return; if it somehow
does, an error is printed and the system is halted.
Note that pm_system_off has also been deleted; from now on, systems
supporting PSCI should enable CONFIG_POWEROFF and call the standard
sys_poweroff() API.
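As a rough sketch of what a PSCI-based hook can look like (psci_sys_off()
below is a hypothetical helper issuing the SYSTEM_OFF call, and
z_sys_poweroff() is assumed to be the hook behind sys_poweroff(); the
error/halt path matches the behaviour described above):
```c
#include <zephyr/sys/poweroff.h>
#include <zephyr/sys/printk.h>

/* Hypothetical helper wrapping the PSCI SYSTEM_OFF SMC/HVC call. */
void psci_sys_off(void);

void z_sys_poweroff(void)
{
        psci_sys_off();

        /* Per DEN0022E SYSTEM_OFF should not return; if it does,
         * report the error and halt.
         */
        printk("PSCI SYSTEM_OFF failed, halting\n");
        for (;;) {
        }
}
```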
Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
When frame-pointer-based unwinding is enabled, the stop condition
for the stack backtrace is (FP == NULL).
Set FP to 0 before jumping to C code.
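For reference, a minimal sketch of such a walk (frame record layout per
the AAPCS64; names are illustrative):
```c
#include <stdint.h>
#include <zephyr/sys/printk.h>

struct frame_record {
        struct frame_record *fp; /* caller's frame record, NULL at the root */
        uint64_t lr;             /* saved return address */
};

static void walk_stack(struct frame_record *fp)
{
        while (fp != NULL) { /* FP == NULL terminates the backtrace */
                printk("ra: %p\n", (void *)fp->lr);
                fp = fp->fp;
        }
}
```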
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
* Add support for coredump on ARM64 architectures.
* Add the script used for post-processing coredump output.
Signed-off-by: Marcelo Ruaro <marcelo.ruaro@huawei.com>
Signed-off-by: Rodrigo Cataldo <rodrigo.cataldo@huawei.com>
Signed-off-by: Roberto Medina <roberto.medina@huawei.com>
Currently the lazy FPU saving algorithm on arm64 uses the fpu_owner
pointer in the per-CPU structure to track which context owns the FPU on
that CPU and to save that context when someone other than the owner
accesses the FPU.
The memory-consistency semantics across SMP systems are quite error
prone, and reworks of the current code might miss barriers and end up
with inconsistent state across cores. To overcome this, use atomics to
hide the complexity and make sure the code behaves as intended.
While there, add some ISB barriers after writes to cpacr_el1, following
the ARM ARM guidance about writes to system registers.
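A minimal sketch of the idea (field and function names are illustrative,
not the actual arch code):
```c
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>

/* fpu_owner as an atomic pointer: cross-CPU reads and updates get
 * well-defined ordering without hand-rolled barriers.
 */
static atomic_ptr_t fpu_owner;

static void claim_fpu(struct k_thread *me)
{
        if (atomic_ptr_get(&fpu_owner) != (void *)me) {
                /* ...save the previous owner's FPU context to memory... */
                atomic_ptr_set(&fpu_owner, me);
        }
}
```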
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
cpu_node_list does not hold the correct mapping between CPU ID and MPID
when the core booting sequence does not follow the DTS cpu node order.
This causes an issue where an SGI cannot be delivered to the right
target. Add the cpu_map array to hold the correct mapping between CPU
ID and MPID.
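Conceptually (the array and helper names below are illustrative, not the
exact patch):
```c
#include <stdint.h>
#include <zephyr/kernel.h>

/* Logical CPU ID -> MPID, recorded at boot regardless of boot order. */
static uint64_t cpu_map[CONFIG_MP_MAX_NUM_CPUS];

void cpu_map_record(unsigned int cpu_id, uint64_t mpid)
{
        cpu_map[cpu_id] = mpid;
}

uint64_t cpu_map_lookup(unsigned int cpu_id)
{
        /* Used when composing the SGI affinity fields for this CPU. */
        return cpu_map[cpu_id];
}
```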
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Each core should initialize its own stack during reset when SMP is
enabled, but not touch the others'. Currently every core starts
initializing the stack from the same address, which breaks the others.
Fix the issue by setting a correct start address.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
The LOG system emits unaligned access instructions which cause an
alignment exception before the MPU is enabled. Remove the LOG print
done before the MPU is enabled to avoid this issue.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.
This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
Signed-off-by: Andy Ross <andyross@google.com>
For secure EL2 to be entered, the EEL2 bit in SCR_EL3 must be set. This
should only be set if Zephyr has not been configured for NS mode only,
if the device is currently in secure EL3, and if secure EL2 is supported
via the SEL2 field in ID_AA64PFR0_EL1. Added logic to enable EEL2 if all
conditions are met.
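A sketch of the check (field and bit positions per the Arm ARM; the
sysreg helper macros are assumptions, and the NS-mode/EL3 checks
described above are omitted):
```c
#include <zephyr/arch/arm64/lib_helpers.h>
#include <zephyr/sys/util.h>

static void enable_secure_el2_if_supported(void)
{
        /* SEL2, ID_AA64PFR0_EL1 bits [39:36]: non-zero means secure EL2
         * is implemented.
         */
        if (((read_sysreg(id_aa64pfr0_el1) >> 36) & 0xf) != 0U) {
                /* SCR_EL3.EEL2 (bit 18): enable secure EL2 */
                write_sysreg(read_sysreg(scr_el3) | BIT(18), scr_el3);
        }
}
```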
Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
Per the ARMv8 architecture document, modification of the system control
register is a context-changing operation. Context-changing operations are
only guaranteed to be seen after a context synchronization event.
An ISB is a context synchronization event. One has been placed after
each SCTLR modification. The issue was found running at full speed on
target.
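In practice each such update follows this pattern (a sketch; the
read_sysreg/write_sysreg helpers and the SCTLR_M_BIT name are
assumptions):
```c
#include <zephyr/arch/arm64/lib_helpers.h>
#include <zephyr/arch/arm64/cpu.h>

static void sctlr_enable_mmu(void)
{
        /* Context-changing operation: modify SCTLR_EL1... */
        write_sysreg(read_sysreg(sctlr_el1) | SCTLR_M_BIT, sctlr_el1);

        /* ...then issue a context synchronization event (ISB) so the
         * new context is guaranteed to be seen by what follows.
         */
        __asm__ volatile ("isb" ::: "memory");
}
```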
Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
Let's consider CPU1 waiting on a spinlock already taken by CPU2.
It is possible for CPU2 to invoke the FPU and trigger an FPU exception
when the FPU context for CPU2 is not live on that CPU. If the FPU context
for the thread on CPU2 is still held in CPU1's FPU then an IPI is sent
to CPU1 asking to flush its FPU to memory.
But if CPU1 is spinning on a lock already taken by CPU2, it won't see
the pending IPI as IRQs are disabled. CPU2 won't get its FPU state
restored and won't complete the required work to release the lock.
Let's prevent this deadlock scenario by looking for a pending FPU IPI from
the spinlock loop using the arch_spin_relax() hook.
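The shape of the fix, roughly (the IPI-polling helper below is
hypothetical; arch_spin_relax() is the hook called from the spinlock
busy-wait loop):
```c
/* Hypothetical helper: if an FPU-flush IPI is pending for this CPU, save
 * the local FPU context to memory now, even though IRQs are masked while
 * spinning, so the lock holder can restore its FPU state and release the
 * lock.
 */
void arm64_fpu_flush_ipi_poll(void);

/* Hook invoked on each iteration of the spinlock busy-wait loop. */
void arch_spin_relax(void)
{
        arm64_fpu_flush_ipi_poll();
}
```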
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the required
function signature takes a `const struct device *dev` as its first
argument. The only reason for that is because the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
        int (*init)(const struct device *dev);
        /* only set by DEVICE_*, otherwise NULL */
        const struct device *dev;
};
```
As a result, we end up with a weird/ugly pattern like this:
```c
static int my_init(const struct device *dev)
{
        /* always NULL! add ARG_UNUSED to avoid compiler warning */
        ARG_UNUSED(dev);
        ...
}
```
This is really a result of poor internals isolation. This patch makes
init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
        ...
}
```
This is achieved using a union:
```c
union init_function {
        /* for SYS_INIT, used when init_entry.dev == NULL */
        int (*sys)(void);
        /* for DEVICE*, used when init_entry.dev != NULL */
        int (*dev)(const struct device *dev);
};

struct init_entry {
        /* stores init function (either for SYS_INIT or DEVICE*) */
        union init_function init_fn;
        /* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
         * to know which union entry to call.
         */
        const struct device *dev;
};
```
This solution **does not increase ROM usage**, and makes it possible to
offer clean public APIs for both SYS_INIT and DEVICE*. Note, however,
that the init machinery keeps a coupling with devices.
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
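For illustration, a converted SYS_INIT registration then looks like this
(names are illustrative):
```c
#include <zephyr/init.h>

static int my_subsys_init(void)
{
        /* no device argument, no ARG_UNUSED() boilerplate */
        return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```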
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Update current stack limit on every context switch, including switching
to irq stack and switching back to thread stack.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
This commit mainly enables the safe exception stack, including the
stack switch. Init the safe exception stack by calling
z_arm64_safe_exception_stack during the boot stage on every core. Also
tweak several files to properly switch the mode in the different cases:
1) The same as before, when executing in userspace, SP_EL0 holds the
user stack and SP_EL1 holds the privileged stack, using EL1h mode.
2) When entering an exception from EL0, SP_EL0 is saved in the _esf_t
structure. SP_EL1 becomes the current SP, and SP_EL0 is then loaded
with the safe exception stack, making sure SP_EL0 always points to the
safe exception stack as long as the system is running in kernel space.
3) When exiting an exception from EL1 to EL0, SP_EL0 is restored from
the value previously saved in the _esf_t structure. Still in EL1h mode.
4) When either entering or exiting an exception from EL1 to EL1, SP_EL0
keeps holding the safe exception stack unchanged, as mentioned above.
5) Do a quick stack check every time an exception is taken from EL1 to
EL1. If the check fails, set SP_EL1 to the safe exception stack and
then handle the fatal error.
Overall, exceptions from user mode are handled on the kernel stack, on
the assumption that a stack overflow cannot happen at the entry of an
exception from EL0 to EL1. Exceptions from kernel mode, however, are
first checked against the safe exception stack to see whether the
kernel stack has overflowed, because the exception might have been
triggered by an invalid stack access.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add a safe exception stack init function which does several things:
1) Set the current CPU's safe exception stack pointer to its
corresponding stack top.
2) Init sp_el0 with the above safe exception stack, making sure sp_el0
points to the per-CPU safe stack while in kernel space.
3) Init current_stack_limit and corrupted_sp to 0.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
As preparation for enabling the safe exception stack, add a variable to
_esf_t to save the user stack held by sp_el0 at the point where an
exception is taken from EL0. The newly added variable in _esf_t is
named sp; the user stack is restored from it when exceptions eret to
EL0.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Add three per-CPU variables for quick access.
safe_exception_stack stores the top of the safe exception stack.
current_stack_limit stores the current thread's privileged stack limit.
corrupted_sp stores the privileged SP or IRQ SP in the stack overflow
case, or 0 in the normal case.
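Roughly, the arch-specific per-CPU data gains fields like these (a
sketch; the actual containing structure and types may differ):
```c
#include <stdint.h>

struct _cpu_arch {
        uint64_t safe_exception_stack; /* top of this CPU's safe exception stack */
        uint64_t current_stack_limit;  /* current thread's privileged stack limit */
        uint64_t corrupted_sp;         /* priv/IRQ SP on overflow, 0 otherwise */
};
```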
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Introduce two configs to prepare for enabling the safe exception stack
for kernel space. This is preparation for enabling the hardware stack
guard. Also define the safe exception stack used for the kernel
exception stack check.
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
If so, this is most certainly a bug. arch_mem_unmap() should be
used before mapping the same area again.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly shut up a compiler
warning that actually highlighted a real bug. Revert that and fix the
bug properly. And yes, mea culpa for having been the first to approve
that commit, or even for creating the bug in the first place.
Then let's add proper table usage count handling so discard_table()
works properly and doesn't leak table pages.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The cache operations must be quick, optimized, and possibly inlined.
The current API is clunky: functions are not inlined, and parameters
that are basically always known at compile time are passed around at
run time.
In this patch we rework the cache functions to allow us to get rid of
useless parameters and make inlining easier.
In particular this changeset is doing three things:
1. `CONFIG_HAS_ARCH_CACHE` is now `CONFIG_ARCH_CACHE` and
`CONFIG_HAS_EXTERNAL_CACHE` is now `CONFIG_EXTERNAL_CACHE`
2. The cache API has been reworked.
3. Comments are added.
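From a caller's perspective, usage after the rework looks like this (a
sketch assuming the sys_cache_data_flush_range() name and that cache
management is enabled):
```c
#include <stddef.h>
#include <zephyr/cache.h>

static void dma_prepare_buffer(void *buf, size_t len)
{
        /* Write back any dirty lines covering the buffer before a
         * device reads it from memory.
         */
        sys_cache_data_flush_range(buf, len);
}
```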
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
There is code duplication, as one copy is in C and the other is an
assembly macro. As there is no easy way to find out about this
duplication, adding a comment seems the best way to go.
Signed-off-by: Henri Xavier <datacomos@huawei.com>
Change for loops of the form:
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
...
to
unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
...
We do the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Automated change: search for files that use the IRQ_CONNECT() API
without including <zephyr/irq.h>.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
GCC may generate ldp/stp instructions with the Advanced SIMD Qn
registers for consecutive 32-byte loads and stores.
This commit disables this GCC behaviour because saving and restoring
the Advanced SIMD context is very expensive, and it is preferable to
keep the Advanced SIMD unit turned off by not emitting these
instructions, for better context switching performance.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
On GICv3, when we send an IPI, aff3, aff2 and aff1 should be assigned
values corresponding to the PE for which the interrupt will be
generated; target_list only corresponds to aff0.
On real hardware, aff3, aff2, aff1 and aff0 must be treated as a whole
to determine a PE.
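In other words, the SGI register value has to be composed from the full
MPIDR of the target PE, roughly like this (a macro-free sketch using the
ICC_SGI1R_EL1 field positions; it assumes Aff0 < 16 and no range
merging):
```c
#include <stdint.h>

static uint64_t sgi1r_value(uint64_t mpidr, unsigned int sgi_id)
{
        uint64_t aff3 = (mpidr >> 32) & 0xff;
        uint64_t aff2 = (mpidr >> 16) & 0xff;
        uint64_t aff1 = (mpidr >> 8) & 0xff;
        uint64_t aff0 = mpidr & 0xff;

        return (aff3 << 48) | (aff2 << 32) | ((uint64_t)sgi_id << 24) |
               (aff1 << 16) | (1ULL << aff0); /* TargetList bit for Aff0 */
}
```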
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
VMPIDR_EL2 holds the value returned by EL2 reads of MPIDR_EL1.
MPIDR_EL1 is the register holding the Multiprocessor ID, which
identifies different cores. Because of the virtualization requirements
of AArch64, MPIDR_EL1 should be virtualized (different virtual cores
can run on the same physical core), so the value of MPIDR_EL1 has to be
switched when the VM is switched. Setting VMPIDR_EL2 is the way to
change the value returned by EL1 reads of MPIDR_EL1. Even without
virtualization, we still need to set VMPIDR_EL2 when booting at EL2 or
EL3; otherwise all cores' IDs are zero at EL1, which breaks the SMP
system.
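The real setup is done in the early boot code; in C-like terms it boils
down to this (a sketch assuming the read_sysreg/write_sysreg helpers):
```c
#include <zephyr/arch/arm64/lib_helpers.h>

static void vmpidr_init(void)
{
        /* Mirror the physical core ID so that EL1 reads of MPIDR_EL1
         * (which return VMPIDR_EL2) still identify the correct core.
         */
        write_sysreg(read_sysreg(mpidr_el1), vmpidr_el2);
}
```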
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
When a cache API function is called from userspace, this results in an
OOPS (bad syscall error) on ARM64. This is due to at least two
different factors:
- the location of the cache handlers is preventing the linker from
actually finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
implemented (when userspace is not used the compiler simply optimizes
out these calls)
Fix the problem by:
- moving the userspace cache handlers to their logical and proper
location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC
Signed-off-by: Carlo Caione <ccaione@baylibre.com>