Commit 4f9b547ebd ("riscv: smp: prepare for more than one IPI type")
didn't clear pending IPI flags.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We can leverage the FPU dirty state as an indicator for preemptively
reloading the FPU content when a thread that did use the FPU before
being scheduled out is scheduled back in. This avoids the FPU access
trap overhead when switching between multiple threads with heavy FPU
usage.
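A minimal sketch of the idea (the flag and helper names are
hypothetical, not the actual implementation):

    /* at swap-in time, once the incoming thread is chosen */
    if (incoming->arch.fpu_recently_used) {
            /* hypothetical helper: flush the current FPU owner if
             * any, then load the incoming thread's FPU content and
             * grant it access, avoiding a later FPU access trap */
            z_riscv_fpu_preload(incoming);
    }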
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
FPU context switching is always performed on demand through the FPU
access exception handler. Actual task switching only grants or denies
FPU access depending on the current FPU owner.
Because RISC-V doesn't have a dedicated FPU access exception, we must
catch the Illegal Instruction exception and look for actual FP opcodes.
There is no longer a need to allocate FPU storage on the stack for
every exception, making the esf smaller and stack overflows less
likely.
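For illustration, a simplified opcode check based on the standard
RISC-V major opcode encodings (compressed FP instructions and fcsr
CSR accesses are omitted for brevity):

    static bool is_fp_insn(uint32_t insn)
    {
            switch (insn & 0x7f) {          /* major opcode field */
            case 0x07:                      /* LOAD-FP (flw/fld) */
            case 0x27:                      /* STORE-FP (fsw/fsd) */
            case 0x43:                      /* fmadd */
            case 0x47:                      /* fmsub */
            case 0x4b:                      /* fnmsub */
            case 0x4f:                      /* fnmadd */
            case 0x53:                      /* OP-FP */
                    return true;
            default:
                    return false;
            }
    }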
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Instead of saving/restoring FPU content on every exception and task
switch, this replaces FPU sharing support with a "lazy" (on-demand)
context switching algorithm similar to the one used on ARM64.
Every thread starts with FPU access disabled. On the first access the
FPU trap is invoked to:
- flush the FPU content to the previous thread's memory storage;
- restore the current thread's FPU content from memory.
When a thread loads its data in the FPU, it becomes the FPU owner.
FPU content is preserved across task switching, however FPU access is
either allowed if the new thread is the FPU owner, or denied otherwise.
A thread may claim FPU ownership only through the FPU trap. This way,
threads that don't use the FPU won't force an FPU context switch.
If only one running thread uses the FPU, there will be no FPU context
switching to do at all.
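A condensed sketch of the trap handler logic (helper and field names
are hypothetical):

    void z_riscv_fpu_trap(void)
    {
            struct k_thread *owner = _current_cpu->arch.fpu_owner;

            if (owner != _current) {
                    if (owner != NULL) {
                            /* flush the owner's FPU content to its
                             * memory storage */
                            z_riscv_fpu_save(owner);
                    }
                    /* restore our content and claim ownership */
                    z_riscv_fpu_restore(_current);
                    _current_cpu->arch.fpu_owner = _current;
            }
            /* grant FPU access by raising mstatus.FS */
            csr_set(mstatus, MSTATUS_FS_INIT);
    }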
It is possible to do FP accesses in ISRs and syscalls. This is not the
norm though, so the same principle is applied here, although exception
contexts may not own the FPU. When they access the FPU, the FPU content
is flushed and the exception context is granted FPU access for the
duration of the exception. Nested IRQs are disallowed in that case to
dispense with the need to save and restore the exception's FPU context
data.
This is the core implementation only to ease reviewing. It is not yet
hooked into the build.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Right now this is hardcoded to z_sched_ipi(). Make it so that other IPI
services can be added in the future.
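A sketch of the resulting shape (the flag storage and bit names are
hypothetical):

    #define IPI_SCHED BIT(0)
    /* more IPI services may claim further bits in the future */

    static atomic_t ipi_pending[CONFIG_MP_NUM_CPUS];

    static void ipi_handler(const void *unused)
    {
            atomic_val_t pending;

            pending = atomic_clear(&ipi_pending[_current_cpu->id]);
            if (pending & IPI_SCHED) {
                    z_sched_ipi();
            }
    }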
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Given the Zephyr CPU number is no longer tied to the hartid, we must
consider the actual hartid when sending an IPI to a given CPU. Since
those hartids can be anything, let's just save them in the cpu structure
as each CPU is brought online.
While at it, throw in some `get_hart_msip()` cleanups.
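A sketch of the result, assuming a CLINT-style MSIP register array
(RISCV_MSIP_BASE standing in for the SoC's actual base address): the
IPI target is derived from the stored hartid, not the Zephyr CPU
number:

    #define MSIP(hartid) ((volatile uint32_t *)RISCV_MSIP_BASE)[hartid]

    void arch_sched_ipi(void)
    {
            unsigned int id = _current_cpu->id;

            for (unsigned int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
                    if (i != id) {
                            /* hartid saved when CPU i came online */
                            MSIP(_kernel.cpus[i].arch.hartid) = 1;
                    }
            }
    }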
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Currently it is assumed that Zephyr CPU numbers match their hartid
value one for one. This assumption was relied upon to efficiently
retrieve the current CPU's `struct _cpu` pointer.
People are starting to have systems with a mix of different usages
for each CPU, and such an assumption may no longer hold.
Let's completely decouple the hartid from the Zephyr CPU number by
stuffing each CPU's `struct _cpu` pointer in their respective scratch
register instead. `arch_curr_cpu()` becomes more efficient as well.
Since the scratch register was previously used to store userspace's
exception stack pointer, that pointer is now moved into
`struct _cpu_arch`, which implied some minor user space entry code
cleanup and rationalization.
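With the pointer in the scratch CSR, arch_curr_cpu() reduces to a
single CSR read; a sketch:

    static ALWAYS_INLINE struct _cpu *arch_curr_cpu(void)
    {
            struct _cpu *cpu;

            /* mscratch holds this CPU's struct _cpu pointer */
            __asm__ volatile ("csrr %0, mscratch" : "=r" (cpu));
            return cpu;
    }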
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
RISC-V has a modular design. Some hardware with a custom interrupt
controller needs a bit more work to lock / unlock IRQs.
Account for this hardware by introducing a set of new
z_soc_irq_* functions that can override the default behaviour.
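A sketch of what an SoC might provide (the my_intc_* helpers are
hypothetical placeholders for the custom interrupt controller
accessors):

    void z_soc_irq_enable(unsigned int irq)
    {
            /* program the custom controller on top of the core CSRs */
            my_intc_unmask(irq);
    }

    void z_soc_irq_disable(unsigned int irq)
    {
            my_intc_mask(irq);
    }

    int z_soc_irq_is_enabled(unsigned int irq)
    {
            return my_intc_is_unmasked(irq);
    }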
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Some RISC-V SoCs implement a mechanism for hardware supported stacking /
unstacking of registers during ISR / exceptions. What happens is that on
ISR / exception entry part of the context is automatically saved by the
hardware on the stack without software intervention, and the same part
of the context is restored by the hardware usually on mret.
This is currently not yet supported by Zephyr, where the full context
must be saved by software in the full-fledged ESF. This patch set
addresses exactly this case.
At least three things are needed to support this in a general
fashion: (1) a way to store in software only the part of the ESF not
already stacked by hardware, (2) a way to restore in software only the
part of the context that is not going to be restored by hardware, and
(3) a way to define a custom ESF.
Point (3) is important because the full ESF frame is now composed of
a custom part depending on the hardware (which can choose which
registers to stack / unstack and the order they are saved onto the
stack) and a software-defined part for the remaining context.
This patch introduces a new CONFIG_RISCV_SOC_HAS_ISR_STACKING option
that enables the code path supporting those three points by means of
three macros that must be implemented by the user in a
soc_isr_stacking.h file: SOC_ISR_SW_STACKING, SOC_ISR_SW_UNSTACKING
and SOC_ISR_STACKING_ESF (refer to the symbol help for more details).
This is an example of soc_isr_stacking.h for hardware that doesn't do
any stacking / unstacking, where everything is managed in software:
#ifndef __SOC_ISR_STACKING
#define __SOC_ISR_STACKING

#if !defined(_ASMLANGUAGE)

#define SOC_ISR_STACKING_ESF_DECLARE \
	struct __esf { \
		unsigned long ra; \
		unsigned long t0; \
		unsigned long t1; \
		unsigned long t2; \
		unsigned long t3; \
		unsigned long t4; \
		unsigned long t5; \
		unsigned long t6; \
		unsigned long a0; \
		unsigned long a1; \
		unsigned long a2; \
		unsigned long a3; \
		unsigned long a4; \
		unsigned long a5; \
		unsigned long a6; \
		unsigned long a7; \
		unsigned long mepc; \
		unsigned long mstatus; \
		unsigned long s0; \
	} __aligned(16)

#else

#define SOC_ISR_SW_STACKING \
	addi sp, sp, -__z_arch_esf_t_SIZEOF; \
	DO_CALLER_SAVED(sr);

#define SOC_ISR_SW_UNSTACKING \
	DO_CALLER_SAVED(lr);

#endif /* _ASMLANGUAGE */
#endif /* __SOC_ISR_STACKING */
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This reverts commit a7b5d606c7.
The assumption behind that commit was wrong. The software-based stack
sentinel writes to the very bottom of the _writable_ stack area, i.e.
right next to the actual PMP-based guard area. So they are compatible.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Change for loops of the form:

    for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
            ...

to:

    unsigned int num_cpus = arch_num_cpus();

    for (i = 0; i < num_cpus; i++)
            ...
We do the call outside of the for loop so that it only happens once,
rather than on every iteration.
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Add <zephyr/irq.h> to files that use the IRQ_CONNECT() API but were
not including it. The files were found via automated search.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.
CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. The result is a
stack whose pointer is much closer to the trigger zone than expected
when CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular
pushes it over the edge during many CI test cases.
Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.
Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.
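A sketch of the fix (the symbol name is hypothetical): generate a
constant covering the full stack object, guard area included, in the
arch offsets file, and use it from the early assembly code:

    GEN_ABSOLUTE_SYM(__z_interrupt_stack_SIZEOF,
                     K_KERNEL_STACK_LEN(CONFIG_ISR_STACK_SIZE));

    /* early boot then effectively does:
     *   sp = z_interrupt_stacks + __z_interrupt_stack_SIZEOF
     * which lands on the true top of the stack.
     */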
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The software-based stack sentinel writes to the very bottom of the
stack area triggering the PMP stack protection. Obviously they can't
be used together.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The IRQ stack in particular is different on each CPU, and so is its
stack guard PMP entry value. This creates two issues:
- The assertion ensuring the last global PMP address is the same
for each CPU fails;
- That last global PMP address can't be relied upon to create a
single-slot per-thread TOR mapping.
Fix both issues by not remembering the actual address for the last
global entry but a dummy address instead that is guaranteed not to
match any opportunistic single-slot TOR mapping.
While at it, lock that IRQ stack guard PMP entry.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Z_THREAD_STACK_BUFFER() must not be used here. This is meant for stacks
defined with K_THREAD_STACK_ARRAY_DEFINE() whereas in this case we are
given a stack created with K_KERNEL_STACK_ARRAY_DEFINE().
If CONFIG_USERSPACE=y then K_THREAD_STACK_RESERVED gets defined with
a bigger value than K_KERNEL_STACK_RESERVED. Z_THREAD_STACK_BUFFER()
then returns a pointer that is further ahead than expected, resulting
in a stack pointer outside its actual stack area, and memory
corruption ensues.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This commit fixes the placement of the vectors section by using
zephyr_linker_sources(ROM_START ...) (as done in the ARM architecture
port) so its order can be adjusted by SORT_KEY.
Fixes #49903
Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
QEMU requires that the semihosting trap instruction sequence, which
consists of three uncompressed instructions, lie in the same page, and
refuses to interpret the trap sequence if these instructions are placed
across two different pages.
This commit adds 16-byte alignment requirement to the `semihost_exec`
function, which occupies 12 bytes, to ensure that the three trap
sequence instructions in this function are never placed across two
different pages.
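For reference, a sketch of the trap sequence with the alignment
applied (the function name is hypothetical; .option norvc keeps all
three instructions uncompressed):

    long __attribute__((aligned(16), noinline))
    semihost_trap(long op, void *arg)
    {
            register long a0 __asm__("a0") = op;
            register void *a1 __asm__("a1") = arg;

            __asm__ volatile (
                    ".option push\n\t"
                    ".option norvc\n\t"
                    "slli zero, zero, 0x1f\n\t" /* entry marker */
                    "ebreak\n\t"                /* semihosting trap */
                    "srai zero, zero, 7\n\t"    /* exit marker */
                    ".option pop"
                    : "+r" (a0) : "r" (a1) : "memory");

            return a0;
    }

The three instructions occupy 12 bytes, so a 16-byte-aligned start
guarantees they never straddle a page boundary.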
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
All SOC_ERET definitions expand to the mret instruction (used to return
from a trap: exception or interrupt). The 'eret' instruction existed
in previous RISC-V privileged specs, but it doesn't seem to be used in
Zephyr (ref. RISC-V Privileged Architectures 3.2.2).
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
For vectored interrupts use the generated IRQ vector table instead of
relying on a custom-generated table.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Some early RISC-V SoCs have a problem when an `mret` instruction is used
outside a trap handler.
After the latest major Zephyr RISC-V rework, the arch_switch code
does indeed call `mret` when not in handler mode, breaking some early
RISC-V platforms.
Optionally restore the old behavior by adding a new
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL symbol.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This is really useful in only one case, i.e. when testing against zero.
Do that test inline where it is needed and make the rest of the code
independent from the actual numerical value being tested to make code
maintenance easier if/when new cases are added.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Retrieve the pmpaddr value matching the last global PMP slot and add it
to the per-thread m-mode and u-mode entry array. Even if that value is
not written out again on thread context switch, that value can still be
used by set_pmp_entry() to attempt a single-slot TOR mapping with it.
Nicely abstract this with the new z_riscv_pmp_thread_init() where the
PMP_M_MODE(thread) and PMP_U_MODE(thread) argument generators can be
used.
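A sketch of what the argument generators might expand to (field names
hypothetical):

    #define PMP_M_MODE(thread) \
            thread->arch.m_mode_pmpaddr_regs, \
            thread->arch.m_mode_pmpcfg_regs, \
            ARRAY_SIZE(thread->arch.m_mode_pmpaddr_regs)

    /* usage: */
    z_riscv_pmp_thread_init(PMP_M_MODE(thread));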
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
A QEMU bug may create bad transient PMP representations causing
false access faults to be reported. Work around it by setting
pmp registers to zero from the update start point to the end
before updating them with new values.
The QEMU fix is here with more details about this bug:
https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg02800.html
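A sketch of the workaround (write_pmpaddr() is a hypothetical helper
dispatching to the individual pmpaddr CSRs):

    /* zero the slots from the update start point to the end... */
    for (unsigned int i = start; i < CONFIG_PMP_SLOTS; i++) {
            write_pmpaddr(i, 0);
    }
    /* ...then write the new values */
    for (unsigned int i = start; i < end; i++) {
            write_pmpaddr(i, pmp_addr[i]);
    }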
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This reverts the bulk of commit c8bfc2afda ("riscv: make
arch_is_user_context() SMP compatible") and replaces it with a flag
stored in the thread local storage (TLS) area, therefore making TLS
mandatory for userspace support on RISC-V.
This has many advantages:
- The tp (x4) register is already dedicated by the standard for this
purpose, making TLS support almost free.
- This is very efficient, requiring only a single instruction to clear
and 2 instructions to set.
- This makes the SMP case much more efficient. No need for funky
exception code any longer.
- SMP and non-SMP now use the same implementation making maintenance
easier.
- The is_user_mode variable no longer requires a dedicated PMP mapping,
thereby freeing one PMP slot for other purposes.
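A sketch of the TLS-based flag (the variable name is hypothetical):

    __thread uint8_t is_user_mode; /* lives in each thread's TLS area */

    static inline bool arch_is_user_context(void)
    {
            /* tp (x4) points at the current TLS area in both modes,
             * so this is a single tp-relative load */
            return is_user_mode != 0;
    }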
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This reverts commit 5f65dbcc9dab3d39473b05397e05.
The tp (x4) register is neither caller nor callee saved according to
the RISC-V standard calling convention. It only has to be set on thread
context switching and is otherwise read-only.
To protect the kernel against a possible rogue user thread, the tp is
also re-set on exception entry from u-mode.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
For some reason RISC-V is the only arch where the vector table entry
is called __irq_wrapper instead of _isr_wrapper. This is not only a
cosmetic change: Zephyr expects the common ISR handler to be called
_isr_wrapper (for example when generating the IRQ vector table).
Change it:
find ./ -type f -exec sed -i 's/__irq_wrapper/_isr_wrapper/g' {} \;
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In preparation for RV32E support, optimize t* register usage a bit,
limiting it to t0-t2 (RV32E only provides registers x0-x15, so t3-t6
are unavailable).
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This patch does several things:
- Core ISA and extension Kconfig symbols now have formalized names
(CONFIG_RISCV_ISA_* and CONFIG_RISCV_ISA_EXT_*)
- a new Kconfig.isa file was introduced with the full set of extensions
currently supported by the v2.2 spec
- a new Kconfig.core file was introduced to host all the RISC-V cores
(currently only E31)
- ISA and extension settings are moved to SoC configuration files
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
When returning from z_riscv_switch, depending on whether the thread that
has just been swapped in was earlier swapped out synchronously (i.e. via
regular function call) or asynchronously (i.e. via exception/irq) we
will return to arch_switch() or __irq_wrapper respectively. Comment this
fact for clarity.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
After the introduction of arch_switch() in #43085, ECALL is no longer
used for context switching by default, so remove the comment stating so.
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
ARCH_HAS_USERSPACE and ARCH_HAS_STACK_PROTECTION are direct functions
of RISCV_PMP regardless of the SoC.
PMP_STACK_GUARD is a function of HW_STACK_PROTECTION (from
ARCH_HAS_STACK_PROTECTION) and not the other way around.
This allows for tests/kernel/fatal/exception to test protection against
various stack overflows based on the PMP stack guard functionality.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
_current_cpu->irq_stack is not yet initialized when this is executed on
CPU 0. Also, the guard area is outside of CONFIG_ISR_STACK_SIZE now,
i.e. it is within the K_KERNEL_STACK_RESERVED area at the start of
the buffer. So simply use z_interrupt_stacks[] directly instead.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
A separate privileged stack is used when CONFIG_GEN_PRIV_STACKS=y. The
main stack guard area is no longer needed and can be made available to
the application upon transitioning to user mode. And that's actually
required if we want a naturally aligned power-of-two buffer to let the
PMP map a NAPOT entry onto it, which is the whole point of having the
CONFIG_PMP_POWER_OF_TWO_ALIGNMENT option in the first place.
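For reference, the standard NAPOT encoding this enables: a region of
2^n bytes (n >= 3) naturally aligned on its size fits in a single
pmpaddr entry:

    /* base must be aligned to size; size is a power of two >= 8 */
    unsigned long pmp_addr_val = (base >> 2) | ((size >> 3) - 1);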
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The StackGuard area is used to save the esf and run the exception code
resulting from a StackGuard trap. Size it appropriately.
Remove redundancy, clarify documentation, etc.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>