Commit graph

5846 commits

Author SHA1 Message Date
Flavio Ceolin
d6d3c1098d xtensa: mmu: dup_table does not need parameter
The only page table duplicated is the kernel page table. This function
does not need a parameter.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
0012725d85 xtensa: mmu: Avoid k_mem_domain_default duplication
We can use some extra bits available for the SW implementation to
save the original permissions and avoid duplicating the kernel page tables
for the default memory domain.

When duplicating the page table to a new domain we just ensure
that the original map is restored.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
e9fa729a6b xtensa: mmu: Fix macro to get ring from a pte
The macro was not masking the pte correctly and it was returning
a wrong value.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
eb7d60dd67 xtensa: mmu: Remove duplicated macro
XTENSA_MMU_PTE was defined twice.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
dcaceda39b xtensa: mmu: Simplify memory map
Simplify the logic around the shared attribute. Check whether a memory
region should be shared only in the function that actually maps the
memory.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Nicolas Pitre
530b593275 arch: riscv: apply CONFIG_RISCV_MCAUSE_EXCEPTION_MASK to FPU code
Some implementations use bits outside of the mcause mask for other
purposes.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-14 09:32:39 +02:00
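
A minimal sketch of the masking pattern described in the commit above; the mask value and the variable standing in for the mcause CSR read are illustrative only, not the actual FPU-path code:

```c
#include <stdio.h>

/* Stand-in value for the Kconfig-provided mask; the real value is
 * implementation-specific.
 */
#ifndef CONFIG_RISCV_MCAUSE_EXCEPTION_MASK
#define CONFIG_RISCV_MCAUSE_EXCEPTION_MASK 0xFFFUL
#endif

int main(void)
{
	/* Pretend this was read from the mcause CSR; bits outside the mask
	 * are implementation-defined and must not reach the exception
	 * decoding logic.
	 */
	unsigned long mcause = 0xF0000002UL;

	unsigned long code = mcause & CONFIG_RISCV_MCAUSE_EXCEPTION_MASK;

	printf("raw mcause: %#lx, exception code: %#lx\n", mcause, code);
	return 0;
}
```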
Najumon B.A
2803dcd564 arch: x86: remove limitation of number of cpu support in smp
Remove the limitation on the number of CPUs supported by the x86 arch.
Also add support for retrieving CPU information, such as for
hybrid cores.

Signed-off-by: Najumon B.A <najumon.ba@intel.com>
2024-05-13 16:07:11 -04:00
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
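
A rough, simplified sketch of the accessor pattern described above; the structure and helper names here are illustrative stand-ins, not the kernel's actual page frame definitions:

```c
#include <stdint.h>
#include <stdio.h>

#define PF_PINNED  (1U << 0)
#define PF_MAPPED  (1U << 1)

/* Simplified stand-in for a page frame descriptor. */
struct page_frame {
	uint32_t flags;
};

/* All flag updates funnel through these helpers, so the underlying
 * representation can change without touching every call site.
 */
static inline void page_frame_set(struct page_frame *pf, uint32_t flag)
{
	pf->flags |= flag;
}

static inline void page_frame_clear(struct page_frame *pf, uint32_t flag)
{
	pf->flags &= ~flag;
}

int main(void)
{
	struct page_frame pf = { .flags = 0 };

	page_frame_set(&pf, PF_MAPPED);
	page_frame_clear(&pf, PF_MAPPED);
	printf("flags: %#x\n", (unsigned int)pf.flags);
	return 0;
}
```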
frei tycho
81e4b91bb5 arch: x86: avoided increments/decrements with side effects
- moved ++/-- out of expressions, placing them before or after the value is used

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-09 15:44:54 +02:00
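
A tiny before/after illustration of the pattern applied in the commit above (example code only, not the actual diff):

```c
#include <stdio.h>

int main(void)
{
	int idx = 0;
	int buf[4] = {10, 20, 30, 40};

	/* Before: increment mixed into the expression (side effect in use). */
	/* int value = buf[idx++]; */

	/* After: the value use and the increment are separate statements. */
	int value = buf[idx];
	idx++;

	printf("value=%d idx=%d\n", value, idx);
	return 0;
}
```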
Hess Nathan
ed12a0cc35 coding guidelines: comply with MISRA Rule 11.8
modified parameter types to receive a const pointer when a
non-const pointer is not needed

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-09 10:28:44 +02:00
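
An illustrative example of the parameter-type change described above, using a made-up function: a read-only parameter is declared as a pointer to const so that callers holding pointers to const data need no qualifier-dropping cast (the MISRA Rule 11.8 concern):

```c
#include <stddef.h>
#include <stdio.h>

/* The function only reads the buffer, so the parameter is declared const;
 * callers with a pointer-to-const then need no cast that would drop the
 * qualifier.
 */
static size_t count_nonzero(const unsigned char *buf, size_t len)
{
	size_t n = 0;

	for (size_t i = 0; i < len; i++) {
		if (buf[i] != 0U) {
			n++;
		}
	}
	return n;
}

int main(void)
{
	const unsigned char data[] = {0, 1, 2, 0};

	printf("%zu\n", count_nonzero(data, sizeof(data)));
	return 0;
}
```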
Debbie Martin
882238116e arch: arm: cortex-r: Add compiler tuning for Cortex-R82
Change the GCC toolchain configuration to make use of the Cortex-R82
target. When Cortex-R82 was added as a GCC toolchain option, the GCC
version of the Zephyr SDK did not support Cortex-R82 tuning. Zephyr was
therefore compiled for the Armv8.4-A architecture. Since Zephyr
SDK 0.15.0 (which updated GCC from 10.3.0 to 12.1.0) coupled with Zephyr
3.2, the Cortex-R82 target is supported.

The Armv8-R AArch64 architecture does not support the EL3 exception level.
EL3 support is therefore made conditional on Armv8-R vs Armv8-A.

Signed-off-by: Debbie Martin <Debbie.Martin@arm.com>
2024-05-07 17:57:05 -04:00
Yong Cong Sin
b32c5e2a60 arch: riscv: only use z_riscv_fatal_error_csf if CONFIG_EXCEPTION_DEBUG
Use `z_riscv_fatal_error_csf` that expects the
callee-saved-registers pointer only if `CONFIG_EXCEPTION_DEBUG`
is enabled, otherwise use `z_riscv_fatal_error`, as there can
be garbage in `a2`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-07 09:34:32 +02:00
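
A hedged sketch of the selection logic described above; the structures and functions are simplified stand-ins for the real RISC-V arch code:

```c
#include <stdio.h>

/* Simplified stand-ins; the real functions and structures live in the
 * RISC-V arch code.
 */
struct callee_saved { unsigned long s0; };
struct excp_frame { unsigned long mepc; };

void fatal_error(unsigned int reason, const struct excp_frame *esf)
{
	printf("fatal error %u, mepc=%#lx (no callee-saved dump)\n",
	       reason, esf->mepc);
}

void fatal_error_csf(unsigned int reason, const struct excp_frame *esf,
		     const struct callee_saved *csf)
{
	printf("fatal error %u, mepc=%#lx, s0=%#lx\n",
	       reason, esf->mepc, csf->s0);
}

int main(void)
{
	struct excp_frame esf = { .mepc = 0x80000000UL };
	struct callee_saved csf = { .s0 = 0x12345678UL };

#ifdef CONFIG_EXCEPTION_DEBUG
	/* Only in this configuration is the callee-saved pointer known to
	 * be valid.
	 */
	fatal_error_csf(1U, &esf, &csf);
#else
	/* Without CONFIG_EXCEPTION_DEBUG the callee-saved pointer may hold
	 * garbage, so the variant that ignores it is called instead.
	 */
	fatal_error(1U, &esf);
	(void)csf;
#endif
	return 0;
}
```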
frei tycho
4b7e09230c arch: x86: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED
- added a cast to void where the ARG_UNUSED macro is not available

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 22:53:13 +01:00
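
A small illustration of the two variants mentioned above; ARG_UNUSED is defined locally here so the snippet stays self-contained:

```c
#include <stdio.h>

/* Local stand-in for Zephyr's ARG_UNUSED() macro. */
#define ARG_UNUSED(x) (void)(x)

static void handler_with_macro(void *arg)
{
	ARG_UNUSED(arg);
	printf("handler ran\n");
}

static void handler_without_macro(void *arg)
{
	/* Where the macro is not available, a plain cast to void is used. */
	(void)arg;
	printf("handler ran\n");
}

int main(void)
{
	handler_with_macro(NULL);
	handler_without_macro(NULL);
	return 0;
}
```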
frei tycho
30681799e8 coding guidelines: comply with MISRA C:2012 Rule 17.7 in arch
- added explicit cast to void when the returned value is intentionally ignored

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-04 13:00:14 +03:00
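
An illustrative example of MISRA C:2012 Rule 17.7 compliance as described above (not taken from the actual diff):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[16];

	/* The return value of strcpy() is intentionally ignored, so the
	 * cast to void makes that explicit (MISRA C:2012 Rule 17.7).
	 */
	(void)strcpy(buf, "zephyr");

	printf("%s\n", buf);
	return 0;
}
```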
Jonathon Penix
274bd59283 arch: arm: cortex-m: Change character used to mark immediate operand
Change the character used to indicate immediate operands from '$' to '#'
to resolve an "invalid instruction" error when building with clang.

For arm, binutils allows either '#' or '$' to indicate immediate operands.
clang seems to accept '$' for arm in other instances
(my build accepts 'subs r0, r0, $0x02', for example), but in this case it
produces an error that this is an invalid instruction due to the "$0x02"
operand.

Given clang's inconsistent behavior, I'm guessing this is a bug in clang
somewhere, but:

  1. '#' for immediate operands seems to be more standard for arm in
     general and seems to be what is used throughout the rest of Zephyr's
     arm asm code.
  2. Switching out '$' for '#' shouldn't negatively impact other
     toolchains.

As such, switch out the character used to unblock clang builds until this
can be fixed in clang.

Signed-off-by: Jonathon Penix <jpenix@quicinc.com>
2024-05-03 07:28:52 -04:00
Sebastian Bøe
850cd54e67 arm: fatal: log which IRQn is triggering on spurious IRQs
Log which IRQn line is triggering on spurious IRQs, as this makes
them much easier to debug.

The new logs with this patch look like:

<err> os: Unhandled IRQn: 227
<err> os: >>> ZEPHYR FATAL ERROR 1: Unhandled interrupt on CPU 0
<err> os: Current thread: 0x20032c20 (unknown)
<err> os: Halting system

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2024-05-02 22:42:30 +01:00
Alberto Escolar Piedras
320f23ef96 arch posix fuzzing: Move kconfig options to sample
Move the kconfig options used to configure the interrupt
and wait time to the sample which uses them instead of
having them in the architecture code.
These options are very particular to this sample and are not
really an API.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Alberto Escolar Piedras
df5440fc59 arch posix: Correct ARCH_POSIX_LIBFUZZER kconfig help message
This option can now also be used with native_sim
and seems to work fine with both 32 and 64 bit targets.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Alberto Escolar Piedras
f5553004b0 sample fuzzer: Move fuzzer specific code to sample and fix for native_sim
Move the LLVM fuzzing specific code out of the board main
file and into the sample.
That way we avoid needing to duplicate it for native_sim and
avoid having a very ad hoc interface between the fuzzer test
and runner code.

Also ensure it works for native_sim and not just native_posix.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-02 20:46:03 +03:00
Hess Nathan
aa3e9c1d1e coding guidelines: comply with MISRA Rule 2.2
- avoided dead stores

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:52:30 -04:00
Hess Nathan
cbd9b37ef5 coding guidelines: comply with MISRA Rule 20.9
- avoided using undefined macros in #if expressions

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 13:10:29 +02:00
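
An illustrative example of MISRA C:2012 Rule 20.9 as referenced above; FEATURE_X is a made-up macro name:

```c
#include <stdio.h>

/* FEATURE_X is a made-up macro that may or may not be defined by the build. */

int main(void)
{
	/* Non-compliant with Rule 20.9 (shown only as a comment): an
	 * undefined FEATURE_X would silently evaluate to 0 inside #if.
	 *
	 *   #if FEATURE_X == 1
	 */

	/* Compliant: test for definition explicitly before using the value. */
#if defined(FEATURE_X) && (FEATURE_X == 1)
	printf("FEATURE_X enabled\n");
#else
	printf("FEATURE_X disabled or undefined\n");
#endif
	return 0;
}
```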
Patryk Duda
4fe5ac9248 arch: posix: Undefine operating system specific macros for native_sim
Compilers predefine system-specific macros which carry information about the
compiler, target architecture, and operating system. These provide basic
compiler-dependent information like the size of types and their maximal and
minimal values, and allow common libc headers to be written for multiple
architectures and operating systems.

These macros allow code to always determine what the target operating
system is. This is a problem when compiling the code of modules that
support multiple operating systems (e.g. cryptography libraries).

To avoid confusion we shouldn't leak host operating system macros (e.g.
__linux__, __linux, linux, etc.) when compiling for native_sim board.

Unfortunately, there is no single universal switch that disables all
operating system macros:
- '-undef' also removes architecture-related macros
- '--target' is only available for the Clang compiler

This patch uses '-include' option to include file that undefines all
well-known operating system macros.

Run 'gcc -dM -E - < /dev/null | sort' to get the full list of predefined
macros.

Signed-off-by: Patryk Duda <patrykd@google.com>
2024-04-30 14:30:30 -04:00
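
A rough sketch of the approach described above: a header consisting of #undef directives is force-included with '-include' so the host OS macros never reach the cross-compiled sources. The file name and macro list here are illustrative, not the actual Zephyr header:

```c
/* undef_host_os_macros.h -- illustrative sketch, not the actual header.
 *
 * Passed to the compiler with:  gcc -include undef_host_os_macros.h ...
 * so it is processed before any source or header file.
 */
#undef __linux__
#undef __linux
#undef linux
#undef __unix__
#undef __unix
#undef unix
/* ...further well-known host operating system macros would follow... */
```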
Hess Nathan
a1fc3b78ef coding guidelines: comply with MISRA Rule 15.6
- added missing braces

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-04-30 16:20:38 +02:00
François Baldassari
3f78ca9873 ARC: fault: Fix uninitialized memory access
Found via static analysis. In the fault path, when checking for stack
overflows, if CONFIG_MULTITHREADING is not set, `guard_end` is left
uninitialized and is subsequently used in a comparison.

The solution is to simply return `false` in this configuration as stack
guards are not configured in the first place.

Signed-off-by: François Baldassari <francois@memfault.com>
2024-04-29 17:39:57 +01:00
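
An abstracted sketch of the shape of the fix described above; names and addresses are simplified stand-ins for the ARC fault-handling code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Comment this out to mimic a build without multithreading. */
#define CONFIG_MULTITHREADING 1

static bool within_stack_guard(uintptr_t fault_addr)
{
#ifdef CONFIG_MULTITHREADING
	/* Illustrative bounds; the real code derives them from the faulting
	 * thread's stack object.
	 */
	uintptr_t guard_start = 0x20000000UL;
	uintptr_t guard_end = guard_start + 4096UL;

	return (fault_addr >= guard_start) && (fault_addr < guard_end);
#else
	/* No stack guards are configured without multithreading, so there is
	 * nothing to compare against; returning false avoids reading an
	 * uninitialized bound.
	 */
	return false;
#endif
}

int main(void)
{
	printf("%d\n", within_stack_guard(0x20000010UL));
	return 0;
}
```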
Flavio Ceolin
d8635905e9 xtensa: mmu: Avoid unnecessary mapping
When duplicating a page table, we don't need to copy
the mapping to the kernel l1 page table virtual address.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-04-27 11:05:45 +03:00
Marcin Szymczyk
0ea7bf19e4 arch: riscv: irq_manage: support ISR_OFFSET in dynamic IRQs
`CONFIG_RISCV_RESERVED_IRQ_ISR_TABLES_OFFSET` should be taken into
account in `arch_irq_connect_dynamic`, same as it is done in
`ARCH_IRQ_CONNECT` macro.

Signed-off-by: Marcin Szymczyk <marcin.szymczyk@nordicsemi.no>
2024-04-25 15:03:23 +02:00
Wilfried Chauveau
00ddd8a81a arch: arm: cortex_m: enable interrupts before entering application’s main
This also fixes a typo in `z_arm_switch_to_main_no_multithreading` making
it unlock IRQs instead of locking them when main returns.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-25 07:54:47 -04:00
Pieter De Gendt
ff6985766b arch: posix: Select at least C11 standard
Replace the global CSTD property with the CSTD kconfig option to select
at least C11 standard.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-04-25 09:54:39 +00:00
Yong Cong Sin
30b122b3f0 arch: riscv: print callee-saved-registers in fatal error
Print callee-saved registers during fatal error
to help with debugging.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-24 15:57:40 -04:00
Yong Cong Sin
7398831884 arch: riscv: implement frame-pointer based stack unwinding
Influenced heavily by the RISCV64 stack unwinding
implementation in the Linux kernel.

`CONFIG_RISCV_EXCEPTION_STACK_TRACE` can be enabled by
configuring the following Kconfigs:

```prj.conf
CONFIG_DEBUG_INFO=y
CONFIG_EXCEPTION_STACK_TRACE=y
CONFIG_OVERRIDE_FRAME_POINTER_DEFAULT=y
CONFIG_OMIT_FRAME_POINTER=n
```

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-04-20 13:54:43 -04:00
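
A toy model of frame-pointer unwinding in the spirit of the commit above, using an array in place of a real stack. The frame layout shown (caller's fp and return address stored just below the frame pointer) follows the common RISC-V convention, but none of this is the actual Zephyr implementation:

```c
#include <stdint.h>
#include <stdio.h>

/* With frame pointers enabled, each frame stores the caller's frame pointer
 * and the return address just below the address held in fp.
 */
struct stackframe {
	uintptr_t fp;   /* caller's frame pointer */
	uintptr_t ra;   /* return address into the caller */
};

static void unwind(uintptr_t fp, uintptr_t stack_lo, uintptr_t stack_hi)
{
	while (fp >= stack_lo + sizeof(struct stackframe) && fp <= stack_hi) {
		const struct stackframe *frame =
			(const struct stackframe *)(fp - sizeof(*frame));

		printf("ra: %#lx\n", (unsigned long)frame->ra);
		fp = frame->fp;
	}
}

int main(void)
{
	/* 32 words standing in for a stack region. */
	uintptr_t stack[32] = {0};

	uintptr_t inner_fp = (uintptr_t)&stack[8];
	uintptr_t outer_fp = (uintptr_t)&stack[16];

	/* Hand-build two frames: each fp/ra pair sits immediately below the
	 * address its frame pointer holds.
	 */
	stack[6] = outer_fp;   /* inner frame: caller's fp */
	stack[7] = 0x2222UL;   /* inner frame: return address */
	stack[14] = 0UL;       /* outer frame: fp of 0 terminates the walk */
	stack[15] = 0x1111UL;  /* outer frame: return address */

	unwind(inner_fp, (uintptr_t)&stack[0], (uintptr_t)&stack[32]);
	return 0;
}
```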
Wilfried Chauveau
b621802614 arch: arm: cortex_m: cpu_idle: Add missing irq masking/unmasking
This was missed during conversion from ASM to C.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-17 15:00:25 +02:00
Wilfried Chauveau
fb6ab560a5 arch: arm: cortex_m: fix inverted logic in cpu_idle
This mistake was introduced when converting from ASM to C.
This change also restores the associated comment from the ASM source.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-17 15:00:25 +02:00
Wilfried Chauveau
42036cdbca arch: arm: cortex_m: Only trigger context switch if thread is preemptible
This is a fix for #61761 where a cooperative task is switched from at the
end of an exception. A cooperative thread should only be switched from if
the thread exits the ready state.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
773739a52a arch: arm: cortex_m: move part of swap_helper to C
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change significantly enhances the maintainability & portability of the
code at the expense of an indirection (1 outlined function).

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
65ec07fe33 arch: arm: cortex_m: use cmsis API rather than assembly
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
4760aad353 arch: arm: cortex_m: Convert cpu_idle from ASM to C
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change reduces the need for core specific conditional compilation and
unifies irq locking code.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>

# Conflicts:
#	soc/arm/nordic_nrf/nrf53/soc_cpu_idle.h
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
f11027df80 arch: arm: cortex_m: Use outlined arch_irq_(un)lock in assembly
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This change reduces the need for core specific conditional compilation.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
8e55468af2 arch: arm: cortex_m: use C rather than asm in isr_wrapper & exc_exit
Asm is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

This is a first step in reducing the amount of ASM in arch/arm/cortex_m

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
85feaa60e2 arch: arm: cortex_m: Use r* register names rather than v*
v* register aliases are uncommon and it can be surprising to find them.
This change makes use of r* register names for a more consistent
experience of reading assembly.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Wilfried Chauveau
4c3f6ea5b2 arch: arm: cortex_m: Document why __aeabi_read_tp impl requires ASM impl
This method has a special ABI requirement that requires the use of ASM.
This change documents why this is required & adds a reference to the
related specification.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-04-15 09:09:28 -07:00
Kai Vehmanen
7fd0a7a5eb soc: intel_adsp: replace icache ISR workaround with custom idle solution
A workaround to avoid icache corruption was added in commit be881d4cf2
("arch: xtensa: add isync to interrupt vector").

This patch implements a different workaround by adding custom logic to
idle entry on affected Intel ADSP platforms. To safely enter "waiti"
when clock gating is enabled, we need to ensure icache is both unlocked
and invalidated upon entry.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2024-04-15 16:26:39 +02:00
Aleksandar Cecaric
0144ed6b63 arch: riscv: update coredump for 64BIT RISCV
Add RISCV 64-bit registers and parse them in the coredump script.

Signed-off-by: Aleksandar Cecaric <aleksandar.cecaric@nextsilicon.com>
2024-04-13 07:03:23 -04:00
Alberto Escolar Piedras
4f7b144ef6 arch posix: When building for the native_simulator only link ASAN once
Only request the linker to link ASAN in the final stage, not
during the partial linking stage.
This fixes a link issue when building with llvm.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-12 15:03:35 +02:00
Alberto Escolar Piedras
b59b21f8bb arch posix: pass -fsanitize-recover=all also to native_simulator build
If the CONFIG_ASAN_RECOVER option is set, also pass
-fsanitize-recover=all to the build of the native simulator
built files.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-04-12 15:03:35 +02:00
Guennadi Liakhovetski
34ab1a1e51 llext: xtensa: add support for in-place relocatable extensions
Currently LLEXT on Xtensa supports relocatable extensions, linked for
a specific address range, while relocation itself takes place in a
temporary buffer. For this, section addresses have to be set correctly
by the linker for their target locations.

This commit adds support for relocatable extensions, built without
using specific memory addresses and run at the same addresses where
they are loaded.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2024-04-11 11:35:24 -05:00
Cedric Lescop
7b1d9d6166 llext: Full ARM ELF relocation support
Adds support for all relocation types produced by GCC
on the ARM platform using partial linking (the -r flag) or
a shared link (the -fpic and -shared flags).

Signed-off-by: Cedric Lescop <cedric.lescop@se.com>
2024-04-10 14:13:15 -04:00
Daniel Leung
027a1c30cd x86: add support for memory mapped stack for threads
This adds the necessary bits to enable memory mapping thread
stacks on both x86 and x86_64. Note that currently these do
not support multi-level mappings (e.g. demand paging and
running in virtual address space: qemu_x86/atom/virt board)
as the mapped stacks require actual physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
d0a90a0b33 kernel: add the ability to memory map thread stacks
This introduces support for memory mapped thread stacks,
where each thread stack is mapped into virtual memory
address space with two guard pages to catch
under-/over-flowing the stack. This is just on the kernel
side. Additional architecture code is required to fully
support this feature.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
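
A back-of-the-envelope sketch of the guard-page idea described above, with illustrative page and stack sizes rather than real Kconfig values:

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative numbers only; the real values come from Kconfig and the
 * architecture's page size.
 */
#define PAGE_SIZE   4096U
#define STACK_SIZE  (4U * PAGE_SIZE)

int main(void)
{
	/* One inaccessible guard page below and one above the stack: running
	 * off either end of the stack touches a hole in the address space
	 * and faults immediately instead of silently corrupting a neighbour.
	 */
	uint32_t mapped_span = PAGE_SIZE + STACK_SIZE + PAGE_SIZE;

	printf("stack: %u bytes, mapped span incl. guards: %u bytes\n",
	       STACK_SIZE, mapped_span);
	return 0;
}
```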
Daniel Leung
94997a026f x86: correct size for stack bound check for privileged stack
A previous commit changed the privileged stack size to use the
kconfig option CONFIG_PRIVILEGED_STACK_SIZE instead of simply
CONFIG_MMU_PAGE_SIZE. However, the stack bound check function
was still using the MMU page size, so fix that.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung
ac5835565b x86: synchronize usage of CONFIG_X86_STACK_PROTECTION
Most places use CONFIG_X86_STACK_PROTECTION, but there are some
places using CONFIG_HW_STACK_PROTECTION. So synchronize all
to use CONFIG_X86_STACK_PROTECTION instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00