Commit graph

305 commits

Carlo Caione
acc53653b4 arm64: mm: increase MAX_XLAT_TABLES for USERSPACE && TEST
Since commit 0026a5610ac ("arm64: mm: use identity mapping for device
MMIO"), device_map() creates identity mappings (VA = PA) instead of
allocating virtual addresses from a contiguous pool. Each device at a
distinct 2MB-aligned physical address now requires its own L3 page
table, increasing the total number of translation tables needed.

Bump the USERSPACE && TEST default from 24 to 28 to accommodate the
additional tables required by identity-mapped device MMIO.
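
The arithmetic behind the bump can be sketched in host-runnable C (addresses and helper below are illustrative, not Zephyr code): with a 4KB granule one L3 table spans 2MB, so the number of extra L3 tables equals the number of distinct 2MB blocks occupied by device addresses.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* With a 4KB granule, one L3 table maps a 2MB block (512 x 4KB pages),
 * so each distinct 2MB-aligned block holding device MMIO needs its own
 * L3 table. */
#define L3_BLOCK_SHIFT 21 /* log2(2MB) */

/* Count distinct 2MB blocks among device physical addresses
 * (O(n^2) is fine for a sketch). */
static size_t l3_tables_needed(const uint64_t *pa, size_t n)
{
	size_t count = 0;

	for (size_t i = 0; i < n; i++) {
		size_t j;

		for (j = 0; j < i; j++) {
			if ((pa[j] >> L3_BLOCK_SHIFT) == (pa[i] >> L3_BLOCK_SHIFT)) {
				break; /* block already counted */
			}
		}
		if (j == i) {
			count++;
		}
	}
	return count;
}
```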

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2026-04-15 17:17:55 -04:00
Appana Durga Kedareswara rao
355cb6c663 arch: arm64: coredump: add FP and SP registers for correct GDB backtraces
The ARM64 coredump arch block did not include FP (x29) and SP,
making GDB unable to unwind the stack. The GDB stub also
misinterpreted SPSR as SP (tu[20] mapped to SP_EL0), producing
corrupted stack pointer values and broken backtraces.
Bump the arch block version to v2 (24 registers, 192 bytes),
adding FP and SP after the existing 22 registers. Update the
GDB stub to auto-detect v1 vs v2 blocks by payload size and
correctly map SPSR (skip), ELR (PC), FP (x29), and SP.
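
The size-based auto-detection can be sketched as follows (stand-alone C; macro and helper names are stand-ins, not the stub's actual identifiers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in constants: v1 carried 22 64-bit registers (176 bytes), v2
 * appends FP (x29) and SP for 24 registers (192 bytes). */
#define ARM64_ARCH_BLK_V1_SIZE (22U * sizeof(uint64_t)) /* 176 */
#define ARM64_ARCH_BLK_V2_SIZE (24U * sizeof(uint64_t)) /* 192 */

/* Pick the block layout from the payload size, as the stub does. */
static int arch_blk_version(size_t payload_size)
{
	if (payload_size == ARM64_ARCH_BLK_V2_SIZE) {
		return 2; /* has FP and SP */
	}
	if (payload_size == ARM64_ARCH_BLK_V1_SIZE) {
		return 1; /* legacy layout, no FP/SP */
	}
	return -1; /* unknown */
}
```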

When CONFIG_ARM64_SAFE_EXCEPTION_STACK is enabled and the
exception originated from EL0, use the saved esf->sp (original
sp_el0 stored by the exception entry code) instead of computing
it from the ESF address, since the exception handler may be
running on a separate safe stack.

Fixes #99054

Signed-off-by: Anirudha Sarangi <anirudha.sarangi@amd.com>
Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2026-04-15 05:42:40 -04:00
Carlo Caione
229815dbd8 arm64: mm: use identity mapping for device MMIO
On ARM64, Zephyr uses identity mappings (VA = PA) for kernel code, data and
boot-time device regions. The MMU fully supports address translation but
Zephyr uses it primarily for access permission enforcement.

There are currently two independent paths for mapping device MMIO regions:

1. SoC-level mmu_regions.c files use MMU_REGION_FLAT_ENTRY() to create
   identity mappings (VA = PA) directly in the page tables at boot. This
   bypasses the kernel's virtual memory tracking entirely. SoC maintainers
   must manually list peripherals in mmu_regions.c for drivers that do not
   use the device MMIO API (e.g. most existing drivers) or cannot use it
   (e.g. the GIC, which is not a regular driver).

2. The device MMIO API (device_map()) goes through k_mem_map_phys_bare(),
   which allocates a virtual address from the SRAM range and maps (VA !=
   PA) device registers there. Mapping device MMIO into the SRAM virtual
   address space is nonsensical: it conflates device registers with memory,
   wastes virtual address pool space, and produces addresses that bear no
   relation to the hardware.

The CONFIG_KERNEL_DIRECT_MAP mechanism already supports identity mapping
through k_mem_map_phys_bare() with the K_MEM_DIRECT_MAP flag, but it
requires each board defconfig to enable the Kconfig and each driver to
explicitly pass the flag.

Make identity-mapped device MMIO automatic on ARM64:

1. ARM64 CPU_CORTEX_A selects KERNEL_DIRECT_MAP when MMU is enabled. This
   eliminates the need for per-board defconfig opt-in.

2. device_map() automatically injects K_MEM_DIRECT_MAP when
   CONFIG_KERNEL_DIRECT_MAP is enabled. This is transparent to
   drivers, so no per-driver changes are needed. The flag is gated on
   CONFIG_KERNEL_DIRECT_MAP rather than CONFIG_ARM64, keeping it
   architecture-agnostic.
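
A minimal sketch of point 2's flag injection, assuming hypothetical flag values (Zephyr's real definitions differ):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins; Zephyr's actual flag value and Kconfig
 * plumbing differ. */
#define K_MEM_DIRECT_MAP (1U << 5)
#define CONFIG_KERNEL_DIRECT_MAP 1 /* selected by ARM64 CPU_CORTEX_A w/ MMU */

/* device_map() computes its mapping flags along these lines: identity
 * mapping is requested transparently, with no per-driver changes. */
static uint32_t device_map_flags(uint32_t flags)
{
#ifdef CONFIG_KERNEL_DIRECT_MAP
	flags |= K_MEM_DIRECT_MAP; /* VA = PA for the device region */
#endif
	return flags;
}
```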

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2026-04-15 05:40:30 -04:00
Archilis Wang
9edd557451 arch: arm64: add thread-based stack unwinding
Implement thread-based unwinding to support the 'kernel thread unwind'
shell command on ARM64. This update ensures that 'thread' defaults to
'_current' when NULL, complying with the arch_stack_walk() API contract.

To enhance security, add stack bounds validation using the stack_info
and TLS pointers of the target thread. If these are not available, the
logic falls back to is_address_mapped() to keep the unwinding process
robust.
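
The bounds check can be sketched like this (stand-alone C; the struct and helper names are hypothetical, while the 16-byte frame-pointer alignment is an AAPCS64 property):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bounds descriptor filled from stack_info/TLS (or,
 * failing that, from an is_address_mapped()-style check). */
struct stack_bounds {
	uintptr_t start; /* lowest valid stack address */
	uintptr_t end;   /* one past the highest valid address */
};

/* A frame record is two 8-byte slots (saved FP, saved LR) and FP must
 * be 16-byte aligned per AAPCS64, so validate before dereferencing. */
static bool frame_in_bounds(uintptr_t fp, const struct stack_bounds *b)
{
	return (fp % 16 == 0) && fp >= b->start && (fp + 16) <= b->end;
}
```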

Signed-off-by: Archilis Wang <awm02289@gmail.com>
2026-04-14 22:38:19 -04:00
Nicolas Pitre
89dac23a5f arch: arm64: make MMU page size configurable via Kconfig
Add ARM64_PAGE_SIZE Kconfig choice allowing 4KB, 16KB and 64KB
page sizes. The MMU code already derived all constants from
PAGE_SIZE_SHIFT so most of the infrastructure was ready.

Changes:
- Add ARM64_PAGE_SIZE choice (4KB default, 16KB, 64KB) in Kconfig
- Derive PAGE_SIZE_SHIFT from CONFIG_MMU_PAGE_SIZE in mmu.h
- Select proper TCR granule bits (TG0/TG1) per page size in mmu.c
- Round ARCH_THREAD_STACK_RESERVED up to page alignment so that
  the user-accessible stack buffer starts on a page boundary
- Fix MEM_REGION_ALLOC in mem_protect test to use CONFIG_MMU_PAGE_SIZE
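
The derivation can be sketched in host-runnable C (helper names are illustrative; the TG0 granule encodings 4KB=0b00, 64KB=0b01, 16KB=0b10 are from the ARM ARM):

```c
#include <assert.h>
#include <stdint.h>

/* Derive PAGE_SIZE_SHIFT from the configured page size. */
static unsigned int page_size_shift(uint32_t page_size)
{
	unsigned int shift = 0;

	while ((1U << shift) < page_size) {
		shift++;
	}
	return shift; /* 12, 14 or 16 */
}

/* Select the TCR TG0 field for the configured granule. */
static uint64_t tcr_tg0(uint32_t page_size)
{
	switch (page_size) {
	case 4096:
		return 0x0;
	case 16384:
		return 0x2;
	case 65536:
		return 0x1;
	default:
		return 0x0; /* unsupported size: fall back to 4KB */
	}
}
```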

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-04-14 22:15:56 -04:00
Nicolas Pitre
c852f0cf9c arch: arm64: fix crash on SMP secondary CPUs when PAC is enabled
arch_secondary_cpu_init() never returns (it ends with fn(arg) into the
scheduler) but its definition lacks FUNC_NORETURN. The compiler
generates a PACIASP/AUTIASP pair and turns the final fn(arg) into a
tail-call: AUTIASP followed by BR. The AUTIASP causes a PAC
authentication failure (FPAC exception) on secondary CPUs.

Fix by marking the definition FUNC_NORETURN with CODE_UNREACHABLE,
matching the extern declaration. The compiler then generates a plain
BLR without the AUTIASP epilogue.

Also fix the function signature to take no arguments, matching the
extern declaration and actual call sites, and move both
arch_secondary_cpu_init() and z_arm64_mm_init() declarations into
boot.h instead of scattering extern declarations across source files.

Also remove the dead arch_cache_init() call in z_arm64_secondary_prep_c()
that was placed after the noreturn call and could never execute. It is
absent from the primary CPU path in z_prep_c() and the implementation
is empty on arm64 anyway.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-04-07 11:35:44 -05:00
Joakim Tjernlund
67b5d58c26 arm64: Set DS bit if GIC_SINGLE_SECURITY_STATE
This needs to be set while in secure mode, so do so while in EL3.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-04-07 11:34:49 -05:00
Nicolas Pitre
2d3d00bf54 arch: arm64: add ISB between SVE trap control and ZCR register writes
The ARM Architecture Reference Manual (DDI 0487) requires a context
synchronization event (ISB) between modifying SVE trap control registers
(CPTR_EL3.EZ, CPTR_EL2.TZ, CPACR_EL1.ZEN) and accessing the
corresponding ZCR_ELx registers: "The effect of the change is guaranteed
to be observable only after a Context synchronization event."

Without the ISB, the processor may still observe the old trap
configuration and generate an UNDEFINED exception on the ZCR write.

This also fixes the EL2 SVE initialization for non-VHE mode
(HCR_EL2.E2H=0): CPTR_EL2 bits [17:16] (ZEN) are RES0 in non-VHE
format and must not be set. SVE trapping at EL2 in non-VHE mode is
controlled by the TZ bit (bit 8) instead. The previous code wrote the
VHE-format ZEN bits which is architecturally UNPREDICTABLE in non-VHE
mode. Match the Linux kernel sequence (arch/arm64/include/asm/el2_setup.h).

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-03-11 17:56:31 +00:00
Jisheng Zhang
4936bd008a arch: arm64: Convert cpu_idle from ASM to C
ASM is notoriously harder to maintain than C and requires core-specific
adaptation, which further impairs the readability of the code.

There's a bug in current arch_cpu_atomic_idle asm version:
	tst	x0, #(DAIF_IRQ_BIT) //here Z := (DAIF_IRQ_BIT == 0)
	beq	_irq_disabled //jump to _irq_disabled when Z is set
	msr	daifclr, #(DAIFCLR_IRQ_BIT)
_irq_disabled:
	ret

As can be seen, the asm code jumps to _irq_disabled when Z is set, but
per the AArch64 architecture reference, DAIF_IRQ == 0 means the IRQ is
unmasked, i.e. enabled. So the asm logic is inverted: it skips
re-enabling IRQs exactly when they should be re-enabled. This bug is
fixed in the C version, which shows the benefit of the ASM -> C
conversion.

As for performance concerns, apart from the bug fix above, there is no
difference in the generated code between the ASM and C versions.
ASM version:
<arch_cpu_idle>:
d5033f9f 	dsb	sy
d503207f 	wfi
d50342ff 	msr	daifclr, #0x2
d65f03c0 	ret

<arch_cpu_atomic_idle>:
d50342df 	msr	daifset, #0x2
d5033fdf 	isb
d503205f 	wfe
f279001f 	tst	x0, #0x80
54000040 	b.eq	1001d10 <_irq_disabled>  // b.none
d50342ff 	msr	daifclr, #0x2

<_irq_disabled>:
d65f03c0 	ret

C version:
<arch_cpu_idle>:
d5033f9f 	dsb	sy
d503207f 	wfi
d50342ff 	msr	daifclr, #0x2
d65f03c0 	ret

<arch_cpu_atomic_idle>:
d50342df 	msr	daifset, #0x2
d5033fdf 	isb
d503205f 	wfe
37380040 	tbnz	w0, #7, 1001d0c <arch_cpu_atomic_idle+0x14>
d50342ff 	msr	daifclr, #0x2
d65f03c0 	ret

And as can be seen, the C version uses the TBNZ instruction to test a
bit and branch. Unlike TST, TBNZ does not affect the Z, N, C, or V
flags in the processor state. So apart from the bug fix, the C version
looks a bit better than the ASM version.

Other architectures such as x86, riscv, rx, xtensa, mips and even arm
cortex_m already use a C version for cpu_idle, so the ASM -> C
conversion is safe.
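
For reference, the branch condition the C version gets right reduces to this host-runnable sketch (helper name is illustrative; bit 7 matches the `tst x0, #0x80` above):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* DAIF.I is bit 7 -- the bit behind `tst x0, #0x80` / `tbnz w0, #7`.
 * Bit clear in the saved key means IRQs were unmasked (enabled), so
 * they must be re-enabled after wfe. */
#define DAIF_IRQ_BIT (1U << 7)

static bool must_reenable_irqs(uint64_t key)
{
	return (key & DAIF_IRQ_BIT) == 0;
}
```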

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-03-11 17:51:09 +00:00
Appana Durga Kedareswara rao
352fde02ea arch: arm64: mmu: page-align address and size in add_arm_mmu_region()
Static MMU region entries populated via
MMU_REGION_DT_COMPAT_FOREACH_FLAT_ENTRY() pass raw DTS reg address and
size values to __add_map(), which asserts page-alignment. DTS nodes may
legitimately have non-page-aligned reg sizes reflecting actual hardware
register footprints, causing an assert crash during early boot when
CONFIG_ASSERT=y.

Align the base address down and size up to CONFIG_MMU_PAGE_SIZE in
add_arm_mmu_region(), mirroring the k_mem_region_align() logic already
used by the dynamic DEVICE_MMIO_MAP path in kernel/mmu.c. This ensures
all static platform MMU region entries are mapped with page-granular
parameters regardless of DTS reg values.
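
The alignment arithmetic can be sketched as stand-alone C (the page size value is illustrative; the actual code uses CONFIG_MMU_PAGE_SIZE):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 0x1000U /* illustrative; must be a power of two */

/* Align the base down to a page boundary... */
static uintptr_t page_align_down(uintptr_t addr)
{
	return addr & ~(uintptr_t)(PAGE_SIZE - 1);
}

/* ...and round the size up so the original [addr, addr + size) range
 * stays fully covered by whole pages. */
static size_t page_align_size(uintptr_t addr, size_t size)
{
	uintptr_t offset = addr & (PAGE_SIZE - 1);

	return (size + offset + PAGE_SIZE - 1) & ~(size_t)(PAGE_SIZE - 1);
}
```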

Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2026-03-06 21:38:13 +01:00
Joakim Tjernlund
ce5397ef01 arm64: Enable CNTPS for EL1
Setting SCR_ST_BIT actually traps CNTPS access to EL3, opposite
to what the comment says. Remove it to allow secure EL1 access.
Also initialize CNTPS_CVAL_EL1 to prevent spurious interrupts.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
Co-authored-by: Sudan Landge <sudan.landge@arm.com>
2026-03-06 09:58:56 +01:00
Nicolas Pitre
0e372b687c arm64: mmu: support memory domain de-initialization
Implement arch_mem_domain_deinit() for ARM64 to release page tables
back to the pool when a memory domain is de-initialized. This reuses
the existing discard_table() mechanism to recursively free all
sub-tables in the hierarchy.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-03-03 17:57:44 +01:00
Joakim Tjernlund
2f2fa12991 arm64: Add CFI annotations in exceptions for gdb BT
gdb cannot unwind the stack from exceptions. This adds
CFI annotations to help gdb unwind.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-03-02 15:49:41 -08:00
Joakim Tjernlund
8d2018b5e3 arm64: init cntp
CNTP may be used by an application, so make sure it is running.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-02-21 15:35:09 +00:00
Joakim Tjernlund
b6fd653637 arm64: cnthctl_el2: Set EL1PCTEN/EL1PCEN for cntp in EL1
Zeroing CNTHCTL_EL2 traps physical timer/counter access from EL1 to EL2,
but Zephyr has no hypervisor to handle those traps.
Enabling access is the standard EL2→EL1 drop behavior.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-02-21 15:35:09 +00:00
Joakim Tjernlund
bbeb260c9b arch: arm64: Setup ICC_SRE_EL2
ICC_SRE_EL2 needs the same setup as ICC_SRE_EL3 for SPI IRQs
to work.

Signed-off-by: Joakim Tjernlund <joakim.tjernlund@infinera.com>
2026-02-19 10:01:00 -06:00
Nicolas Pitre
13fe03a3b5 arm64: Add Branch Target Identification (BTI) support
Add support for ARMv8.5+ Branch Target Identification to protect against
Jump-Oriented Programming (JOP) attacks. This complements PAC to offer
complete protection against both ROP and JOP attacks, ensuring
comprehensive control flow integrity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-02-03 09:36:09 +01:00
Nicolas Pitre
d1d439ca09 arm64: Add Pointer Authentication (PAC) support
Add support for ARMv8.3+ Pointer Authentication to protect against
Return-Oriented Programming (ROP) attacks. This implementation provides
PAC functionality with per-thread key isolation, secure key management,
and integration with Zephyr's thread model.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2026-02-03 09:36:09 +01:00
Grygorii Strashko
db1bea3ae2 drivers: xen: add XEN_EVENTS Kconfig option
The Xen event channel driver consumes 72K of RAM, but may not be
required in all use cases.

Added a XEN_EVENTS Kconfig option so that Xen events can be gracefully
disabled if not required. Updated the relevant CMakeLists.txt and
Kconfig files to guard the inclusion of the Xen events driver and its
source files by this option.

Signed-off-by: Grygorii Strashko <grygorii_strashko@epam.com>
Signed-off-by: Svitlana Drozd <svitlana_drozd@epam.com>
2026-01-30 16:56:52 -06:00
Appana Durga Kedareswara rao
34076efa39 arm64: fpu: Clear K_FP_REGS flag in arch_float_disable()
The arch_float_disable() function was not clearing the K_FP_REGS flag
from thread->base.user_options after disabling FPU access. This caused
the float_disable test to fail as it verifies the flag is properly
cleared after FPU disable.
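
The missing step reduces to a one-line mask operation; a stand-alone sketch with a hypothetical flag value:

```c
#include <assert.h>
#include <stdint.h>

#define K_FP_REGS (1U << 1) /* hypothetical stand-in for Zephyr's flag */

/* After disabling FPU access in hardware, arch_float_disable() must
 * also drop K_FP_REGS from thread->base.user_options. This helper is
 * illustrative and operates on the options word directly. */
static uint32_t float_disable_options(uint32_t user_options)
{
	return user_options & ~K_FP_REGS;
}
```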

Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2026-01-26 11:56:59 +01:00
Jisheng Zhang
23dfe86f4a arch: arm64: remove ARM64_EXCEPTION_STACK_TRACE
After commit 02770ad963 ("debug: EXCEPTION_STACK_TRACE should depend
on arch Kconfigs"), the ARM64_EXCEPTION_STACK_TRACE isn't used any more,
remove it.

Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
2026-01-09 10:39:41 +01:00
Mathieu Choplain
36170c4530 arch: *: remove check for CONFIG_SOC_PER_CORE_INIT_HOOK
soc_per_core_init_hook() is usually called from arch_kernel_init() and
arch_secondary_cpu_init() which are C functions. As such, there is no need
to check for CONFIG_SOC_PER_CORE_INIT_HOOK since platform/hooks.h provides
a no-op function-like macro implementation if the Kconfig option is not
enabled.

Remove the Kconfig option check from all files.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-01-07 19:39:53 +01:00
Sudan Landge
c1ded6b9b6 arch: arm64: fix definition of ARCH_HAS_STACKWALK
Move ARCH_HAS_STACKWALK under CPU_CORTEX_A section since only Cortex-A
implements arch_stack_walk(), while Cortex-R does not.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2025-11-27 16:01:27 +01:00
Appana Durga Kedareswara rao
4e8cafe641 arch: arm64: mmu: Call k_panic() when translation tables exhausted
When CONFIG_MAX_XLAT_TABLES is too small and new_table() cannot allocate
a translation table, the system must halt rather than continue with
undefined behavior.

This change ensures k_panic() is called after reporting the error,
preventing the system from proceeding when it runs out of translation
tables. Additionally, adds printk() fallback for configurations where
CONFIG_LOG is disabled to ensure the error is always visible.
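
The control flow can be sketched as stand-alone C, with hypothetical stubs in place of Zephyr's LOG_ERR/printk/k_panic (the real k_panic() never returns):

```c
#include <assert.h>
#include <stddef.h>

/* Host-runnable stub replacing k_panic(). */
static int panicked;

static void k_panic_stub(void)
{
	panicked = 1;
}

/* Wrap a table allocator: report and halt instead of continuing with
 * no translation table. */
static void *new_table_checked(void *(*new_table)(void))
{
	void *table = new_table();

	if (table == NULL) {
		/* With CONFIG_LOG: LOG_ERR("CONFIG_MAX_XLAT_TABLES too small");
		 * without it, fall back to printk() so the error is visible. */
		k_panic_stub();
	}
	return table;
}

/* A pool that is out of tables, for demonstration. */
static void *exhausted_pool(void)
{
	return NULL;
}
```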

Signed-off-by: Appana Durga Kedareswara rao <appana.durga.kedareswara.rao@amd.com>
2025-11-24 14:57:25 -05:00
Mykyta Poturai
f3b9d18711 xen: Add support for changing Xen Sysctl interface version
Add a new Kconfig option CONFIG_XEN_SYSCTL_INTERFACE_VERSION that
allows changing the version of the Sysctl interface used by Zephyr to
issue sysctl hypercalls.
For now, version 0x15 is supported.

Signed-off-by: Mykyta Poturai <mykyta_poturai@epam.com>
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2025-11-20 09:01:06 -05:00
Mykyta Poturai
4f6fb8989a xen: Add support for changing Xen Domctl interface version
Add a new Kconfig option CONFIG_XEN_DOMCTL_INTERFACE_VERSION that
allows changing the version of the Domctl interface used by Zephyr to
issue domctl hypercalls. Add compile-time checks to enable or disable
certain Domctl operations based on the selected Domctl interface
version. For now, versions 0x15, 0x16, and 0x17 are supported.

This also required correctly guarding domctl calls that were not
supported prior to the specified version.
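
The compile-time guard pattern can be sketched like this (the guarded operation is hypothetical; the version values are the ones listed above):

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_XEN_DOMCTL_INTERFACE_VERSION 0x16 /* one of 0x15/0x16/0x17 */

/* A domctl operation introduced in interface version 0x16 (the specific
 * op is hypothetical) is compiled out when an older version is selected. */
static bool domctl_op_available(void)
{
#if CONFIG_XEN_DOMCTL_INTERFACE_VERSION >= 0x16
	return true;
#else
	return false;
#endif
}
```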

Signed-off-by: Mykyta Poturai <mykyta_poturai@epam.com>
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2025-11-20 09:01:06 -05:00
Dmytro Semenets
f131d5f3ed drivers: xen: dom0: add Xen sysctl hypercall
This hypercall can be used to get information about the physical
machine and running guests:

- The "xen_sysctl_getphysinfo" sysctl hypercall allows reading
information about the physical machine: number of CPUs, memory sizes,
hardware capabilities, etc.

- The "xen_sysctl_getdomaininfolist" sysctl hypercall returns an array
of domain info structures that provide information about particular
domain(s).

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
Signed-off-by: Mykyta Poturai <mykyta_poturai@epam.com>
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2025-11-20 09:01:06 -05:00
Dmytro Semenets
eaaa5400dc drivers: xen: add xen version hypercall
The Xen API contains a hypercall that allows domains to identify the
Xen version currently used on the system. It can be used to check
whether the current version is supported by Zephyr, or to change the
behavior of drivers or services.

Signed-off-by: Dmytro Semenets <dmytro_semenets@epam.com>
Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2025-11-20 09:01:06 -05:00
TOKITA Hiroshi
571f5b92a0 drivers: xen: add DMOP hypercall wrappers
Add wrappers for the following XEN_DMOP_* hypercalls. These enable the
Xen device model control path: dm_op provides operations to create and
manage the ioreq server so that guest MMIO accesses are trapped and
handled by the hypervisor. They are guarded by CONFIG_XEN_DMOP.

- dmop
  - dmop_create_ioreq_server
    XEN_DMOP_create_ioreq_server
  - dmop_map_io_range_to_ioreq_server
    XEN_DMOP_map_io_range_to_ioreq_server
  - dmop_set_ioreq_server_state
    XEN_DMOP_set_ioreq_server_state
  - dmop_nr_vcpus
    XEN_DMOP_nr_vcpus
  - dmop_set_irq_level:
    XEN_DMOP_set_irq_level

Signed-off-by: TOKITA Hiroshi <tokita.hiroshi@gmail.com>
2025-11-20 06:06:43 -05:00
Nicolas Pitre
106d3db360 arch: arm64: Increase MAX_XLAT_TABLES for userspace tests
Memory protection and userspace tests require more MMU translation
tables than the default. Without this increase, tests fail with:

  E: CONFIG_MAX_XLAT_TABLES too small
  ASSERTION FAIL [ret == 0] @ arch/arm64/core/mmu.c:1244
	privatize_page_range() returned -12

Increase defaults when both USERSPACE and TEST are enabled:
- 32 tables for SMP configurations
- 24 tables for non-SMP configurations

This fixes:
- sample.kernel.memory_protection.shared_mem (all platforms)
- rtio.api.userspace (v8a, v9a)
- rtio.api.userspace.submit_sem (v8a, v9a)
- portability.posix.common.userspace

Consequently the demand paging test needed adjustment to its
qemu_cortex_a53 configs to keep working as this test is highly
sensitive to the amount of available free memory.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-18 17:49:40 -05:00
Nicolas Pitre
425af7ad06 arch: arm64: Increase stack sizes for userspace with FPU
Increase ARM64 stack sizes to accommodate deeper call stacks in
userspace and SMP configurations when FPU_SHARING is enabled:

- PRIVILEGED_STACK_SIZE: 1024 → 4096 bytes (with FPU_SHARING)
- TEST_EXTRA_STACK_SIZE: 2048 → 4096 bytes (with FPU_SHARING)

The default 1KB privileged stack is insufficient for ARM64 userspace
syscalls when FPU context switching is enabled.

Symptom: Userspace tests crash with Data Abort (EC 0x24) near stack
boundaries during syscalls, particularly on SMP configurations where
multiple threads exercise FPU lazy switching.

Fixes previously failing CI test on fvp_base_revc_2xaem SMP variants:
- kernel.threads.dynamic
- Multiple userspace tests with FPU_SHARING enabled

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-18 17:49:40 -05:00
Nicolas Pitre
ffd08f5385 arch: arm64: Implement SVE context switching for ARMv9-A
Implement Scalable Vector Extension (SVE) context switching support,
enabling threads to use SVE and SVE2 instructions with lazy context
preservation across task switches.

The implementation is incremental: if only FPU instructions are used
then only the NEON access is granted and preserved to minimize context
switching overhead. If SVE is used then the NEON context is upgraded to
SVE and then full SVE access is granted and preserved from that point
onwards.
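
The upgrade policy can be sketched as a tiny state machine (the enum and helper are illustrative, not the actual implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Per-thread FP context kind: nothing saved, NEON-only, or full SVE. */
enum fp_ctx { FP_NONE, FP_NEON, FP_SVE };

/* On an FP/SVE access trap, grant the narrowest context that satisfies
 * the faulting instruction; once upgraded to SVE, never downgrade. */
static enum fp_ctx on_fp_trap(enum fp_ctx cur, bool is_sve)
{
	if (is_sve || cur == FP_SVE) {
		return FP_SVE;
	}
	return FP_NEON;
}
```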

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-18 17:49:40 -05:00
Nicolas Pitre
051623c808 boards: arm: fvp: Add Cortex-A320 board variant support
Add Cortex-A320 support to the unified FVP board structure with ARMv9.2-A
specific configuration parameters.

New board target:
- fvp_base_revc_2xaem/a320

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-18 17:49:40 -05:00
Nicolas Pitre
2aef4fbe5b arch: arm64: Add ARMv9-A architecture and Cortex-A510 CPU support
Add ARMv9-A architecture support with Cortex-A510 CPU as the default
processor for generic ARMv9-A targets.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-18 17:49:40 -05:00
Nicolas Pitre
6c6f1a5e99 arch: arm64: mmu: revert useless cache handling
This reverts the following commits:

commit c9b534c4eb
("arch: arm64: mmu: avoid using of set/way cache instructions")

commit c4ffadb0b6
("arch: arm64: avoid invalidating of RO mem after mem map")

The reason for the former is that Xen does not virtualize the set/way
cache operations used by sys_cache_data_invd_all(), originally invoked
prior to enabling the MMU and data cache. But the cure is worse than
the Xen issue as:
- Cache invalidation is performed on _every_ mapping change.

- Those invalidations are completely unnecessary with a PIPT data cache.
  ARM64 implementations use Physically Indexed, Physically Tagged (PIPT)
  data caches where cache maintenance is not needed during MMU operations.

- arch_mem_map() invoked with K_MEM_MAP_UNPAGED triggers page faults
  when accessing the unmapped region for cache operations. The page
  fault handler in do_page_fault() tries to reacquire z_mm_lock which
  is already held by the caller of arch_mem_map(). This results in a
  deadlock.

And the latter commit disables cache operations for read-only mappings,
effectively rendering the workaround described in the first commit
inoperative on half the mappings, making the performance cost of the
first commit's approach unjustifiable since it doesn't actually solve
the problem it set out to fix.

Given the above, the actual "fix" should simply have been the removal of
the sys_cache_data_invd_all() as, in theory, it isn't strictly needed
and its replacement is already ineffective on read-only areas as mentioned.

So let's revert them, which fixes the deadlock-induced CI test failures
on ARM FVP SMP configurations that were triggered when demand paging or
memory mapping operations were involved.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 07:22:39 -05:00
Nicolas Pitre
5b43674098 arch: arm64: Fix SMP TLB invalidation on SMP systems
Use Inner Shareable (IS) TLB invalidation instructions in SMP
configurations to broadcast TLB invalidations to all CPUs.

Use TLBI VMALLE1IS instead of VMALLE1 in invalidate_tlb_all().

While at it, implement proper page-specific invalidation using TLBI VAE1IS
in invalidate_tlb_page() instead of falling back to full invalidation.

This fixes many SMP test failures with userspace enabled on Arm's FVP.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 07:22:03 -05:00
Carles Cufi
ed60236f76 arch: arm64: Depend on SMP being disabled for single threading
Disabling multithreading is (logically) not possible when SMP is
enabled, so make ARCH_HAS_SINGLE_THREAD_SUPPORT depend on SMP being
disabled.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2025-10-31 22:39:49 +02:00
Anas Nashif
303af992e5 style: fix 'if (' usage in cmake files
Replace with 'if(' and 'else(' per the cmake style guidelines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-29 11:44:13 +02:00
Daniel Leung
38d49efdac kernel: mem_domain: keep track of threads only if needed
Add a new Kconfig option CONFIG_MEM_DOMAIN_HAS_THREAD_LIST so that
only the architectures that need to keep track of threads in memory
domains carry the necessary list struct inside the memory domain
structs. This saves a few bytes for architectures that do not need it.

Also rename the struct fields to be more descriptive of what they are.
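
The space saving can be sketched with a conditional struct field (names approximate, not Zephyr's actual structs):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal list node standing in for Zephyr's sys_dlist_t. */
struct dnode {
	struct dnode *next, *prev;
};

/* The thread list only exists when the Kconfig is set; leaving
 * CONFIG_MEM_DOMAIN_HAS_THREAD_LIST undefined here shows the saving. */
struct mem_domain_sketch {
#ifdef CONFIG_MEM_DOMAIN_HAS_THREAD_LIST
	struct dnode mem_domain_q; /* threads attached to this domain */
#endif
	unsigned int num_partitions;
};
```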

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-21 22:54:44 +03:00
Mathieu Choplain
0211d440f4 arch: *: prep_c: remove check for CONFIG_SOC_PREP_HOOK
soc_prep_hook() is always called from z_prep_c() which is implemented
as a C function. As such, there is no need to check for the associated
CONFIG_SOC_PREP_HOOK since the platform/hooks.h header will define hooks
as no-op function-like macros if their associated Kconfig isn't enabled.

Remove the Kconfig check from all arch implementations of z_prep_c() and
call soc_prep_hook() directly instead, to avoid duplicating the Kconfig
check already performed in platform/hooks.h

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2025-10-16 22:35:45 -04:00
Chris Friedt
1bccaeea96 arch: arm64: core: include kernel_arch_func.h to mitigate warning
A warning was promoted to error in twister runs due to implicit
declaration of the function `z_arm64_safe_exception_stack_init()`.

Include `kernel_arch_func.h` in `prep_c.c` to mitigate the warning.

Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
2025-09-14 11:11:21 -04:00
Hoang Nguyen
1f6dd19462 arch: arm64: cortex_a: Add CPU load for Cortex-A
- Add calls to sys_trace_idle_exit before leaving idle state
  to track CPU load
- Extend CPU_LOAD to CPU_CORTEX_A in Kconfig

Signed-off-by: Hoang Nguyen <hoang.nguyen.jx@bp.renesas.com>
Signed-off-by: Nhut Nguyen <nhut.nguyen.kc@renesas.com>
2025-09-13 18:14:59 -04:00
Nicolas Pitre
6780dddbca arch: arm64: Enhance FPU debug traces with PC addresses
Improve FPU trap debugging by showing the program counter (PC) of
instructions that trigger FPU access traps instead of potentially
stale saved FPU context data.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-09-11 09:53:24 +02:00
Anas Nashif
f5d7081710 kernel: do not include ksched.h in subsys/soc code
Do not directly include and use APIs from ksched.h outside of the
kernel. For now do this using more suitable (ipi.h and
kernel_internal.h) internal APIs until more cleanup is done.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-09 11:45:06 +02:00
Anas Nashif
5e6e3a6de3 arch: mark z_prep_c as FUNC_NORETURN
z_prep_c does not return, mark it as such consistently across
architectures.  We had some arches do that, others not. This resolves a
few coding guideline violations in arch code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
25938ec2bf arch: init: rename z_data_copy -> arch_data_copy
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
6b46c826aa arch: init: z_bss_zero -> arch_bss_zero
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
d98184c8cb arch: boot: rename z_early_memcpy -> arch_early_memcpy
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
641fc4a018 arch: init: rename z_early_memset -> arch_early_memset
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
53a51b9287 kernel/arch: Move early init/boot code out of init/kernel headers
Cleanup init.c code and move early boot code into arch/ and make it
accessible outside of the boot process/kernel.

All of this code is not related to the 'kernel' and is mostly used
within the architecture boot / setup process.

The way it was done, some soc code was including kernel_internal.h
directly, which shouldn't be done.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00