Commit graph

23 commits

Author SHA1 Message Date
Yong Cong Sin e54b27b967 arch: define struct arch_esf and deprecate z_arch_esf_t
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.

After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
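The relationship described in the commit above can be sketched as follows; this is a hedged illustration of the declaration/alias pattern, not the exact in-tree code, and the struct members shown are placeholders.

```c
/* arch_interface.h side (sketch): the struct tag is the canonical name,
 * declared once for every architecture.
 */
struct arch_esf;

/* Legacy spelling kept only as an alias while it is phased out
 * (assumption: the exact deprecation mechanism in-tree may differ).
 */
typedef struct arch_esf z_arch_esf_t;

/* Each architecture then defines the layout, e.g. (placeholder members): */
struct arch_esf {
	unsigned long spsr;
	unsigned long elr;
	/* ... remaining exception stack frame registers ... */
};
```
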
Yong Cong Sin 3998e18ec4 arch: rename all esf struct to struct arch_esf
Rename every architecture's esf struct to `struct arch_esf`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin bbe5e1e6eb build: namespace the generated headers with zephyr/
Namespace the generated headers with `zephyr/` to prevent
potential conflicts with other headers.

Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to keep using
the old include paths for the time being, until the option is
deprecated and eventually removed. The Kconfig generates a
build-time warning message, similar to
`CONFIG_TIMER_RANDOM_GENERATOR`.

Update the include paths of in-tree sources accordingly.

Most of the changes here are scripted; check the PR for more
info.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
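To illustrate the include-path change from the commit above, a before/after pair; the header name is a representative example, not an exhaustive list of affected files.

```c
/* Before: generated headers sat at the top of the include path */
#include <syscall_list.h>

/* After: generated headers are namespaced, matching the rest of the
 * zephyr/ headers (the legacy path still works while
 * LEGACY_GENERATED_INCLUDE_PATH is enabled).
 */
#include <zephyr/syscall_list.h>
```
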
Anas Nashif 7d3b6c6a40 arch: smp: make flush_fpu_ipi a common, optional interface
The interface to flush the FPU is not unique to one architecture; make it
a generic, optional interface that can be implemented (and overridden) by
a platform.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-01-09 10:00:17 +01:00
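A minimal sketch of the "common, optional interface" pattern described above, assuming the interface is only relevant to SMP builds with FPU sharing; the guards, the __weak default and the helper comments are assumptions, not a quote of the in-tree code.

```c
/* Common arch interface header (sketch): declared once for all architectures. */
#if defined(CONFIG_FPU_SHARING) && defined(CONFIG_SMP)
void arch_flush_fpu_ipi(unsigned int cpu);
#endif

/* Common default (sketch, separate file): a weak stub that does nothing,
 * so platforms without the feature need no code at all.
 */
void __weak arch_flush_fpu_ipi(unsigned int cpu)
{
	(void)cpu;
}

/* Platform override (sketch, separate file): raise an IPI/SGI so the
 * target CPU saves out its FPU context.
 */
void arch_flush_fpu_ipi(unsigned int cpu)
{
	/* send the FPU-flush IPI to `cpu` here */
}
```
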
Anas Nashif 552f7194e3 arch: exception: rename header guard
Match guard with header file name.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-11 18:22:40 -05:00
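For example, a guard that follows the header's path-based name, for a header called exception.h (the exact guard string below is illustrative):

```c
#ifndef ZEPHYR_INCLUDE_ARCH_ARM64_EXCEPTION_H_
#define ZEPHYR_INCLUDE_ARCH_ARM64_EXCEPTION_H_

/* ... declarations ... */

#endif /* ZEPHYR_INCLUDE_ARCH_ARM64_EXCEPTION_H_ */
```
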
Anas Nashif ca3839ca38 arch: arm64: rename exception header
Rename the exception header to use the same name as the other architecture
ports.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-11 18:22:40 -05:00
Jaxson Han 5d643ecd24 arch: arm64: Fix z_arm64_fatal_error declaration error
The z_arm64_fatal_error declaration should be:
extern void z_arm64_fatal_error(unsigned int reason, z_arch_esf_t *esf);

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-07-11 15:04:54 +02:00
Jaxson Han 463b1c9396 arch: arm64: Add safe exception stack init function
Add a safe exception stack init function which does several things:
1) Set the current CPU's safe exception stack pointer to its
corresponding stack top.
2) Initialize sp_el0 with the above safe exception stack, making sure
that sp_el0 points to the per-CPU safe_stack in the kernel space.
3) Initialize current_stack_limit and corrupted_sp to 0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 3a5fa0498f arch: arm64: Add stack_limit to thread_arch_t
Add stack_limit to thread_arch_t to store the thread's stack limit.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 7040c55438 arch: arm64: Add stack check relevant variables to _cpu_arch_t
Add three per-CPU variables for quick access.

The safe_exception_stack stores the top of the safe exception stack.
The current_stack_limit stores the current thread's priv stack limit.
The corrupted_sp stores the priv sp or irq sp in the stack overflow
case, or 0 in the normal case.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
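A sketch of the per-CPU bookkeeping described in the commit above; the member types and ordering are assumptions, see the arch/arm64 headers for the real definition.

```c
#include <stdint.h>

struct _cpu_arch {
	uint64_t safe_exception_stack; /* top of this CPU's safe exception stack */
	uint64_t current_stack_limit;  /* current thread's priv stack limit */
	uint64_t corrupted_sp;         /* priv/irq SP saved on overflow, 0 otherwise */
};
```
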
Henri Xavier 45af717a66 arch/arm64: Implement ASID support in ARM64 MMU
Improves context-switch performance.

TLB invalidation and the nG bit are used conservatively. This could
be improved in future work.

Tested with tests/benchmarks/sched_userspace:

BEFORE:
```
Swapping  2 threads: 161562583 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping  8 threads: 161569289 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping 16 threads: 161649163 cyc & 1000000 rounds ->   1616 ns per ctx
Swapping 32 threads: 163487880 cyc & 1000000 rounds ->   1634 ns per ctx
```

AFTER:
```
Swapping  2 threads: 18129207 cyc & 1000000 rounds ->    181 ns per ctx
Swapping  8 threads: 49702891 cyc & 1000000 rounds ->    497 ns per ctx
Swapping 16 threads: 55898650 cyc & 1000000 rounds ->    558 ns per ctx
Swapping 32 threads: 58059704 cyc & 1000000 rounds ->    580 ns per ctx
```

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-12-13 17:21:11 +09:00
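Conceptually, the ASID tags TLB entries per address space, so switching ptables no longer needs a broad TLB invalidation; a hedged sketch of programming the ASID alongside the translation table base (the TTBR0_EL1 field layout follows the ARMv8-A definition, the function itself is illustrative).

```c
#include <stdint.h>

static inline void set_ttbr0_with_asid(uint64_t pt_phys, uint16_t asid)
{
	/* TTBR0_EL1: table base address in the low bits, ASID in bits [63:48] */
	uint64_t ttbr0 = pt_phys | ((uint64_t)asid << 48);

	__asm__ volatile("msr ttbr0_el1, %0\n\t"
			 "isb"
			 :
			 : "r" (ttbr0)
			 : "memory");
}
```
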
Gerard Marull-Paretas 16811660ee arch: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:57:22 +02:00
Nicolas Pitre 2ef47509c3 arm64: simplify user mode transition code
It is not necessary to go through the full exception exit code.
This is simpler, smaller and faster.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre 8affac64a7 arm64: improved arch_switch() implementation
Make it optimal without the need for an SVC/exception roundtrip on
every context switch. Performance numbers from tests/benchmarks/sched:

Before:
unpend   85 ready   58 switch  258 pend  231 tot  632 (avg  699)

After:
unpend   85 ready   59 switch  115 pend  138 tot  397 (avg  478)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre 90fcef4254 arm64: irq_offload: simpler implementation
Get rid of all those global variables and scheduler locking.
Use the regular IRQ exit path to let tests properly validate preemption.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Jaxson Han d282d86d7e arm64: Create common mmu and mpu interfaces
Include the newly introduced include/arch/arm64/mm.h instead of
arm_mmu.h or arm_mpu.h.

Unify the function names z_arm64_thread_pt_init/z_arm64_swap_ptables with
z_arm64_thread_mem_domains_init/z_arm64_swap_mem_domains for MMU and
MPU, because:
1. MMU and MPU have almost the same logic.
2. MPU doesn't have ptables.
3. Using common function names helps reduce "#if defined" macros.

Similarly, change z_arm64_ptable_ipi to z_arm64_domain_sync_ipi.

Also fix a logging bug in arm_mmu.c.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
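With the unified naming, MMU and MPU builds expose the same entry points and callers need no backend-specific `#if defined` blocks; a sketch of the shared declarations (the exact signatures are assumptions based on the names in the commit message).

```c
struct k_thread;

/* Implemented by both the MMU and the MPU backend. */
void z_arm64_thread_mem_domains_init(struct k_thread *thread);
void z_arm64_swap_mem_domains(struct k_thread *thread);

/* Formerly z_arm64_ptable_ipi; renamed after the generic "domain sync" idea. */
void z_arm64_domain_sync_ipi(void);
```
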
Nicolas Pitre 76494f8589 arm64: optimize offsets in z_arm64_context_switch
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-04 22:41:32 -04:00
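The idea is that an offsets.c translation unit emits absolute symbols at build time, so the context-switch assembly can index struct _callee_saved members straight off a struct k_thread pointer; a hedged sketch following Zephyr's gen_offset.h pattern (include paths and the exact member/symbol names are illustrative).

```c
#include <gen_offset.h>      /* GEN_OFFSET_SYM and friends; path may vary by version */
#include <kernel_structs.h>

GEN_ABS_SYM_BEGIN(_OffsetAbsSyms)

/* Emits symbols such as ___thread_t_callee_saved_OFFSET and
 * ___callee_saved_t_sp_OFFSET, which assembly can add together at
 * build time instead of computing the offset at run time.
 */
GEN_OFFSET_SYM(_thread_t, callee_saved);
GEN_OFFSET_SYM(_callee_saved_t, sp);

GEN_ABS_SYM_END
```
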
Nicolas Pitre f1f63dda17 arm64: FPU context switching support
This adds FPU sharing support with a lazy context switching algorithm.

Every thread is allowed to use FPU/SIMD registers. In fact, the compiler
may insert FPU register accesses in any context to optimize even non-FP
code unless the -mgeneral-regs-only compiler flag is used, but Zephyr
currently doesn't support such a build.

It is therefore possible to do FP accesses in ISRs as well with this
patch, although IRQs are then disabled to prevent nested IRQs in such
cases.

Because the thread object grows in size, some tests have to be adjusted.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
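A conceptual sketch of the lazy-switching idea, with illustrative helper names rather than the in-tree implementation: FPU access stays trapped until a thread actually touches FPU/SIMD registers, and only then is the register file migrated from the previous owner.

```c
#include <zephyr/kernel.h>

struct fp_context { uint64_t q_regs[64]; };        /* placeholder layout */

static struct k_thread *fpu_owner;                 /* thread whose regs are live */

/* Assumed helpers for this sketch, not real Zephyr APIs. */
extern void fpu_access_enable(void);               /* clear the FPU trap */
extern void save_fpu_regs(struct fp_context *ctx);
extern void restore_fpu_regs(struct fp_context *ctx);
extern struct fp_context *thread_fp_ctx(struct k_thread *thread);

void handle_fpu_access_trap(struct k_thread *current)
{
	fpu_access_enable();

	/* Save the register file of whoever owned the FPU last... */
	if (fpu_owner != NULL) {
		save_fpu_regs(thread_fp_ctx(fpu_owner));
	}

	/* ...then hand it to the thread that just trapped. */
	restore_fpu_regs(thread_fp_ctx(current));
	fpu_owner = current;
}
```
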
Nicolas Pitre a82fff04ff arm64: implement exception depth count
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.

The IN_EL0 flag is now updated unconditionally even if userspace is
not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
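A hedged sketch of the accessor reading the depth count kept in tpidrro_el0; the field position and mask below are illustrative, not the actual in-tree layout.

```c
#include <stdint.h>

#define EXC_DEPTH_SHIFT 8U
#define EXC_DEPTH_MASK  (0xffULL << EXC_DEPTH_SHIFT)

static inline unsigned int arch_exception_depth(void)
{
	uint64_t tpidrro;

	__asm__ volatile("mrs %0, tpidrro_el0" : "=r" (tpidrro));

	return (unsigned int)((tpidrro & EXC_DEPTH_MASK) >> EXC_DEPTH_SHIFT);
}
```
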
Carlo Caione 256ca55476 arm64: Rework stack usage
The ARM64 port is currently using SP_EL0 for everything: kernel threads,
user threads and exceptions. In addition, when taking an exception, the
exception code still uses the thread SP without relying on any
interrupt stack.

While on one hand this makes the context switch really quick, because the
thread context is already on the thread stack and we only have to save
one register (SP) for the whole context, on the other hand the major
limitation introduced by this choice is that if for some reason the
thread SP is corrupted or points to some inaccessible location (for
example in case of a stack overflow), the exception code is unable to
recover or even deal with it.

The usual way of dealing with this kind of problem is to use a
dedicated interrupt stack on SP_EL1 when servicing exceptions. The
real drawback of this is that, in case of a context switch, all the
context must be copied from the shared interrupt stack into a
thread-specific stack or structure, which is really slow.

Here we use a hybrid approach, sacrificing a bit of stack space for a
quicker context switch. While nothing really changes for kernel threads,
for user threads we now use the privileged stack (already present to
service syscalls) as the interrupt stack.

When an exception arrives, the code now switches to SP_EL1, which for
user threads always points inside the privileged portion of the stack
of the currently running thread. This achieves two things: (1) exception
and syscall code runs on a stack that is isolated, privileged and not
accessible to user threads, and (2) the thread SP is not touched at all
during exceptions, so it can be invalid or corrupted without any direct
consequence.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-23 06:32:20 -04:00
Nicolas Pitre 69a0fd3a6a aarch64: smp: make the cross-CPU swap_ptables call use its own IPI
Let's disentangle this from arch_sched_ipi() with an SGI for its
own purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:13 -04:00
Carlo Caione a43f3bade8 arm/arm64: Fix misc and trivials for ARM/ARM64 split
Fix the header guards, comments, github labeler, CODEOWNERS and
MAINTAINERS files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Carlo Caione 3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in dedicated directories
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoCs are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved to the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00