Commit graph

247 commits

Author SHA1 Message Date
Gerard Marull-Paretas
28c139f653 drivers: pm_cpu_ops: psci: provide sys_poweroff hook
Instead of implementing a custom power off API (pm_system_off),
implement the sys_poweroff hook, and indicate that power off is
supported by selecting HAS_POWEROFF. Note that according to the PSCI
specification (DEN0022E), the SYSTEM_OFF operation does not return;
however, an error is printed and the system is halted in case it does.

Note that pm_system_off() has also been deleted; from now on, systems
supporting PSCI should enable CONFIG_POWEROFF and call the standard
sys_poweroff() API.
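
As a rough sketch of the shape of the new hook (not the driver's actual
code: psci_sys_poweroff() below is a hypothetical stand-in for the
driver's SYSTEM_OFF conduit call, and z_sys_poweroff() is assumed to be
the hook name from <zephyr/sys/poweroff.h>):

```c
#include <zephyr/sys/poweroff.h>
#include <zephyr/sys/printk.h>

/* Hypothetical helper standing in for the driver's PSCI SYSTEM_OFF call. */
extern void psci_sys_poweroff(void);

void z_sys_poweroff(void)
{
	psci_sys_poweroff();	/* per DEN0022E, this call should not return */

	/* If we get here, SYSTEM_OFF failed: report the error and halt. */
	printk("PSCI SYSTEM_OFF failed\n");
	for (;;) {
		/* halt */
	}
}
```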

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-08-04 16:59:36 +02:00
Girisha Dengi
75547dd522 soc: arm64: Add agilex5 soc folder and its configurations
Add the Agilex5 SoC folder, MMU table and configurations for the
Intel SoC FPGA Agilex5 platform for initial bring-up.
Add ARM Cortex-A76 and Cortex-A55 HMP cluster type.

Signed-off-by: Teik Heng Chong <teik.heng.chong@intel.com>
Signed-off-by: Girisha Dengi <girisha.dengi@intel.com>
2023-07-25 16:58:01 +00:00
Carlo Caione
5b2d716e76 arm64: Keep the frame pointer in leaf functions
It helps with debugging and usually on ARM64 we don't care about a small
increase in code size.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-17 10:14:18 +00:00
Carlo Caione
c9a68afede arm64: Set FP to NULL before jumping to C code
When the frame-pointer based unwinding is enabled, the stop condition
for the stack backtrace is (FP == NULL).

Set FP to 0 before jumping to C code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-14 09:33:00 +00:00
Carlo Caione
18636af6d6 arm64: Add frame-pointer based stack unwinding
Add the frame-pointer based stack unwinding on ARM64.

This is a typical output with the feature enabled:

*** Booting Zephyr OS build zephyr-v3.4.0-1029-gae22ff648c16 ***
E: ELR_ELn: 0x00000000400011c8
E: ESR_ELn: 0x0000000096000046
E:   EC:  0x25 (Data Abort taken without a change in Exception level)
E:   IL:  0x1
E:   ISS: 0x46
E: FAR_ELn: 0x0000000000000000
E: TPIDRRO: 0x0100000040011650
E: x0:  0x0000000000000000  x1:  0x0000000000000003
E: x2:  0x0000000000000003  x3:  0x000000004005c6a0
E: x4:  0x0000000000000000  x5:  0x000000004005c7f0
E: x6:  0x0000000048000000  x7:  0x0000000048000000
E: x8:  0x0000000000000005  x9:  0x0000000000000000
E: x10: 0x0000000000000000  x11: 0x0000000000000000
E: x12: 0x0000000000000000  x13: 0x0000000000000000
E: x14: 0x0000000000000000  x15: 0x0000000000000000
E: x16: 0x0000000040004290  x17: 0x0000000000000000
E: x18: 0x0000000000000000  lr:  0x0000000040001208
E:
E: backtrace  0: fp: 0x000000004005c690 lr: 0x0000000040001270
E: backtrace  1: fp: 0x000000004005c6b0 lr: 0x0000000040001290
E: backtrace  2: fp: 0x000000004005c7d0 lr: 0x0000000040004ac0
E: backtrace  3: fp: 0x000000004005c7f0 lr: 0x00000000400013a4
E: backtrace  4: fp: 0x000000004005c800 lr: 0x0000000000000000
E:
E: >>> ZEPHYR FATAL ERROR 0: CPU exception on CPU 0
E: Current thread: 0x40011310 (unknown)
E: Halting system
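
For reference, a minimal sketch of the walk behind such a backtrace (not
the in-tree code), assuming the standard AArch64 frame record layout
where FP points at a { previous FP, LR } pair:

```c
#include <stdint.h>
#include <zephyr/sys/printk.h>

/* Sketch of a frame-pointer walk: each frame record is { caller FP, LR }.
 * The loop terminates on FP == NULL, which is why the reset code clears
 * FP before jumping to C.
 */
static void fp_backtrace(uint64_t fp)
{
	for (int i = 0; fp != 0U; i++) {
		uint64_t lr = ((uint64_t *)fp)[1];

		printk("backtrace %2d: fp: 0x%016llx lr: 0x%016llx\n",
		       i, (unsigned long long)fp, (unsigned long long)lr);

		fp = ((uint64_t *)fp)[0];
	}
}
```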

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-07-13 17:04:05 +02:00
Jaxson Han
6fc3903bbe arch: arm64: mpu: Fix some minor CHECKIF issues
There are two CHECKIFs whose check conditions are wrong.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-07-11 15:04:54 +02:00
Mykola Kvach
d0472aae7a soc: arm64: add Renesas Rcar Gen3 SoC support
Add files for supporting arm64 Renesas r8a77951 SoC.
Add config option CPU_CORTEX_A57.

Enable build of clock_control_r8a7795_cpg_mssr.c for
a new ARM64 SoC R8A77951.

Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
2023-07-11 11:17:41 +02:00
Roberto Medina
6622735ea8 arch: arm64: add support for coredump
* Add support for coredump on ARM64 architectures.
* Add the script used for post-processing coredump output.

Signed-off-by: Marcelo Ruaro <marcelo.ruaro@huawei.com>
Signed-off-by: Rodrigo Cataldo <rodrigo.cataldo@huawei.com>
Signed-off-by: Roberto Medina <roberto.medina@huawei.com>
2023-07-03 09:32:26 +02:00
Luca Fancellu
c8a9634a30 arch: arm64: Use atomic for fpu_owner pointer
Currently the lazy FPU saving algorithm on arm64 uses the fpu_owner
pointer from the cpu structure to track which context owns the FPU on
that cpu, so the context can be saved when someone other than the owner
accesses the FPU.

The memory consistency semantics across SMP systems are quite error
prone, and reworks of the current code might miss barriers, which could
lead to inconsistent state across cores. To overcome this, use atomics
to hide the complexity and make sure the code behaves as intended.

While there, add some ISB barriers after writes to cpacr_el1, following
the Arm ARM guidance on writes to system registers.
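
As an illustration of the intended pattern (simplified names, not the
exact arch code), the owner hand-over becomes a pair of atomic pointer
operations:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>

/* Simplified illustration: fpu_owner is only read/written through the
 * atomic_ptr API, so no hand-rolled barriers are needed for the owner
 * hand-over itself.
 */
static atomic_ptr_t fpu_owner;

static void on_fpu_access_trap(struct k_thread *current)
{
	struct k_thread *owner = atomic_ptr_get(&fpu_owner);

	if (owner != current) {
		if (owner != NULL) {
			/* flush the previous owner's FPU registers to memory */
		}
		/* load current's FPU context, then publish the new owner */
		atomic_ptr_set(&fpu_owner, current);
	}
}
```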

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
2023-06-08 09:35:11 -04:00
Jaxson Han
08791d5fae arch: arm64: Enable FPU and FPU_SHARING for v8r aarch64
This commit is to enable FPU and FPU_SHARING for v8r aarch64.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han
0b3136c2e8 arch: arm64: Add cpu_map to map cpu id and mpid
cpu_node_list does not hold the correct mapping between cpu id and mpid
when the core boot sequence does not follow the DTS cpu node order. As a
result, an SGI cannot be delivered to the right target.

Add the cpu_map array to hold the correct mapping between cpu id and
mpid.
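
A rough sketch of the idea (illustrative names, not the exact
implementation): each core records its MPID under its logical CPU id at
boot, and SGI routing looks the mapping up by MPID:

```c
#include <stdint.h>
#include <zephyr/kernel.h>

/* Illustrative sketch: cpu_map is filled in boot order, so the mapping
 * stays correct even when cores do not come up in DTS node order.
 */
static uint64_t cpu_map[CONFIG_MP_MAX_NUM_CPUS];

void cpu_map_set(unsigned int cpu_id, uint64_t mpid)
{
	cpu_map[cpu_id] = mpid;
}

int cpu_map_find(uint64_t mpid)
{
	unsigned int num_cpus = arch_num_cpus();

	for (unsigned int i = 0; i < num_cpus; i++) {
		if (cpu_map[i] == mpid) {
			return (int)i;
		}
	}

	return -1; /* CPU not (yet) booted */
}
```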

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han
f03a4cec57 arch: arm64: Fix the STACK_INIT logic during the reset
Each core should initialize its own stack during reset when SMP is
enabled, without touching the others. The current code makes every core
initialize the stack from the same address, which corrupts the other
cores' stacks.

Fix the issue by setting a correct start address.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han
101ae5d240 arch: arm64: mpu: Remove LOG print before mpu enabled
The LOG system contains unaligned access instructions which cause an
alignment exception before the MPU is enabled. Remove the LOG print
issued before the MPU is enabled to avoid this issue.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Andy Ross
b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.
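
For reference, a simplified sketch of what the renamed helper
conceptually does (not the exact in-tree code): spin until the CPU that
last ran the thread has finished writing out its context, signalled by
the switch handle becoming non-NULL:

```c
#include <zephyr/kernel.h>

/* Simplified sketch: callers (switch, abort, join) must not touch or
 * reuse a thread until the CPU it ran on has fully saved its context.
 */
static inline void z_sched_switch_spin(struct k_thread *thread)
{
#ifdef CONFIG_SMP
	volatile void * const *handle =
		(volatile void * const *)&thread->switch_handle;

	while (*handle == NULL) {
		/* the other CPU is still saving the outgoing context */
	}
#else
	ARG_UNUSED(thread);
#endif
}
```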

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Chad Karaginides
9ab7354a46 arch: arm64: SCR_EL3 EEL2 Enablement
For secure EL2 to be entered, the EEL2 bit in SCR_EL3 must be set. This
should only be set if Zephyr has not been configured for NS mode only,
if the device is currently in secure EL3, and if secure EL2 is supported
via the SEL2 field in ID_AA64PFR0_EL1. Added logic to enable EEL2 when
all conditions are met.
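
In C-like terms (the real code lives in early boot assembly; bit
positions are taken from the Arm ARM and the arch's
read_sysreg()/write_sysreg() helpers are assumed, so verify before
reuse), the check amounts to:

```c
#include <stdint.h>
#include <zephyr/sys/util.h>

#define ID_AA64PFR0_SEL2_SHIFT	36
#define ID_AA64PFR0_SEL2_MASK	0xfULL
#define SCR_EEL2_BIT		BIT64(18)

/* Sketch: enable secure EL2 only when the core is in secure EL3 (left to
 * the caller here) and ID_AA64PFR0_EL1.SEL2 reports support.
 */
static inline void scr_el3_enable_eel2_if_supported(void)
{
	uint64_t pfr0 = read_sysreg(id_aa64pfr0_el1);

	if (((pfr0 >> ID_AA64PFR0_SEL2_SHIFT) & ID_AA64PFR0_SEL2_MASK) != 0ULL) {
		write_sysreg(read_sysreg(scr_el3) | SCR_EEL2_BIT, scr_el3);
	}
}
```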

Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-26 13:51:50 -04:00
Chad Karaginides
f8b9faff24 arch: arm64: Added ISBs after SCTLR Modifications
Per the ARMv8 architecture document, modification of the system control
register is a context-changing operation. Context-changing operations are
only guaranteed to be seen after a context synchronization event.
An ISB is a context synchronization event; one has been placed after
each SCTLR modification. The issue was found running at full speed on
target.
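
In C terms the pattern is simply the following (a sketch using the new
barrier API, not the actual assembly; the arch's
read_sysreg()/write_sysreg() helpers are assumed):

```c
#include <stdint.h>
#include <zephyr/sys/barrier.h>

/* Sketch: every SCTLR update is followed by an ISB so the
 * context-changing write is guaranteed visible to subsequent
 * instructions.
 */
static inline void sctlr_el1_set(uint64_t bits)
{
	write_sysreg(read_sysreg(sctlr_el1) | bits, sctlr_el1);
	barrier_isync_fence_full();	/* context synchronization event */
}
```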

Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-25 16:33:03 -04:00
Nicolas Pitre
8e9872a7c8 arm64: prevent possible deadlock on SMP with FPU sharing
Let's consider CPU1 waiting on a spinlock already taken by CPU2.

It is possible for CPU2 to invoke the FPU and trigger an FPU exception
when the FPU context for CPU2 is not live on that CPU. If the FPU context
for the thread on CPU2 is still held in CPU1's FPU then an IPI is sent
to CPU1 asking to flush its FPU to memory.

But if CPU1 is spinning on a lock already taken by CPU2, it won't see
the pending IPI as IRQs are disabled. CPU2 won't get its FPU state
restored and won't complete the required work to release the lock.

Let's prevent this deadlock scenario by looking for a pending FPU IPI
from within the spinlock loop, using the arch_spin_relax() hook.
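
Conceptually (illustrative names, not the in-tree symbols), the relax
hook just polls for the flush request while spinning:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>

/* Illustrative sketch: a per-CPU "flush your FPU" request that the
 * spinning CPU can service even though IRQs are masked.
 */
static atomic_t fpu_flush_request;

void arch_spin_relax(void)
{
	unsigned int cpu = arch_curr_cpu()->id;

	if (atomic_test_and_clear_bit(&fpu_flush_request, cpu)) {
		/* save the live FPU context to its owner's thread struct */
	}
}
```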

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-05-25 08:25:11 +00:00
Carlo Caione
6f3a13d974 barriers: Move __ISB() to the new API
Remove the arch-specific ARM-centric __ISB() macro and use the new
barrier API instead.
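
Usage-wise the change is mechanical; an illustrative sketch of the
mapping (the doorbell function is made up), which also applies to the
__DSB()/__DMB() commits below:

```c
#include <stdint.h>
#include <zephyr/sys/barrier.h>

/* Illustrative only: how the old ARM-centric macros map onto the new API.
 *   __DMB() -> barrier_dmem_fence_full()
 *   __DSB() -> barrier_dsync_fence_full()
 *   __ISB() -> barrier_isync_fence_full()
 */
static inline void ring_doorbell(volatile uint32_t *doorbell, uint32_t val)
{
	barrier_dmem_fence_full();	/* order prior memory accesses */
	*doorbell = val;
	barrier_dsync_fence_full();	/* wait for the write to complete */
	barrier_isync_fence_full();	/* resynchronize the instruction stream */
}
```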

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione
cb11b2e84b barriers: Move __DSB() to the new API
Remove the arch-specific ARM-centric __DSB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione
2fa807bcd1 barriers: Move __DMB() to the new API
Remove the arch-specific ARM-centric __DMB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione
c2ec3d49cd arm64: cache: Enable full inlining of the cache ops
Move all the cache functions to a standalone header to allow for the
inlining of these functions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-04-20 14:56:09 -04:00
Peter Mitsis
1e250b37df arch: arm64: Remove unused absolute symbols
Removes ARM64 specific unused absolute symbols that are defined
via the GEN_ABSOLUTE_SYM() macro.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-04-18 10:51:28 -04:00
Gerard Marull-Paretas
a5fd0d184a init: remove the need for a dummy device pointer in SYS_INIT functions
The init infrastructure, found in `init.h`, is currently used by:

- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices

They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices; however, the required
function signature takes a `const struct device *dev` as its first
argument. The only reason for that is that the same init machinery is
used by devices, so we have something like:

```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
};
```

As a result, we end up with this weird/ugly pattern:

```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```

This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:

```c
static int my_init(void)
{
	...
}
```

This is achieved using a union:

```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};

struct init_entry {
	/* stores the init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
};
```

This solution **does not increase ROM usage** and allows us to offer
clean public APIs for both SYS_INIT and DEVICE*. Note, however, that the
init machinery keeps a coupling with devices.

**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
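
For illustration, a converted SYS_INIT call then looks like this (the
function name is made up):

```c
#include <zephyr/init.h>

/* New-style SYS_INIT function: no dummy device pointer, no ARG_UNUSED(). */
static int my_subsys_init(void)
{
	/* one-time system setup */
	return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```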

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

init: convert SYS_INIT functions to the new signature

Conversion scripted using scripts/utils/migrate_sys_init.py.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

manifest: update projects for SYS_INIT changes

Update modules with updated SYS_INIT calls:

- hal_ti
- lvgl
- sof
- TraceRecorderSource

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: devicetree: devices: adjust test

Adjust test according to the recently introduced SYS_INIT
infrastructure.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: kernel: threads: adjust SYS_INIT call

Adjust to the new signature: int (*init_fn)(void);

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-12 14:28:07 +00:00
Jaxson Han
e416c5f1bd arch: arm64: Update current stack limit on every context switch
Update current stack limit on every context switch, including switching
to irq stack and switching back to thread stack.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
00adc0b493 arch: arm64: Enable safe exception stack
This commit mainly enables the safe exception stack, including the stack
switch. The safe exception stack is initialized by calling
z_arm64_safe_exception_stack during the boot stage on every core. It
also tweaks several files to properly switch the mode in the different
cases:

1) The same as before, when executing in userspace, SP_EL0 holds the
user stack and SP_EL1 holds the privileged stack, using EL1h mode.

2) When entering an exception from EL0, SP_EL0 is saved in the _esf_t
structure. SP_EL1 becomes the current SP, and the safe exception stack
is loaded into SP_EL0, making sure SP_EL0 always points to the safe
exception stack as long as the system is running in kernel space.

3) When exiting an exception from EL1 to EL0, SP_EL0 is restored from
the value previously saved in the _esf_t structure, still in EL1h mode.

4) When either entering or exiting an exception from EL1 to EL1, SP_EL0
keeps holding the safe exception stack unchanged, as mentioned above.

5) Do a quick stack check every time an exception is entered from EL1 to
EL1. If the check fails, set SP_EL1 to the safe exception stack and then
handle the fatal error.

Overall, exceptions from user mode are handled on the kernel stack, on
the assumption that a stack overflow cannot happen at the entry of an
exception from EL0 to EL1. Exceptions from kernel mode, however, are
first checked against the safe exception stack to see whether the kernel
stack overflowed, because the exception might have been triggered by an
invalid stack access.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
463b1c9396 arch: arm64: Add safe exception stack init function
Add a safe exception stack init function which does several things:
1) sets the current cpu's safe exception stack pointer to its
corresponding stack top.
2) inits sp_el0 with the above safe exception stack, making sure sp_el0
points to the per-cpu safe_stack in kernel space.
3) inits current_stack_limit and corrupted_sp to 0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
3a5fa0498f arch: arm64: Add stack_limit to thread_arch_t
Add stack_limit to thread_arch_t to store the thread's stack limit.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
61b8b34b27 arch: arm64: Add the sp variable in _esf_t
In preparation for enabling the safe exception stack, add a variable to
_esf_t to save the user stack held by sp_el0 at the point an exception
happens from EL0. The newly added variable is named sp, and the user
stack is restored from it when the exception erets back to EL0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
7040c55438 arch: arm64: Add stack check relevant variables to _cpu_arch_t
Add three per-cpu variables for quick access:

The safe_exception_stack stores the top of the safe exception stack.
The current_stack_limit stores the current thread's privileged stack
limit.
The corrupted_sp stores the privileged sp or irq sp in the stack
overflow case, or 0 in the normal case.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
d8d74b1320 arch: arm64: Add el label for vector entry macro
Add a new el label to the z_arm64_enter_exc vector entry macro to
indicate which EL the exception comes from.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han
6c40abb99f arch: arm64: Introduce safe exception stack
Introduce two configs in preparation for enabling the safe exception
stack for kernel space. This is the preparation for enabling the
hardware stack guard. Also define the safe exception stack for the
kernel exception stack check.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Nicolas Pitre
bcef633316 arch/arm64/mmu: arch_mem_map() should not overwrite existing mappings
Overwriting an existing mapping is most certainly a bug:
arch_mem_unmap() should be used before mapping the same area again.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre
364d7527c1 arch/arm64/mmu: minor addition to debugging code
Display table allocations and whether or not mappings
can be overwritten.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre
abb50e1605 arch/arm64/mmu: fix table discarding code
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly silenced a compiler
warning which actually highlighted a real bug. Revert that and fix the
bug properly. And yes, mea culpa for having been the first to approve
that commit, and for creating the bug in the first place.

Then let's add proper table usage count handling so that discard_table()
works properly and does not leak table pages.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-07 08:33:05 +01:00
Henri Xavier
45af717a66 arch/arm64: Implement ASID support in ARM64 MMU
Improves context-switch performance.

TLB invalidation and the nG bit are used conservatively. This could
be improved in future work.

Tested with tests/benchmarks/sched_userspace:

BEFORE:
```
Swapping  2 threads: 161562583 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping  8 threads: 161569289 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping 16 threads: 161649163 cyc & 1000000 rounds ->   1616 ns per ctx
Swapping 32 threads: 163487880 cyc & 1000000 rounds ->   1634 ns per ctx
```

AFTER:
```
Swapping  2 threads: 18129207 cyc & 1000000 rounds ->    181 ns per ctx
Swapping  8 threads: 49702891 cyc & 1000000 rounds ->    497 ns per ctx
Swapping 16 threads: 55898650 cyc & 1000000 rounds ->    558 ns per ctx
Swapping 32 threads: 58059704 cyc & 1000000 rounds ->    580 ns per ctx
```

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-12-13 17:21:11 +09:00
Carlo Caione
385f06e6a1 arm64: cache: Rework cache API
And use the new API.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Carlo Caione
189cd1f4a2 cache: Rework cache API
Cache operations must be quick, optimized and possibly inlined. The
current API is clunky: functions are not inlined, and parameters that
are basically always known at compile time are passed around.

In this patch we rework the cache functions so we can get rid of
useless parameters and make inlining easier.

In particular this changeset is doing three things:

1. `CONFIG_HAS_ARCH_CACHE` is now `CONFIG_ARCH_CACHE` and
   `CONFIG_HAS_EXTERNAL_CACHE` is now `CONFIG_EXTERNAL_CACHE`

2. The cache API has been reworked.

3. Comments are added.
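
As a usage sketch of the reworked API (illustrative DMA buffer handling,
not code from this patch), the range operations now take just an address
and a size:

```c
#include <stddef.h>
#include <zephyr/cache.h>

/* Illustrative DMA buffer maintenance with the reworked cache API. */
static void dma_to_device(void *buf, size_t len)
{
	/* write dirty lines back before the device reads the buffer */
	sys_cache_data_flush_range(buf, len);
}

static void dma_from_device(void *buf, size_t len)
{
	/* drop stale lines before the CPU reads device-written data */
	sys_cache_data_invd_range(buf, len);
}
```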

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Henri Xavier
b54ba9877f arm64: implement arch_system_halt
When PSCI is enabled, implement `arch_system_halt` using
PSCI_SHUTDOWN.

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-23 11:37:08 +01:00
Henri Xavier
55df3852bf arm64: Add comment that arch_curr_cpu and get_cpu must be synced
There is code duplication, as one is implemented in C and the other is
an assembly macro. Since there is no easy way to find out about this
duplication, adding a comment seems the best way to go.

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-02 16:08:00 -05:00
Kumar Gala
d60c62e457 arch: arm64: Convert to CONFIG_MP_MAX_NUM_CPUS
Convert CONFIG_MP_NUM_CPUS to CONFIG_MP_MAX_NUM_CPUS as we work on
phasing out CONFIG_MP_NUM_CPUS.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-25 10:52:34 +02:00
Kumar Gala
a1195ae39b smp: Move for loops to use arch_num_cpus instead of CONFIG_MP_NUM_CPUS
Change for loops of the form:

for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
   ...

to

unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
   ...

We do the call outside of the for loop so that it only happens once,
rather than on every iteration.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-21 13:14:58 +02:00
Kumar Gala
fc95ec98dd smp: Convert #if to use CONFIG_MP_MAX_NUM_CPUS
Convert CONFIG_MP_NUM_CPUS to CONFIG_MP_MAX_NUM_CPUS as we work on
phasing out CONFIG_MP_NUM_CPUS.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-20 22:04:10 +09:00
Gerard Marull-Paretas
178bdc4afc include: add missing zephyr/irq.h include
Automated change: search for files using the "IRQ_CONNECT()" API that
do not include <zephyr/irq.h> and add the missing include.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-10-17 22:57:39 +09:00
Stephanos Ioannidis
2689590e67 arch: arm64: Disable ldp/stp Qn for consecutive 32-byte loads/stores
GCC may generate ldp/stp instructions with the Advanced SIMD Qn
registers for consecutive 32-byte loads and stores.

This commit disables this GCC behaviour because saving and restoring
the Advanced SIMD context is very expensive, and it is preferable to
keep it turned off by not emitting these instructions for better
context switching performance.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-09-23 12:10:25 +02:00
Huifeng Zhang
3d81d7f23f arch: arm64: fix the wrong way to send ipi interrupt
On GICv3, when we send an IPI, aff3, aff2 and aff1 should be assigned
values corresponding to the PE for which the interrupt is to be
generated; target_list only covers aff0.

On real hardware, aff3, aff2, aff1 and aff0 must be treated as a whole
to determine a PE.
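
A sketch of composing the ICC_SGI1R_EL1 value from a target MPID (field
offsets per the GICv3 and MPIDR layouts; this is illustrative, not the
in-tree code, so verify against the spec):

```c
#include <stdint.h>

/* Illustrative: build an ICC_SGI1R_EL1 value targeting the PE identified
 * by an MPID. aff3/aff2/aff1 select the cluster; target_list covers aff0.
 */
static uint64_t sgi1r_from_mpid(uint64_t mpid, unsigned int intid)
{
	uint64_t aff1 = (mpid >> 8) & 0xff;
	uint64_t aff2 = (mpid >> 16) & 0xff;
	uint64_t aff3 = (mpid >> 32) & 0xff;
	uint64_t target_list = 1ULL << (mpid & 0xf);

	return target_list |
	       (aff1 << 16) |
	       ((uint64_t)(intid & 0xf) << 24) |
	       (aff2 << 32) |
	       (aff3 << 48);
}
```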

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
Huifeng Zhang
3ef14cae5e arch: arm64: init VMPIDR_EL2 in z_arm64_el2_init
VMPIDR_EL2 is assigned the value returned by EL2 reads of MPIDR_EL1.

MPIDR_EL1 is the register holding the Multiprocessor ID, which
identifies the different cores. Because of the virtualization
requirements for AArch64, MPIDR_EL1 should be virtualized (the different
virtualized cores can run on the same physical core), so the value of
MPIDR_EL1 must be switched when the VM is switched. Setting VMPIDR_EL2
is the way to change the value returned by EL1 reads of MPIDR_EL1. Even
without virtualization, we still need to set VMPIDR_EL2 during booting
at EL2 or EL3. Otherwise, all cores' IDs are zero at the EL1 stage,
which breaks the SMP system.
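
The fix itself is essentially a one-liner in the EL2 init path, shown
here as a C sketch (the real code is assembly and the arch's
read_sysreg()/write_sysreg() helpers are assumed):

```c
/* Mirror the physical MPIDR into VMPIDR_EL2 so that later EL1 reads of
 * MPIDR_EL1 return each core's real ID.
 */
static inline void el2_init_vmpidr(void)
{
	write_sysreg(read_sysreg(mpidr_el1), vmpidr_el2);
}
```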

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
Gerard Marull-Paretas
5954ea1c65 arch: arm64: core: smp: use DT_FOREACH_CHILD_STATUS_OKAY_SEP
Avoid auxiliary macros by using DT_FOREACH_CHILD_STATUS_OKAY_SEP.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-30 16:19:57 +02:00
Carlo Caione
4806e1087e cache: Fix cache API calling from userspace
When a cache API function is called from userspace, it results in an
OOPS (bad syscall error) on ARM64. This is due to at least two different
factors:

- the location of the cache handlers prevents the linker from actually
  finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
  implemented (when userspace is not used the compiler simply optimizes
  out these calls)

Fix the problem by:

- moving the userspace cache handlers to their logical and proper
  location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-23 10:14:17 +02:00
Carlo Caione
31d65d63f6 arch: arm64: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Fabio Baltieri
55b243e124 test,arch: fix few odd suffix include paths
Fix some more legacy include paths found in files with unusual suffixes.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-07-18 14:44:47 -04:00