Commit graph

192 commits

Author SHA1 Message Date
Jaxson Han 6fc3903bbe arch: arm64: mpu: Fix some minor CHECKIF issues
There are two CHECKIFs whose check conditions are wrong.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-07-11 15:04:54 +02:00
Mykola Kvach d0472aae7a soc: arm64: add Renesas Rcar Gen3 SoC support
Add files for supporting arm64 Renesas r8a77951 SoC.
Add config option CPU_CORTEX_A57.

Enable build of clock_control_r8a7795_cpg_mssr.c for
a new ARM64 SoC R8A77951.

Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
2023-07-11 11:17:41 +02:00
Roberto Medina 6622735ea8 arch: arm64: add support for coredump
* Add support for coredump on ARM64 architectures.
* Add the script used for post-processing coredump output.

Signed-off-by: Marcelo Ruaro <marcelo.ruaro@huawei.com>
Signed-off-by: Rodrigo Cataldo <rodrigo.cataldo@huawei.com>
Signed-off-by: Roberto Medina <roberto.medina@huawei.com>
2023-07-03 09:32:26 +02:00
Luca Fancellu c8a9634a30 arch: arm64: Use atomic for fpu_owner pointer
Currently the lazy fpu saving algorithm in arm64 is using the fpu_owner
pointer from the cpu structure to understand the owner of the context
in the cpu and save it in case someone different from the owner is
accessing the fpu.

The semantics for memory consistency across smp systems is quite prone
to errors and reworks on the current code might miss some barriers that
could lead to inconsistent state across cores, so to overcome the issue,
use atomics to hide the complexity and be sure that the code will behave
as intended.

While there, add some isb barriers after writes to cpacr_el1, following
the guidance of ARM ARM specs about writes on system registers.
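For illustration only, a minimal sketch of the idea using Zephyr atomics and the new barrier API (this is not the actual patch; the trap handler shape is an assumption):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>
#include <zephyr/sys/barrier.h>

/* fpu_owner kept as an atomic pointer instead of a plain pointer */
static atomic_ptr_t fpu_owner;

static void fpu_trap_sketch(struct k_thread *current)
{
	if (atomic_ptr_get(&fpu_owner) != (void *)current) {
		/* save the previous owner's FPU context, load ours,
		 * then publish the new owner atomically */
		atomic_ptr_set(&fpu_owner, current);
	}

	/* after writing cpacr_el1 to grant FPU access, synchronize the
	 * context as the Arm ARM recommends for system register writes */
	barrier_isync_fence_full();
}
```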

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
2023-06-08 09:35:11 -04:00
Jaxson Han 08791d5fae arch: arm64: Enable FPU and FPU_SHARING for v8r aarch64
This commit is to enable FPU and FPU_SHARING for v8r aarch64.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han 0b3136c2e8 arch: arm64: Add cpu_map to map cpu id and mpid
cpu_node_list does not hold the correct mapping of CPU id and MPID when
the core booting sequence does not follow the DTS cpu node sequence. This
causes an issue where an SGI cannot be delivered to the right target.

Add the cpu_map array to hold the correct mapping between CPU id and
MPID.
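A minimal sketch of the idea (illustrative names, not the actual code): record at boot which MPID ended up as which logical CPU id so SGIs can be routed to the right physical core.

```c
#include <zephyr/kernel.h>

static uint64_t cpu_map[CONFIG_MP_MAX_NUM_CPUS];

/* called on each core early in its boot path; cpu_id is the logical id */
static void cpu_map_record(unsigned int cpu_id)
{
	uint64_t mpidr;

	__asm__ volatile("mrs %0, mpidr_el1" : "=r"(mpidr));
	cpu_map[cpu_id] = mpidr & 0xff00ffffffULL; /* keep only the Aff3..Aff0 fields */
}
```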

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han f03a4cec57 arch: arm64: Fix the STACK_INIT logic during the reset
Each core should initialize its own stack during reset when SMP is enabled,
without touching the others'. The current init has every core start
initializing the stack from the same address, which corrupts the other
cores' stacks.

Fix the issue by setting a correct start address.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Jaxson Han 101ae5d240 arch: arm64: mpu: Remove LOG print before mpu enabled
The LOG system contains unaligned access instructions which will cause an
alignment exception before the MPU is enabled. Remove the LOG print issued
before the MPU is enabled to avoid this issue.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-05-30 10:14:55 +02:00
Andy Ross b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Chad Karaginides 9ab7354a46 arch: arm64: SCR_EL3 EEL2 Enablement
For secure EL2 to be entered, the EEL2 bit in SCR_EL3 must be set. This
should only be set if Zephyr has not been configured for NS mode only,
if the device is currently in secure EL3, and if secure EL2 is supported
via the SEL2 field in ID_AA64PFR0_EL1. Added logic to enable EEL2 if all
conditions are met.
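A hedged C sketch of that condition (the actual change lives in the EL3 boot path; the register accessors and the Kconfig symbol are assumptions, while the bit positions follow the Arm ARM: SCR_EL3.EEL2 is bit 18, ID_AA64PFR0_EL1.SEL2 is bits [39:36]):

```c
#include <stdint.h>
#include <zephyr/sys/util.h>

/* sketch only: enable secure EL2 when all preconditions hold */
static void enable_secure_el2_if_supported(void)
{
	uint64_t pfr0 = read_id_aa64pfr0_el1();          /* assumed accessor */

	if (!IS_ENABLED(CONFIG_ARMV8_A_NS) &&            /* assumed: not an NS-only build */
	    ((pfr0 >> 36) & 0xf) != 0) {                 /* SEL2 != 0: secure EL2 present */
		write_scr_el3(read_scr_el3() | BIT(18)); /* SCR_EL3.EEL2 = 1; assumed accessors */
	}
}
```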

Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-26 13:51:50 -04:00
Chad Karaginides f8b9faff24 arch: arm64: Added ISBs after SCTLR Modifications
Per the ARMv8 architecture document, modification of the system control
register is a context-changing operation. Context-changing operations are
only guaranteed to be seen after a context synchronization event.
An ISB is a context synchronization event. One has been placed after
each SCTLR modification. The issue was found when running at full speed on
target.

Signed-off-by: Chad Karaginides <quic_chadk@quicinc.com>
2023-05-25 16:33:03 -04:00
Nicolas Pitre 8e9872a7c8 arm64: prevent possible deadlock on SMP with FPU sharing
Let's consider CPU1 waiting on a spinlock already taken by CPU2.

It is possible for CPU2 to invoke the FPU and trigger an FPU exception
when the FPU context for CPU2 is not live on that CPU. If the FPU context
for the thread on CPU2 is still held in CPU1's FPU then an IPI is sent
to CPU1 asking to flush its FPU to memory.

But if CPU1 is spinning on a lock already taken by CPU2, it won't see
the pending IPI as IRQs are disabled. CPU2 won't get its FPU state
restored and won't complete the required work to release the lock.

Let's prevent this deadlock scenario by looking for pending FPU IPI from
the spinlock loop using the arch_spin_relax() hook.
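An illustrative sketch of the mechanism (arch_spin_relax() is named in the commit; the helper it calls here is hypothetical):

```c
/* called by the spinlock busy-wait loop on each iteration */
void arch_spin_relax(void)
{
	/* IRQs are masked while spinning, so explicitly check whether another
	 * CPU asked us to flush our FPU context and do it here, breaking the
	 * deadlock scenario described above. */
	z_arm64_flush_fpu_ipi_check();   /* hypothetical helper */
}
```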

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-05-25 08:25:11 +00:00
Carlo Caione 6f3a13d974 barriers: Move __ISB() to the new API
Remove the arch-specific ARM-centric __ISB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione cb11b2e84b barriers: Move __DSB() to the new API
Remove the arch-specific ARM-centric __DSB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione 2fa807bcd1 barriers: Move __DMB() to the new API
Remove the arch-specific ARM-centric __DMB() macro and use the new
barrier API instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
Carlo Caione c2ec3d49cd arm64: cache: Enable full inlining of the cache ops
Move all the cache functions to a standalone header to allow for the
inlining of these functions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-04-20 14:56:09 -04:00
Peter Mitsis 1e250b37df arch: arm64: Remove unused absolute symbols
Removes ARM64 specific unused absolute symbols that are defined
via the GEN_ABSOLUTE_SYM() macro.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-04-18 10:51:28 -04:00
Gerard Marull-Paretas a5fd0d184a init: remove the need for a dummy device pointer in SYS_INIT functions
The init infrastructure, found in `init.h`, is currently used by:

- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices

They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices, however, the required
function signature requires a `const struct device *dev` as a first
argument. The only reason for that is because the same init machinery is
used by devices, so we have something like:

```c
struct init_entry {
	int (*init)(const struct device *dev);
	/* only set by DEVICE_*, otherwise NULL */
	const struct device *dev;
};
```

As a result, we end up with such weird/ugly pattern:

```c
static int my_init(const struct device *dev)
{
	/* always NULL! add ARG_UNUSED to avoid compiler warning */
	ARG_UNUSED(dev);
	...
}
```

This is really a result of poor internals isolation. This patch proposes
to make init entries more flexible so that they can accept system
initialization calls like this:

```c
static int my_init(void)
{
	...
}
```

This is achieved using a union:

```c
union init_function {
	/* for SYS_INIT, used when init_entry.dev == NULL */
	int (*sys)(void);
	/* for DEVICE*, used when init_entry.dev != NULL */
	int (*dev)(const struct device *dev);
};

struct init_entry {
	/* stores init function (either for SYS_INIT or DEVICE*) */
	union init_function init_fn;
	/* stores device pointer for DEVICE*, NULL for SYS_INIT. Allows
	 * to know which union entry to call.
	 */
	const struct device *dev;
};
```

This solution **does not increase ROM usage**, and allows offering clean
public APIs for both SYS_INIT and DEVICE*. Note, however, that the init
machinery keeps a coupling with devices.

**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
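For example, a converted `SYS_INIT` call looks like this under the new scheme (illustrative only):

```c
#include <zephyr/init.h>

static int my_subsys_init(void)
{
	/* no more unused device pointer argument */
	return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```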

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

init: convert SYS_INIT functions to the new signature

Conversion scripted using scripts/utils/migrate_sys_init.py.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

manifest: update projects for SYS_INIT changes

Update modules with updated SYS_INIT calls:

- hal_ti
- lvgl
- sof
- TraceRecorderSource

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: devicetree: devices: adjust test

Adjust test according to the recently introduced SYS_INIT
infrastructure.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>

tests: kernel: threads: adjust SYS_INIT call

Adjust to the new signature: int (*init_fn)(void);

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-12 14:28:07 +00:00
Jaxson Han e416c5f1bd arch: arm64: Update current stack limit on every context switch
Update current stack limit on every context switch, including switching
to irq stack and switching back to thread stack.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 00adc0b493 arch: arm64: Enable safe exception stack
This commit mainly enables the safe exception stack, including the stack
switch. Init the safe exception stack by calling
z_arm64_safe_exception_stack during the boot stage on every core. Also,
tweak several files to properly switch the mode in the different cases.

1) The same as before, when executing in userspace, SP_EL0 holds the
user stack and SP_EL1 holds the privileged stack, using EL1h mode.

2) When entering an exception from EL0, SP_EL0 will be saved in the
_esf_t structure. SP_EL1 becomes the current SP, and the safe exception
stack is then loaded into SP_EL0, making sure SP_EL0 always points to the
safe exception stack as long as the system is running in kernel space.

3) When exiting an exception from EL1 to EL0, SP_EL0 will be restored
from the stack value previously saved in the _esf_t structure, still in
EL1h mode.

4) When either entering or exiting an exception from EL1 to EL1, SP_EL0
keeps holding the safe exception stack unchanged, as mentioned above.

5) Do a quick stack check every time an exception is entered from EL1 to
EL1. If the check fails, set SP_EL1 to the safe exception stack, and then
handle the fatal error.

Overall, exceptions from user mode will be handled with the kernel stack,
on the assumption that a stack overflow cannot happen at the entry of an
exception from EL0 to EL1. However, exceptions from kernel mode will first
be checked against the safe exception stack to see if the kernel stack has
overflowed, because the exception might be triggered by an invalid stack
access.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 463b1c9396 arch: arm64: Add safe exception stack init function
Add a safe exception stack init function which does several things:
1) set the current CPU's safe exception stack pointer to its corresponding
stack top.
2) init sp_el0 with the above safe exception stack.
That makes sure sp_el0 points to the per-cpu safe_stack in kernel
space.
3) init the current_stack_limit and corrupted_sp with 0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 3a5fa0498f arch: arm64: Add stack_limit to thread_arch_t
Add stack_limit to thread_arch_t to store the thread's stack limit.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 61b8b34b27 arch: arm64: Add the sp variable in _esf_t
As the preparation for enabling safe exception stack, add a variable in
_esf_t to save the user stack held by sp_el0 at the point of the
exception happening from EL0. The newly added variable in _esf_t is
named sp from which the user stack will be restored when exceptions eret
to EL0.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 7040c55438 arch: arm64: Add stack check relevant variables to _cpu_arch_t
Add three per-cpu variables for the convenience of quick access.

The safe_exception_stack stores the pointer to the top of the safe
exception stack.
The current_stack_limit stores the current thread's priv stack limit.
The corrupted_sp stores the priv sp or irq sp for the stack overflow
case, or 0 for the normal case.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han d8d74b1320 arch: arm64: Add el label for vector entry macro
Add a new `el` label for z_arm64_enter_exc to indicate which EL the
exception comes from.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Jaxson Han 6c40abb99f arch: arm64: Introduce safe exception stack
Introduce two configs in preparation for enabling the safe exception stack
for kernel space. This prepares for enabling the hardware stack guard.
Also define the safe exception stack for the kernel exception stack
check.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2023-03-14 10:49:22 +01:00
Nicolas Pitre bcef633316 arch/arm64/mmu: arch_mem_map() should not overwrite existing mappings
Overwriting an existing mapping is most certainly a bug: arch_mem_unmap()
should be used before mapping the same area again.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre 364d7527c1 arch/arm64/mmu: minor addition to debugging code
Display table allocations and whether or not mappings
can be overwritten.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-13 09:15:37 +01:00
Nicolas Pitre abb50e1605 arch/arm64/mmu: fix table discarding code
First, we have commit 7d27bd0b85 ("arch: arm64: Disable infinite
recursion warning for `discard_table`") that blindly silenced a compiler
warning that actually highlighted a real bug. Revert that and fix
the bug properly. And yes, mea culpa for having been the first to
approve that commit, or even creating the bug in the first place.

Then let's add proper table usage count handling for discard_table() to
work properly and avoid leaking table pages.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-07 08:33:05 +01:00
Henri Xavier 45af717a66 arch/arm64: Implement ASID support in ARM64 MMU
Improves context-switch performance.

TLB invalidation and the nG bit are used conservatively. This could
be improved in future work.

Tested with tests/benchmarks/sched_userspace:

BEFORE:
```
Swapping  2 threads: 161562583 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping  8 threads: 161569289 cyc & 1000000 rounds ->   1615 ns per ctx
Swapping 16 threads: 161649163 cyc & 1000000 rounds ->   1616 ns per ctx
Swapping 32 threads: 163487880 cyc & 1000000 rounds ->   1634 ns per ctx
```

AFTER:
```
Swapping  2 threads: 18129207 cyc & 1000000 rounds ->    181 ns per ctx
Swapping  8 threads: 49702891 cyc & 1000000 rounds ->    497 ns per ctx
Swapping 16 threads: 55898650 cyc & 1000000 rounds ->    558 ns per ctx
Swapping 32 threads: 58059704 cyc & 1000000 rounds ->    580 ns per ctx
```

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-12-13 17:21:11 +09:00
Carlo Caione 385f06e6a1 arm64: cache: Rework cache API
And use the new API.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Carlo Caione 189cd1f4a2 cache: Rework cache API
The cache operations must be quick, optimized and possibly inlined. The
current API is clunky: functions are not inlined, and parameters that are
basically always known at compile time are passed around.

In this patch we rework the cache functions to allow us to get rid of
useless parameters and make inlining easier.

In particular this changeset is doing three things:

1. `CONFIG_HAS_ARCH_CACHE` is now `CONFIG_ARCH_CACHE` and
   `CONFIG_HAS_EXTERNAL_CACHE` is now `CONFIG_EXTERNAL_CACHE`

2. The cache API has been reworked.

3. Comments are added.
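As an illustration of the reworked API shape (a hedged sketch; the exact function names in this changeset may differ slightly from what is shown):

```c
#include <zephyr/cache.h>

void flush_tx_buffer(void *buf, size_t len)
{
	/* operate on the data cache for an address range; no operation
	 * flags passed around and resolved at run time anymore */
	sys_cache_data_flush_range(buf, len);
}
```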

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-12-01 13:40:56 -05:00
Henri Xavier b54ba9877f arm64: implement arch_system_halt
When PSCI is enabled, implement `arch_system_halt` using
PSCI_SHUTDOWN.

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-23 11:37:08 +01:00
Henri Xavier 55df3852bf arm64: Add comment that arch_curr_cpu and get_cpu must be synced
There is code duplication, as one is in C and the other is an assembly
macro. As there is no easy way to find out about this duplication,
adding a comment seems the best way to go.

Signed-off-by: Henri Xavier <datacomos@huawei.com>
2022-11-02 16:08:00 -05:00
Kumar Gala d60c62e457 arch: arm64: Convert to CONFIG_MP_MAX_NUM_CPUS
Convert CONFIG_MP_NUM_CPUS to CONFIG_MP_MAX_NUM_CPUS as we work on
phasing out CONFIG_MP_NUM_CPUS.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-25 10:52:34 +02:00
Kumar Gala a1195ae39b smp: Move for loops to use arch_num_cpus instead of CONFIG_MP_NUM_CPUS
Change for loops of the form:

for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
   ...

to

unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
   ...

We do the call outside of the for loop so that it only happens once,
rather than on every iteration.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-21 13:14:58 +02:00
Kumar Gala fc95ec98dd smp: Convert #if to use CONFIG_MP_MAX_NUM_CPUS
Convert CONFIG_MP_NUM_CPUS to CONFIG_MP_MAX_NUM_CPUS as we work on
phasing out CONFIG_MP_NUM_CPUS.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-20 22:04:10 +09:00
Gerard Marull-Paretas 178bdc4afc include: add missing zephyr/irq.h include
Automated change, searching for files using the "IRQ_CONNECT()" API but
not including <zephyr/irq.h>, and adding the missing include.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-10-17 22:57:39 +09:00
Stephanos Ioannidis 2689590e67 arch: arm64: Disable ldp/stp Qn for consecutive 32-byte loads/stores
GCC may generate ldp/stp instructions with the Advanced SIMD Qn
registers for consecutive 32-byte loads and stores.

This commit disables this GCC behaviour because saving and restoring
the Advanced SIMD context is very expensive, and it is preferable to
keep it turned off by not emitting these instructions for better
context switching performance.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-09-23 12:10:25 +02:00
Huifeng Zhang 3d81d7f23f arch: arm64: fix the wrong way to send ipi interrupt
On GICv3, when we send an IPI interrupt, aff3, aff2 and aff1 should
be assigned values corresponding to the PE for which the interrupt will be
generated. target_list only corresponds to aff0.

On real hardware, aff3, aff2, aff1 and aff0 should be treated as a
whole to determine a PE.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
Huifeng Zhang 3ef14cae5e arch: arm64: init VMPIDR_EL2 in z_arm64_el2_init
VMPIDR_EL2 is assigned the value returned by EL2 reads of MPIDR_EL1

MPIDR_EL1 is the register holding the Multiprocessor ID which is to
identify different cores. Because of the virtualization requirements
for AArch64, MPIDR_EL1 should be virtualized (the different virtualized
cores can run on the same physical core). Thus the value of MPIDR_EL1
should be switched when the VM is switched. Setting the VMPIDR_EL2 is
the way to change the value returned by EL1 reads of MPIDR_EL1. Even
without virtualization, we still need to set VMPIDR_EL2 during booting
at EL2 or EL3. Otherwise, all cores' IDs are zero at the EL1 stage
which will break the SMP system.
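The init itself boils down to copying the hardware ID into the virtualized register; an equivalent C sketch using inline assembly (the real change is in the EL2 init assembly path):

```c
#include <stdint.h>

static void vmpidr_el2_init(void)
{
	uint64_t mpidr;

	/* read the real hardware ID ... */
	__asm__ volatile("mrs %0, mpidr_el1" : "=r"(mpidr));
	/* ... and make EL1 reads of MPIDR_EL1 return the same value */
	__asm__ volatile("msr vmpidr_el2, %0" :: "r"(mpidr));
	__asm__ volatile("isb");
}
```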

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
Gerard Marull-Paretas 5954ea1c65 arch: arm64: core: smp: use DT_FOREACH_CHILD_STATUS_OKAY_SEP
Avoid auxiliary macros by using DT_FOREACH_CHILD_STATUS_OKAY_SEP.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-30 16:19:57 +02:00
Carlo Caione 4806e1087e cache: Fix cache API calling from userspace
When a cache API function is called from userspace, this results in an
OOPS (bad syscall error) on ARM64. This is due to at least two
different factors:

- the location of the cache handlers is preventing the linker from
  actually finding the handlers
- specifically for ARM64 and ARC, some cache handling functions are not
  implemented (when userspace is not used the compiler simply optimizes
  out these calls)

Fix the problem by:

- moving the userspace cache handlers to their logical and proper
  location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-23 10:14:17 +02:00
Carlo Caione 31d65d63f6 arch: arm64: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Fabio Baltieri 55b243e124 test,arch: fix few odd suffix include paths
Fix some more legacy include paths found in files with unusual suffixes.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-07-18 14:44:47 -04:00
Jamie Iles dbc6f6a882 arch: arm64: initialize IRQ stack for CONFIG_INIT_STACKS
When CONFIG_INIT_STACKS is enabled, all stacks should be filled with 0xaa
so that the thread analyzer can measure stack utilization, but the IRQ
stack was not filled, so `kernel stacks` on the shell would show that
the stack had been fully used, implying an IRQ stack overflow
regardless of the IRQ stack size.

Fill the IRQ stack before it gets used so that we can have precise usage
reports.
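A minimal sketch of the idea (the stack symbol and size macro usage here are assumptions about the kernel internals, not the literal patch):

```c
#include <string.h>
#include <zephyr/kernel.h>

/* z_interrupt_stacks is declared in the kernel internals (assumed here) */
void irq_stacks_prefill(void)
{
	unsigned int num_cpus = arch_num_cpus();

	for (unsigned int i = 0; i < num_cpus; i++) {
		/* same 0xaa pattern used for thread stacks, so the thread
		 * analyzer can compute a meaningful high-water mark */
		(void)memset(z_interrupt_stacks[i], 0xaa,
			     K_KERNEL_STACK_SIZEOF(z_interrupt_stacks[i]));
	}
}
```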

Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Signed-off-by: Dave Aldridge <quic_daldridg@quicinc.com>
2022-07-08 19:59:24 +00:00
Anas Nashif 516625ed6a arch: arm64: add missing braces to single line if statements
Following Zephyr's style guideline, all if statements, including single-line
statements, shall have braces.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-06 11:00:45 -04:00
Keith Packard f2ae48e621 arch/arm64: Enable 'large' code model for large targets
Targets with text or data addresses above the 4GB boundary may need to use
the large code model to ensure relocations in the linker work correctly.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-07-04 15:42:53 +00:00
Eugene Cohen d903333422 arch: arm64: enable single thread support config
Enable single-threaded support for the arm64 architecture.

This mode of execution is supported on an SoC under
development and is validated regularly.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:55 +02:00
Eugene Cohen 1f93ece43d arch: arm64: program TG[1] in mmu init
In performing a double check of the Zephyr arm64 MMU config
against edk2, a difference in the programming of the
Translation Control Register (TCR) was found. TCR.TG[1]
should be set to address Cortex-A57 erratum 822227:

"Using unsupported 16K translation granules might cause
Cortex-A57 to incorrectly trigger a domain fault"
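A hedged sketch of the corresponding programming step (the register accessors are assumed helpers; TG1 is bits [31:30] of TCR_EL1 and the value 0b10 selects the 4KB granule for TTBR1):

```c
#include <stdint.h>

#define TCR_TG1_SHIFT  30
#define TCR_TG1_MASK   (0x3ULL << TCR_TG1_SHIFT)
#define TCR_TG1_4K     (0x2ULL << TCR_TG1_SHIFT)

static void tcr_fix_tg1(void)
{
	uint64_t tcr = read_tcr_el1();              /* assumed accessor */

	/* program TG1 explicitly to 4KB to avoid erratum 822227 */
	tcr = (tcr & ~TCR_TG1_MASK) | TCR_TG1_4K;
	write_tcr_el1(tcr);                         /* assumed accessor */
	__asm__ volatile("isb");
}
```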

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:33 +02:00
Eugene Cohen b84ab912af arch: arm64: define A55 core
Define a CPU_CORTEX_A55 configuration and align the gcc
cpu type accordingly when selected.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:19 +02:00
Eugene Cohen 434e748cbb arch: arm64: add WAIT_AT_RESET_VECTOR config
On platforms where reset vector catch is not possible
it is useful to have a compile-time option to spin
at the reset vector allowing a debugger to be attached
and then to manually resume execution.

Define a config option for arm64 to spin at the
reset vector so a debugger can be attached.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-28 12:29:17 +02:00
Stephanos Ioannidis 7d27bd0b85 arch: arm64: Disable infinite recursion warning for discard_table
This commit selectively disables the infinite recursion warning
(`-Winfinite-recursion`), which may be reported by GCC 12 and above,
for the `discard_table` function because no actual infinite recursion
will occur under normal circumstances.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-16 16:02:23 -04:00
Carlo Caione 4d7d784d1e arm64: mmu: Support userspace memory mapping
arch_mem_map() on ARM64 does not currently support the K_MEM_PERM_USER
parameter, so we cannot allocate userspace-accessible memory using the
memory helpers. Fix this.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-10 09:48:23 +02:00
Gerard Marull-Paretas 96397b021e arch: arm64: smp: remove redundant soc.h include
<soc.h> has traditionally been used as a proxy to HAL headers,
register definitions, etc. Nowadays, <soc.h> is anarchy. It serves a
different purpose depending on the SoC. In some cases it includes HALs,
in some others it works as a header sink/proxy (for no good reason), as
a register definition when there's no HAL... To make things worse, it is
being included in code that is, in theory, non-SoC specific.

This patch is part of a series intended to improve the situation by
removing <soc.h> usage when not needed, and by eventually removing it.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Gerard Marull-Paretas f465bd22c9 arch: arm64: mpu: remove unnecessary include
<soc.h> was not required.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Jaxson Han 04caf70bfe arm64: smp: Fix the wrong secondary core stack size
The init stack of the secondary core should use KERNEL_STACK_BUFFER + sz.
Using Z_THREAD_STACK_BUFFER will calculate the wrong stack size.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Jaxson Han 933a8f9d12 arch: arm64: Fix coherence issue of SMP boot code
The current SMP boot code doesn't consider that the cores can boot at
the same time. Possibly, more than one core can boot into the primary core
boot sequence. Fix it by using an atomic operation to make sure only
one core acts as the primary core.

Correspondingly, sgi_raise_ipi should translate the CPU id to an mpidr.
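A minimal sketch of the primary-core selection idea in C (the actual fix is in the early boot assembly; names here are illustrative):

```c
#include <zephyr/sys/atomic.h>

static atomic_t primary_core_claimed;

void boot_core_entry(void)
{
	/* only the first core to win the compare-and-swap takes the
	 * primary boot path; every other core becomes a secondary */
	if (atomic_cas(&primary_core_claimed, 0, 1)) {
		/* primary core boot sequence */
	} else {
		/* secondary core boot sequence */
	}
}
```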

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Jaxson Han 2f6087ba67 arch: arm64: Fix arm mpu SMP issues
Only the primary core does the dynamic_areas_init.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Eugene Cohen 816229128d arch/arm64: update gicv3 sre enablement
Fix the writing of ICC_SRE_EL3 to OR in bits, aligning
with the original intent to read-modify-write this
register.

Also disable FIQ and IRQ bypass so interrupt delivery
occurs through GIC.  Platforms may choose to override
this behavior in z_arm64_el3_plat_init implementations.

Remove ICC_SRE_EL3 config from viper and qemu since
this is now handled in the arm64 arch core.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-05-10 09:13:20 +02:00
Gerard Marull-Paretas 4b91c2d79f asm: update files with <zephyr/...> include prefix
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 12:45:29 -04:00
Gerard Marull-Paretas 16811660ee arch: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:57:22 +02:00
Nicolas Pitre f61b8b8c16 semihosting: fix inline assembly output dependency
Commit d8f186aa4a ("arch: common: semihost: add semihosting
operations") encapsulated semihosting invocation in a per-arch
semihost_exec() function. There is a fixed register variable declaration
for the return value, but this variable is not listed as an output
operand to the respective inline assembly segments, which is an error.
This is not reported as such by gcc and the generated code is still OK
in those particular instances but this is not guaranteed, and clang
does complain about such cases.
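A reduced example of the pattern being fixed (not the actual Zephyr code): the return-value register has to appear in the output operand list, here done by tying x0 as an in/out operand.

```c
static inline long semihost_trap_sketch(unsigned long op, void *args)
{
	register unsigned long x0 __asm__("x0") = op;
	register void *x1 __asm__("x1") = args;

	/* "+r"(x0): x0 is both the operation number in and the result out.
	 * Leaving it out of the outputs is exactly what clang complains about. */
	__asm__ volatile("hlt #0xf000"       /* AArch64 semihosting trap */
			 : "+r"(x0)
			 : "r"(x1)
			 : "memory");

	return (long)x0;
}
```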

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-24 19:46:15 +02:00
Jordan Yates d8f186aa4a arch: common: semihost: add semihosting operations
Add an API that utilizes the ARM semihosting mechanism to interact with
the host system when a device is being emulated or run under a debugger.

RISCV is implemented in terms of the ARM implementation, and therefore
the ARM definitions cross enough architectures to be defined 'common'.

Functionality is exposed as a separate API instead of syscall
implementations (`_lseek`, `_open`, etc) due to various quirks with
the ARM mechanisms that mean function arguments are not standard.

For more information see:
https://developer.arm.com/documentation/dui0471/m/what-is-semihosting-
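A hedged usage sketch of the added API (the header path, function names and open-mode constant are assumptions about the new interface, not verified signatures):

```c
#include <zephyr/arch/common/semihost.h>   /* assumed header location */

void dump_to_host(const char *msg, long len)
{
	long fd = semihost_open("zephyr.log", SEMIHOST_OPEN_W); /* assumed names */

	if (fd >= 0) {
		semihost_write(fd, msg, len);
		semihost_close(fd);
	}
}
```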

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>

2022-04-21 13:04:52 +02:00
Nicolas Pitre 563a8d11a4 arm64: refer to the link register as "lr" rather than "x30"
In ARM parlance, the subroutine call return address is stored in the
"link register" or simply lr. Refer to it as lr which is clearer than
the anonymous x30 designation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-07 16:31:30 -05:00
Jiafei Pan 227d1ea1bb arm64: mmu: provide more memory mapping types for z_phys_map()
ARM64 supports more memory mapping types for device memory (nGnRnE,
nGnRE, GRE); add support for these mappings to the OS common mapping API
function z_phys_map().
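A hedged usage sketch (the header path and the ARM64 device attribute flag name are assumptions about how these mappings are exposed):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/mem_manage.h>      /* assumed header for z_phys_map() */

uint8_t *map_device_regs(uintptr_t phys, size_t size)
{
	uint8_t *virt;

	/* strongly-ordered device mapping (nGnRnE) rather than normal memory */
	z_phys_map(&virt, phys, size, K_MEM_PERM_RW | K_MEM_ARM_DEVICE_nGnRnE);

	return virt;
}
```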

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2022-04-05 11:17:47 +02:00
Nicolas Pitre 47e4a4487f arm64: simplify the code around the call to z_get_next_switch_handle()
Remove the special SMP workaround and the extra wrapper.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-18 13:32:49 -04:00
Jaxson Han 7ea0591d30 arm64: v8r: Enable AARCH64_IMAGE_HEADER by default
Enable AARCH64_IMAGE_HEADER by default and fix the relevant warning

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-16 09:19:44 -05:00
Nicolas Pitre 2ef47509c3 arm64: simplify user mode transition code
It is not necessary to go through the full exception exit code.
This is simpler, smaller and faster.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre 8affac64a7 arm64: improved arch_switch() implementation
Make it optimal without the need for an SVC/exception  roundtrip on
every context switch. Performance numbers from tests/benchmarks/sched:

Before:
unpend   85 ready   58 switch  258 pend  231 tot  632 (avg  699)

After:
unpend   85 ready   59 switch  115 pend  138 tot  397 (avg  478)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre bd941bcc68 arm64: implement CONFIG_IRQ_OFFLOAD_NESTED
It can easily be done now, so why not. It suffices to increment the nested
count like with actual IRQs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Nicolas Pitre 90fcef4254 arm64: irq_offload: simpler implementation
Get rid of all those global variables and scheduler locking.
Use the regular IRQ exit path to let tests properly validate preemption.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Nicolas Pitre 9d0bcfa884 arm64: isr_wrapper.S: tiny assembly optimization
Save one instruction in the ISR hot path.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Jaxson Han 65d7e64e06 board: arm64: fvp_baser_aemv8r: Fix misc SMP issues
Add CONFIG_SMP to fvp_baser_aemv8r_smp board.
Fix compile warnings by adding missing header file in arm_mpu.c.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-11 11:00:05 +01:00
Jaxson Han 3122b9ed10 arm64: smp: Fix broadcast_ipi issue
This commit mainly fixes the broadcast_ipi issue where one core broadcasts
an IPI to the other cores using gic_raise_sgi. The issue doesn't affect the
functionality of Zephyr SMP but will happen when Zephyr runs on Xen.
Suppose Xen provides 4 CPUs to the Zephyr guest: for example, when cpu0
broadcasts an IPI to the rest of the cores, the mask should be 0xE (0b1110),
but for now Zephyr will send 0xFFFE. So Xen will receive a
target list containing many invalid CPUs which don't exist. The solution
is to generate the target list according to the online CPUs.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-11 11:00:05 +01:00
Jaxson Han fd231e32e9 arm64: Fix booting issue with FVP V8R >= 11.16.16
In the Armv8R AArch64 profile[1], the Armv8R AArch64 is always in secure
mode. But the FVP_BaseR_AEMv8R before version 11.16.16 doesn't strictly
follow this rule. It still has some non-secure registers
(e.g. CNTHP_CTL_EL2).

Since version 11.16.16, the FVP_BaseR_AEMv8R has fixed this issue. The
CNTHP_XXX_EL2 registers have been changed to CNTHPS_XXX_EL2. So the
FVP_BaseR_AEMv8R (version >= 11.16.16) cannot boot Zephyr. This patch
will fix it.

[1] https://developer.arm.com/documentation/ddi0600/latest/

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Change-Id: If986f34dc080ae7a8b226bba589b6fe616a4260b
2022-03-08 11:09:13 +01:00
Carles Cufi e83a13aabf kconfig: Rename the TEST_EXTRA stack size option to align with the rest
All stack sizes should end with STACK_SIZE.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-02-22 08:23:05 -05:00
Hou Zhiqiang 1fca05b7f8 arm64: cache: Fix data corruption issue on DCACHE range invalidation
Currently, the DCACHE range invalidation can cause data corruption when
the ends of the given range are not aligned to a full cache line.
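A sketch of the usual fix pattern (not the literal patch): partial lines at either end are cleaned & invalidated so that neighbouring data sharing the cache line is written back first, while fully covered lines are simply invalidated.

```c
#include <stdint.h>
#include <stddef.h>
#include <zephyr/sys/util.h>

static void dcache_invd_range_sketch(uintptr_t start, uintptr_t end,
				     size_t line_size)
{
	uintptr_t addr = ROUND_DOWN(start, line_size);

	for (; addr < end; addr += line_size) {
		if (addr < start || (addr + line_size) > end) {
			/* partial line: clean + invalidate so adjacent data
			 * sharing this line is not lost */
			__asm__ volatile("dc civac, %0" :: "r"(addr) : "memory");
		} else {
			__asm__ volatile("dc ivac, %0" :: "r"(addr) : "memory");
		}
	}
	__asm__ volatile("dsb sy" ::: "memory");
}
```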

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2022-02-21 22:00:16 -05:00
Nicolas Pitre 34d425fbe5 arm64: switch to the IRQ stack during ISR execution
Avoid executing ISRs using the thread stack as it might not be sized
for that. Plus, we do have IRQ stacks already set up for us.

The non-nested IRQ context is still (and has to be) saved on the thread
stack as the thread could be preempted.

The irq_offload case is never nested and always invoked with the
sched_lock held so it can be simplified a bit.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:53:23 -05:00
Nicolas Pitre 6381ee7391 arm64: update _current_cpu->nested properly
This is a uint32_t, so the proper register width must be used; otherwise
the adjacent structure member will be overwritten (this didn't happen in
practice because of struct member alignment, but still). This makes the
inc_nest_counter and dec_nest_counter macros rather unwieldy, especially
with upcoming changes, so let's just remove them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:53:23 -05:00
Nicolas Pitre fa8c851993 arm64: simple memcpy/memset alternatives to be used during early boot
Let's provide our own z_early_memset() and z_early_memcpy() rather than
making our own .bss clearing function that risks missing out on updates
to the main version.

Also remove extra stuff already provided by kernel_internal.h.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:00:12 -05:00
Carlo Caione a74dac89ba kernel: Reset the switch_handler only in the arch code
Avoid setting the switch_handler in the z_get_next_switch_handle() code
when the context is not fully saved yet to avoid a race against other
cores waiting on wait_for_switch().

See issue #40795 and discussion in #41840

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-01-18 10:41:35 -05:00
Daniel Leung aa20e081d2 arm: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Dmytro Firsov 01a9b117fe xenvm: arm64: add Xen Enlighten and event channel support
This commit adds support for the Xen Enlighten page and initial support for
Xen event channels. It is needed for the future implementation of Xen PV
drivers.

The Enlighten page is now mapped to the prepared memory area during the
PRE_KERNEL_1 stage. On success, the event channel logic gets
initialized and can be used right after Zephyr starts. The current
implementation only allows using pre-defined event channels (PV
console/XenBus) and works only in single-CPU mode (without
VCPUOP_register_vcpu_info). Event channel allocation will be implemented
in future versions.

Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2021-12-07 12:15:38 -05:00
Daniel Leung 1cd7cccbb1 kernel: mem_domain: arch_mem_domain functions to return errors
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.

Note that:
() For assertions where it can bail out early without side
   effects, these are converted to CHECKIF(). (Usually means
   that updating of page tables or translation tables has not
   been started yet.)
() Other assertions are retained to signal fatal errors during
   development.
() The additional CHECKIF() are structured so that they will bail
   early if possible. If errors are encountered inside a loop,
   it will still continue with the loop, so it works as it did before
   this change with assertions disabled.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Andy Ross 35af02fe3d arch/arm64: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

This is pretty much exactly where we want it, just after the context
saving steps (which we can't elide since the hook is in C) and
immediately before the tracing hook for ISR entry.  And as I'm reading
the code, this is purely for Zephyr-registered interrupts, meaning
that software exceptions will be accounted for (correctly) as part of
the excepting thread.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Dmytro Firsov c4ab278688 arm64: xenvm: Add Xen hypercall interface for arm64
This commit adds Xen hypervisor call interface for arm64 architecture.
This is needed for further development of Xen features in Zephyr.

Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2021-10-29 15:23:33 +02:00
Jiafei Pan 799f37b421 arm64: add nocache memory segment support
In some drivers, non-cacheable memory needs to be used for DMA coherent
memory, so add nocache memory segment mapping and support for ARM64
platforms.

The following variable definition examples show how to request nocache
memory allocation:
   int var1 __nocache;
   int var2 __attribute__((__section__(".nocache")));

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-10-20 08:56:40 -05:00
Neil Armstrong c24d0c8405 arm64: mmu: implement arch_virt_region_align()
Add the arm64 MMU arch_virt_region_align() implementation, used
to return a possible virtual address alignment in order to
optimize the MMU table layout and possibly avoid using L3 tables,
using some L1 & L2 blocks instead for most of the mapping.

Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-11 21:00:28 -04:00
Neil Armstrong 866840e4e8 arm64: mmu: don't use a Level block if PA is not aligned
When mapping the following:
device_map(&base0, DEVA_BASE, DEVA_SIZE, K_MEM_CACHE_NONE);
device_map(&base1, DEVB_BASE , DEVB_SIZE, K_MEM_CACHE_NONE);

with:
- DEVA_SIZE not multiple of a 4KB granule L2 block size (0x200000)
- DEVB_SIZE more than 2 x 4KB granule L2 block size

The mmu code will fill the first device_map() in an L3 table, then
on the second mapping the mmu code will complete the previous L3
table.
At the end of this table, the actual code will select an L2 block
instead of a table because the *virtual address* is a multiple of
the L2 block size.

But if the physical address is not, the virtual block offset will
be ORed into the physical address rather than added.

This leads to a weird scenario where virtual memory is duplicated,
resulting from the addresses being ORed and not added.

Example:
device_map(&base0, DEVA_BASE, 0x20000, K_MEM_CACHE_NONE);
device_map(&base1, 0x44000000, 0x400000, K_MEM_CACHE_NONE);

The first will result in VA 0x5ffe0000 and the second in VA 0x5fbe0000.

The MMU code will use a table to map 0x5fbe0000 to 0x5fbfffff.

For 0x5fc00000 to 0x5fdfffff, since the VA is a multiple of the L2
block size, the L3 table is not used.

But the L2 block description entry address is 0x44060000, meaning
that for each access in this L2 block, the following will be done:

0x44060000 | (VA & 0x1FFFFF)

This works for the 0x5fc40000 to 0x5fc5ffff access, but for the
0x5fc60000 (0x5fbe0000 + 0x80000) access the PA gets calculated as:

0x44060000 | (0x5fc60000 & 0x1FFFFF) = 0x44060000 | 0x60000 = 0x44060000

Instead of the expected 0x44080000.

The solution is to check whether the PA is aligned with the
level block size; if not, move to the next level.
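A self-contained sketch of the added check (names and factoring are illustrative, not the actual code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool use_block_at_level(uintptr_t virt, uintptr_t phys,
			       size_t size, size_t block_size)
{
	/* a block descriptor is only valid when the remaining size covers the
	 * whole block and BOTH the virtual and physical addresses are aligned
	 * to the block size; otherwise descend into a next-level table */
	return (size >= block_size) &&
	       (((virt | phys) & (block_size - 1)) == 0);
}
```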

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-07 10:54:28 +02:00
Jaxson Han 207926c479 arm64: Kconfig: Enable userspace feature
Enable userspace for Armv8R aarch64

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han 27ed237f6d arm64: arm_mpu: Add userspace
Add dynamic_areas_init. It will mark an MPU region as a dynamic region
area. The dynamic region areas are designed to be the background
regions, so that the system can re-program the thread regions on top of
the background regions.

Add configure_dynamic_mpu_regions to re-program the thread regions on
the background regions. The configure_dynamic_mpu_regions function is
the core function for implementing userspace on the MPU. This
function is used in thread creation and context switch.

During context switch, the previous thread's regions should be disabled,
and the new thread's regions are re-programmed. Since the thread's
stack region is also switched, the new thread's stack may be used before
its stack region has been re-programmed. To avoid the exception this
would generate, the MPU is disabled before
flush_dynamic_regions_to_mpu and re-enabled afterwards.
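A condensed sketch of that context-switch sequence (flush_dynamic_regions_to_mpu is named in the commit; the enable/disable helpers and the signature are illustrative assumptions):

```c
#include <stdint.h>

/* re-program the per-thread MPU regions on top of the background regions */
static void swap_thread_regions(void *regions, uint8_t num_regions)
{
	/* the incoming thread's stack may be touched before its stack region
	 * is programmed, so keep the MPU disabled (falling back to the default
	 * privileged map) while the dynamic regions are flushed */
	arm_core_mpu_disable();                             /* assumed helper */
	flush_dynamic_regions_to_mpu(regions, num_regions); /* named in commit */
	arm_core_mpu_enable();                              /* assumed helper */
}
```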

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han ac0c0a61d5 include: arm64: Refine the mem alignment macros
Add a new macro MEM_DOMAIN_ALIGN_AND_SIZE for mmu and mpu mem
alignment.
MEM_DOMAIN_ALIGN_AND_SIZE is
  - CONFIG_MMU_PAGE_SIZE, when mmu is enabled.
  - CONFIG_ARM_MPU_REGION_MIN_ALIGN_AND_SIZE when mpu enabled.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han d282d86d7e arm64: Create common mmu and mpu interfaces
Include the new introduced include/arch/arm64/mm.h instead of the
arm_mmu.h or arm_mpu.h.

Unify function names z_arm64_thread_pt_init/z_arm64_swap_ptables with
z_arm64_thread_mem_domains_init/z_arm64_swap_mem_domains for mmu and
mpu, because:
1. mmu and mpu have almost the same logic.
2. mpu doesn't have ptables.
3. using the common function names helps reduce "#if defined" macros.

Similarly, change z_arm64_ptable_ipi to z_arm64_domain_sync_ipi

And fix a log bug in arm_mmu.c.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han 34d6c7caa7 arm64: cortex_r: Move mpu code to a better place
This patch mainly moves mpu related code from
arch/arm64/core/cortex_r/mpu/ to arch/arm64/core/cortex_r/ and moves
the mpu header files from include/arch/arm64/cortex_r/mpu/ to
include/arch/arm64/cortex_r/

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Neil Armstrong d55991b98e arm64: isr_wrapper: ignore Special INTIDs between 1020..1023
Referring the Arm Generic Interrupt Controller Architecture
Specification GIC architecture version 3 and version 4 document
(see 2.2.1 Special INTIDs paragraph), these INTIDs are reserved
for special purposes and should be ignored for now.

For the ITS implementation, the INTID 1023 must be ignored since this
special INTID will trigger after an LPI acknowledge, thus triggering
the spurious interrupt handler.

The GICv3 Linux implementation ignores these INTIDs the same way.
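A sketch of the check in the ISR path (illustrative only, mirroring what the GIC specification calls special INTIDs):

```c
#include <stdbool.h>
#include <stdint.h>

#define GIC_INTID_SPECIAL_MIN 1020U
#define GIC_INTID_SPECIAL_MAX 1023U

static bool gic_intid_is_special(uint32_t intid)
{
	/* 1020..1023 are reserved special INTIDs (1023 is the spurious ID);
	 * the ISR wrapper simply ignores them instead of dispatching */
	return (intid >= GIC_INTID_SPECIAL_MIN) && (intid <= GIC_INTID_SPECIAL_MAX);
}
```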

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-09-28 19:45:29 -04:00
Neil Armstrong 078113982f arm64: isr_wrapper: fixup out of bounds for large number of irqs
In case we enable a large number of IRQs, like when enabling LPIs using
interrupts > 8192, we hit an assembler error where the immediate value
is too large.

Copy the IRQ number into x1 to permit using a large IRQ number.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-09-28 19:45:29 -04:00
Torsten Rasmussen 3d82c7c828 linker: align _image_text_start/end/size linker symbols name
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.

The symbols _image_text_start and _image_text_end sometimes include
linker/kobject-text.ld. This means there must be both the regular
__text_start and __text_end symbols for the pure text section, as well
as <group>_start and <group>_end symbols.

The symbols describing the text region which covers more than just the
text section itself will thus be changed to:
_image_text_start -> __text_region_start
_image_text_end   -> __text_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen c6aded2dcb linker: align _image_rodata and _image_rom start/end/size linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.

The symbols _image_rom_start and _image_rom_end correspond to the group
ROMABLE_REGION defined in the ld linker scripts.

The symbols _image_rodata_start and _image_rodata_end are not placed as an
independent group but cover common-rom.ld, thread-local-storage.ld,
kobject-rom.ld and snippets-rodata.ld.

This commit aligns those names and prepares for generation of groups in
linker scripts.

The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end   -> __rom_region_end

The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end   -> __rodata_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Manuel Argüelles 9ff6282089 arch: arm64: invalidate TLBs after ptables swap
This prevents the new thread from attempting to access cached ptable entries
which are no longer valid.

Signed-off-by: Manuel Argüelles <manuel.arguelles@coredumplabs.com>
2021-08-20 06:26:05 -04:00