Commit graph

2243 commits

Øyvind Rønningstad a2cfb8431d arch: arm: Add code for swapping threads between secure and non-secure
This adds code to swap_helper.S which does special handling of LR when
the interrupt came from the Secure state. The LR value is stored to memory,
and put back into LR when swapping back to the relevant thread.

Also, add special handling of FP state when switching from secure to
non-secure, since we don't know whether the original non-secure thread
(which called a secure service) was using FP registers, so we always
store them, just in case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Ioannis Glaropoulos ad808354d2 arch: arm: Add config for non-blocking secure calls
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Mahesh Mahadevan d6b50233ac arch: arm: Setup Static MPU regions earlier in boot flow
Set up the static MPU regions before the PRE_KERNEL_1 and
PRE_KERNEL_2 functions are invoked. This sets up
the MPU for SRAM regions in case code relocated to SRAM
is invoked from any of these functions.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan 1b36c6c00e arch: arm: Create a MPU entry for relocated code
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should
be allowed to execute from SRAM

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan 64e973fdcd Kconfig: Add a new config CODE_DATA_RELOCATION_SRAM
1. This helps us identify whether the relocation is to
SRAM, which is needed when setting up the MPU entry
for the SRAM region where code is relocated
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific
folder

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Nicolas Pitre 949ef7c660 Kconfig: clean up FPU and FPU_SHARING entries
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU selected by those archs as a dependency
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise for not affecting existing config, let's
move the default selection local to those configs that care, again to
avoid a growing list of conditionals here. Adjust the help text which
applies to more than just Cortex-M.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Daniel Leung 43f0726985 arm: aarch32: timing: fix potential divide by zero if DWT
There is a possibility that the DWT frequency calculation
divides by zero. This fixes the issue by repeatedly
sampling the delta clock cycles and delta DWT cycles
until both are non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
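To illustrate the retry approach described in the commit above, here is a minimal sketch. read_dwt_cycles() and read_clock_cycles() are hypothetical placeholders (not the actual Zephyr timing helpers), and k_busy_wait() is used only to let the counters advance.

    #include <kernel.h>
    #include <stdint.h>

    extern uint64_t read_dwt_cycles(void);   /* hypothetical DWT cycle reader */
    extern uint64_t read_clock_cycles(void); /* hypothetical system clock reader */

    static uint64_t dwt_frequency(void)
    {
        uint64_t dwt_delta, clk_delta;

        /* Keep sampling until both deltas are non-zero, so the division
         * below can never divide by zero. */
        do {
            uint64_t dwt_start = read_dwt_cycles();
            uint64_t clk_start = read_clock_cycles();

            k_busy_wait(10);

            dwt_delta = read_dwt_cycles() - dwt_start;
            clk_delta = read_clock_cycles() - clk_start;
        } while (dwt_delta == 0U || clk_delta == 0U);

        return (uint64_t)sys_clock_hw_cycles_per_sec() * dwt_delta / clk_delta;
    }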
Gerard Marull-Paretas f163bdb280 power: move reboot functionality to os lib
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-28 20:34:00 -04:00
Jennifer Williams 734c65ad23 arch: arm: core: aarch32: cortex_m: fault: fix if...else ifs
bus_fault() and hard_fault() were missing the final else statement
in their if...else if constructs. This commit adds a non-empty else {}
to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams a5c27d69b5 arch: arm: core: aarch32: cortex_m: debug: remove if...else if construct
z_arm_debug_monitor_event_error_check() was missing the final
else statement in its if...else if construct, violating guideline
15.7. This commit removes the else if for symmetry in the limited
early-exit conditions, rather than adding an empty final else {}, to comply.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Ioannis Glaropoulos fdb4df26d3 arm: cortex-m: minor doc updates in swap_helper.S
Inline some minor clarifications regarding the
Lazy Stacking feature in the cortex-m pendSV
handler, for ease of understanding. Also, fix
some minor style issues in comments.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-23 15:18:16 -05:00
Mahesh Mahadevan a9397e3b3a arm: cortex_m: Update get DWT frequency for NXP SoCs
Get the DWT cycle count frequency for NXP devices from
the CMSIS SystemCoreClock symbol

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-04-21 20:40:24 -04:00
Bradley Bolen 92a3209c5c arch: arm: aarch32: cortex_a_r: Dump callee saved registers on fault
Some of these registers may contain nuggets of information that would be
beneficial when debugging, so include them in the fault dump.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen c96ae584bf arch: arm: aarch32: cortex_a_r: Correct syntax for srs
The writeback specification should be after the register, not after the
mode according to the documentation at

Link: https://developer.arm.com/documentation/dui0489/h/arm-and-thumb-instructions/srs

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen 18ec84803c arch: arm: aarch32: Use ARRAY_SIZE in for loop
Do not hardcode the array size in the loop for printing out the floating
point registers of the exception stack frame.  The size of this array
will change when Cortex-R support is added.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Krzysztof Chruscinski ae4adea463 arch: arm: cortex_m: z_arm_pendsv in vector table when multithreading
When CONFIG_MULTITHREADING=n, the kernel-specific PendSV handler is not
used. Remove it from the vector table.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-20 16:00:39 +02:00
Bradley Bolen 6734c6e874 arch: arm: aarch32: Fix spurious interrupt handling
The GIC can return 0x3ff to indicate a spurious interrupt.  Other
interrupt controllers could return something different.  Check that the
pending interrupt is valid in order to avoid indexing past the end of
the isr_table.

This fixes #30465 and is based on the aarch64 fix in 9dd2731d.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 08:30:41 -04:00
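A sketch of the guard described in the commit above (not the exact Zephyr dispatch code): the ID read from the interrupt controller is only used as an isr_table index after a bounds check, which also filters out the GIC's 0x3ff spurious value.

    #include <sw_isr_table.h>
    #include <stdint.h>

    void dispatch_irq(uint32_t irq)
    {
        /* 0x3ff from the GIC (or any other out-of-range ID) is ignored
         * instead of indexing past the end of _sw_isr_table. */
        if (irq >= CONFIG_NUM_IRQS) {
            return;
        }

        _sw_isr_table[irq].isr(_sw_isr_table[irq].arg);
    }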
Krzysztof Chruscinski 8bee027ec4 arch: arm: Unconditionally compile IRQ_ZERO_LATENCY flag
The flag was present only when ZLI was enabled. That resulted in
additional ifdefs being needed whenever code supports both ZLI and
non-ZLI modes.

Remove the ifdefs and add a build assert to IRQ connections to fail at
compile time if IRQ_ZERO_LATENCY is set but ZLI is disabled. Additional
clean up resulted from removing the ifdef.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-12 07:33:27 -04:00
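The compile-time check mentioned in the commit above could take roughly this shape. This is illustrative only: MY_UART_IRQ_FLAGS is a made-up example, the include is approximate, and the real assert lives inside Zephyr's IRQ connection macros.

    #include <zephyr.h>

    #define MY_UART_IRQ_FLAGS IRQ_ZERO_LATENCY  /* hypothetical flags for some IRQ */

    /* Fail the build if a zero-latency IRQ is requested while ZLI
     * support is not enabled in Kconfig. */
    BUILD_ASSERT(!(MY_UART_IRQ_FLAGS & IRQ_ZERO_LATENCY) ||
                 IS_ENABLED(CONFIG_ZERO_LATENCY_IRQS),
                 "IRQ_ZERO_LATENCY requires CONFIG_ZERO_LATENCY_IRQS");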
Flavio Ceolin 4f5460ad6a arch: arm: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Ioannis Glaropoulos d307bd2fdd arm: add note explaining why Hard ABI is disabled for tfm builds
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Øyvind Rønningstad 80a351e22d arch: arm: Disallow FP_HARDABI when building with TFM
When building with TFM, the app is linked with libraries built by the
TFM build system. TFM is always built with -msoft-float which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard
which gives errors when linking with the libs from TFM since they are
built with a different ABI.

Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Flavio Ceolin 95cd021cea arch: arm: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Carlo Caione 3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Kumar Gala 520ebe4d76 arch: arm: remove compat headers
These compat headers have been moved since at least v2.4.0 release so we
can now remove them.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-25 16:40:25 +01:00
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread. It is
the counterpart of the existing k_float_disable() API. It also adds an
empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already implement
arch_float_enable(), so those implementations are not touched.

Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
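A usage sketch of the new API pair described above, letting an already-running thread (here the current one) start using the FPU; the error handling is illustrative.

    #include <kernel.h>

    void use_fpu_in_current_thread(void)
    {
        /* Tag the running thread so the kernel preserves its FP context. */
        int err = k_float_enable(k_current_get(), K_FP_REGS);

        if (err != 0) {
            /* The architecture does not support enabling the FPU at run time. */
            return;
        }

        /* ... floating point work ... */

        k_float_disable(k_current_get());
    }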
Kumar Gala 95e4b3eb2c arch: arm: Add initial support for Cortex-M55 Core
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M‑profile Vector Extension (MVE).

The support is based on the Cortex-M33 support that already exists in
Zephyr.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-23 13:13:32 -05:00
Anas Nashif 771cc9705c clock: z_clock_isr -> sys_clock_isr
Do not use z_ for internal APIs; z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Carlo Caione f3d11cccf4 aarch64: userspace: Enable userspace
Add ARCH_HAS_USERSPACE to enable userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 2936998591 aarch64: GCC10: Add -mno-outline-atomics
GCC 10 introduced out-of-line helper calls to implement atomic
operations, enabled by default via the '-moutline-atomics' option. This
is breaking several tests because the embedded calls try to access the
zephyr_data region, declared as MT_P_RW_U_NA, from userspace,
triggering a memory fault.

Since there is currently no support for MT_P_RW_U_RO (and probably never
will be), disable the out-of-line helpers by disabling the GCC option.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 8cbd9c7d8e aarch64: userspace: Add missing entries in vector table
To support exceptions taken in EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 1347fdbca7 aarch64: userspace: Increase KOBJECT_TEXT_AREA
This is needed to have some tests run successfully.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre 2b5b054b0b aarch64: userspace: bump the global number of available page tables
Each memory domain requires a few pages for itself.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione b52f769908 aarch64: mmu: Fix MMU permissions for zephyr code and data
User threads still need to access the code and the RO data. Fix the
permissions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre a74f378cdc aarch64: mmu: apply domain switching on all CPUs if SMP
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.

All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione ec70b2bc7a aarch64: userspace: Add support for page tables swapping
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Kumar Gala 7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed.  So remove
arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Carles Cufi 59a51f0e09 debug: Clean up thread awareness data sections
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-03-17 14:43:01 -05:00
Nicolas Pitre f062490c7e aarch64: mmu: add TLB flushing on mapping changes
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if more fine-grained TLB flushing is worth
the added complexity given this ought to be a relatively rare event.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Carlo Caione a010651c65 aarch64: mmu: Add initial support for memory domains
Introduce the basic support code for memory domains. Each domain
is associated with a top page table which is a copy of the global kernel
one. When a partition is added, the corresponding memory range is made
private before its mapping is adjusted.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre c77ffebb24 aarch64: mmu: apply proper locking
We need to protect against concurrent modifications to page tables and
their use counts.

It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre e4cd3d4292 aarch64: mmu: code to split/combine page tables
Two scenarios are possible.

privatize_page_range:

Affected pages are made private if they're not already. This means a whole
new page branch starting from the top may be allocated and content
shared with the reference page tables, except for the private range
where content is duplicated.

globalize_page_range:

That's the reverse operation, where pages for a given range are shared
with the reference page tables and pages that are no longer needed are freed.

When changing a domain mapping the range needs to be privatized first.

When changing a global mapping the range needs to be globalized last.

This way page table sharing across domains is maximized and memory
usage remains optimal.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre 402636153d aarch64: mmu: factor out table expansion code
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Peng Fan b4f5b9e237 aarch64: reset: initialize CNTFRQ_EL0 in the highest EL
CNTFRQ_EL0 can only be written at the highest Exception level implemented.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.

Also move z_arm64_el_highest_plat_init to be called when is_el_highest is true.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-11 12:24:18 +01:00
Carlo Caione dacd176991 aarch64: userspace: Implement syscalls
This patch adds the code managing the syscalls. The privileged stack
is set up before jumping into the real syscall.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Nicolas Pitre f2995bcca2 aarch64: arch_buffer_validate() implementation
This leverages the AT (address translation) instruction to test for
given access permission. The result is then provided in the PAR_EL1
register.

Thanks to @jharris-intel for the suggestion.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
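The AT-based permission test described above can be sketched as follows; this is illustrative, not the actual arch_buffer_validate() implementation. The idea is to translate the address as if accessed from EL0 and check the F bit of PAR_EL1.

    #include <stdbool.h>
    #include <stdint.h>

    static bool el0_can_read(const void *addr)
    {
        uint64_t par;

        /* Stage-1 translation as an EL0 read access. */
        __asm__ volatile ("at s1e0r, %0" : : "r" (addr) : "memory");
        __asm__ volatile ("isb; mrs %0, par_el1" : "=r" (par));

        /* PAR_EL1.F (bit 0) is clear when the translation succeeded,
         * i.e. EL0 has read permission for this address. */
        return (par & 1U) == 0U;
    }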
Carlo Caione 9ec1c1a793 aarch64: userspace: Introduce arch_user_string_nlen
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7a3e800bf aarch64: fatal: Restrict oops-es when in user-mode
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 6978160427 aarch64: userspace: Introduce arch_is_user_context
The arch_is_user_context() function is relying on the content of the
tpidrro_el0 register to determine whether we are in user context or not.

This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
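A sketch of the check described in the commit above (names are illustrative, not the exact Zephyr implementation): the kernel writes a non-zero value to TPIDRRO_EL0 while running in EL1 and clears it before dropping back to a user thread, so reading the register tells us which context we are in.

    #include <stdbool.h>
    #include <stdint.h>

    static inline bool in_user_context(void)
    {
        uint64_t flag;

        __asm__ volatile ("mrs %0, tpidrro_el0" : "=r" (flag));

        /* 0 while user code runs, non-zero ('1') while in EL1. */
        return flag == 0U;
    }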
Carlo Caione 6cf0d000e8 aarch64: userspace: Introduce skeleton code for user-threads
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7d3d2e0b1 aarch64: fatal: Add arch_syscall_oops hook
Add the arch_syscall_oops hook for the AArch64.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
James Harris 4e1926d508 arch: aarch64: do EL2 init in EL3 if necessary
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.

This in particular causes problems when different
cores have different virtual timer offsets.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-10 06:50:36 -05:00
Carlo Caione 8388794c9b aarch64: Rename z_arm64_get_cpu_id macro
z_arm64_* prefix should not be used for macros. Rename it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Carlo Caione bdbe33b795 aarch64: Rework {inc,dec}_nest_counter
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.

The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.

The second problem is that, since these are macros, the clobbered
registers should be specified at the calling site; this is not possible
given the current implementation.

To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Erwan Gouriou 19314514e6 arch/arm: cortex_m: Disable DWT based null-pointer exception detection
Null-pointer exception detection using DWT is currently incompatible
with the current openocd runner default implementation, which leaves
debug mode on by default.
As a consequence, on all targets that use the openocd runner,
null-pointer exception detection using DWT will generate an assert,
and all tests fail on such platforms.

Disable this until the openocd behavior is fixed (#32984) and enable
the MPU-based solution for now.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2021-03-08 19:19:14 -05:00
Peng Fan e27c9c7c52 arch: arm64: select SCHED_IPI_SUPPORTED when SMP enabled
Select SCHED_IPI_SUPPORTED when SMP is enabled.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan a2ea20dd6d arch: arm: aarch64: add SMP support
With timer/GIC/cache support added, we can now add SMP support and
bring up the cores.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 14b9b752be arch: arm: aarch64: add arch_dcache_range
Add arch_dcache_range to support flush and invalidate

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan e10d9364d0 arch: arm64: irq/switch: accessing nested using _cpu_t
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first CPU. Since we are going to support SMP, we need to
access nested per CPU.

To get the current CPU, introduce z_arm64_curr_cpu for asm usage,
because arch_curr_cpu cannot be used in asm code.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 251b1d39ac arch: arm: aarch64: export z_arm64_mmu_init for SMP
Export z_arm64_mmu_init for SMP usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 6182330fc3 arm: core: aarch64: save switch_handle
Save old_thread to switch_handle for wait_for_thread usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Ioannis Glaropoulos 191c3088af arm: cortex_m: fix arguments to dwt_init() function
Fix the call to z_arm_dwt_init(), remove the NULL argument.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-05 18:13:22 -06:00
Carlo Caione 9d908c78fa aarch64: Rewrite reset code using C
There is no strict reason to use assembly for the reset routine. Move as
much code as possible to C code using the proper helpers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione bba7abe975 aarch64: Use helpers instead of inline assembly
No need to rely on inline assembly when helpers are available.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione a2226f5200 aarch64: Fix registers naming in cpu.h
The names for registers and bit-fields in the cpu.h file are incoherent
and messy. Refactor the whole file using proper suffixes for bits,
shifts and masks.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Nicolas Pitre 0c45b548e2 aarch64: rationalize exception entry/exit code
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routines, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.

To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as this is the most logical location to go with its
z_arm64_enter_exc counterpart.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-03 16:26:40 +03:00
Ioannis Glaropoulos f1a27a8189 arm: cortex_m: assert if DebugMonitor exc is enabled in debug mode
Assert if the null pointer de-referencing detection (via DWT) is
enabled when the processor is in debug mode, because the debug
monitor exception can not be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core to be in normal mode.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 77c76a3b79 arm: cortex_m: build time assert for null-pointer exception page size
We introduce build-time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to catch that the user-supplied value is, as required
by the Kconfig symbol specification, a power of 2.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions,
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.

We also add a run-time ASSERT for the MPU-based
implementation on ARMv8-M platforms, which requires
that the null pointer exception detection page is
already mapped by the MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
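The power-of-two build-time check mentioned above can be expressed with a standard bit trick; this is a sketch of the idea, not the exact assert added by the commit.

    #include <zephyr.h>

    /* A value is a power of 2 exactly when it has a single bit set,
     * i.e. value & (value - 1) == 0 (and value != 0). */
    BUILD_ASSERT((CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE > 0) &&
                 ((CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE &
                   (CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE - 1)) == 0),
                 "null-pointer exception page size must be a power of 2");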
Ioannis Glaropoulos 1db78aae73 arm: cortex_m: ensure DebugMonitor is targeting Secure domain
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non-Secure domain.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 1b22f6b8c8 arm: cortex_m: enable null-pointer exception detection in the tests
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on the qemu_cortex_m3
board, as the DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU-based
solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but QEMU still permits
read access.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos d86d2c6f65 arm: cortex_m: implement null pointer exception detection with MPU
Implementation of the null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming an MPU region to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with a
No-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution can
not be used (ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 66ef96fded arm: cortex_m: add vector table padding for null pointer detection
Padding is inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 0bac92db96 arm: cortex-m: null pointer detection additions for ARMv8-M
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 3054c1351a arm: cortex_m: null-pointer exception detection via DWT
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos f97ccd940c arm: cortex-m: build debug.c for null-pointer detection feature
When we enable the null pointer exception detection feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace unit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos c42a8d9d24 arm: cortex_m: fault: hook up debug monitor exception handler
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
  is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
  exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 712a7951db arm: cortex_m: move static inline DWT functions in internal header
Move the DWT utility functions, currently present in timing.c,
into an internal cortex-m header.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos b3cd5065eb arm: cortex_m: Kconfig symbols for null pointer detection feature
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Carlo Caione eb72b2d72a aarch64: smccc: Retrieve up to 8 64-bit values
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from a
SMC64 call from AArch64 state.

Extend the number of possible return values from 4 to 8.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
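For reference, the shape of a result container wide enough for the extended SMC64 return values described above; the struct and field names here are illustrative and may not match Zephyr's actual SMCCC result layout.

    /* Sketch: registers x0..x7 carry up to eight 64-bit return values
     * from an SMC64 call under current TF-A / SMCCC. */
    struct smc64_result {
        unsigned long a0, a1, a2, a3;   /* original four return values */
        unsigned long a4, a5, a6, a7;   /* extended return values      */
    };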
Carlo Caione bc7cb75a82 aarch64: smccc: Use offset macros
Instead of relying on hardcoded offset in the assembly code, introduce
the offset macros to make the code more clear.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione 998856bacb aarch64: smccc: Update specs link
The link points to an outdated version. Update it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione 90859c6bf3 aarch64: smccc: Decouple PSCI from SMCCC
The current code assumes that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be made available regardless of whether PSCI is
used or not.

For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc.

This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helpers support and export that to drivers that require it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Nicolas Pitre 443e3f519e arm64: mmu: initialize early
This is fundamental enough that it better be initialized ASAP.
Many other things get initialized soon afterwards assuming the MMU
is already operational.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 9461600c86 aarch64: mmu: rationalize debugging output
Make it into a generic call that can be used in various places.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b40a2fdb8b aarch64: mmu: fix common MMU mapping
The location of __kernel_ram_start is too far, and the _app_smem .bss
areas are not covered. Use _image_ram_start instead.

The location of __kernel_ram_end is also way too far. We should stop at
_image_ram_end, where the expected unmapped area starts.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre fb3de16f0c aarch64: mmu: use a range (start..end) for common MMU mapping
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre cb49e4b789 aarch64: mmu: invert the MT_OVERWRITE flag
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.

One such case is platform provided mappings. Apply them after the
common kernel mappings and use the MT_NO_OVERWRITE on them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 56c77118d3 aarch64: mmu: factor out the phys argument out of set_mapping()
Minor cleanup.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre f53bd24a4d aarch64: mmu: move get_region_desc() closer to usage points
Simple code tidiness.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b696090bb7 aarch64: mmu: make page table pool global
There is no real reason to keep page tables in separate pools.
Make the pool global, which allows for more efficient memory usage and
simplifies the code.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 459bfed9ea aarch64: mmu: dynamic mapping support
Introduce a remove_map() to ... remove a mapping.

Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 861f6ce2c8 aarch64: a few trivial assembly optimizations
Removed some instructions when possible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-25 10:35:37 -05:00
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
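A small usage sketch of the two calls described in the commit above; `worker` is a hypothetical thread object created elsewhere.

    #include <kernel.h>

    extern struct k_thread worker;   /* hypothetical worker thread */

    void shutdown_worker(void)
    {
        /* Ask the kernel to terminate the worker ... */
        k_thread_abort(&worker);

        /* ... and block until it has fully stopped running, even if it
         * was executing on another CPU when aborted. */
        k_thread_join(&worker, K_FOREVER);
    }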
Ioannis Glaropoulos 8289b8c877 arch: arm: cortex_m: fix ASSERT expression in MemManage handler
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we are building without USERSPACE
and STACK GUARD support, in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support; that
is, we only assert if we came across a stacking error.

Data access violations can still occur even without user
mode or guards, e.g. when trying to write to Read-only
memory (such as the code region).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-23 11:29:49 +01:00
Carlo Caione 3f055058dc aarch64: Remove QEMU 'wfi' issue workaround
The problem is not reproducible when CONFIG_QEMU_ICOUNT=n. We can now
revert the commit aebb9d8a45.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-19 16:26:38 +03:00
Carlo Caione fadbe9d2f2 arch: aarch64: Add XIP support
Add the missing pieces to enable XIP for AArch64. XIP is simulated
on QEMU using the '-bios' parameter.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-17 14:13:10 +03:00
Carlo Caione b27bca4b45 aarch64: mmu: Remove SRAM memory region
Now that the arch_mem_map() is actually working correctly we can remove
the big SRAM region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-15 08:07:55 -05:00
Anas Nashif 1cea902fad license: add missing SPDX headers
Add missing SPDX header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Carlo Caione d6316aae27 aarch64: Fix corrupted IRQ state when tracing enabled
The call to sys_trace_idle() is potentially clobbering x0 resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
2021-02-10 10:16:03 -05:00
Ioannis Glaropoulos 8bc242ebb5 arm: cortex-m: add extra stack size for test build with FPU_SHARING
Additional stack is required for tests when building with FPU_SHARING
enabled, because the option may increase the ESF
stacking requirements for threads.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-05 11:41:25 -05:00
Ioannis Glaropoulos ef926e714b arm: cortex_m: fix vector table relocation in non-XIP builds
When VTOR is implemented on the Cortex-M SoC, we can
basically use any address (properly aligned) for the
vector table starting address. We fix the setting of
VTOR in prep_c.c for non-XIP images, in this commit,
so we do not need to always have the vector table be
present at the start of RAM (CONFIG_SRAM_BASE_ADDRESS)
and allow for extra linker sections being placed before
the vector table section.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 10:44:17 -05:00
Ioannis Glaropoulos 73288490f6 arm: cortex_m: log EXC_RETURN value in fatal.c
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos cafe04558c arm: cortex_m: make lazy FP stacking enabling dynamic
Under FPU sharing mode, any thread is allowed to generate
a Floating Point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with K_FP_REGS
option when they are created.

When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.

If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.

To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS and have
not generated an FP context use a default MPU guard and disable
lazy stacking. As long as these threads do not have an active FP
context, they won't stack FP registers on ISRs and
exceptions anyway, while they benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, when the threads that have an active FP context
will be tagged with K_FP_REGS, enable lazy stacking, and
program a wide MPU guard.

The implementation is a tradeoff between performance (ISR
latency) and memory consumption.

Note that when MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
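To illustrate the dynamic lazy-stacking behaviour described in the commit above, a hedged sketch of the underlying register toggle, assuming the CMSIS core definitions (FPU->FPCCR, FPU_FPCCR_LSPEN_Msk) are available; the real Zephyr helpers may be named and structured differently.

    #include <stdbool.h>

    /* Assumes a CMSIS core header providing FPU and FPU_FPCCR_LSPEN_Msk
     * has already been included. */
    static inline void fp_lazy_stacking_set(bool enable)
    {
        if (enable) {
            /* Thread has an active FP context: enable lazy state
             * preservation and pair it with a wide MPU guard. */
            FPU->FPCCR |= FPU_FPCCR_LSPEN_Msk;
        } else {
            /* No FP context yet: disable lazy stacking so the default
             * 32-byte MPU guard is sufficient. */
            FPU->FPCCR &= ~FPU_FPCCR_LSPEN_Msk;
        }
    }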
Ioannis Glaropoulos 2642eb28bf arm: cortex_m: force FP context stacking by default
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 56dd787627 arm: cortex_m: skip clearing CONTROL if this is done at boot
If the CONTROL register is cleared in reset.S we can skip
clearing the FPCA bit when enabling the floating point
support, to save a few instructions. The CONTROL
register is cleared right after boot, if the symbol
CONFIG_INIT_ARCH_HW_AT_BOOT is enabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Nicolas Pitre 7fcf5519d0 aarch64: mmu: cleanups and fixes
Major changes:

- move related functions together
- optimize add_map() not to walk the page tables *twice* on
  every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails

and make the code clearer overall.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-28 20:24:30 -05:00
Daniel Leung 0d099bdd54 linker: remove asterisk from IRQ/ISR section name macro
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with an asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in the linker script, and keep
the names asterisk free.

Fixes #29936

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-26 16:24:11 -05:00
Stephanos Ioannidis f769a03081 arch: arm: aarch32: Fix interrupt nesting
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.

This commit fixes the aforementioned bug by storing the lr_svc register
in the stack at the ISR entry, and restoring its value before exiting
the ISR.

For more details, refer to the issue #30517.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis c00169daba arch: arm: aarch32: Fix exception exit failures
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:

1. Invalid return address when calling `z_arm_pendsv` from the
   exception-specific mode

2. Caller-saved register is referenced after a call to `z_arm_pendsv`

For more details, refer to the issue #31511.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis d86fdb2154 arch: arm: aarch32: Update stale references to _IntExit
This commit updates the stale references to the `_IntExit` function in
the in-line documentation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Volodymyr Babchuk cd86ec2655 aarch64: add ability to generate image header
The image header is compatible with the Linux aarch64 boot protocol,
so Zephyr can be booted with U-Boot or the Xen loader.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
2021-01-24 13:59:55 -05:00
Daniel Leung d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Carlo Caione 57f7e31017 drivers: PSCI: Add driver and subsystem
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.

It is needed for virtualization: it is used to coordinate OSes and
hypervisors, and it provides the functions used for SMP bring-up such as
CPU_ON and CPU_OFF.

A new PSCI driver is introduced to setup a proper subsystem used to
communicate with the PSCI firmware, implementing the basic operations:
get_version, cpu_on, cpu_off and affinity_info.

The current implementation only supports PSCI 0.2 and PSCI 1.0.

The PSCI conduit (SMC or HVC) is set up by reading the corresponding
property in the DTS node.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-18 19:06:53 +01:00
Carlo Caione c5b898743a aarch64: Fix alignment fault on z_bss_zero()
Using newlibc with AArch64 is causing an alignment fault in
z_bss_zero() when the code is run on real hardware (on QEMU the problem
is not reproducible).

The main cause is that the memset() function exported by newlibc is
using 'DC ZVA' to zero out memory.

While this is often a nice optimization, this is causing the issue on
AArch64 because memset() is being used before the MMU is enabled, and
when the MMU is disabled all data accesses will be treated as
Device_nGnRnE.

This is a problem because quoting from the ARM reference manual: "If the
memory region being zeroed is any type of Device memory, then DC ZVA
generates an Alignment fault which is prioritized in the same way as
other alignment faults that are determined by the memory type".

newlibc tries to be a bit smart about this reading the DCZID_EL0
register before deciding whether using 'DC ZVA' or not. While this is a
good idea for code running in EL0, currently the Zephyr kernel is
running in EL1. This means that the value of the DCZID_EL0 register is
actually retrieved from the HCR_EL2.TDZ bit, that is always 0 because
EL2 is not currently supported / enabled. So the 'DC ZVA' instruction is
unconditionally used in the newlibc memset() implementation.

The "standard" solution for this case is usually to use a different
memset routine to be specifically used for two cases: (1) against IO
memory or (2) against normal memory but with MMU disabled (which means
all memory is considered device memory for data accesses).

To fix this issue in Zephyr we avoid calling memset() when clearing the
bss, and instead we use a simple loop to zero the memory region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-14 13:37:47 -08:00
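A minimal sketch of the workaround described above: clear the region with a plain word loop so no 'DC ZVA' is issued while the MMU is still off. The linker symbol names are assumptions for illustration.

    #include <stdint.h>

    extern char __bss_start[];   /* assumed linker-provided symbols */
    extern char __bss_end[];

    void bss_zero(void)
    {
        uint64_t *p = (uint64_t *)__bss_start;

        /* Simple aligned word loop; no cache-maintenance instructions,
         * so it is safe with the MMU (and caches) disabled. */
        while (p < (uint64_t *)__bss_end) {
            *p++ = 0;
        }
    }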
Ioannis Glaropoulos c6c14724ba arch: arm: cortex_m: fix stack overflow error detection
In rare cases when a thread may overflow its stack, the
core will not report a Stacking Error. This is the case
when a large stack array is created, making the PSP cross
beyond the stack guard; in this case a MemManage fault
won't cause a stacking error (but only a Data Access
Violation error). We fix the fault handling logic so
such errors are reported as stack overflows and not as
generic CPU exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-01-14 12:35:47 +01:00
Ioannis Glaropoulos 202c2fde54 arch: arm: cortex_m: do not read MMFAR if MMARVALID is not set
When the MMARVALID bit is not set, do not read the MMFAR
register to get the fault address in a MemManage fault.
This change prevents the fault handler from erroneously
assuming MMFAR contains a valid address.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-01-14 12:35:47 +01:00
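A sketch of the guard described in the commit above, using the CMSIS register names (SCB->CFSR, SCB->MMFAR): the fault address is only taken from MMFAR when the MMARVALID bit says it is valid.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumes a CMSIS core header providing SCB and SCB_CFSR_MMARVALID_Msk
     * has already been included. */
    static uint32_t memmanage_fault_address(bool *valid)
    {
        if (SCB->CFSR & SCB_CFSR_MMARVALID_Msk) {
            *valid = true;
            return SCB->MMFAR;
        }

        /* MMFAR content is unspecified here; do not report it. */
        *valid = false;
        return 0U;
    }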
Julien Massot d3345dd54d arch: arm: Add Cortex-R7 support
Pass the correct -mcpu flags to the compiler when building for the
Cortex-R7.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2021-01-13 15:04:43 +01:00
Carlo Caione e710d36f77 aarch64: mmu: Enable CONFIG_MMU
Enable CONFIG_MMU for AArch64 and add the new arch_mem_map() required
function.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione 6a3401d6be aarch64: mmu: Fix variable types
Before hooking up the MMU driver code to the Zephyr MMU core code it's
better to match the expected variable types of the two parts.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione 0a0061d901 aarch64: mmu: Do not assume a single set of pagetables is used
The MMU code is currently assuming that Zephyr only uses one single set
of page tables shared by kernel and user threads. This could possibly
no longer be true in the future, when multiple sets of page tables can
be present and swapped at run-time.

With this patch a new arm_mmu_ptables struct is introduced that is used
to host a buffer pointing to the memory region containing the page
tables and the helper variables used to manage the page tables. This new
struct is then used by the ARM64 MMU code instead of assuming that the
kernel page tables are the only ones present.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione 1a4a73da97 aarch64: mmu: Makes memory mapping functions more generic
The ARM64 MMU code used to create the page tables is strictly tied to
the custom arm_mmu_region struct. To be able to hook up this code to the
Zephyr MMU APIs we need to make it more generic.

This patch makes the mapping function more generic and creates a new
helper function add_arm_mmu_region() to map the regions defined by the
old arm_mmu_region structs using this new generic function.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione 2581009c3e aarch64: mmu: Move xlat tables to one single array
In the current code the base xlat table is a standalone array. This is
done because we know at compile time the size of this table so we can
allocate the correct size and save a bit of memory. All the other xlat
tables are statically allocated in a different array with full size.

With this patch we move all the page tables into one single array,
including the base table. This is probably going to waste a bit of space
but it makes it easier to:

- have all the page tables mapped in one single contiguous memory region
  instead of having to take care of two different arrays in two
  different locations
- duplicate the page tables more quickly if we need to
- use a pre-allocated space to host the page tables
- use a pre-computed set of page tables saved in a contiguous memory
  region

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione 1d68c48786 aarch64: mmu: Fix typo in mask definition
Fix typo in mask definition (s/UPPER/LOWER)

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-06 08:18:27 -06:00
Carlo Caione 181088600e aarch64: mmu: Avoid creating a new table when not needed
In the current MMU code a new table is created when mapping a memory
region that is overlapping with a block already mapped. The problem is
that the new table is also created when the new and old mappings have
the same attributes.

To avoid using a new table when not needed the attributes of the two
mappings are compared before creating the new table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-04 23:53:04 -08:00
Wojciech Sipak 56c06e852b arch: arm: cortex_r: disable ECC on TCMs
This commit adds the possibility to disable ECC in the Tightly Coupled
Memory in Cortex-R.
The linker script places stacks in this memory and marks it as a
.noinit section. With ECC enabled, stack read accesses without a
previous write result in a Data Abort exception.

Signed-off-by: Wojciech Sipak <wsipak@antmicro.com>
2020-12-27 18:16:00 +01:00
Peng Fan cca070c80a arch: arm64: mmu: support using MT_NS attribute
Depending on CONFIG_ARMV8_A_NS, either MT_SECURE or MT_NS should be
used; to simplify the code change, use MT_DEFAULT_SECURE_STATE instead.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-17 08:08:00 -05:00
Peng Fan 0d0f168460 arm: aarch64: Kconfig: introduce ARMV8_A_NS
Add a new Kconfig entry, ARMV8_A_NS, for when Zephyr runs in the normal world.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-17 08:08:00 -05:00
Luke Starrett 430952f0b2 arch: arm64: GICv2/v3 handling causes abort on spurious interrupt
In _isr_wrapper, the interrupt ID read from the GIC is blindly used to
index into _sw_isr_table, which is only sized based on CONFIG_NUM_IRQ.

It is possible for both GICv2 and GICv3 to return 1023 for a handful
of scenarios, the simplest of which is a level sensitive interrupt
which has subsequently become de-asserted.  Borrowing from the Linux
GIC implementation, a read that returns an interrupt ID of 1023 is
simply ignored.

Minor collateral changes to gic.h to group !_ASMLANGUAGE content
together to allow this header to be used in assembler files.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-12-16 08:46:03 -05:00
Carlo Caione 0d1a6dc27f intc: gic: Use SYS_INIT instead of custom init function
The GIC interrupt controller driver is using a custom init function
called directly from the prep_c function. For consistency move that to
use SYS_INIT.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-12-11 10:17:27 -05:00
Anas Nashif c10d4b377d power: move z_pm_save_idle_exit prototype to power.h
Maintain power prototypes in power.h instead of kernel and arch headers.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif e0f3833bf7 power: remove SYS_ and sys_ prefixes
Remove SYS_ and sys_ from all PM related functions and defines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif dd931f93a2 power: standarize PM Kconfigs and cleanup
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE

and use PM_ as the prefix for all PM related Kconfigs

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Carlo Caione 7e36bd31fe arch: aarch64: Use SP_EL0 instead of SP_ELx
ARM64 is currently using SP_ELx as stack pointer for kernel and threads
because everything is running in EL1. If support for EL0 is required, it
is necessary to switch to SP_EL0 instead, which is the only stack
pointer that can be accessed at all exception levels by threads.

While it is not required to keep using SP_EL0 also during the
exceptions, the current code implementation makes it easier to use the
same stack pointer as the one used by threads also during the
exceptions.

This patch moves the code from using SP_ELx to SP_EL0 and fill in the
missing entries in the vector table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-12-04 08:13:42 -05:00
Alexandre Bourdiol 4cf1d4380e arch: arm: aarch32:cortex_m: timing.c: cortex M7 may need DWT unlock
On Cortex M7, we need to check the optional presence of
Lock Access Register (LAR) which is indicated in
Lock Status Register (LSR).
When present, a special access token must be written to unlock DWT
registers.

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2020-12-02 10:58:08 +02:00
Peng Fan 346ecb2a5a arch: arm64: correct vector_table alignment
The 2K alignment assembler directives should be under
'SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)'
Otherwise the _vector_table is actually only 0x80-byte aligned.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-01 10:04:59 -06:00
Krzysztof Chruscinski 3ed8083dc1 kernel: Cleanup logger setup in kernel files
Most kernel files were declaring the os log module without providing
a log level. Because of that, the default log level was used instead of
CONFIG_KERNEL_LOG_LEVEL.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-11-27 09:56:34 -05:00
Carlo Caione 47ebde30b9 aarch64: error: Handle software-generated fatal exceptions
Introduce a new SVC call ID to trigger software-generated CPU
exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-25 12:02:11 +02:00
Håkon Øye Amundsen 2ce570b03f arch: arm: clear SPLIM registers before z_platform_init
Allow z_platform_init to perform stack operations.

Signed-off-by: Håkon Øye Amundsen <haakon.amundsen@nordicsemi.no>
2020-11-24 20:53:49 +02:00
Andrew Boie 5a58ad508c arch: mem protect Kconfig cleanups
Adds a new CONFIG_MPU which is set if an MPU is enabled. This
is a menuconfig with some MPU-specific options moved
under it.

MEMORY_PROTECTION and SRAM_REGION_PERMISSIONS have been merged.
This configuration depends on an MMU or MPU. The protection
test is updated accordingly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-18 08:02:08 -05:00
Andrew Boie 00cdb597ff arm: de-couple MPU code from k_mem_partition
k_mem_partition is part of the CONFIG_USERSPACE abstraction,
but some older MPU code was depending on it even if user mode
isn't enabled. Use a new structure z_arm_mpu_partition instead,
which will insulate this code from any changes to the core
kernel definition of k_mem_partition.

The logic in z_arm_configure_dynamic_mpu_regions has been
adjusted to copy the necessary information out of the
memory domain instead of passing the addresses of the domain
structures directly to the lower-level MPU code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-18 08:02:08 -05:00
Carlo Caione f095e2fd05 arch: arm64: mmu: Rework defines
Every time I try to decode all the defines in this driver, what I get
is only a huge headache. This patch:

- adds a few sensible comments
- removes the redundant defines
- renames the defines to be more self-explanatory
- reorders the defines
- tries to make sense of some mysterious derived values

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 19:04:25 -05:00
Carlo Caione 96f574c7a4 aarch64: Use macro-generated absolute symbols for the ESF
As done already for other structs, use the macro-generated offsets when
referencing registers in the ESF.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione daa94e5e59 aarch64: Remove redundant init_stack_frame
The init_stack_frame is the same as the ESF. No need to have two
separate structs. Consolidate everything into one single struct and make
register entries explicit.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione a7d94b003e aarch64: Use absolute symbols for the callee saved registers
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione 666974015e aarch64: error: Introduce CONFIG_EXCEPTION_DEBUG
Introduce CONFIG_EXCEPTION_DEBUG to discard exception debug strings and
code when not needed.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione 2683a1ed97 aarch64: error: Enable recoverable errors
For some kinds of faults we want to be able to take corrective
action and keep executing the code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione a054e424e4 aarch64: error: Beautify error printing
Make the printing of errors a bit more descriptive and print the FAR_ELn
register only when strictly required.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione 6d05c57781 arch: aarch64: Branch SError vector table entry
Each vector table entry has 128 bytes to host the vector code. This is
not always enough, and in general it's better to branch to the actual
exception handler elsewhere in memory.

Move the SError entry to a branched code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Wentong Wu bfc7785da0 arch: arm: push ssf to thread privileged stack to complete stack frame
Pushes the seventh argument, named ssf, to the thread's privileged
stack to follow the syscall prototype below.

uintptr_t z_mrsh_xx(uintptr_t arg0, uintptr_t arg1, uintptr_t arg2,
		    uintptr_t arg3, uintptr_t arg4, void *more, void *ssf)

Fixes: #29386.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-11-12 17:12:38 -05:00
Daniel Leung 11e6b43090 tracing: roll thread switch in/out into thread stats functions
Since the tracing of a thread being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions.
This avoids duplicating code to call another function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Daniel Leung 9be37553ee timing: do not repeatedly do init()/start()/stop()
We should not be initializing/starting/stopping the timing functions
multiple times. So this changes how the timing functions are
structured to allow only one initialization, only starting when
stopped, and only stopping when started.
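
A minimal sketch of the guarding logic described above (the flag names
are illustrative, and the arch_timing_*() hooks are assumed to be
provided by the architecture layer):

	#include <stdbool.h>

	/* assumed to be provided by the architecture layer */
	extern void arch_timing_init(void);
	extern void arch_timing_start(void);
	extern void arch_timing_stop(void);

	static bool timing_initialized;
	static bool timing_started;

	void timing_init(void)
	{
		if (timing_initialized) {
			return;		/* initialize only once */
		}
		timing_initialized = true;
		arch_timing_init();
	}

	void timing_start(void)
	{
		if (timing_started) {
			return;		/* only start when stopped */
		}
		timing_started = true;
		arch_timing_start();
	}

	void timing_stop(void)
	{
		if (!timing_started) {
			return;		/* only stop when started */
		}
		timing_started = false;
		arch_timing_stop();
	}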

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Carlo Caione 3afb493858 arch: arm64: mmu: Restore SRAM region
In a5f34d85c2 ("soc: arm: qemu_cortex_a53: Remove SRAM region") the
SRAM memory region was removed.

While this is correct when userspace is not enabled, when userspace is
enabled new regions are introduced outside the boundaries of
the mapped [__kernel_ram_start,__kernel_ram_end] region. This means that
we need to map the whole SRAM again to include all the needed regions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-11 08:21:53 -05:00
Carlo Caione 7b7c328f7a aarch64: mmu: Enable support for unprivileged EL0
The current MMU code assumes that both the kernel and threads are
running in EL1, with no support for EL0. Extend the support to EL0 by
adding the missing attribute to mirror the access / execute permissions
to EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Carlo Caione fd559f16a5 aarch64: mmu: Create new header file
The MMU code is polluted by a lot of preprocessor code. Move that code
to a proper header file.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Carlo Caione b9a407b65c aarch64: mmu: Move MMU files in a sub-directory
We are probably going to do more work on the MMU side and more files
will be added. Create a new sub-directory to host all the MMU related
files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Ioannis Glaropoulos 47e87d8459 arch: arm: cortex_m: implement functionality for ARCH core regs init
Implement the functionality for configuring the
architecture core registers to their warm reset
values upon system initialization. We enable the
support of the feature in the Cortex-M architecture.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-11-02 15:02:24 +01:00
Ioannis Glaropoulos 89658dad19 arch: arm: aarch32: cortex_m: improve documentation of z_arm_reset
We enhance the documentation of z_arm_reset, stressing that
the function may either be loaded by the processor coming
out of reset, or by another image, e.g. a bootloader. We
also specify what is required at minimum when executing the
reset function.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-11-02 15:02:24 +01:00
Carlo Caione b3ff89bd51 arch: arm64: Remove _BIT suffix
This is redundant and not coherent with the rest of the file. Thus
remove the _BIT suffix from the bit field names.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione 24c907d292 arch: arm64: Add missing vector table entries
The current vector table is missing some (not used) entries. Fill these
in for the sake of completeness.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione f0b2e3d652 arch: arm64: Use mov_imm when possible in the start code
Instead of relying on mov.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione 673803dc48 arch: arm64: Rename z_arm64_svc to z_arm64_sync_exc
The SVC handler is not only used for the SVC call but in general for all
the synchronous exceptions. Reflect this in the handler name.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione e738631ddf arch: arm64: Fix indentation
Fix indentation for the ISR wrapper.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione 78b5e5563d arch: arm64: Reword comments
Fix, reword and rework comments.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione 9e897ea2c3 arch: arm64: Remove unused macro parameters
Remove z_arm64_{enter,exit}_exc parameter leftovers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Daniel Leung 388725870f arm: cortex_m: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Note that since Cortex-M does not have the thread ID or
process ID register needed to store the TLS pointer at runtime
for the toolchain to access thread data, a global variable is
used instead.
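
A sketch of the scheme (the global's name is illustrative): the
context-switch code publishes the TLS area of the thread being switched
in, and the ARM EABI helper that the compiler calls for __thread
accesses simply returns it:

	#include <stdint.h>

	/* Updated by the context-switch code with the TLS area of the
	 * thread being switched in (illustrative name).
	 */
	uintptr_t z_arm_tls_ptr;

	/* The toolchain emits calls to this EABI helper to resolve
	 * __thread variable accesses at run time.
	 */
	void *__aeabi_read_tp(void)
	{
		return (void *)z_arm_tls_ptr;
	}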

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung 778c996831 arm: cortex_r: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung df77e2af8b arm64: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Carlo Caione 645082791b arch: aarch64: Catch early errors in EL3 and EL1
Setup the stack as early as possible to catch any possible errors in the
reset routine and handle also EL3 fatal errors.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-12 12:22:15 -04:00
Carlo Caione 758fb93b0b arch: arm64: Remove useless assembly code
The content of the SCR_EL3 register is overwritten by a later
instruction. Also no need to route SError, IRQs and FIQs to EL3.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-12 12:22:15 -04:00
Carlo Caione 06adb96c1a arch: arm64: Introduce {inc,dec}_nest_counter macros
The same code is used in several places / files. Make a macro out of it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-06 10:25:56 -04:00
Carlo Caione 2f3962534a arch: arm64: Remove new thread entry wrapper
Instead of having some special stack frame when first scheduling new
thread and a new thread entry wrapper to pull out the needed data, we
can reuse the context restore code by adapting the initial stack frame.

This reduces the lines of code and simplifies the code at the expense
of a slightly bigger initial stack frame.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-06 10:25:56 -04:00
Aastha Grover 83b9f69755 code-guideline: Fixing code violation 10.4 Rule
Both operands of an operator in the arithmetic conversions
performed shall have the same essential type category.

The changes are related to converting integer constants to
unsigned integer constants.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-10-01 17:13:29 -04:00
Carlo Caione 871bdd0712 arch: arm64: Deprecate booting from EL2
We are deprecating booting and running in EL2.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Carlo Caione fb2bf23ec1 arch: arm64: Remove EL2/EL3 code
Zephyr is only supposed to be running at EL1 (+ EL0). Now that we drop
to EL1 from ELn at start, we can remove all the unused EL2/EL3 code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Carlo Caione 7d40208ef7 arch: arm64: Remove CONFIG_SWITCH_TO_EL1
Remove the useless CONFIG_SWITCH_TO_EL1 since there should be no reason
to run Zephyr in EL3. So just drop to EL1 by default when booting from
EL3. Remove also non-reachable code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Luke Starrett 4800b03e56 arch: arm64: cosmetic changes to register dump
- Display full 64-bits register width in crash dumps
- Some values were prefixed 0x, some not.  Made consistent.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-10-01 07:29:27 -04:00
Luke Starrett 169e7c5e75 arch: arm64: Fix arm64 crash dump output
- x0/x1 register printing is reversed
- The error stack frame struct (z_arch_esf_t) had the SPSR and ELR in
  the wrong position, inconsistent with the order these regs are pushed
  to the stack in z_arm64_svc.  This caused all register printing to be
  skewed by two.
- Verified by writing known values (abcd0000 -> abcd000f) to x0 - x15
  and then forcing a data abort.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-10-01 07:29:27 -04:00
Andrew Boie f5a7e1a108 kernel: handle thread self-aborts on idle thread
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.

Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.

Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, that was currently worked
around with a busy-wait.

Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.

Some logic in z_thread_single_abort() has been rearranged such that
when we release sched_spinlock, the thread object pointer is never
dereferenced by the kernel again; join waiters
or fn_abort() logic may free it immediately.

An assertion has been added to z_thread_single_abort() to ensure
it never gets called with thread == _current outside of an ISR.

Some logic has been added to ensure z_thread_single_abort()
tasks don't run more than once.

Fixes: #26486
Related to: #23063 #23062

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Flavio Ceolin 27fcdaf71e arch: arm: Fix undefined symbol reference
_isr_wrapper is not defined when building with
CONFIG_GEN_SW_ISR_TABLE = n.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-29 12:36:33 +02:00
Ioannis Glaropoulos 3b89cf173b arch: arm: cortex-m: enable IRQs before main() in single-thread mode
Enable interrupts before switching to main()
in cortex-m builds with single-thread mode
(CONFIG_MULTITHREADING=n).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-29 10:47:43 +02:00
Øyvind Rønningstad 407ebf8132 cortex_m: secure_entry_functions.ld: Increase SAU alignment to 32
The spec requires SAU regions to be aligned on 32 bytes.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Øyvind Rønningstad 81e7608c03 arm: tz: secure_entry_functions.ld: Fix NSC_ALIGN for nRF devices
If the location counter ('.') is within the area that the veneers
should go, the current solution will give a linker error ("Cannot move
location counter backwards"). This patch places the veneers in the next
SPU region in this case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Øyvind Rønningstad 2b56b86190 arm: tz: secure_entry_functions.ld: Fix NSC_ALIGN redefinition
Allow CONFIG_ARM_NSC_REGION_BASE_ADDRESS to override the nRF-specific
logic for alignment.

Fixes issue https://github.com/zephyrproject-rtos/zephyr/issues/27544

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Ioannis Glaropoulos 14f248fe1b arch: arm: cortex_m: cleanup SW_VECTOR_RELAY_CLIENT dependencies
CPU Cortex-M implies Mainline Cortex-M; therefore, the dependency
on ARMV6_M_ARMV8_M_BASELINE is redundant and can be removed. The
change in this commit is a no-op.

We also add the ARMV6_M_ARMV8_M_BASELINE dependency on option
CPU_CORTEX_M0_HAS_VECTOR_TABLE_REMAP to make sure it cannot be
selected for non Cortex-M Baseline SoCs (at least, not without
a warning).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-21 11:19:22 +02:00
Crist Xu ac3d9438ed drivers: usb: Fix usb fail when using the on-chip memory
Use SCB_CleanInvalidateDcache instead of SCB_DisableDcache
& SCB_EnableDcache when configuring the non-cache area, in case
the cache affects the configuration of the non-cache
area.

Signed-off-by: Crist Xu <crist.xu@nxp.com>
2020-09-17 16:56:28 -05:00
Tomasz Bursztyka ed98883795 device: Fixing new left over device instance made constant
Recent addition that went under the radar.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-15 09:18:43 -05:00
Andrew Boie aebb9d8a45 aarch64: work around QEMU 'wfi' issue
Work around an issue where the emulator ignores host OS
signals when inside a `wfi` instruction.

This should be reverted once this has been addressed in the
AARCH64 build of QEMU in the SDK.

See https://github.com/zephyrproject-rtos/sdk-ng/issues/255

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-10 21:31:15 +02:00
Carlo Caione d6f608219c arm64: tracing: Fix double tracing
When _arch_switch() API is used, the tracing of the thread swapped out
is done in the C kernel code (in do_swap() for cooperative scheduling
and in set_current() during preemption). In the assembly code we only
have to trace the thread when swapped in.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-09-09 15:36:43 -04:00
Ioannis Glaropoulos 394d2912a1 arch: arm: cortex-m: implement timing.c based on DWT
For Cortex-M platforms with DWT we implement
the timing API (timing.c).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-05 13:28:38 -05:00
Ioannis Glaropoulos 6f84d7d3fd arch: arm: cortex_m: conditionally select ARCH_HAS_TIMING_FUNCTIONS
Cortex-M SoCs implement (optionally) the Data Watchpoint and
Tracing Unit (DWT), which can be used for timing functions.
Select the corresponding ARCH capability if the SoC implements
the DWT.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-05 13:28:38 -05:00
Anas Nashif 6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only: feeding timing information into a test,
and it was not used by anything else. The custom trace points
unfortunately were not accurate, and this test was delivering
information that conflicted with other tests we have, due to the
placement of such trace points in the architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Carlo Caione df4aa230c8 arch: arm64: Use _arch_switch() API
Switch to the _arch_switch() API that is required for an SMP-aware
scheduler instead of using the old arch_swap mechanism.

SMP is not supported yet but this is a necessary step in that direction.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-09-05 12:06:38 +02:00
Torsten Rasmussen c55c64e242 toolchain: improved toolchain abstraction for compilers and linker
First abstraction completed for the toolchains:
- gcc
- clang

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2020-09-04 20:36:59 +02:00
Øyvind Rønningstad 2be0086e87 cortex_m: tz_ns.h: Various fixes (late comments on PR)
Fix dox and restructure ASM.
No functional changes.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-04 19:05:58 +02:00
Pavel Král 06342e3474 arch: arm: mpu: Removal of include path pollution
Removes unnecessary and incorrect directories from include path.

Signed-off-by: Pavel Král <pavel.kral@omsquare.com>
2020-09-04 13:58:38 +02:00
Øyvind Rønningstad c00f33dcb0 arch: arm: cortex_m: Add tz_ns.h
Provide a TZ_SAFE_ENTRY_FUNC() macro for wrapping non-secure entry
functions in calls to k_sched_lock()/k_sched_unlock()

Provide a __TZ_WRAP_FUNC() macro which helps in creating a function
that "wraps" another in a preface and postface function call.

	int foo(char *arg); // Implemented somewhere else.
	int __attribute__((naked)) foo_wrapped(char *arg)
	{
		WRAP_FUNC(bar, foo, baz);
	}

is equivalent to

	int foo(char *arg); // Implemented somewhere else.
	int foo_wrapped(char *arg)
	{
		bar();
		int res = foo(arg);
		baz();
		return res;
	}

This commit also adds tests for __TZ_WRAP_FUNC().

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-04 11:58:41 +02:00
Andrew Boie ffc1da08f9 kernel: add z_thread_single_abort to private hdr
We shouldn't be copy-pasting extern declarations like this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie 3425c32328 kernel: move stuff into z_thread_single_abort()
The same code was being copy-pasted in the k_thread_abort()
implementations; just move it into z_thread_single_abort().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie e34ac286b7 arm: don't lock irqs during thread abort
This isn't needed; match the vanilla implementation
in kernel/thread_abort.c and do this unlocked. This
should improve system latency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie 0a99011357 arm: thread_abort: clarify what's going on
A check was being done that was a more obscure way of
calling arch_is_in_isr(). Add a comment explaining
why we need to trigger PendSV.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Ioannis Glaropoulos e08dfec77c arch: arm: cortex-m: add ARM-only API to set all IRQS to Non-Secure
We implement an ARM-only API for ARM Secure Firmware,
to set all NVIC IRQ lines to target the Non-Secure state.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-02 15:01:30 +02:00
Ioannis Glaropoulos 4ec7725110 arch: arm: cortex-m: Modify ARM-only API for IRQ target state mgmt
We modify the ARM Cortex-M-only API for managing the
security target state of the NVIC IRQs. We remove the
internal ASSERT checking, allowing the API to be called for
non-implemented NVIC IRQ lines. However, we still give the
user the option to check the success of the IRQ target
state setting operation by allowing the API function to
return the resulting target state.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-02 15:01:30 +02:00
Tomasz Bursztyka 93cd336204 arch: Apply dynamic IRQ API change
Switching to constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka 7def6eeaee arch: Apply IRQ offload API change
Switching to constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka e18fcbba5a device: Const-ify all device driver instance pointers
Now that the device_api attribute is unmodified at runtime, as well as
all the other attributes, it is possible to switch all device driver
instances to be constant.

A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
 disable optional_qualifier
@
@@
-struct device * const
+const struct device *

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Chris Coleman 99a268fa16 arch: arm: Collect full register state in Cortex-M Exception Stack Frame
To debug hard-to-reproduce faults/panics, it's helpful to get the full
register state at the time a fault occurred. This enables recovering
full backtraces and the state of local variables at the time of a
crash.

This PR introduces a new Kconfig option, CONFIG_EXTRA_EXCEPTION_INFO,
to facilitate this use case. The option enables the capturing of the
callee-saved register state (r4-r11 & exc_return) during a fault. The
info is forwarded to `k_sys_fatal_error_handler` in the z_arch_esf_t
parameter. From there, the data can be saved for post-mortem analysis.

To test the functionality a new unit test was added to
tests/arch/arm_interrupt which verifies the register contents passed
in the argument match the state leading up to a crash.

Signed-off-by: Chris Coleman <chris@memfault.com>
2020-08-31 10:13:27 +02:00
Andrew Boie 00f71b0d63 kernel: add CONFIG_ARCH_MEM_DOMAIN_SYNCHRONOUS_API
Saves us a few bytes of program text on arches that don't need
these implemented, currently all uniprocessor MPU-based systems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie 2222fa1426 arm: fix memory domain arch_ API implementations
All of these should be no-ops for the following reasons:

1. User threads cannot configure memory domains, only supervisor
   threads.
2. The scope of memory domains is user thread memory access,
   supervisor threads can access the entire memory map.

Hence it's never required to reprogram the MPU when a memory domain
API is called.

Fixes a problem where an assertion would fail if a supervisor thread
added a partition and then immediately removed it, and possibly
other problems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie 91f1bb5414 arm: clarify a memory domain assertion
Dump the partition information to make this assertion
less ambiguous.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Daniel Leung 181d07321f coredump: add support for ARM Cortex-M
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif b234660f4f tracing: cortex_a53: fix order of swap tracing
We had switched_in and switched_out mixed up.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Carlo Caione 310057d641 arch: arm64: Parametrize registers usage for z_arm64_{enter,exit}_exc
Make explicit which registers are going to be touched / modified when
using z_arm64_enter_exc and z_arm64_exit_exc.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-08-18 15:17:39 +02:00
Carlo Caione d187830929 arch: arm64: Rework registers allocation
Rationalize the registers usage trying to reuse the smallest set of
registers possible.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-08-18 15:17:39 +02:00
Andrew Boie ed972b9582 arm: remove custom k_thread_abort() for Cortex-R
The default implementation is the same as this custom
one now, as the assertion that the context switch occurs
at the end of the ISR is true for all arches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-18 08:36:35 +02:00
Henrik Brix Andersen e7f51fa918 arch: arm: aarch32: add support for Cortex-M1
Add support for the ARM Cortex-M1 CPU.

Signed-off-by: Henrik Brix Andersen <henrik@brixandersen.dk>
2020-08-14 13:35:39 -05:00
Anas Nashif ce59510127 arch: xip: cleanup XIP Kconfig
Unify how XIP is configured across architectures. Use imply instead of
setting defaults per architecture, imply XIP on the riscv arch, and
remove the XIP configuration from individual defconfig files to match
other architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-07 09:50:22 -04:00
Ioannis Glaropoulos fa04bf615c arch: arm: cortex-m: hw stack protection under no multi-threading
This commit adds the support for HW Stack Protection when
building Zephyr without support for multi-threading. The
single MPU guard (if the feature is enabled) is set to
guard the Main stack area. The stack fail check is also
updated.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos 4338552175 arch: arm: cortex-m: introduce custom switch to main function
For the case of building Zephyr with no-multithreading
support (CONFIG_MULTITHREADING=n) we introduce a
custom (ARCH-specific) function to switch to main()
from cstart(). This is required, since the Cortex-M
initialization code is temporarily using the interrupt
stack and main() should be using the z_main_stack,
instead. The function performs the PSP switching,
the PSPLIM setting (for ARMv8-M), FPU initialization
and static memory region initialization, to mimic
what the normal (CONFIG_MULTITHREADING=y) case does.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos e9a85eec28 arch: arm: cortex-m: extract common code block into a static function
We extract the common code for both multithreading and
non-multithreading cases into a common static function
which will get called in the Cortex-M architecture initialization.
This commit does not introduce behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos 1a5390f438 arch: arm: cortex-m: fix the accounted size of IRQ stack in reset.S
This patch simply adds the guard area (if applicable) to the
calculations for the size of the interrupt stack in reset.S
for the ARM Cortex-M architecture. If it exists, the GUARD area is
always reserved in addition to CONFIG_ISR_STACK_SIZE, since the
interrupt stack is defined using K_KERNEL_STACK_DEFINE.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Andrew Boie 8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.
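
A typical use for a supervisor-only worker thread might look like this
(names, sizes and priority are arbitrary):

	#include <zephyr.h>

	K_KERNEL_STACK_DEFINE(worker_stack, 1024);
	static struct k_thread worker_thread;

	static void worker_entry(void *p1, void *p2, void *p3)
	{
		/* runs purely in supervisor mode */
	}

	void start_worker(void)
	{
		k_thread_create(&worker_thread, worker_stack,
				K_KERNEL_STACK_SIZEOF(worker_stack),
				worker_entry, NULL, NULL, NULL,
				K_PRIO_PREEMPT(1), 0, K_NO_WAIT);
	}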

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie e4cc84a537 kernel: update arch_switch_to_main_thread()
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.

Based on #24467 authored by Flavio Ceolin.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations,
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve-out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 24825c8667 arches: fix arch_new_thread param names
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 0c69561469 arch: remove duplicate docs for arch_new_thread
This interface is documented already in
kernel/include/kernel_arch_interface.h

Other architectural notes were left in place except where
they were incorrect (like the thread struct
being in the low stack addresses)

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Ioannis Glaropoulos b0c5e6335a arch: arm: cortex-m: move the relay table section after vector table
In CPUs with VTOR we are free to place the relay vector table
section anywhere inside ROM_START section (as long as we respect
alignment requirements). This PR moves the relay table towards
the end of ROM_START. This leaves sufficient area for placing
some SoC-specific sections inside ROM_START that need to start
at a fixed address.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-07-27 13:23:36 +02:00
Andrzej Głąbek ec8cb07da2 arch: arm: Export vector table symbols with GDATA instead of GTEXT
_vector_table and __vector_relay_table symbols were exported with GTEXT
(i.e. as functions). That resulted in bit[0] being incorrectly set in
the addresses they represent (for functions this bit set to 1 specifies
execution in Thumb state).
This commit corrects this by switching to exporting these objects as
objects, i.e. with GDATA.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2020-07-24 12:04:28 +02:00
Daniel Leung 3f6ac9fdfd arm: add include guard for offset files
MISRA-C directive 4.10 requires that an included file must
prevent itself from being included more than once. So add
include guards to the offset files, even though they are C
source files.
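
In practice each offsets file now wraps its contents in a classic
include guard, e.g. (the guard name is illustrative):

	#ifndef ZEPHYR_ARM_OFFSETS_GUARD_H_
	#define ZEPHYR_ARM_OFFSETS_GUARD_H_

	/* ... GEN_OFFSET_SYM() definitions ... */

	#endif /* ZEPHYR_ARM_OFFSETS_GUARD_H_ */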

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-24 10:01:12 +02:00
David Leach 5803ec1bf0 arch: arm: mpu: Use temporary MPU mapping while reprogramming NXP MPU
Race conditions exist when remapping the NXP MPU. When writing the
start, end, or attribute registers of an MPU descriptor, the hardware
will automatically clear the region's valid bit. If that region gets
accessed before the code is able to set the valid bit, the core will
fault.

Issue #20595 revealed this problem with the code in region_init()
when the compiler options are set to no optimizations. The code
generated by the compiler put local variables on the stack and then
read those stack-based variables when writing the MPU descriptor
registers. If that region mapped the stack, a memory fault would occur.
Higher compiler optimizations would store these local variables in
CPU registers, which avoided the memory access when programming the
MPU descriptor.

Because the NXP MPU uses a logic OR operation of the MPU descriptors,
the fix uses the last descriptor in the MPU hardware to remap all of
dynamic memory for access, instead of the first of the dynamic memory
descriptors as was occurring before. This allows reprogramming of the
primary descriptor blocks without causing a memory fault. After all
the dynamic memory blocks are mapped, the unused blocks will have
their valid bits cleared, including this temporary one, if it wasn't
already changed during the mapping of the current set.

Fixes #20595

Signed-off-by: David Leach <david.leach@nxp.com>
2020-07-22 11:27:40 +02:00
Andrew Boie ff294e02cd arch: add CONFIG_CPU_HAS_MMU
Indicate that the CPU has a memory management unit,
similar to CPU_HAS_MPU for MPUs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Rafał Kuźnia f2b0bfda8f arch: arm: aarch32: Always use VTOR when it is available
Zephyr applications will always use the VTOR register when it is
available on the CPU, and the register will always be configured
to point to the application's vector table during startup.
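
Roughly, assuming the CMSIS core header definitions (the table symbol
and helper names here are illustrative):

	extern char _vector_table[];	/* illustrative linker symbol */

	static void relocate_vector_table(void)
	{
		SCB->VTOR = (uint32_t)_vector_table & SCB_VTOR_TBLOFF_Msk;
		__DSB();	/* ensure the VTOR write completes       */
		__ISB();	/* flush the pipeline before continuing  */
	}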

SW_VECTOR_RELAY_CLIENT is meant to be used only on baseline ARM cores.

SW_VECTOR_RELAY is intended to be used only by the bootloader.
The bootloader may configure the VTOR to point to the relay table
right before chain-loading the application.

Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Andrzej Puzdrowski 4152ccf124 arch/arm/aarch32: ensured SW IRQ relay modes exclusive
Select either SW_VECTOR_RELAY or SW_VECTOR_RELAY_CLIENT,
but only one at a time.

Removed the #ifdef-ery in irq_relay.S, as SW_VECTOR_RELAY was
refined so that it became reserved for the bootloader and
conditionally includes irq_relay.S for compilation.
See SHA #fde3116f1981cf152aadc2266c66f8687ea9f764

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Rafał Kuźnia 89bf746ebe arch/arm/aarch32: add IRQ relay mechanism to ARMv7/8-M
This patch allows the `SW_VECTOR_RELAY` and
`SW_VECTOR_RELAY_CLIENT` pair to be
enabled on the ARMv7-M and ARMv8-M architectures
and covers all additional interrupt vectors.

Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Ioannis Glaropoulos e80e655b01 arch: arm: cortex_m: align vector table based on VTOR requirements
Enforce VTOR table offset alignment requirements on Cortex-M
vector table start address.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-07-14 13:03:25 +02:00
Ioannis Glaropoulos fde3116f19 arch: arm: cortex_m: Add config for SW_VECTOR_RELAY_CLIENT
Define vector relay tables for bootloader only.
If an image is not a bootloader image (such as an MCUboot image)
but it is a standard Zephyr firmware, chain-loadable by a
bootloader, then this image will not need to relay IRQs itself.
In this case SW_VECTOR_RELAY_CLIENT should be used to set the
vector table pointer in RAM so that the parent image can forward the
interrupts to it.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Co-authored-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2020-07-03 13:34:50 -04:00
Scott Branden d77fbcb86a arch: arm64: mmu: create macro for TCR_PS_BITS
Create a macro for TCR_PS_BITS instead of programmatically looking up
a static value based on a CONFIG option. Moving to a macro removes
logically dead code reported by the Coverity static analysis tool.

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
2020-06-18 12:47:30 +02:00
Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Ioannis Glaropoulos 9d8111c88f arch: arm: cortex-m: fix placement of ARMv7-M-related MPU workaround
The workaround for ARMv7-M architecture (which proactively
decreases the available thread stack by the size of the MPU
guard) needs to be placed before we calculate the pointer of
the user-space local thread data, otherwise this pointer will
fall beyond the boundary of the thread stack area.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 19:48:27 +02:00
Ioannis Glaropoulos 6b54958a0e arch: arm: aarch32: cortex-m: fix logic for detecting guard violation
We fix (by inverting) the logic of the IS_MPU_GUARD_VIOLATION
macro, with respect to the value of the supplied 'fault_addr'.
We shall only be inspecting the fault_addr value if it is not
set to -EINVAL.
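
The corrected check, roughly (the macro and parameter names follow the
description above; the exact in-tree form may differ):

	#define IS_MPU_GUARD_VIOLATION(guard_start, guard_len, fault_addr, stack_ptr) \
		(((fault_addr) != (uint32_t)(-EINVAL)) ? \
			(((fault_addr) >= (guard_start)) && \
			 ((fault_addr) < ((guard_start) + (guard_len)))) : \
			((stack_ptr) < ((guard_start) + (guard_len))))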

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 10:10:22 +02:00
Ioannis Glaropoulos 7284aee7d7 arch: arm: aarch32: cortex_m: add note in mem_manage_fault()
It is possible that the MMFAR address is not written by the
Cortex-M core; this occurs when the stacking error is
not accompanied by a data access violation error (i.e.
when the stack overflows due to the exception entry frame
stacking): z_check_thread_stack_fail() shall be able to
handle the case of 'mmfar' holding the -EINVAL value.

Add this note in the mem_manage_fault() function to clarify
that it is valid for z_check_thread_stack_fail() to be
called with an invalid mmfar address value.
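
The gist, assuming the CMSIS SCB definitions and EINVAL from errno.h
(the helper name is illustrative):

	/* Only trust MMFAR when the MMARVALID bit of CFSR is set; a pure
	 * stacking error leaves no valid fault address behind.
	 */
	static uint32_t mem_manage_fault_address(void)
	{
		if (SCB->CFSR & SCB_CFSR_MMARVALID_Msk) {
			return SCB->MMFAR;
		}
		return (uint32_t)(-EINVAL);
	}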

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 10:10:22 +02:00
Andrew Boie 2873afb7fe aarch32: fix a build failure
Some wires were crossed when an older PR was merged that
had build conflicts with newer code. Update this header
to reflect where the 'nested' member is in the kernel CPU
struct.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 13:59:17 -05:00
Andrew Boie a203d21962 kernel: remove legacy fields in _kernel
UP should just use _kernel.cpus[0].

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 17:42:49 +02:00
Stephanos Ioannidis aaf93205bb kconfig: Rename CONFIG_FP_SHARING to CONFIG_FPU_SHARING
This commit renames the Kconfig `FP_SHARING` symbol to `FPU_SHARING`,
since this symbol specifically refers to the hardware FPU sharing
support by means of FPU context preservation, and the "FP" prefix is
not fully descriptive of that; leaving room for ambiguity.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-05-08 10:58:33 +02:00
Abhishek Shah 2f85c01eaa arch: arm: aarch64: Add Cortex-A72 config
Add Cortex-A72 config in order to set "-mcpu" correctly.

Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
2020-05-08 10:46:23 +02:00
Stephanos Ioannidis 8b27d5c6b9 linker: Clean up section name definitions
This commit cleans up the section name definitions in the linker
sections header file (`include/linker/sections.h`) to have the uniform
format of `_(SECTION)_SECTION_NAME`.

In addition, the scope of the short section reference aliases (e.g.
`TEXT`, `DATA`, `BSS`) are now limited to the ASM code, as they are
currently used (and intended to be used) only by the ASM code to
specify the target section for functions and variables, and these short
names can cause name conflicts with the symbols used in the C code.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-30 13:42:36 -04:00
Sandeep Tripathy d4f1f2a07e arch: arm64: add public header for asm macros
Move generic macros to exported assembly header file
'macro.inc'. Rename the existing 'macro.inc' to 'macro_priv.inc'.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-28 10:44:42 -07:00
Stephanos Ioannidis 0e6ede8929 kconfig: Rename CONFIG_FLOAT to CONFIG_FPU
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).

Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
Andrew Boie 618426d6e7 kernel: add Z_STACK_PTR_ALIGN ARCH_STACK_PTR_ALIGN
This operation is formally defined as rounding down a potential
stack pointer value to meet CPU and ABI requirements.

This was previously defined ad-hoc as STACK_ROUND_DOWN().

A new architecture constant ARCH_STACK_PTR_ALIGN is added.
Z_STACK_PTR_ALIGN() is defined in terms of it. This used to
be inconsistently specified as STACK_ALIGN or STACK_PTR_ALIGN;
in the latter case, STACK_ALIGN meant something else, typically
a required alignment for the base of a stack buffer.
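
In other words, something along these lines (the alignment value shown
is an example; AArch32 AAPCS requires 8-byte stack pointer alignment):

	#include <stdint.h>

	#define ARCH_STACK_PTR_ALIGN	8	/* example value */

	/* Round a candidate stack pointer value down to the CPU/ABI
	 * required alignment.
	 */
	#define Z_STACK_PTR_ALIGN(ptr) \
		((ptr) & ~((uintptr_t)ARCH_STACK_PTR_ALIGN - 1))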

STACK_ROUND_UP() was only used in practice by RISC-V; delete it
elsewhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie 1f6f977f05 kernel: centralize new thread priority check
This was being done inconsistently in arch_new_thread(), just
move to the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Stephanos Ioannidis ae427177c0 arch: arm: aarch32: Rework non-Cortex-M exception handling
This commit reworks the ARM AArch32 non-Cortex-M (i.e. Cortex-A and
Cortex-R) exception handling to establish the base exception handling
framework and support detailed exception information reporting.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis c442203c08 arch: arm: aarch32: Fix incorrect z_arm_{int,exc}_exit usage
In the ARM Cortex-M architecture implementation, the concepts of
"exceptions" and "interrupts" are interchangeable; whereas, in the
Cortex-A/-R architecture implementation, they are considered separate
and therefore handled differently (i.e. `z_arm_exc_exit` cannot be used
to exit an "interrupt").

This commit fixes all `z_arm_exc_exit` usages in the interrupt handlers
to use `z_arm_int_exit`.

NOTE: In terms of the ARM AArch32 Cortex-A and Cortex-R architecture
      implementations, the "exceptions" refer to the "Undefined
      Instruction (UNDEF)" and "Prefetch/Data Abort (PABT/DABT)"
      exceptions, while "interrupts" refer to the "Interrupt (IRQ)",
      "Fast Interrupt (FIQ)" and "Software Interrupt/Supervisor Call
      (SWI/SVC)".

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis b14d53435b arch: arm: aarch32: Split fault_s.S for Cortex-M and the rest
The exception/fault handling mechanisms for the ARM Cortex-M and the
rest (i.e. Cortex-A and Cortex-R) are significantly different and there
is no benefit in having the two implementations in the same file.

This commit relocates the Cortex-M fault handler to
`cortex_m/fault_s.S` and the Cortex-A/-R generic exception handler to
`cortex_a_r/exc.S` (note that the Cortex-A and Cortex-R architectures
do not provide direct fault vectors; instead, they provide the
exception vectors that can be used to handle faults).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis 37f44193f3 arch: arm: aarch32: Split exc_exit.S for Cortex-M and the rest
The amount of shared code in exc_exit.S between the ARM Cortex-M and
the rest (i.e. Cortex-A and Cortex-R) is minimal and there is little
benefit in having the two implementations in the same file.

This commit splits the interrupt/exception exit code for the
Cortex-A/-R and Cortex-M into separate files to improve readability as
well as maintainability.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Sandeep Tripathy 1dc095c949 arch: arm64: use callee saved reg to stash
Use a callee-saved register to preserve the value across the sequence.
Procedure calls are mandated to follow the ABI spec and preserve
x19 to x29.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Sandeep Tripathy 82724de6a5 arch: arm64: refactor for EL3 specific init
Zephyr being an OS is typically expected to run at EL1. Arm core
can reset to EL3 which typically requires a firmware to run at EL3
and drop control to lower EL. In that case EL3 init is done by the
firmware allowing the lower EL software to have necessary control.

If Zephyr is entered at EL3 and it is desired to run at EL1, which
is indicated by 'CONFIG_SWITCH_TO_EL1', then Zephyr is responsible
for doing the required EL3 initializations to give the lower EL the
necessary control.

The entry sequence is modified to have control flow under single
'switch_el'.

Provisions are added by providing weak functions to do platform-specific
init from EL3.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Sandeep Tripathy c6f8771311 arch: arm64: macro for mov immediate
A single mov instruction cannot be used to move a non-zero
64-bit immediate value into a 64-bit register.
Implement a macro to generate mov / movz / movk sequences
depending on the immediate value width.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Kumar Gala 5648df39ac arch: arm: cortex_m: Rework DT_NUM_IRQ_PRIO_BITS
To remove the need to have DT_NUM_IRQ_PRIO_BITS defined in every
dts_fixup.h we can just handle the few variant cases in irq.h.  This
allows us to remove DT_NUM_IRQ_PRIO_BITS from all the dts_fixup.h files.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-17 15:17:43 +02:00
Kumar Gala c5e5d531ca arch: arm: cortex_m: arm_mpu: Rework DT usage for DT_NUM_MPU_REGIONS
To remove the need to have DT_NUM_MPU_REGIONS defined in every
dts_fixup.h we can just handle the few variant cases in arm_mpu.c
directly.  This allows us to remove DT_NUM_MPU_REGIONS from all the
dts_fixup.h files.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-17 15:17:43 +02:00
Stephanos Ioannidis 2d6194170b arch: arm: aarch32: Fix read_timer_end_of_isr register preservation
The current implementation to preserve r0 and r3 registers around the
call to `read_timer_end_of_isr` function has the following problems:

1. STM and LDM mnemonics are used without proper suffixes, in attempt
   to implement PUSH and POP (i.e. STMFD and LDMFD). The suffix-less
   STM mnemonic is equivalent to STMEA (increment after), which clearly
   is not a PUSH operation, and this corrupts the interrupt stack,
   leading to crashes on the Cortex-R.

2. The current implementation unnecessarily preserves additional r1, r2
   and lr registers. There is no need to preserve r1 and r2 because the
   values contained in these registers are not used after the function
   call; as for the lr register, it is already pushed to the stack when
   the interrupt service routine enters.

This commit removes all the unnecessary register preservations and
fixes the incorrect STM and LDM usages.

Note that the PUSH and POP aliases are used in place of the STMFD and
LDMFD mnemonics because they are used throughout the rest of the code.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 15:49:27 +02:00
Bobby Noelte 68cd1b7f9e arch: arm: aarch32: fix system clock driver selection for cortex m
The selection of the Cortex M systick driver to be used as a system
clock driver is controlled by CONFIG_CORTEX_M_SYSTICK.

To replace it by another driver CONFIG_CORTEX_M_SYSTICK must be set
to 'n'. Unfortunately this also controls the interrupt vector for
the systick interrupt. It is now routed to z_arm_exc_spurious.

Remove the dependency on CONFIG_CORTEX_M_SYSTICK and route to
z_clock_isr as it was before #24012.

Fixes #24347

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
2020-04-15 12:16:10 +02:00
Stephanos Ioannidis a1e838872c arch: arm: Remove extraneous root cmake files
The ARM architecture root directory contains `aarch32.cmake` and
`aarch64.cmake` files whose contents are better suited to go into other
more purpose-specific files.

This commit removes the aforementioned files and moves their contents
to other files following the convention used by other architectures.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Stephanos Ioannidis eeddc7566d arch: arm: aarch32: Add missing arch flag for Cortex-R5
This commit adds the GCC `-march` flag for the ARM Cortex-R5 targets.

Note that `armv7-r+idiv` must be specified instead of `armv7-r`,
because the GCC internally resolves `-mcpu=cortex-r5` to it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Stephanos Ioannidis 3cf1a9139e arch: arm: Clean up configurations
This is a minor clean-up for the ARM architecture configurations.

Note that the `CPU_CORTEX_A` symbol is moved from the AArch64 to the
ARM root Kconfig because it can be selected from both AArch32 and
AArch64.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Anas Nashif b90fafd6a0 kernel: remove unused offload workqueue option
Those are used only in tests, so remove them from kernel Kconfig and set
them in the tests that use them directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-04-12 18:42:27 -04:00
Ioannis Glaropoulos 95da5d479b arch: arm: minor fixes in the docs for ARM kernel_arch headers
Fix documentation in kernel_arch_data.h and kernel_arch_func.h
headers for ARM, to indicate that these are common headers for
all ARM architecture variants.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-09 13:13:42 -07:00
Ioannis Glaropoulos 25060b0f2e arch: arm: aarch32: rename z_arm_reserved to z_arm_exc_spurious
In the Cortex-M exception table we rename z_arm_reserved()
function to z_arm_exc_spurious(), as it is invoked when
existing (that is, non-reserved) but un-installed exceptions
are triggered accidentally by software or hardware. This
currently applies to SysTick and SecureFault exceptions.

Since fault.S is shared between Cortex-M and other AARCH32
architectures, we keep z_arm_reserved as a defined symbol
there. This commit does some additional, minor, "no-op"
cleanup in #ifdef's for Cortex-M and Cortex-R.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos d3fa2eebb0 arch: arm: aarch32: cortex_m: add z_arm_reserved only if core has SE
If the Cortex-M core does not implement the Security Extension,
we should not be adding z_arm_reserved in the corresponding
vector table entry. That is because the entry is reserved by
the ARM architecture.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos 4364f2d455 arch: arm: aarch32: add z_arm_reserved only when we have SysTick
If the Cortex-M core does not implement the SysTick peripheral,
we should not be adding z_arm_reserved in the corresponding
vector table entry. If we do have SysTick implemented but we
are not using it as the system timer, we shall install the
reserved interrupt at the vector table entry.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos d725402daf arch: arm: aarch32: cortex_m: write 0x0 to reserved exception entries
Write 0x0 instead of z_arm_reserved to vector exception
entries that are always reserved for future use by the
ARM architecture. These vector table entries cannot be
fetched to be executed by the Cortex-M exception entry,
so having z_arm_reserved gives a false impression, since
it is a function that may be invoked in the code. This
modification is safe since these vector entries are also
not supposed to be read / written by the code.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Stephanos Ioannidis b63a028fbc arch: arm: aarch32: Rework non-Cortex-M context preservation
The current context preservation implementation saves the spsr and
lr_irq registers, which contain the cpsr and pc register values of the
interrupted context, in the thread callee-saved block and this prevents
nesting of interrupts because these values are required to be part of
the exception stack frame to preserve the nested interrupt context.

This commit reworks the AArch32 non-Cortex-M context preservation
implementation to save the spsr and lr_irq registers in the exception
stack frame to allow preservation of the nested interrupt context as
well as the interrupted thread context.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-02 09:22:38 +02:00
Carlo Caione 99a8155914 arm: AArch64: Add support for nested exception handlers
In the current implementation both SPSR and ELR registers are saved with
the callee-saved registers and restored by the context-switch routine.
To support nested IRQs we have to save those on the stack when entering
and exiting from an ISR.

Since the values are now carried on the stack we can now add those to
the ESF and the initial stack and take care to restore them for new
threads using the new thread wrapper routine.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-31 19:24:48 +02:00
Stephanos Ioannidis 33928f18ae arch: arm: aarch32: Add header shims for cortex_a_r renaming
Out-of-tree code can still be using the old file locations. Introduce
header shims to include the headers from the new correct location and
print a warning message.

These shims should be removed after two releases.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Stephanos Ioannidis a033683783 arch: arm: aarch32: Rename cortex_r to cortex_a_r
This commit renames the `cortex_r` directory under the AArch32 to
`cortex_a_r`, in preparation for the AArch32 Cortex-A support.

The rationale for this renaming is that the Cortex-A and Cortex-R share
the same base design and the difference between them, other than the
MPU vs. MMU, is minimal.

Since most of the architecture port code and configurations will be
shared between the Cortex-A and Cortex-R architectures, it is
advantageous to have them together in the same directory.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Stephanos Ioannidis bafb623239 arch: arm: aarch32: Reorganise configurations
This commit re-organises AArch32 configurations for consistency.

1. Move Cortex-M-specific includes to `cortex_m/Kconfig`.

2. Relocate the "TrustZone" configurations to `cortex_m/tz/Kconfig`
  since these are really the TrustZone-M configurations and do not
  apply to the TrustZone-A.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Carlo Caione 67e4ccbc51 arch: aarch64: Add check on context switch
Check whether we actually need to schedule a new thread before calling
the context switch routine.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
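
A hedged C-level sketch of the check (symbol names are assumptions; the real test sits in the arch swap path): the context-switch routine is entered only when the scheduler's cached next thread differs from the one currently running.

    struct thread;   /* opaque for the sketch */

    extern struct thread *next_ready_thread;   /* scheduler cache, illustrative name */
    extern struct thread *current_thread;      /* currently running, illustrative name */
    extern void do_context_switch(struct thread *from, struct thread *to);

    static inline void maybe_switch(void)
    {
        if (next_ready_thread != current_thread) {
            do_context_switch(current_thread, next_ready_thread);
        }
        /* otherwise: nothing to do, skip the register save/restore entirely */
    }
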
Carlo Caione b41e5e67d0 arch: aarch64: Rewrite comments and rename swap routines
Rewrite the comments for the swap routine removing the references to the
old aarch32 code and rename z_arm64_pendsv() ->
z_arm64_context_switch().

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
Carlo Caione fbf9b2675d aarch64: swap: Remove redundant code
Delete redundant / useless code from z_arm64_pendsv().

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
Carlo Caione 99e63a799d arch: aarch64: Rework exception entry/exit code
Rework the assembly code for the ISR wrapper and SVC to share the
entry/exit code that is currently scattered among several files.
No functional changes.

Also rename macro.h -> macro.inc to fool the CI.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-20 14:15:43 +01:00
Ioannis Glaropoulos fec399e74a arch: arm: aarch32: correct documentation of arch_cpu_atomic_idle
z_CpuIdleInit has been renamed to z_arm_cpu_idle_init, so
we need to correct that function's name in the documentation
of arch_cpu_atomic_idle.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-20 11:53:14 +01:00
Stephanos Ioannidis 3f395f5698 arch: arm: aarch32: Add memory barriers to arch_cpu_idle
This commit adds the required memory barriers to the `arch_cpu_idle`
function in order to ensure proper idle operation in all cases.

1. Add ISB after setting BASEPRI to ensure that the new wake-up
  interrupt priority is visible to the WFI instruction.

2. Add DSB before WFI to ensure that all memory transactions are
  completed before going to sleep.

3. Add ISB after CPSIE to ensure that the pending wake-up interrupt
  is serviced immediately.

Co-authored-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
Stephanos Ioannidis ba0bcaf41b arch: arm: aarch32: Fix arch_cpu_idle interrupt masking
The current AArch32 `arch_cpu_idle` implementation enables interrupts
before executing the WFI instruction. This has the side effect of
allowing an interrupt to be taken, and wake-up notification functions
to run, before the CPU actually enters sleep.

This commit fixes the problem by ensuring that interrupts are disabled
while the WFI instruction executes and are re-enabled only after the
processor wakes up.

For ARMv6-M, ARMv8-M Baseline and ARM-R, the PRIMASK (ARM-M)/
CPSR.I (ARM-R) is used to lock interrupts and therefore it is not
necessary to do anything before executing the WFI instruction.

For ARMv7-M and ARMv8-M Mainline, the BASEPRI is used to lock
interrupts and the PRIMASK is always cleared in non-interrupt context;
therefore, it is necessary to set the PRIMASK to mask interrupts,
before clearing the BASEPRI to configure wake-up interrupt priority to
the lowest.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
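
A hedged C/inline-asm sketch of the resulting ARMv7-M / ARMv8-M Mainline sequence once this fix is combined with the memory barriers from the commit listed just above; the real implementation is the assembly in cpu_idle.S, this is only an illustration of the ordering.

    static inline void cpu_idle_sketch(void)
    {
        __asm__ volatile("cpsid i" ::: "memory");                  /* set PRIMASK: no IRQ taken yet */
        __asm__ volatile("msr BASEPRI, %0" :: "r"(0) : "memory");  /* any priority may wake WFI */
        __asm__ volatile("isb");                                   /* new BASEPRI visible before WFI */
        __asm__ volatile("dsb");                                   /* outstanding memory ops complete */
        __asm__ volatile("wfi");                                   /* sleep; wake-up IRQ only becomes pending */
        __asm__ volatile("cpsie i" ::: "memory");                  /* clear PRIMASK: take the pending IRQ */
        __asm__ volatile("isb");                                   /* service it immediately */
    }
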
Stephanos Ioannidis 50e4f2a671 arch: arm: aarch32: Fix whitespaces in cpu_idle.S
This commit fixes whitespaces in cpu_idle.S.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
Andrew Boie 28be793cb6 kernel: delete separate logic for priv stacks
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, so do that here instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Øyvind Rønningstad c3ee533b5e arch: arm: tz: secure_entry_functions: Add support for nRF53
The nRF53 has a different SPU region size than the nRF91.
This patch also accounts for Erratum 19 (wrong SPU region size).

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-03-17 11:41:19 +01:00
Andrew Boie 80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
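
A hedged sketch of the array form described above; the macro and Kconfig names may not match the tree at this point and should be treated as assumptions.

    #include <kernel.h>

    /* One interrupt stack per CPU; on uniprocessor builds this has a single
     * element that plays the role of the old _interrupt_stack symbol. */
    K_THREAD_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
                                CONFIG_ISR_STACK_SIZE);
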
Stephanos Ioannidis cd90d49a86 arch: arm: Optimise Cortex-R exception return function.
z_arm_exc_exit (z_arm_int_exit) requires the current execution mode to
be specified as a parameter (through r0). This is not necessary because
this value can be directly read from CPSR.

This commit modifies the exception return function to retrieve the
current execution mode from CPSR and removes all provisions for passing
the execution mode parameter.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-14 11:49:22 +01:00
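
A hedged C/inline-asm equivalent of the change (the real code is Cortex-R exception-return assembly): instead of receiving the execution mode in r0, read it directly from CPSR.

    #include <stdint.h>

    #define MODE_MASK 0x1FU   /* CPSR.M[4:0] holds the current execution mode */

    static inline uint32_t current_mode(void)
    {
        uint32_t cpsr;

        __asm__ volatile("mrs %0, cpsr" : "=r"(cpsr));
        return cpsr & MODE_MASK;
    }
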
Stephanos Ioannidis 91ceee782f arch: arm: aarch64: Refactor interrupt interface
The current AArch64 interrupt system relies on the multi-level
interrupt mechanism and the `irq_nextlevel` public interface to invoke
the Generic Interrupt Controller (GIC) driver functions.

Since the GIC driver has been refactored to provide a direct interface,
in order to resolve various implementation issues described in the GIC
driver refactoring commit, the architecture interrupt control functions
are updated to directly invoke the GIC driver functions.

This commit also adds support for the ARMv8 cores (e.g. Cortex-A53)
that allow interfacing to a custom external interrupt controller
(i.e. non-GIC) by mapping the architecture interrupt control functions
to the SoC layer interrupt control functions when
`ARM_CUSTOM_INTERRUPT_CONTROLLER` configuration is enabled.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-13 09:59:59 +01:00
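
A hedged sketch of the dispatch this refactor establishes; arm_gic_irq_enable() is the GIC driver entry point named by the refactor, while the SoC hook name and prototypes here are assumptions.

    extern void arm_gic_irq_enable(unsigned int irq);   /* GIC driver, direct call */
    extern void z_soc_irq_enable(unsigned int irq);     /* SoC-provided hook, assumed name */

    void arch_irq_enable(unsigned int irq)
    {
    #if defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
        z_soc_irq_enable(irq);     /* custom, non-GIC interrupt controller */
    #else
        arm_gic_irq_enable(irq);   /* invoke the GIC driver directly */
    #endif
    }
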
Stephanos Ioannidis 2c5ca5505c arch: arm: aarch32: Refactor interrupt interface
The current AArch32 (Cortex-R and to-be-added Cortex-A) interrupt
system relies on the multi-level interrupt mechanism and the
`irq_nextlevel` public interface to invoke the Generic Interrupt
Controller (GIC) driver functions.

Since the GIC driver has been refactored to provide a direct interface,
in order to resolve various implementation issues described in the GIC
driver refactoring commit, the architecture interrupt control functions
are updated to directly invoke the GIC driver functions.

This commit also adds support for the Cortex-R cores (Cortex-R4 and R5)
that allow interfacing to a custom external interrupt controller
(i.e. non-GIC) by introducing the `ARM_CUSTOM_INTERRUPT_CONTROLLER`
configuration that maps the architecture interrupt control functions to
the SoC layer interrupt control functions.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-13 09:59:59 +01:00
Ioannis Glaropoulos d9a6e1d0c0 arch: arm: aarch32: rename z_arm_int_lib_init() function
We rename the z_arm_int_lib_init() function to
z_arm_interrupt_init(), aligning with how the other
architectures name their IRQ initialization function.
There is nothing 'library'-like about this functionality,
so we drop the 'lib' infix.

The commit does not introduce any behavior changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-12 20:11:44 +02:00
Carlo Caione b4335a04ac arm: aarch64: Reintroduce _ASM_FILE_PROLOGUE
This is currently missing from the AArch64 assembly files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-11 09:34:24 +01:00
Ioannis Glaropoulos 0773fd5963 arch: arm: aarch32: fix z_irq_spurious() implementation
We align the implementation of the z_irq_spurious() handler
with the other Zephyr architectures, i.e. we call the
ARM-specific fatal error function directly, with
K_ERR_SPURIOUS_IRQ as the error type. This is already
the case for aarch64.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
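
A hedged sketch of the aligned handler, close to the aarch64 behaviour the commit refers to; the prototype of the fatal-error function is simplified here, the real declarations live in the Zephyr arch headers.

    #include <kernel.h>
    #include <fatal.h>

    void z_arm_fatal_error(unsigned int reason, const void *esf);  /* simplified prototype */

    void z_irq_spurious(const void *unused)
    {
        ARG_UNUSED(unused);
        z_arm_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
    }
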
Ioannis Glaropoulos a31795c440 arch: arm: aarch32: fix documentation in z_irq_spurious definition
Correct the documentation note in the z_irq_spurious() definition,
stressing that the function is installed in _sw_isr_table
entries at boot time (which may or may not be used for
dynamic interrupts).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Stephanos Ioannidis 7c5db4b755 arch: arm: cortex_r: Enable Thumb2 instruction set support
The ARMv7-R architecture supports both Thumb-2 (T32) and ARM (A32)
instruction sets.

This commit selects the `ISA_THUMB2` symbol to indicate that the
ARMv7-R architecture supports the Thumb-2 instruction set, which can
be enabled by selecting the `COMPILER_ISA_THUMB2` symbol.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-10 17:51:32 +01:00
Stephanos Ioannidis 0bd86f3604 arch: arm: aarch32: Allow selecting compiler instruction set
This commit introduces the `COMPILER_ISA_THUMB2` symbol to allow
choosing either the ARM or Thumb instruction set for C code
compilation.

In addition, this commit introduces the `ASSEMBLER_ISA_THUMB2` helper
symbol to specify the default target instruction set for the assembler.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-10 17:51:32 +01:00
Flavio Ceolin 8ed4b62dc0 syscalls: arm: Fix possible overflow in is_in_region function
This function is widely used by functions that validate memory
buffers. Macros used to check permissions, such as Z_SYSCALL_MEMORY_READ
and Z_SYSCALL_MEMORY_WRITE, rely on it to validate pointers passed by
user threads in a syscall.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-07 13:12:51 +02:00
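
A hedged sketch of the failure mode and of an overflow-safe form of the check; names and exact shape are illustrative, and __builtin_add_overflow is used only to keep the sketch self-contained.

    #include <stdbool.h>
    #include <stdint.h>

    static bool is_in_region_sketch(uint32_t r_start, uint32_t r_end,
                                    uint32_t start, uint32_t size)
    {
        uint32_t end;

        /* A naive `start + size <= r_end` wraps for buffers near the top of
         * the address space and would wrongly report them as contained. */
        size = (size == 0U) ? 0U : size - 1U;
        if (__builtin_add_overflow(start, size, &end)) {
            return false;
        }

        return (start >= r_start) && (end <= r_end);
    }
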
Ioannis Glaropoulos 502b67ceba arch: arm: aarch32: userspace: fix syscall ID validation
We need an unsigned comparison when evaluating whether
the supplied syscall ID is lower than the syscall ID limit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-07 09:22:23 +02:00
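
A hedged illustration of why the comparison must be unsigned; the real fix is in the SVC assembly path, and the limit constant here is a stand-in rather than the generated Zephyr symbol.

    #include <stdbool.h>
    #include <stdint.h>

    #define SYSCALL_LIMIT 100U   /* stand-in for the generated syscall count */

    static bool syscall_id_ok(uint32_t call_id)
    {
        /* With a signed compare, an ID such as 0xFFFFFFFF reads as -1 and
         * passes an `id < limit` check, indexing outside the dispatch table.
         * An unsigned compare rejects it. */
        return call_id < SYSCALL_LIMIT;
    }
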
Anders Montonen 219d9fc082 kconfig: Fix typo in ARM_MPU help
The ARMv7-M MPU requires power-of-two alignment, not the ARMv8-M MPU, as
noted a few lines later.

Signed-off-by: Anders Montonen <Anders.Montonen@iki.fi>
2020-03-04 10:18:27 +02:00
Luuk Bosma dfb80526b4 arch: arm: aarch32: clear CONTROL.FPCA for every CPU that has a FPU
Upon reset, the CONTROL.FPCA bit is normally cleared. However,
it might be left set by firmware running before Zephyr boots,
for example when the Zephyr image is loaded by another image.
We must clear this bit to prevent errors during exception unstacking.
This caused a stack offset when booting from the built-in EFM32GG
bootloader.

Fixes #22977

Signed-off-by: Luuk Bosma <l.bosma@interay.com>
2020-02-27 19:26:04 +02:00
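
A hedged C/inline-asm sketch of clearing CONTROL.FPCA early in the boot path; the real change lives in the Cortex-M reset code, and this only illustrates the register access.

    #include <stdint.h>

    #define CONTROL_FPCA (1UL << 2)   /* FP context active flag, CONTROL bit 2 */

    static inline void clear_fpca(void)
    {
        uint32_t control;

        __asm__ volatile("mrs %0, control" : "=r"(control));
        control &= ~CONTROL_FPCA;
        __asm__ volatile("msr control, %0" : : "r"(control));
        __asm__ volatile("isb");   /* ensure the new CONTROL value is in effect */
    }
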