Commit graph

5548 commits

Author SHA1 Message Date
Ruud Derwig 9bccb5cc4b ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
2022-09-21 18:46:06 +00:00
Chen Peng1 4c85c84ec2 x86: Kconfig: update dependency for X86_FP_USE_SOFT_FLOAT
Update Kconfig dependency for X86_FP_USE_SOFT_FLOAT.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-09-21 18:43:11 +00:00
Nicolas Pitre c76d8c88c0 riscv: smp: fix secondary cpus' initial stack
Z_THREAD_STACK_BUFFER() must not be used here. This is meant for stacks
defined with K_THREAD_STACK_ARRAY_DEFINE() whereas in this case we are
given a stack created with K_KERNEL_STACK_ARRAY_DEFINE().

If CONFIG_USERSPACE=y then K_THREAD_STACK_RESERVED gets defined with
a bigger value than K_KERNEL_STACK_RESERVED. Then Z_THREAD_STACK_BUFFER()
returns a pointer that is more advanced than expected, resulting in a
stack pointer outside its actual stack area and therefore memory
corruption ensues.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-09-21 09:01:58 +00:00
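To make the distinction concrete, here is a minimal sketch (the `secondary_stack_top` helper is hypothetical) of why the buffer macro must match the macro that defined the stack:

```c
#include <zephyr/kernel.h>

/* K_KERNEL_STACK_ARRAY_DEFINE() reserves K_KERNEL_STACK_RESERVED bytes of
 * metadata, while Z_THREAD_STACK_BUFFER() skips K_THREAD_STACK_RESERVED
 * bytes -- a larger value when CONFIG_USERSPACE=y -- so mixing the two
 * yields a pointer past the start of the usable stack area. */
K_KERNEL_STACK_ARRAY_DEFINE(secondary_stacks, CONFIG_MP_NUM_CPUS,
			    CONFIG_ISR_STACK_SIZE);

static char *secondary_stack_top(int cpu)
{
	/* Correct: matches the K_KERNEL_STACK_* definition above. */
	return Z_KERNEL_STACK_BUFFER(secondary_stacks[cpu]) +
	       CONFIG_ISR_STACK_SIZE;
}
```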
Nicolas Pitre 1c857f37da riscv: pmp: fix SMP build with assertion enabled
Fix SMP build with assertion enabled.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-09-20 09:39:35 +02:00
Enjia Mai d9206aa29b arch: arm: userspace: fix the incorrect ssf under bad syscall
The parameter ssf of handler_bad_syscall received a null pointer
because R1 was not pushed onto the stack in the right order on
Cortex-M0. Adjust the stack push order so that ssf is passed
correctly.

Fixes #50146.

Signed-off-by: Enjia Mai <enjia.mai@intel.com>
2022-09-19 09:17:26 +02:00
Andy Ross 99dd845067 arch/posix: Fix main() renaming trickery
It turns out that SOF is already using a symbol named
"zephyr_app_main()", so this produces a collision.  Pick something
that looks more relevant to "posix", and put an underscore on it (it's
a "system" symbol, after all).

Signed-off-by: Andy Ross <andyross@google.com>
2022-09-15 16:23:11 +00:00
Andy Ross ec44bc435c arch/posix: Fix 32 bit x86 fuzzing
It seems like libfuzzer wants to relocate 32 bit instrumented code
sections at runtime at addresses different than the ones in the ELF
file.  This is problematic, because Zephyr files are compiled
statically and so will crash the first time they try to jump to an
absolute .text address (basically at the first function call after a
fuzzer entry point).

It seems that building with -fPIC is enough to defeat this (we use the
host linker script, which will manage the GOT/PLT entries for us),
which will work as long as the fuzzer isn't playing games with data
other than text.  None of this seems to be documented, so... I guess
it's as good as we can get.  It works, at least.

(x86_64 binaries don't show the same behavior, they run where they
were linked)

Signed-off-by: Andy Ross <andyross@google.com>
2022-09-15 16:23:11 +00:00
Kai Vehmanen 48276fde5c xtensa: use lower-case hex in backtrace output
Align backtrace output with the style used in the rest of the codebase.
This makes it more convenient to compare the backtrace to e.g. objdump
output.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2022-09-09 14:09:33 -05:00
Huifeng Zhang 3d81d7f23f arch: arm64: fix the wrong way to send ipi interrupt
On GICv3, when we send an IPI interrupt, aff3, aff2 and aff1 should
be assigned values corresponding to the PE for which the interrupt
will be generated. target_list only corresponds to aff0.

On real hardware, aff3, aff2, aff1 and aff0 should be treated as a
whole to determine a PE.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
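As a hedged illustration of the fix (the helper name and field extraction are sketched from the architectural ICC_SGI1R_EL1 and MPIDR layouts, not copied from the patch):

```c
#include <stdint.h>

/* Compose an ICC_SGI1R_EL1 value for one target PE: aff1/aff2/aff3 come
 * from the target's MPIDR, while target_list encodes only aff0. */
static uint64_t sgi1r_for_pe(uint64_t mpidr, uint64_t intid)
{
	uint64_t aff1 = (mpidr >> 8) & 0xff;
	uint64_t aff2 = (mpidr >> 16) & 0xff;
	uint64_t aff3 = (mpidr >> 32) & 0xff;
	uint64_t target_list = 1ULL << (mpidr & 0xf); /* aff0 as a bit mask */

	return (aff3 << 48) | (aff2 << 32) | ((intid & 0xf) << 24) |
	       (aff1 << 16) | target_list;
}
```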
Huifeng Zhang 3ef14cae5e arch: arm64: init VMPIDR_EL2 in z_arm64_el2_init
VMPIDR_EL2 is assigned the value returned by EL2 reads of MPIDR_EL1.

MPIDR_EL1 is the register holding the Multiprocessor ID which is to
identify different cores. Because of the virtualization requirements
for AArch64, MPIDR_EL1 should be virtualized (the different virtualized
cores can run on the same physical core). Thus the value of MPIDR_EL1
should be switched when the VM is switched. Setting the VMPIDR_EL2 is
the way to change the value returned by EL1 reads of MPIDR_EL1. Even
without virtualization, we still need to set VMPIDR_EL2 during booting
at EL2 or EL3. Otherwise, all cores' IDs are zero at the EL1 stage
which will break the SMP system.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2022-09-09 16:36:37 +00:00
Mateusz Sierszulski 2ed5763baa arch: riscv: core: Place vectors section through zephyr_linker_sources()
This commit fixes the placement of the vectors section by using
zephyr_linker_sources(ROM_START ...) (as done in the ARM
architecture port) so that its order can be adjusted by SORT_KEY.

Fixes #49903

Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
2022-09-08 10:39:31 +02:00
Andy Ross b141551cba arch/xtensa: Properly namespace special register API
The Xtensa arch has historically had state/user register accessor
macros with bare three-byte symbol names.  I think this might have
been in the original Cadence-contributed arch integration, but I'm not
sure.  In any case they also exist in the same names in vendor
HAL/toolchain code and are causing collisions.  We never should have
had these symbols exposed in our header.

Put them under an XTENSA_ prefix to decollide.

Signed-off-by: Andy Ross <andyross@google.com>
2022-09-07 20:28:06 -04:00
Gerard Marull-Paretas be38456279 include: types: remove ulong_t
ulong_t was mainly used in MIPS/RISC-V. Just use "unsigned long".

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-06 18:16:33 +02:00
Gerard Marull-Paretas 79e6b0e0f6 includes: prefer <zephyr/kernel.h> over <zephyr/zephyr.h>
As of today <zephyr/zephyr.h> is 100% equivalent to <zephyr/kernel.h>.
This patch proposes to then include <zephyr/kernel.h> instead of
<zephyr/zephyr.h> since it is more clear that you are including the
Kernel APIs and (probably) nothing else. <zephyr/zephyr.h> sounds like a
catch-all header that may be confusing. Most applications need to
include a bunch of other things to compile, e.g. driver headers or
subsystem headers like BT, logging, etc.

The idea of a catch-all header in Zephyr is probably not feasible
anyway. Reason is that Zephyr is not a library, like it could be for
example `libpython`. Zephyr provides many utilities nowadays: a kernel,
drivers, subsystems, etc and things will likely grow. A catch-all header
would be massive, difficult to keep up-to-date. It is also likely that
an application will only build a small subset. Note that subsystem-level
headers may use a catch-all approach to make things easier, though.

NOTE: This patch is **NOT** removing the header, just removing its usage
in-tree. I'd advocate for its deprecation (add a #warning on it), but I
understand many people will have concerns.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-05 16:31:47 +02:00
Gerard Marull-Paretas 082043c6e8 drivers: display: intel_multibootfb: convert to DT
Convert the device to be Devicetree based. Adjusted tests and other
areas that were using old Kconfig properties.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-02 14:16:08 +02:00
Gerard Marull-Paretas 11860face3 drivers: display: framebuffer: rework to make it self-contained
The "framebuf" driver was an incomplete driver expecting _clients_ to
implement missing functionality (i.e. init and device definition)
outside of the driver. This pattern of scattering driver code throughout
the tree is not common (if used at all). If certain drivers share
functionality, one can create a common module within the subsystem (see
e.g. ILI9XXX drivers).

The _generic_ framebuffer code was only used to implement the Intel
Multiboot framebuffer driver. This patch centralizes all the scattered
code in the subsystem and adjusts the driver name to "intel_multibootfb"
to make things clear. If there's ever another framebuffer driver that
shares code, it can be split into multiple modules.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-02 14:16:08 +02:00
Gerard Marull-Paretas 5954ea1c65 arch: arm64: core: smp: use DT_FOREACH_CHILD_STATUS_OKAY_SEP
Avoid auxiliary macros by using DT_FOREACH_CHILD_STATUS_OKAY_SEP.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-30 16:19:57 +02:00
Ederson de Souza 4d66eedd10 arch/xtensa/core: Fix timing API issues
Two issues:
 - An unnecessary pair of parentheses caused rounding errors (by
   truncating a small value before multiplying it).
 - arch_timing_cycles_to_ns_avg() wasn't actually converting the result
   to nanoseconds.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-08-29 16:09:50 -04:00
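An illustrative reduction of the first issue (not the actual driver code): integer division truncates, so the order of operations matters.

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* With cyc = 3 and hz = 400000000:
 *   (cyc / hz) * NSEC_PER_SEC  -> 0 ns (3 / 400000000 truncates to 0)
 *   (cyc * NSEC_PER_SEC) / hz  -> 7 ns (multiply first, then divide) */
static uint64_t cycles_to_ns(uint64_t cyc, uint64_t hz)
{
	return (cyc * NSEC_PER_SEC) / hz;
}
```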
Stephanos Ioannidis 40bbf78d77 arch: arc: Rename ARC64 output format to elf64-littlearc64
This commit renames the ARC64 output format from `elf64-littlearc` to
`elf64-littlearc64` as required by the updated ARC patches for the GCC
12.1 release.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-08-29 16:57:18 +02:00
Carlo Caione 6503795dc1 riscv: Introduce BitManip extensions
Add Zba, Zbb, Zbc and Zbs BitManip extensions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-29 16:57:18 +02:00
Carlo Caione 5fece03d7d riscv: Introduce Zicsr and Zifencei extensions
And enable the new extensions on all the SoCs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-29 16:57:18 +02:00
Mahesh Mahadevan c029b081cc cmake: Add support to add symbols to nocache section
This PR allows the user to add symbols to the nocache
section. The use for this could be as follows:

zephyr_linker_sources_ifdef(CONFIG_NOCACHE_MEMORY
  NOCACHE_SECTION
  nocache.ld
)

nocache.ld (as shown below) can define additional
symbols to go into the nocache section

. = ALIGN(4);
KEEP(*(NonCacheable))

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2022-08-29 11:19:48 +02:00
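For reference, a C-side sketch of feeding that input section (the buffer name and size are illustrative):

```c
#include <stdint.h>

/* Collected by the KEEP(*(NonCacheable)) rule from nocache.ld above. */
static uint8_t dma_buffer[512] __attribute__((section("NonCacheable")));
```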
Evgeniy Paltsev 99142065fc ARC: add non-multithreading mode support
Add non-multithreading mode support for all ARC non-SMP
targets.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2022-08-26 21:38:56 -04:00
Anas Nashif b04dc92c52 xtensa: make xtensa cache/uncache operations optional
Do not build those on platforms not supporting them.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-08-26 13:17:02 -04:00
Andy Ross 65d657685e arch/posix: Add libfuzzer support
Add support for LLVM's libfuzzer utility.  This works by building an
executable with a "LLVMFuzzerTestOneInput()" entry point (which is
external to Zephyr, running in the host process environment!), which
it drives out of its own main() routine.  The toolchain API is exposed
as just another sanitizer variant, which is clean.

Signed-off-by: Andy Ross <andyross@google.com>
2022-08-26 11:57:46 +02:00
Stephanos Ioannidis 8506979f27 arch: arm: mpu: Fix -Wstringop-overread warning
GCC 12 performs bounds checking on the pointer arguments specified like
an array (e.g. `int arg[]`) and treats such arguments with an empty
length as having a length of 0, resulting in the compiler printing
out a `stringop-overread` warning when they are accessed.

This commit corrects any pointer arguments declared using the array
expression to use the pointer expression instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-08-25 22:29:28 +09:00
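The shape of the change, shown on a hypothetical prototype:

```c
#include <stdint.h>

struct arm_mpu_region;

/* Before: GCC 12 treats the empty-length array argument as having
 * length 0 and emits -Wstringop-overread on access. */
void mpu_configure(const struct arm_mpu_region regions[], uint8_t count);

/* After: the equivalent pointer form carries no bounds assumption. */
void mpu_configure(const struct arm_mpu_region *regions, uint8_t count);
```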
Peter Marheine d400b8135c arch/riscv: support CONFIG_CODE_DATA_RELOCATION
This implements support for relocating code to chosen memory regions via
the `zephyr_code_relocate` CMake function for RISC-V SoCs. ARM-specific
assumptions that were made by gen_relocate_app.py need to be corrected,
in particular not assuming any particular name for the default RAM
section (which is 'SRAM' for most ARM platforms) and not assuming 32-bit
pointers (so the test works on RV64).

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2022-08-24 10:08:06 +02:00
Peter Marheine c30833da3a arch: move CODE_DATA_RELOCATION to top level
Support for CODE_DATA_RELOCATION is not inherently limited to ARM, so
move the Kconfig definition to top-level so it can be used by other
architectures. Since support is opt-in (requiring linker script
support), add a helper symbol enabled by architecture config that gates
whether CODE_DATA_RELOCATION is available instead of listing all
supported systems inline.

Signed-off-by: Peter Marheine <pmarheine@chromium.org>
2022-08-24 10:08:06 +02:00
Carlo Caione 4806e1087e cache: Fix cache API calling from userspace
When a cache API function is called from userspace, this results on
ARM64 in an OOPS (bad syscall error). This is due to at least two
different factors:

- the location of the cache handlers is preventing the linker from
  actually finding the handlers
- specifically for ARM64 and ARC some cache handling functions are not
  implemented (when userspace is not used the compiler simply optimizes
  out these calls)

Fix the problem by:

- moving the userspace cache handlers to their logical and proper
  location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-23 10:14:17 +02:00
Carlo Caione e05c4b0a92 s2ram: Deal with system off failure
Some platforms can cancel powering off until the very last moment
(for example if an IRQ is received). Deal with this kind of
failure.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-19 12:10:25 +02:00
Evgeniy Paltsev 6ce3c531d8 ARC: ARcv3: 64bit: manage accumulator reg properly
In case of ARCv3 64 bit we have only one 64-bit accumulator
register instead of a register pair, so fix up the register
save & restore code.

While we are at it, also make the ARC_HAS_ACCL_REGS option (which
controls accumulator reg/regs save & restore) default
for HS5x and HS6x as well - as it should be.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2022-08-19 12:09:37 +02:00
Gerard Marull-Paretas e0125d04af devices: constify statically initialized device pointers
It is frequent to find variable definitions like this:

```c
static const struct device *dev = DEVICE_DT_GET(...)
```

That is, module level variables that are statically initialized with a
device reference. Such value is, in most cases, never changed meaning
the variable can also be declared as const (immutable). This patch
constifies all such cases.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-19 11:51:26 +02:00
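The before/after pattern, using a hypothetical `uart0` devicetree node:

```c
#include <zephyr/device.h>

/* Before: the pointed-to device is const, but the pointer is mutable. */
static const struct device *dev_mutable = DEVICE_DT_GET(DT_NODELABEL(uart0));

/* After: the never-reassigned pointer is itself also const. */
static const struct device *const dev = DEVICE_DT_GET(DT_NODELABEL(uart0));
```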
Andy Ross 02b23f3733 arch/posix: Add MemorySanitizer support
Wire this up the same way ASAN works.  Right now it's supported only by
recent clang versions (not gcc), and only in 64 bit mode.  But it's
capable of detecting uninitialized data reads, which ASAN is not.

This support is wired into the sys_heap (and thus k_heap/k_malloc)
layers, allowing detection of heap misuse like use-after-free.  Note
that there is one false negative lurking: due to complexity, in the
case where a sys_heap_realloc() call is able to shrink memory in
place, the now-unused suffix is not marked uninitialized immediately,
making it impossible to detect use-after-free of those particular
bytes.  But the system will recover cleanly the next time the memory
gets allocated.

Also no attempt was made to integrate this handling into the newlib or
picolibc allocators, though that should hopefully be possible via
similar means.

Signed-off-by: Andy Ross <andyross@google.com>
2022-08-19 08:30:01 +02:00
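A minimal example of the bug class MSan catches but ASAN cannot:

```c
/* MSan reports a use-of-uninitialized-value here; ASAN stays silent
 * because no invalid address is ever dereferenced. */
int read_uninitialized(void)
{
	int v[4]; /* deliberately never written */

	return v[2];
}
```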
Andy Ross 74cc534758 cmake: Update CONFIG_ASAN support
This had bitrotten a bit, and didn't build as shipped.  Current
libasan implementations want -fsanitize=address passed as a linker
argument too.  We have grown a "lld" linker variant that needs the
same cmake treatment as the "ld" binutils one, but never got it.  But
the various flags had been cut/pasted around to different places, with
slightly different forms.  That's really sort of a mess, as sanitizer
support only ever worked with host toolchains for native_posix
(and AFAICT no one anywhere has made this work on cross compilers in
an embedded environment).  And the separate "gcc" vs. "llvm" layers
were silly, as there has only ever been one API for this feature (from
LLVM, then picked up compatibly by gcc).

Pull this stuff out and just do it in one place in the posix arch for
simplicity.

Also recent sanitizers are trying to add instrumentation padding
around data that we use linker trickery to pack tightly
(c.f. SYS_INIT, STRUCT_SECTION_ITERABLE) and we need a way
("__noasan") to turn that off.  Actually for gcc, it was enough to
just make the records const (already true for most of them, except a
native_posix init struct), but clang apparently isn't smart enough.

Finally, add an ASAN_RECOVER kconfig that enables the use of
"halt_on_error=0" in $ASAN_OPTIONS, which continues execution past the
first error.

Signed-off-by: Andy Ross <andyross@google.com>
2022-08-19 08:30:01 +02:00
Torsten Rasmussen 35263386f0 kconfig: change $(ARCH_DIR) to arch/
Changing $(ARCH_DIR)/common/Kconfig to arch/common/Kconfig.

The use of ARCH_DIR at this place is wrong, as it suddenly requires out
of tree archs to support a common/Kconfig file, which may make no sense
to them.

If an out of tree arch wants to place common Kconfig code in a common
Kconfig file, that's their choice and they should source such file
themselves.

Instead just source the Zephyr arch common file directly.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2022-08-18 14:29:14 +02:00
Carlo Caione 27fcef082d arch: x86: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Carlo Caione 4932f92457 arch: arc: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Carlo Caione 31d65d63f6 arch: arm64: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Carlo Caione 710e7f24fe arch: arm: Fix cache-related Kconfig symbols
Switch to the new cache-related Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Carlo Caione ae82071ae4 arch: Rework cache-related Kconfig symbols
We now have:

- CPU_HAS_{D,I}CACHE: when the CPU has support for d-cache and i-cache

- {D,I}CACHE: to enable / disable d-cache and i-cache

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-08-18 11:30:49 +00:00
Chris Coleman 443f1cb58c arch: arm: aarch32: cortex_m: fault: Prevent BusFault from HardFault
A Cortex-M BusFault often arises from the execution of a function
pointer that got corrupted.

The Zephyr Cortex-M fault handler de-references the `$pc` in
`z_arm_is_synchronous_svc()` to determine if the fault was due to a
kernel oops (ARCH_EXCEPT). This can cause a BusFault if the pc itself
was corrupt. A BusFault from a HardFault will trigger ARM Cortex-M
"Lockup" preventing the Zephyr fault handler from running to
completion. This in turn, results in no fault handling information
getting dumped by the Zephyr fault handler.

To fix the issue, we can simply set the `CCR.BFHFNMIGN` bit prior to
the instruction address dereference which will cause the processor to
ignore the BusFault and return a value of 0x0 instead of entering
lockup. After the operation is complete, we clear `CCR.BFHFNMIGN` as
it would be unexpected for any other code in the fault handler to
trigger a fault.

The issue can be reproduced programmatically with:

```
  void (*unaligned_func)(void) = (void (*)(void))0x50000001;
  unaligned_func();
```

I bumped into this problem while debugging an issue on the nRF9160DK
(`west build --board nrf9160dk_nrf9160ns`) and confirmed that after
making this change I now see the full fault handler print:

```
[00:00:45.582,214] <err> os: Exception occurred in Secure State
[00:00:45.582,244] <err> os: ***** HARD FAULT *****
[...]
[00:00:45.583,984] <err> os: Current thread: 0x2000d340 (shell_uart)
[00:00:45.829,498] <err> fatal_error: Resetting system
```

Signed-off-by: Chris Coleman <chris@memfault.com>
2022-08-10 11:59:38 +02:00
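A hedged sketch of the guard sequence described above, assuming standard CMSIS symbols (SCB, SCB_CCR_BFHFNMIGN_Msk) are in scope:

```c
#include <stdint.h>

uint32_t probe_word(uintptr_t pc)
{
	uint32_t value;

	SCB->CCR |= SCB_CCR_BFHFNMIGN_Msk;  /* BusFaults now read back 0x0 */
	__DSB();
	__ISB();
	value = *(volatile uint32_t *)pc;   /* safe even if pc is corrupt */
	SCB->CCR &= ~SCB_CCR_BFHFNMIGN_Msk; /* restore normal faulting */
	__DSB();
	__ISB();
	return value;
}
```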
Joakim Andersson f29c53dabf arch: arm: Allow enabling FPU hard ABI with TF-M
Allow enabling FPU with TF-M with the following limitations:
- Only IPC mode is supported by TF-M.
- Disallow FPU hard ABI when building the NS application, the TF-M build
system does not pass the flags correctly to all dependencies.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2022-08-10 11:59:19 +02:00
Stephanos Ioannidis 7751fbca44 arch: riscv: Align semihost_exec function at 16-byte boundary
QEMU requires that the semihosting trap instruction sequence, which
consists of three uncompressed instructions, lie in the same page, and
refuses to interpret the trap sequence if these instructions are placed
across two different pages.

This commit adds 16-byte alignment requirement to the `semihost_exec`
function, which occupies 12 bytes, to ensure that the three trap
sequence instructions in this function are never placed across two
different pages.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-08-08 10:52:34 +02:00
Flavio Ceolin b507365b46 arch: x86: Fix wrong indentation
Wrong indentation in z_x86_prep_c.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-08-07 14:27:56 +01:00
Gerard Marull-Paretas 736a1a9113 soc: riscv: remove usage of SOC_ERET
All SOC_ERET definitions expand to the mret instruction (used to return
from a trap: exception or interrupt). The 'eret' instruction existed
in previous RISC-V privileged specs, but it doesn't seem to be used in
Zephyr (ref. RISC-V Privileged Architectures 3.2.2).

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-04 13:44:48 +02:00
Dat Nguyen Duy 8e55e59c59 arch: introduce config DCLS
Some processors support the Dual-redundant Core Lock-step
(DCLS) topology, but the processor can still be run in
split-lock mode (by default or changed at flash time).
So, introduce a config DCLS that is enabled by default if
config CPU_HAS_DCLS is set; it should be disabled if the
processor is used in split-lock mode.

Signed-off-by: Dat Nguyen Duy <dat.nguyenduy@nxp.com>
2022-08-04 12:51:25 +09:00
Gerard Marull-Paretas 92b855f9de arch: arc: remove unused <soc.h>
Header was not used, so remove it.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-03 07:46:14 -04:00
Gerard Marull-Paretas b2a1eeb6ac soc: arc: define ICI in DT
ICI (Inter-Core Interrupt Unit) interrupts and priorities were hardcoded
in C files. This patch moves this information to Devicetree and updates
code to make use of it.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-08-03 07:46:14 -04:00
Julius Barendt 42da90f6bf SPARC: reduce z_thread_entry_wrapper
Transfer the entry point and initial parameters in the callee_saved
struct rather than on the stack. This saves 48 bytes of stack per thread
and simplifies the logic.

Signed-off-by: Julius Barendt <julius.barendt@gaisler.com>
2022-08-03 12:05:49 +02:00
Hake Huang 2acbf01ff7 arch: arm: call z_early_memset instead memset directly
Change to call z_early_memset instead of memset so that memset
can be relocated.

Signed-off-by: Hake Huang <hake.huang@oss.nxp.com>
2022-08-01 18:09:28 +01:00
Manuel Arguelles b64d99091b arm: mpu: dsb after writing to SCTLR on MPU disable
Execute data and instruction sync barriers after writing to SCTLR
to disable the MPU, to ensure the registers are set before
proceeding and that the new changes are seen by the instructions
that follow.

Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
2022-07-26 11:09:42 +00:00
Manuel Arguelles a189e93a44 arm: mpu: dsb after writing to SCTLR on MPU enable
Execute data and instruction sync barriers after writing to SCTLR
to enable the MPU, to ensure the registers are set before
proceeding and that the new changes are seen by the instructions
that follow.

Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
2022-07-26 11:09:42 +00:00
Andy Ross 910c96b7d8 intel_adsp: meteorlake: Initialize stack flush pointer SR
The simulator seems to drop garbage addresses (somewhere in the ROM it
looks like) into this SR at arbitrary times.  I don't know if this is
a hardware exception handler that we can't turn off, or a simulator
bug, or what.  But our code that assumes it will be cleared to zero or
valid is breaking.  Set it every time in every context switch for now
pending someone figuring out what's going wrong.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-25 16:00:22 -04:00
Ryan McClelland 1cf8de4b40 arch: arm: cache: fix undefined references to cmsis
When compiling OpenAMP with Zephyr Cache Management, undefined references
are listed for all functions called within the cache management code.

Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
2022-07-25 09:40:32 +02:00
Anas Nashif 01438a1998 intel_adsp: move imr configs to headers
Move those defines and values back to headers. Kconfig is not a good
place for this; later this should move to DTS.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-21 17:55:41 -04:00
Benjamin Björnsson 386487acd8 arch: xtensa: core: include: Update header to use guard macros
Remove usage of pragma once for consistency across all headers.

Signed-off-by: Benjamin Björnsson <benjamin.bjornsson@gmail.com>
2022-07-20 13:39:23 -05:00
Anas Nashif 7d799fdff0 kconfig: guard MPU logging macros
MPU logging Kconfigs should only appear when MPU is enabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-20 18:28:43 +02:00
Simon Hein b5522fffbc arch: comply to coding guidelines MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
Use comparisons with NULL instead of implicitly testing pointers.
Use comparisons with NUL instead of implicitly testing plain chars.

This commit is a subset of the original auditable-branch commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a

Signed-off-by: Simon Hein <SHein@baumer.com>
2022-07-20 09:28:38 -05:00
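Representative rewrites under the rule, in a self-contained sketch:

```c
#include <stdbool.h>
#include <stddef.h>

static size_t count_tokens(const char *s, const int *flag)
{
	size_t n = 0;

	if (flag != NULL) {      /* was: if (flag) */
		n++;
	}
	while (*s != '\0') {     /* was: while (*s) */
		s++;
		n++;
	}
	do {
		n++;
	} while (false);         /* was: while (0) */

	return n;
}
```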
Evgeniy Paltsev 1bc2cb7fd7 ARC: fix SMP race in ASM ARC interrupt handling code
In the interrupt handler code we don't save the full current task
context on the stack (we don't save callee regs) before the
z_get_next_switch_handle() call, but we pass _current to it, so
z_get_next_switch_handle saves the current task to switch_handle. This
means that this CPU's current task can be picked up by another CPU
before we have fully stored its context on this CPU.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2022-07-20 09:26:24 -05:00
Fabio Baltieri 55b243e124 test,arch: fix few odd suffix include paths
Fix some more legacy include paths found in files with unusual suffixes.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-07-18 14:44:47 -04:00
Tomislav Milkovic 0fe2c1fe90 everywhere: Fix legacy include paths
Any project with Kconfig option CONFIG_LEGACY_INCLUDE_PATH set to n
couldn't be built because some files were missing the zephyr/ prefix
in their includes.
Re-run the migrate_includes.py script to fix all legacy include paths.

Signed-off-by: Tomislav Milkovic <milkovic@byte-lab.com>
2022-07-18 16:16:47 +00:00
Tobias Röhmel 1f7847eaad arch: arm: cortex_r: Use spsr_cxsf instead of spsr_hyp
The use of spsr_hyp is "UNPREDICTABLE" for the ARM Cortex-R52.
Some implementations choose to implement the behavior, but it
should not be assumed.
Fixes #47330

Signed-off-by: Tobias Röhmel <tobias.roehmel@rwth-aachen.de>
2022-07-18 13:25:26 +00:00
Gerard Marull-Paretas f400c94adf arch: arm: aarch32: cortex_m: fault: use CMSIS CFSR defines
We can use definitions provided by "standard CMSIS" to access
MEMFAULT/BUSFAULT/USGFAULT fields in CFSR.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-07-15 11:51:12 +00:00
Johann Fischer 3c971307dc arch/kernel/soc/samples: use unsigned int for irq_lock()
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci

Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
2022-07-14 14:37:13 -05:00
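The resulting call-site pattern:

```c
#include <zephyr/kernel.h>

void bump(volatile int *counter)
{
	unsigned int key = irq_lock(); /* key is unsigned int, not int */

	(*counter)++;
	irq_unlock(key);
}
```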
Anas Nashif 98ab67d7dc scripts: move user_wordsize.py to scripts/build/user_wordsize.py
Move scripts needed by the build system and not designed to be run
individually or standalone into the build subfolder.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-12 10:03:45 +02:00
Carlo Caione dd0bf0e59a riscv: Disable IRQ_VECTOR_TABLE_JUMP_BY_CODE for CLIC
Quoting from the SiFive Interrupt Cookbook [0]

  CLIC vectored mode has a similar concept to CLINT vectored mode, where
  an interrupt vector table is used for specific interrupts. However, in
  CLIC vectored mode, the handler table contains the address of the
  interrupt handler instead of an opcode containing a jump instruction.
  When an interrupt occurs in CLIC vectored mode, the address of the
  handler entry from the vector table is loaded and then jumped to in
  hardware

So, when CLIC is present we must use IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS
instead of IRQ_VECTOR_TABLE_JUMP_BY_CODE.

[0] https://starfivetech.com/uploads/sifive-interrupt-cookbook-v1p2.pdf

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-12 09:54:13 +02:00
Jamie Iles 6868058c03 arch: arm: cache: Add cache maintenance functions
This commit adds icache and dcache maintenance functions
for aarch32.

Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Signed-off-by: Dave Aldridge <quic_daldridg@quicinc.com>
2022-07-11 16:03:31 +00:00
Carlo Caione 0ed637a7b6 arch: cortex-m: Enable support for S2RAM
Enable S2RAM for Cortex-M hooking up the provided API functions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-11 15:26:26 +02:00
Carlo Caione 1e74f1bff5 arch: Introduce S2RAM interface
Add a new API used by arch to implement suspend-to-RAM (S2RAM).

The API is composed of a single function to save the CPU context on
suspend.

A CPU context is the arch-specific set of registers that must be
preserved on power-off (in retained RAM) to be able to resume the
execution from the point it was suspended without going through the
whole kernel startup stage.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-11 15:26:26 +02:00
Manuel Arguelles 354254ff2b arch: arm: aarch32: mpu: fix is in region check
Buffer size must be decreased by one when non-zero to calculate the
right end address, and this must be checked for overflows.

Variables for region limit renamed for clarity since they may be
understood as the raw register values.

Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
2022-07-11 11:17:02 +02:00
Julien Massot ddcc5fb28d arch: arm: aarch32: add ARMv8-R MPU support
ARMv8-R aarch32 processor has support for
ARM PMSAv8-32. To add support for ARMv8-R we reuse the
ARMv8-M effort and change access to the different registers
such as rbar, rlar, mair, prselr.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
Signed-off-by: Manuel Arguelles <manuel.arguelles@nxp.com>
2022-07-11 11:17:02 +02:00
Jamie Iles dbc6f6a882 arch: arm64: initialize IRQ stack for CONFIG_INIT_STACKS
When CONFIG_INIT_STACKS is enabled all stacks should be filled with 0xaa
so that the thread analyzer can measure stack utilization, but the IRQ
stack was not filled, and so `kernel stacks` on the shell would show
that the stack had been fully used, implying an IRQ stack overflow
regardless of the IRQ stack size.

Fill the IRQ stack before it gets used so that we can have precise usage
reports.

Signed-off-by: Jamie Iles <quic_jiles@quicinc.com>
Signed-off-by: Dave Aldridge <quic_daldridg@quicinc.com>
2022-07-08 19:59:24 +00:00
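A sketch of the fill, assuming the kernel's 0xAA pattern and stack macros (the helper itself is hypothetical):

```c
#include <string.h>
#include <zephyr/kernel.h>

K_KERNEL_STACK_ARRAY_DECLARE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
			     CONFIG_ISR_STACK_SIZE);

void irq_stack_fill(int cpu)
{
	(void)memset(Z_KERNEL_STACK_BUFFER(z_interrupt_stacks[cpu]), 0xAA,
		     K_KERNEL_STACK_SIZEOF(z_interrupt_stacks[cpu]));
}
```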
Carlo Caione 5a4affdcda gen_isr_tables.py: Move to scripts directory
There is no reason to have this script in a different place than all the
other python scripts. Move it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-07 17:58:34 +00:00
Carlo Caione 0e788b89a6 riscv: Use IRQ vector table for vectored mode
For vectored interrupts use the generated IRQ vector table instead of
relying on a custom-generated table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-07 10:00:20 +02:00
Carlo Caione 86a67faeaa arch: Add support for IRQ vector tables with jump opcodes
The whole mechanism of IRQ table generation is built around the
assumption that the IRQ vector table contains an array of addresses the
PC will be assigned to when the corresponding interrupt is triggered.

While this is correct for the majority of architectures (ARM, RISCV with
CLIC in vectored mode, etc...) this is not valid in general (for example
RISCV with CLINT/HLINT in vectored mode).

In this alternative format for the IRQ vector table, the PC is set by
the hardware to the address of the vector table entry corresponding to
the interrupt ID. From that entry, a subsequent jump to the actual
interrupt service code occurs.

This means that the IRQ vector table contains an opcode that is a jump
instruction to a specific location instead of the address of the
location itself.

This patch is introducing support for this alternative IRQ vector table
format. The user can now select one format or the other by acting on
the IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS or IRQ_VECTOR_TABLE_JUMP_BY_CODE
Kconfig symbols.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-07 10:00:20 +02:00
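A schematic contrast of the two formats (placeholder handler names; the real table is generated):

```c
void isr0(void);
void isr1(void);

/* JUMP_BY_ADDRESS: each slot is data -- the handler's address, which
 * the hardware loads into the PC. */
void (*const vector_table_by_address[])(void) = { isr0, isr1 };

/* JUMP_BY_CODE: each slot is an opcode, e.g. a RISC-V "j isr0" /
 * "j isr1" jump emitted in assembly and executed in place. */
```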
Kevin Townsend 0cc2b37d04 arch: arm: aarch32: Disable FPU with TF-M
Removes the ability to enable the FPU with TF-M -- added in
PR #45906, and which is causing CI failures -- until a more
robust solution can be implemented for FPU support w/TF-M.

Signed-off-by: Kevin Townsend <kevin.townsend@linaro.org>
2022-07-06 11:53:51 -05:00
Anas Nashif a408b56e12 arch: mips: add missing braces to single line if statements
Following zephyr's style guideline, all if statements, including single
line statements shall have braces.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-06 11:00:45 -04:00
Anas Nashif 516625ed6a arch: arm64: add missing braces to single line if statements
Following zephyr's style guideline, all if statements, including single
line statements shall have braces.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-07-06 11:00:45 -04:00
Enjia Mai 05147693ca arch: x86: workaround for EFI call return with interrupt enabled
The EFI console output call returns with interrupts enabled; this is a
firmware bug. An earlier workaround disabled interrupts again right
after the call returned, but in some cases an interrupt can still
happen within the EFI call context. If such an interrupt is handled, a
nested printk call will re-enter the EFI code, or a swap might happen.
This is the suggested solution applied for EFI console output:

1. Skip the printk call when it is called in interrupt context.
2. Disable scheduling during the EFI call window.

Signed-off-by: Enjia Mai <enjia.mai@intel.com>
2022-07-05 16:52:32 -04:00
Enjia Mai 89a9eab652 drivers: console: add a minimal EFI console driver to support printf
Add a minimal EFI console driver to support printf; this console driver
only supports console output. Without it, printf will not work.

Signed-off-by: Enjia Mai <enjia.mai@intel.com>
2022-07-05 16:52:32 -04:00
Carlo Caione 7a11d883cc riscv: Introduce RISCV_ALWAYS_SWITCH_THROUGH_ECALL
Some early RISC-V SoCs have a problem when an `mret` instruction is used
outside a trap handler.

After the latest Zephyr RISC-V huge rework, the arch_switch code is
indeed calling `mret` when not in handler mode, breaking some early
RISC-V platforms.

Optionally restore the old behavior by adding a new
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL symbol.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-07-04 18:18:10 +02:00
Keith Packard f2ae48e621 arch/arm64: Enable 'large' code model for large targets
Targets with text or data addresses above the 4GB boundary may need to use
the large code model to ensure relocations in the linker work correctly.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-07-04 15:42:53 +00:00
Nicolas Pitre 83de5b4532 riscv: _isr_wrapper: get rid of the ASSUME_EQUAL() macro
This is really useful only for one case i.e. when testing against zero.
Do that test inline where it is needed and make the rest of the code
independent from the actual numerical value being tested to make code
maintenance easier if/when new cases are added.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-07-04 09:49:16 +02:00
Abramo Bagnara ad8778d019 coding guidelines: comply with MISRA C:2012 Rule 4.1
MISRA C:2012 Rule 4.1 (Octal and hexadecimal escape sequences shall be
terminated.)

Use string literal concatenation to properly terminate hexadecimal
escape sequences.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
Signed-off-by: Simon Hein <SHein@baumer.com>
2022-06-30 19:51:59 -04:00
Abramo Bagnara 8521b43546 coding guidelines: comply with MISRA C:2012 Rule 21.13
MISRA C:2012 Rule 21.13 (Any value passed to a function in <ctype.h>
shall be representable as an unsigned char or be the value EOF).

Functions in <ctype.h> have undefined behavior if they are called with
any other value. Callers affected by this change are not prepared to
handle EOF anyway. The addition of these casts avoids the issue
and does not result in any performance penalty.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
Signed-off-by: Simon Hein <SHein@baumer.com>
2022-06-30 17:34:28 -04:00
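The cast pattern added by the change:

```c
#include <ctype.h>

int is_digit_char(char c)
{
	/* Plain char may be signed; values that are negative and not EOF
	 * make the <ctype.h> call undefined, so pass an unsigned char. */
	return isdigit((unsigned char)c);
}
```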
Joakim Andersson cb32d8e8e9 modules: tfm: Allow enabling FPU in the application with TF-M enabled
Allow the application to enable the FPU when TF-M has been enabled.
Pass the correct compilation flags according to the TF-M integration
guide.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2022-06-29 14:45:39 +00:00
Eugene Cohen d903333422 arch: arm64: enable single thread support config
Enable single-threaded support for the arm64 architecture.

This mode of execution is supported on an SoC under
development and is validated regularly.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:55 +02:00
Eugene Cohen 1f93ece43d arch: arm64: program TG[1] in mmu init
In performing a double check of the Zephyr arm64 MMU config
against edk2, a difference in the programming of the
Translation Control Register (TCR) was found.  TCR.TG[1]
should be set to address Cortex-A57 erratum 822227:

"Using unsupported 16K translation granules might cause
Cortex-A57 to incorrectly trigger a domain fault"

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:33 +02:00
Eugene Cohen b84ab912af arch: arm64: define A55 core
Define a CPU_CORTEX_A55 configuration and align the gcc
cpu type accordingly when selected.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-29 10:27:19 +02:00
Carlo Caione f943ae1156 arch: Use a more sane ALIGN value
By default ARCH_IRQ_VECTOR_TABLE_ALIGN and ARCH_SW_ISR_TABLE_ALIGN are
set to 0. Use a more appropriate value.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-28 12:29:42 +02:00
Carlo Caione 219d5b5adb arm: vector_table: Automatically place the IRQ vector table
Instead of using a custom linker script, rely on the automatic placement
of the IRQ vector table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-28 12:29:42 +02:00
Carlo Caione b07907057b arc: vector_table: Automatically place the IRQ vector table
Instead of using a custom linker script, rely on the automatic placement
of the IRQ vector table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-28 12:29:42 +02:00
Carlo Caione 3a48365bab irq: Fix IRQ vector table relocation
The generation of the software ISR table and the IRQ vector table
(respectively generated by CONFIG_GEN_SW_ISR_TABLE and
CONFIG_GEN_IRQ_VECTOR_TABLE) should (in theory) go through three stages:

1. A placeholder table is generated in arch/common/isr_tables.c and
   placed in an orphaned .gnu.linkonce.{irq_vector_table, sw_isr_table}
   section

2. The real table is generated by arch/common/gen_isr_tables.py (creating
   the build/zephyr/isr_tables.c file)

3. The real table is un-orphaned by moving it in a proper section with a
   proper alignment

While all the steps are done automatically for the software ISR table,
for the IRQ vector table each architecture must take care of modifying
its own linker script to place the generated IRQ vector table somewhere
(basically step 3 is missing).

This is currently only done for 2 architectures: Cortex-M (ARMv7) and
ARC. But when another architecture tries to use the IRQ vector table,
the linker complains about that. For example:

  Linking C executable zephyr/zephyr.elf
  riscv64-zephyr-elf/bin/ld.bfd: warning: orphan section
    `.gnu.linkonce.irq_vector_table' from
    `zephyr/CMakeFiles/zephyr_final.dir/isr_tables.c.obj' being placed in
    section `.gnu.linkonce.irq_vector_table'

In this patch we introduce a new CONFIG_ARCH_IRQ_VECTOR_TABLE_ALIGN to
support the architectures requiring a special alignment for the IRQ
vector table, and we also introduce a way to automatically place the
IRQ vector table in the same way it is done for the software ISR
table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-28 12:29:42 +02:00
Eugene Cohen 434e748cbb arch: arm64: add WAIT_AT_RESET_VECTOR config
On platforms where reset vector catch is not possible
it is useful to have a compile-time option to spin
at the reset vector allowing a debugger to be attached
and then to manually resume execution.

Define a config option for arm64 to spin at the
reset vector so a debugger can be attached.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-06-28 12:29:17 +02:00
Carlo Caione d6df78e3b0 gen_isr_tables: Cleanup IRQ vector table generation
Under no circumstances can or should the generated IRQ vector table
contain NULL values. This is correctly enforced at generation time by
the gen_isr_tables.py script, making the existence of the ISR_WRAPPER
define useless.

The enforced behaviour is:
- When the ISR software table exists, default to _isr_wrapper
- Otherwise, default to z_irq_spurious

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-24 20:29:20 +02:00
Nicolas Pitre 147728775f riscv: pmp: properly initialize per-thread m-mode/u-mode entry array
Retrieve the pmpaddr value matching the last global PMP slot and add it
to the per-thread m-mode and u-mode entry array. Even if that value is
not written out again on thread context switch, that value can still be
used by set_pmp_entry() to attempt a single-slot TOR mapping with it.

Nicely abstract this with the new z_riscv_pmp_thread_init() where the
PMP_M_MODE(thread) and PMP_U_MODE(thread) argument generators can be
used.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-06-23 15:56:00 -05:00
Lauren Murphy 318e6db239 debug: coredump: add xtensa intel adsp, support toolchains
Adds compatibility with Intel ADSP GDB from Zephyr SDK and
from Cadence toolchain to coredump_gdbserver.py.

Adds CAVS 15-25 (APL) register definitions. Implements
handle_register_single_read_packet to serve ADSP GDB
p packets.

Prevents BSA from changing between stack dump printout
and coredump by taking lock. Observed to be necessary for
accurate results on slower simulated platforms.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2022-06-23 15:44:45 -04:00
Lauren Murphy b034711f59 arch: xtensa: implement ARCH_EXCEPT
Triggers CPU exception with illegal instruction when z_except_reason
is called (e.g. in k_panic, k_oops). Creates exception stack frame
for use by coredump. Adds unique cause code for ARCH_EXCEPT. Disables
test case failure for qemu_xtensa.

Without an ARCH_EXCEPT implementation, z_except_reason calls
z_fatal_error directly with a null ESF and bypasses
xtensa_excint1_c's error logging. An ESF is required for a coredump.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2022-06-23 15:44:45 -04:00
Nicolas Pitre b6377ccdd7 riscv: pmp: work around another QEMU bug
A QEMU bug may create bad transient PMP representations causing
false access faults to be reported. Work around it by setting
pmp registers to zero from the update start point to the end
before updating them with new values.

The QEMU fix is here with more details about this bug:
https://lists.gnu.org/archive/html/qemu-devel/2022-06/msg02800.html

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-06-23 13:12:05 -04:00
Nicolas Pitre 00a9634c05 riscv: new TLS-based arch_is_user_context() implementation
This reverts the bulk of commit c8bfc2afda ("riscv: make
arch_is_user_context() SMP compatible") and replaces it with a flag
stored in the thread local storage (TLS) area, therefore making TLS
mandatory for userspace support on RISC-V.

This has many advantages:

- The tp (x4) register is already dedicated by the standard for this
  purpose, making TLS support almost free.

- This is very efficient, requiring only a single instruction to clear
  and 2 instructions to set.

- This makes the SMP case much more efficient. No need for funky
  exception code any longer.

- SMP and non-SMP now use the same implementation making maintenance
  easier.

- The is_user_mode variable no longer requires a dedicated PMP mapping
  and therefore freeing one PMP slot for other purposes.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-06-23 13:12:05 -04:00
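A minimal sketch of the TLS flag approach, reusing the is_user_mode name from the text (the accessor body is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Addressed via the dedicated tp (x4) register, so every thread -- and
 * thus each CPU's current context -- sees its own copy. */
static __thread uint8_t is_user_mode;

bool arch_is_user_context(void)
{
	return is_user_mode != 0;
}
```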
Nicolas Pitre 3f8e326d1a riscv: stop preserving the tp register needlessly
The tp (x4) register is neither caller nor callee saved according to
the RISC-V standard calling convention. It only has to be set on thread
context switching and is otherwise read-only.

To protect the kernel against a possible rogue user thread, the tp is
also re-set on exception entry from u-mode.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-06-23 13:12:05 -04:00
Nicolas Pitre 95b18c7f9f riscv: abstract RV32E register access
... and avoid macro duplication.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-06-23 13:12:05 -04:00
Krzysztof Chruscinski 041f0e5379 all: logging: Remove log_strdup function
Logging v1 has been removed and the log_strdup wrapper function is no
longer needed. Remove the function and its uses in the tree.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2022-06-23 13:42:23 +02:00
Abramo Bagnara d1d5acd2cd coding guidelines: comply with MISRA C:2012 Rule 8.2
MISRA C:2012 Rule 8.2 (Function types shall be in prototype form with
named parameters.)

Added missing parameter names.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-06-22 17:17:39 -04:00
Carlo Caione 741b9dc65d riscv: Rename __irq_wrapper to _isr_wrapper
For some reason RISCV is the only arch where the vector table entry is
called __irq_wrapper instead of _isr_wrapper. This is not only a
cosmetic change: Zephyr expects the common ISR handler to be called
_isr_wrapper (for example when generating the IRQ vector table).

Change it.

find ./ -type f -exec sed -i 's/__irq_wrapper/_isr_wrapper/g' {} \;

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-21 20:27:20 -04:00
Stephanos Ioannidis 0ff1e05486 arch: arm: Migrate to K_KERNEL_STACK_ARRAY_DECLARE
This commit updates all deprecated `K_KERNEL_STACK_ARRAY_EXTERN` macro
usages to use the `K_KERNEL_STACK_ARRAY_DECLARE` macro instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-20 10:25:52 +02:00
Stephanos Ioannidis 19ba592f07 global: Correct extern K_THREAD_STACK_DEFINE usage
This commit corrects all `extern K_THREAD_STACK_DEFINE` macro usages
to use the `K_THREAD_STACK_DECLARE` macro instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-20 10:25:52 +02:00
Stephanos Ioannidis 33f87408c4 global: Correct extern K_KERNEL_STACK_ARRAY_DEFINE usage
This commit corrects all `extern K_KERNEL_STACK_ARRAY_DEFINE` macro
usages to use the `K_KERNEL_STACK_ARRAY_DECLARE` macro instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-20 10:25:52 +02:00
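The corrected declaration pattern, using the kernel's interrupt stacks as an example:

```c
#include <zephyr/kernel.h>

/* was: extern K_KERNEL_STACK_ARRAY_DEFINE(z_interrupt_stacks, ...); */
K_KERNEL_STACK_ARRAY_DECLARE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
			     CONFIG_ISR_STACK_SIZE);
```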
Stephanos Ioannidis 7d27bd0b85 arch: arm64: Disable infinite recursion warning for discard_table
This commit selectively disables the infinite recursion warning
(`-Winfinite-recursion`), which may be reported by GCC 12 and above,
for the `discard_table` function because no actual infinite recursion
will occur under normal circumstances.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-16 16:02:23 -04:00
Keith Packard 1c2f3c4cef arch/xtensa: Mark 'exit' with CODE_UNREACHABLE
gcc in 'hosted' mode checks the implementation of 'exit' to make sure it
doesn't return.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-06-14 01:50:36 +09:00
Carlo Caione 4d7d784d1e arm64: mmu: Support userspace memory mapping
arch_mem_map() on ARM64 currently does not support the K_MEM_PERM_USER
parameter, so we cannot allocate userspace-accessible memory using the
memory helpers. Fix this.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-10 09:48:23 +02:00
Carlo Caione 673f41e708 riscv: Introduce support for RV32E
Introduce support for RV32E.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-08 18:50:22 +09:00
Carlo Caione 737dccec1a riscv: Move syscall parameter from a7 to t0
To prepare for RV32E support.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-08 18:50:22 +09:00
Andy Ross 12eda76939 arch/xtensa: Add CCOUNT-based timing API
Expose the Xtensa CCOUNT timing register (the lowest level CPU cycle
counter) using the arch_timing_*() API.

This is the simplest possible way to get this working.  Future work
might focus on moving the rate configuration into devicetree in a
standard way, integrating with the platform clock driver on intel_adsp
such that the reported cycle rate tracks runtime changes (though IIRC
this is not a SOF requirement), and adding better test coverage to the
timing layer, which right now isn't exercised anywhere but in
benchmarks.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-06-07 19:04:42 +02:00
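Sketch of typical usage through the portable timing layer that fronts arch_timing_*():

```c
#include <zephyr/sys/printk.h>
#include <zephyr/timing/timing.h>

void measure_block(void)
{
	timing_t start, end;

	timing_init();
	timing_start();
	start = timing_counter_get(); /* reads CCOUNT on Xtensa */
	/* ... code under measurement ... */
	end = timing_counter_get();
	printk("%llu ns\n",
	       timing_cycles_to_ns(timing_cycles_get(&start, &end)));
	timing_stop();
}
```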
Gerard Marull-Paretas 96397b021e arch: arm64: smp: remove redundant soc.h include
<soc.h> has traditionally been used as a proxy to HAL headers,
register definitions, etc. Nowadays, <soc.h> is anarchy. It serves a
different purpose depending on the SoC. In some cases it includes HALs,
in some others it works as a header sink/proxy (for no good reason), as
a register definition when there's no HAL... To make things worse, it is
being included in code that is, in theory, non-SoC specific.

This patch is part of a series intended to improve the situation by
removing <soc.h> usage when not needed, and by eventually removing it.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Gerard Marull-Paretas f465bd22c9 arch: arm64: mpu: remove unnecessary include
<soc.h> was not required.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Gerard Marull-Paretas 93ce49e53f arch: arm: aarch32: mpu: remove redundant soc.h usage
<soc.h> has traditionally been used as a proxy to HAL headers,
register definitions, etc. Nowadays, <soc.h> is anarchy. It serves a
different purpose depending on the SoC. In some cases it includes HALs,
in some others it works as a header sink/proxy (for no good reason), as
a register definition when there's no HAL... To make things worse, it is
being included in code that is, in theory, non-SoC specific.

This patch is part of a series intended to improve the situation by
removing <soc.h> usage when not needed, and by eventually removing it.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Gerard Marull-Paretas f51674ac24 arch: x86: core: early_serial: obtain NS16550 uart base address from DT
The NS16550 UART base address was hardcoded in <soc.h> headers. This
bypasses the console choice defined in Devicetree. Hardcoded hardware
choices must be avoided now that DT is in place.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-05 14:48:40 +02:00
Carlo Caione 3e92f11d1f riscv: Optimize t* registers usage
In preparation for the support of RV32E, optimize the t* register
usage a bit, limiting it to t{0-2}.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-05 14:44:06 +02:00
Carlo Caione 10061efdc4 riscv: Rework and cleanup Kconfig
This patch does several things:

- Core ISA and extension Kconfig symbols have now a formalized name
  (CONFIG_RISCV_ISA_* and CONFIG_RISCV_ISA_EXT_*)

- a new Kconfig.isa file was introduced with the full set of extensions
  currently supported by the v2.2 spec

- a new Kconfig.core file was introduced to host all the RISCV cores
  (currently only E31)

- ISA and extensions settings are moved to SoC configuration files

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-06-05 14:28:42 +02:00
Fabio Baltieri 93f20d7a7a include: add zephyr/ on script generated #include
Fix few script generated #include that needed the zephyr/ prefix.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-05-27 15:20:27 -07:00
Fabio Baltieri e24314f10f include: add more missing zephyr/ prefixes
Adds few missing zephyr/ prefixes to leftover #include statements that
either got added recently or were using double quote format.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-05-27 15:20:27 -07:00
Carles Cufi 56512dae8b arch: riscv: switch: Add a comment on the return of z_riscv_switch
When returning from z_riscv_switch, depending on whether the thread that
has just been swapped in was earlier swapped out synchronously (i.e. via
regular function call) or asynchronously (i.e. via exception/irq) we
will return to arch_switch() or __irq_wrapper respectively. Comment this
fact for clarity.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-05-26 17:15:21 +02:00
Carles Cufi 11da0b6f28 arch: riscv: Remove outdated comment
After the introduction of arch_switch() in #43085, ECALL is no longer
used for context switching by default, so remove the comment stating so.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-05-26 17:15:21 +02:00
Lukasz Majewski f4f9a8291f kconfig: Add CONFIG_DCACHE option
This option is defined by default and explicitly enables the data
cache on a target platform.

Signed-off-by: Lukasz Majewski <lukma@denx.de>
2022-05-24 08:47:20 -07:00
Andy Ross 58eb132d06 arch/xtensa: Fix return context for nested interrupts
The xtensa interrupt return path was forgetting to check the nested
interrupt state and calling into the scheduler to select the context
to which to return, which of course is completely wrong.  We MUST
return to the ISR we interrupted.

In fact in practice this was only visible in the case of a nested
interrupt that causes a context switch, otherwise the "interrupted"
argument just gets returned and things work.  In particular, it can
happen when the nested context is a fatal exception that aborts the
current thread, which is how this was discovered.  The timing required
to see this on live interrupts on real applications is likely to have
been extremely difficult to detect.

Fixes #45779

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-20 12:37:59 +02:00
Nicolas Pitre 1cb557dccf riscv: rationalize PMP related Kconfig options
ARCH_HAS_USERSPACE and ARCH_HAS_STACK_PROTECTION are direct functions
of RISCV_PMP regardless of the SoC.

PMP_STACK_GUARD is a function of HW_STACK_PROTECTION (from
ARCH_HAS_STACK_PROTECTION) and not the other way around.

This allows for tests/kernel/fatal/exception to test protection against
various stack overflows based on the PMP stack guard functionality.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Nicolas Pitre e76fb204db riscv: report stack overflow errors correctly
Add the necessary checks to determine when the stack pointer is
out of bounds.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Nicolas Pitre a4b82ab4fe riscv: fix IRQ stack guard location
_current_cpu->irq_stack is not yet initialized when this is executed on
CPU 0. Also the guard area is outside of CONFIG_ISR_STACK_SIZE now,
i.e. it is within the K_KERNEL_STACK_RESERVED area at the start of
the buffer. So simply use z_interrupt_stacks[] directly instead.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Nicolas Pitre 92409f36de riscv: drop user stack guard area when using separate privileged stacks
A separate privileged stack is used when CONFIG_GEN_PRIV_STACKS=y. The
main stack guard area is no longer needed and can be made available to
the application upon transitioning to user mode. And that's actually
required if we want a naturally aligned power-of-two buffer to let the
PMP map a NAPOT entry on it, which is the whole point of having this
CONFIG_PMP_POWER_OF_TWO_ALIGNMENT option in the first place.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Nicolas Pitre 6051ea7d3c riscv: clarify stack size and alignment parameters
The StackGuard area is used to save the esf and run the exception code
resulting from a StackGuard trap. Size it appropriately.

Remove redundancy, clarify documentation, etc.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Nicolas Pitre 3997f7bed2 riscv: pmp: make PMP debug display more comprehensive
Decoding those values by hand gets tedious.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-05-18 10:54:53 +02:00
Jaxson Han 04caf70bfe arm64: smp: Fix the wrong secondary core stack size
The init stack of the secondary core should use KERNEL_STACK_BUFFER + sz.
Using Z_THREAD_STACK_BUFFER would calculate the wrong stack size.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Jaxson Han 933a8f9d12 arch: arm64: Fix coherence issue of SMP boot code
The current SMP boot code doesn't consider that the cores can boot at
the same time. Possibly, more than one core can boot into the primary
core boot sequence. Fix it by using an atomic operation to make sure
only one core acts as the primary core.

Correspondingly, sgi_raise_ipi should translate the CPU id to an mpidr
value.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Jaxson Han 2f6087ba67 arch: arm64: Fix arm mpu SMP issues
Only the primary core does the dynamic_areas_init.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-05-17 11:45:16 +09:00
Christoph Coenen b3dfc244ad arch: arm: Add support for multiple zero-latency irq priorities
Add the ability to have multiple irq priority levels which are not
masked by irq_lock() by adding CONFIG_ZERO_LATENCY_LEVELS.

If CONFIG_ZERO_LATENCY_LEVELS is set to a value > 1 then multiple zero
latency irqs are reserved by the kernel (and not only one). The priority
of the zero-latency interrupt can be configured by IRQ_CONNECT.

To be backwards compatible, the prio argument in IRQ_CONNECT is still
ignored and the target prio is set to zero if CONFIG_ZERO_LATENCY_LEVELS
is 1 (the default).
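
A minimal usage sketch (the IRQ line and handler here are hypothetical):

  #include <zephyr/irq.h>

  #define MY_IRQ 27 /* hypothetical IRQ line */

  void my_isr(const void *arg)
  {
          /* time-critical handling, not masked by irq_lock() */
  }

  void connect_my_irq(void)
  {
          /* with CONFIG_ZERO_LATENCY_LEVELS > 1, the prio argument
           * (1 here) selects the zero-latency priority level
           */
          IRQ_CONNECT(MY_IRQ, 1, my_isr, NULL, IRQ_ZERO_LATENCY);
          irq_enable(MY_IRQ);
  }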

Implements #45276

Signed-off-by: Christoph Coenen <ccoenen@baumer.com>
2022-05-13 08:38:28 -05:00
Mark Holden df6b8c3cc4 coredump: arm: Capture callee registers during k_panic() / k_oops
Ensure callee registers included in coredump.
Push callee registers onto stack and pass as param to
z_do_kernel_oops for CONFIG_ARMV7_M_ARMV8_M_MAINLINE
when CONFIG_EXTRA_EXCEPTION_INFO enabled.

Signed-off-by: Mark Holden <mholden@fb.com>
2022-05-12 19:03:34 -04:00
Robert Szczepanski 8647e2f63c tracing: riscv: Add missing invoke of sys_trace_isr_exit()
Change suggested by @WealianLiao in #41995.

Signed-off-by: Robert Szczepanski <rszczepanski@antmicro.com>
2022-05-11 12:03:41 -04:00
Evgeniy Paltsev 1b1d328101 ARC: define PROPERTY_OUTPUT_FORMAT for all ARC elf formats
Now we define PROPERTY_OUTPUT_FORMAT (which is used for
binutils) only for ARCv3 32 bit. Let's define it for all
ARC elf formats instead of relying on default values.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2022-05-10 14:12:25 -04:00
Evgeniy Paltsev fa5bfb5880 ARC: ARCv3: MWDT: provide required options for building with mwdt
Provide required compiler/assembler options for building with mwdt
toolchain for ARCv3 64 bit.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2022-05-10 14:12:25 -04:00
Evgeniy Paltsev 48301dde0f ARC: ARCv3: add HS5x support
Add HS5x CPU support - ARCv3 32bit ISA.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2022-05-10 14:12:25 -04:00
Jordan Yates d778d5c711 arch: aarch32: improve very early debugging
Debugger plugins use the `z_sys_post_kernel` variable to detect whether
the kernel is currently running, and hence whether any threads exist. As
this is just a standard variable however, after a reset the initial
value of this variable is whatever it was before reset (true) until the
bss section is zeroed halfway through `z_arm_prep_c`. Debuggers are
therefore unable to differentiate between a normally running application
and the very first stages of the boot process.

Clearing this variable as the first action upon reset allows debuggers
to display the correct thread state after the first 3 instructions have
run.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-05-10 18:36:51 +02:00
Eugene Cohen 816229128d arch/arm64: update gicv3 sre enablement
Fix the writing of ICC_SRE_EL3 to OR in bits, aligning with the
original intent to read-modify-write this register.

Also disable FIQ and IRQ bypass so interrupt delivery
occurs through GIC.  Platforms may choose to override
this behavior in z_arm64_el3_plat_init implementations.

Remove ICC_SRE_EL3 config from viper and qemu since
this is now handled in the arm64 arch core.

Signed-off-by: Eugene Cohen <quic_egmc@quicinc.com>
2022-05-10 09:13:20 +02:00
Gerard Marull-Paretas 45776650c2 arch: gen_isr_tables: migrate to <zephyr/...> include prefix
The gen_isr_tables scripts were not updated to make use of the
<zephyr/...> include prefix; fix this.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 12:45:29 -04:00
Gerard Marull-Paretas 4b91c2d79f asm: update files with <zephyr/...> include prefix
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 12:45:29 -04:00
Gerard Marull-Paretas 16811660ee arch: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
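
The shape of the change, illustrated on a hypothetical file:

  /* before */
  #include <kernel.h>
  #include <arch/cpu.h>

  /* after */
  #include <zephyr/kernel.h>
  #include <zephyr/arch/cpu.h>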

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:57:22 +02:00
Gerard Marull-Paretas bad523d1aa arch: x86: zefi: support multiple include paths
When legacy mode is enabled, Zephyr includes both include/ and
include/zephyr. Allow the zefi.py script to accept multiple include
paths to cover this scenario.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-05 14:26:05 -05:00
Bradley Bolen 88ba97fea4 arch: arm: aarch32: cortex_a_r: Add shared FPU support
This adds lazy floating point context switching.  On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away.  If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored.  If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.

The undefined instruction handler is responsible for saving away the
floating point context if needed.  If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved.  Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Stephanos Ioannidis 80bd814131 arch: arm: cortex_r: Initialise VFP D32 registers for DCLS
This commit updates the Cortex-R reset routine to initialise
(synchronise) the VFP D16-D31 registers when Dual-redundant Core
Lock-step (DCLS) is enabled.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-05-05 12:03:27 +09:00
Bradley Bolen 7f44e28619 arch: arm: aarch32: Create z_arm_floating_point_init() for Cortex-R
This will enable the VFP unit on boot to handle the case where
FPU_SHARING is not enabled.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Bradley Bolen 7c1e399179 arch: arm: aarch32: Create a fpu stack frame
Grouping the FPU registers together will make adding FPU support for
Cortex-A/R easier later.  It provides the ability to get the sizeof and
offsetof FPU registers easier.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Bradley Bolen 3f7162fc07 arch: arm: aarch32: Rearrange exception stack frame
Cortex-A/R use a descending stack frame and the hardware does not help
with the stacking.  This led to some less than desirable workarounds in
the exception code where the basic stack frame was saved twice.
Rearranging the order of the exception stack frame removes that problem
and provides a clearer path to saving CPU context in a fully descending
manner.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Stephanos Ioannidis 5181c61797 arch: arm: Add unified floating-point configuration symbols
This commit adds the unified floating-point configuration symbols for
the ARM architectures.

These configuration symbols allow specification of the floating-point
coprocessors, such as VFP (also known as FP for Cortex-M) and NEON,
for the ARM architectures.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-05-05 12:03:27 +09:00
Flavio Ceolin f5a0d4cd26 arch: xtensa: Optimize cache management for pinned threads
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY we can assume that a
thread will always be executed in a same CPU and consequently skip the
cache invalidation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-05-04 13:46:48 -04:00
Andy Ross e931b7ba47 arch/x86: Use EFI console as default printk handler
Where we have access to a bootstrap UEFI environment, it's productive
to use that console as the default printk handler.  That avoids the
bringup hassle of trying to configure UART settings blindly, as has
been customary.  It also emits nice text to the framebuffer on devices
with no serial port or other debug harness at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-04 11:34:55 +03:00
Nicolas Pitre f51d89df30 riscv: pmp: work around a QEMU bug
The NAPOT mode isn't computed properly in qemu when the full address
range is covered. Let's hardcode the value that the qemu code checks
explicitly until the appropriate fix is applied to qemu itself.

For reference, here's the qemu patch:
https://lists.gnu.org/archive/html/qemu-devel/2022-04/msg00961.html

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre ec9c2ec2d8 riscv: pmp: rename CONFIG_PMP_SLOT
The plural form is clearer.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre 554f24661f riscv: pmp: remove previous implementation
Overall diffstat with the new PMP code in place:

 18 files changed, 866 insertions(+), 1372 deletions(-)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre 2fece49a14 riscv: pmp: switch over to the new implementation
Add the appropriate hooks effectively replacing the old implementation
with the new one.

Also the stackguard wasn't properly enforced especially with the
usermode combination. This is now fixed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre 7a55bda7e1 riscv: pmp: add new usermode support
The idea here is to compute the PMP register set on demand i.e. upon
scheduling in the affected threads, and only if changes occurred.
A simple sequence number is used to stay in sync with the latest update.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre 68b8f0e5ce riscv: pmp: new stackguard implementation
Stackguard uses the PMP to prevent many types of stack overflow by
making any access to the bottom stack area raise a CPU exception. Each
thread has its set of precomputed PMP entries and those are written to
PMP registers at context switch time.
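
For reference, a NAPOT region over a naturally aligned power-of-two area
can be encoded as below; this is a generic sketch of the encoding, not
the exact Zephyr code:

  /* pmpaddr holds address bits [XLEN+1:2]; the trailing-ones pattern
   * encodes the region size (minimum 8 bytes)
   */
  ulong_t napot_addr(ulong_t base, ulong_t size)
  {
          return (base >> 2) | ((size >> 3) - 1);
  }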

This is the code to set it up. It will be connected later.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre 2e66da3bc3 riscv: pmp: new implementation
This is the core code to manage PMP entries with only the global entries
initialisation for now. It is not yet linked into the build.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Evgeniy Paltsev 9ce0d31c33 ARC: SMP: debug: workaround MDB changing debug_select value
The MDB debugger may modify the debug_select and debug_mask registers
on start, so we can't rely on the debug_select reset value.

Let's set correct value on primary CPU without reading initial
value from debug_select.

Internal ID: P10019563-50516

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2022-04-29 12:34:21 +02:00
Keith Packard f623571a73 riscv: Initialize TP register when starting threads
Set TP in exception context so that it gets loaded into the CPU when
first running the thread. Set TP for secondary cores to the related
idle thread's TLS area.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-04-28 11:09:01 +09:00
Keith Packard 1638d4851e arch/arm: Use TPIDRURO on cortex-a too
V7-A also supports TPIDRURO, so go ahead and use that for TLS, enabling
thread local storage for the other ARM architectures.

Add __aeabi_read_tp function in case code was compiled to use that.
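
A minimal sketch of what reading TPIDRURO looks like on AArch32
(illustrative, not necessarily the exact code added here):

  void *__aeabi_read_tp(void)
  {
          void *tp;

          /* TPIDRURO: user read-only thread ID register */
          __asm__ volatile("mrc p15, 0, %0, c13, c0, 3" : "=r" (tp));
          return tp;
  }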

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-04-28 11:09:01 +09:00
Andy Ross 64a3159dee arch/xtensa: Optimize cache management on context switch
Making context switch cache-coherent in SMP is hard.  The
KERNEL_COHERENCE handling was conservatively invalidating the stack
region of a thread that was being switched in.  This was because it
might have (1) run on this CPU in the past, but (2) run most recently
on a different CPU.  In that case we might have stale data still in
our local dcache!

But this has performance impact in the (very common!) case of a thread
being switched out briefly and then back in (e.g. k_sleep() for a
small duration).  It will come back having lost all of its cached
stack context, and will have to fetch all that information back from
shared SRAM!

Treat this by tracking a "last_cpu" for each thread in the arch part
of the thread struct.  If we're coming back to the same CPU we left,
we know we can skip the invalidate.
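
The gist of the check, as a sketch (the invalidation helper name is
illustrative; last_cpu is the field described above):

  /* on switch-in: skip the dcache invalidate if this thread last ran
   * on this very CPU, so its cached stack contents are still valid
   */
  if (thread->arch.last_cpu != _current_cpu->id) {
          cache_invalidate(thread->stack_info.start,
                           thread->stack_info.size);
  }
  thread->arch.last_cpu = _current_cpu->id;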

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-04-27 18:54:10 -04:00
Nicolas Pitre f61b8b8c16 semihosting: fix inline assembly output dependency
Commit d8f186aa4a ("arch: common: semihost: add semihosting
operations") encapsulated semihosting invocation in a per-arch
semihost_exec() function. There is a fixed register variable declaration
for the return value, but this variable is not listed as an output
operand to the respective inline assembly segments, which is an error.
This is not reported as such by gcc and the generated code is still OK
in those particular instances but this is not guaranteed, and clang
does complain about such cases.
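
A generic sketch of the difference, using a dummy instruction (not the
actual Zephyr code):

  register long ret __asm__("r0");

  /* wrong: the compiler doesn't know r0 is written */
  __asm__ volatile("bkpt 0xAB" : : "r" (ret) : "memory");

  /* right: list the fixed register variable as an output operand */
  __asm__ volatile("bkpt 0xAB" : "+r" (ret) : : "memory");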

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-24 19:46:15 +02:00
Anas Nashif 399a0b4b31 debug: generate call graph profile data using gprof
This will generate profile data that can be analyzed using gprof. When
you build the application (currently for native_posix only), after
running the application you will get a file "gmon.out" with the call
graph which can be processed with gprof:

  gprof build/zephyr/zephyr.exe gmon.out > analysis.txt

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-04-22 16:04:08 -04:00
Jordan Yates d8f186aa4a arch: common: semihost: add semihosting operations
Add an API that utilizes the ARM semihosting mechanism to interact with
the host system when a device is being emulated or run under a debugger.

RISCV is implemented in terms of the ARM implementation, and therefore
the ARM definitions cross enough architectures to be defined 'common'.

Functionality is exposed as a separate API instead of syscall
implementations (`_lseek`, `_open`, etc) due to various quirks with
the ARM mechanisms that means function arguments are not standard.

For more information see:
https://developer.arm.com/documentation/dui0471/m/what-is-semihosting-
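
For illustration, on a Cortex-M target the underlying mechanism boils
down to something like this sketch (operation number per the ARM
semihosting spec; the helper name is hypothetical):

  /* raw semihosting call: op in r0, argument in r1, result in r0 */
  static long smh_call(long op, void *arg)
  {
          register long r0 __asm__("r0") = op;
          register void *r1 __asm__("r1") = arg;

          __asm__ volatile("bkpt 0xAB" : "+r" (r0) : "r" (r1) : "memory");
          return r0;
  }

  smh_call(0x04 /* SYS_WRITE0 */, (void *)"hello via semihosting\n");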

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>

2022-04-21 13:04:52 +02:00
Jordan Yates 070422db46 arch: common: dedicated SEMIHOST symbol
Control the usage of semihosting with a dedicated symbol, instead of
implying semihosting from the usage of `SEMIHOST_CONSOLE`. This allows
semihosting to be used without the semihost console.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-04-21 13:04:52 +02:00
Mahesh Mahadevan b2d3fdceff cmake: Add support to add symbols to ramfunc section
This PR allows the user to add symbols to the ramfunc
section. The use for this could be as follows:

zephyr_linker_sources_ifdef(CONFIG_ARCH_HAS_RAMFUNC_SUPPORT
  RAMFUNC_SECTION
  quick_access_code.ld
)

quick_access_code.ld (as shown below) can define additional
symbols to go into the ramfunc section

. = ALIGN(4);
KEEP(*(CodeQuickAccess))
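
On the C side, a function can then be placed in that input section,
e.g. (hypothetical function):

  __attribute__((section("CodeQuickAccess"), noinline))
  void fast_handler(void)
  {
          /* time-critical code running from RAM */
  }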

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2022-04-18 17:24:12 -07:00
Stephanos Ioannidis f9a3f02b86 x86: Initialise FPU regs during thread creation for eager FPU sharing
When "eager FPU sharing" mode is enabled, FPU registers must be
initialised at the time of thread creation because the floating-point
context is always active and no further FPU initialisation is performed
later.

Note that, in case of the "lazy FPU sharing" mode, floating-point
context is inactive by default and the FPU is initialised when the
first floating-point instruction is executed.

Refer to the issue #44902 for more details.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-04-18 17:23:48 -07:00
Ryan McClelland f7ddcd2713 arch: arm: aarch32: initialize FPSCR to reset value for ARMv8.1
With GCC 11 now supporting low overhead branching in ARMv8.1, ASM "LE"
(loop-end) instructions would trigger an INVSTATE hard-fault after
FPSCR was set to 0. This was due to the FPSCR getting a new field in
ARMv8.1. LTPSIZE is now set to it's reset value of Tail predication not
applied.

Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
2022-04-15 10:33:48 -07:00
Ryan McClelland c5b59282d6 arch: arm: aarch32: add Kconfig for arm cortex-m that implements a cache
The Cache is an optional configuration of both the ARM Cortex-M7 and
Cortex-M55. Previously, the code only checked that the CPU was an M7,
rather than whether the CPU was actually built with the cache.

Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
2022-04-14 16:12:03 -05:00
Immo Birnbaum 60ee14db96 arch: arm: aarch32: remove unnecessary "EOF" comments
Remove unnecessary EOF comment lines at the end of each file.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2022-04-14 14:43:52 -05:00
Ederson de Souza c0b7864840 arch/xtensa: Enable backtrace on panic on Intel ADSP platforms
Platform specific functions necessary to enable this feature were
implemented (z_xtensa_ptr_executable() and
z_xtensa_stack_ptr_is_sane()) for Intel ADSP platforms.

Current implementation just ensures stack pointer and program counter
are within relevant areas defined in the linker scripts, without going
too fine grained.
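
Conceptually, the checks are simple bounds tests against linker-provided
symbols, along these lines (a sketch; symbol names are illustrative):

  extern char __text_region_start[], __text_region_end[];

  bool ptr_executable(const void *p)
  {
          return (const char *)p >= __text_region_start &&
                 (const char *)p < __text_region_end;
  }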

The `.iram1` section, used by the backtrace code, was also added to the
Intel ADSP linker script.

Finally, update west manifest to use up-to-date SOF, which contains a
patch to fix build issues related to the linker changes.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-04-14 11:03:40 -04:00
Mark Holden eba9c872b1 coredump: Add callee registers to arm arch block
Add version 2 to coredump arm_arch_block
which includes callee registers

Signed-off-by: Mark Holden <mholden@fb.com>
2022-04-13 13:26:37 -07:00
Mateusz Sierszulski ded324c61d arch: arm: change dependency on CODE_DATA_RELOCATION
This commit changes the CODE_DATA_RELOCATION dependency by adding
CPU_AARCH32_CORTEX_R next to CPU_CORTEX_M.

Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
2022-04-11 10:17:14 +02:00
Bradley Bolen 570c254eda arch: arm: aarch32: ARM_STORE_EXC_RETURN only applies to Cortex-M
Cortex-M code is the only flavor that supports switching between secure
and non-secure state so make sure this kconfig only applies to it.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-04-11 10:16:41 +02:00
Bradley Bolen fd2aab3861 arch: arm: aarch32: Fix when mode offset is defined
Commit a2cfb8431d ("arch: arm: Add code for swapping threads between
secure and non-secure") changed the mode variable in the _thread_arch to
be defined by ARM_STORE_EXC_RETURN or USERSPACE.  The generated offset
define for mode was enabled by FPU_SHARING or USERSPACE.  This broke
Cortex-R with FPU, but with ARM_STORE_EXC_RETURN disabled.  Reconcile
the checks.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-04-11 10:16:41 +02:00
Daniel Leung 7a431dca95 x86: qemu: add a newline after "Booting from ROM.."
Under QEMU and SeaBIOS, everything gets printed immediately
after "Booting from ROM.." as there is no newline.
This prevents parsing QEMU console output for the very first
line where it needs to match from the beginning of the line.
So add a dummy newline here so the next output is at
the beginning of a line.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-04-08 15:48:41 -07:00
Martí Bolívar f433001185 Kconfig: move CONFIG_BOARD to boards/Kconfig
Moving this option to the subdirectory for boards might make it easier
to find, and will keep it next to some other board-related Kconfig
options set in the same file.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-04-08 10:30:54 -07:00
Nicolas Pitre 563a8d11a4 arm64: refer to the link register as "lr" rather than "x30"
In ARM parlance, the subroutine call return address is stored in the
"link register" or simply lr. Refer to it as lr which is clearer than
the anonymous x30 designation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-07 16:31:30 -05:00
Jiafei Pan 227d1ea1bb arm64: mmu: provide more memory mapping types for z_phys_map()
ARM64 supports more memory mapping types for device memory (nGnRnE,
nGnRE, GRE); add support for these mappings to the common OS mapping
API function z_phys_map().
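
A usage sketch (the physical address is hypothetical and the flag name
is assumed from the mapping types listed above):

  uint8_t *regs;

  /* map a device MMIO region with nGnRE attributes */
  z_phys_map(&regs, 0x40000000UL, 0x1000,
             K_MEM_PERM_RW | K_MEM_ARM_DEVICE_nGnRE);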

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2022-04-05 11:17:47 +02:00
Jimmy Brisson 89d0553ca9 cortex-m: Clear pending mpu fault during mpu fault
This is a strange one: The printing code pushes a floating point
register, and is called during the mpu fault. If the floating point
registers are lazily stacked, this fp push can cause another mpu
fault to be pending during the current mpu fault, and tail chained
without returning to PendSV. Since we're already cleaning up the
fp exception reason, we might as well also clean up this pending,
spurious mpu exception.

Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
2022-04-01 09:16:27 -05:00
Jimmy Brisson 35f9a5d715 cortex-m: Abort pending SVC when a thread is killed
If an SVC was pending during the stack overflow, it will run after the
return of the memory manage fault. To the misfortune of the SVC handler,
its invariant, that PSP points to the hardware-stacked context, is no
longer valid. When the user has a k_sys_fatal_error_handler that tries
to kill the thread that caused the stack overflow, this manifests as the
SVC reading the memory of whatever is on the stack after it was adjusted
by the mem manage fault handler, and that leads to unending, spurious
hard faults, locking up the system.

This patch prevents that.

Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
2022-04-01 09:16:27 -05:00
Nathan Krueger 6a5520c626 arch/riscv: Adding KConfig options for 'A' and 'M' RISC-V extensions
New KConfig options for 'A' and 'M' RISC-V extensions have been
added.  These are used to configure the '-march' string used by GCC
to produce a compatible binary for the requested RISC-V variant.
In order to maintain compatibility with all currently defined SoC,
default the options for HW mul / Atomics support to 'y', but allow
them to be overridden for any SoC which does not support these.

I tested this change locally via twister against a few RISC-V platforms
including some 32bit and 64bit. To verify the 4 possibilities of Atomics
& HW Mul: (No, No), (No, Yes), (Yes, No), (Yes, Yes -- current behavior),
I used an out-of-tree GCC (xPack RISC-V GCC) which has multilib support
for rv32i, rv32ia, rv32ima to test against our out-of-tree Intel Nios V/m
processor in HW.  The Zephyr SDK RISCV GCC currently does not contain
multilib support for all variants exposed by these new KConfig options.

Signed-off-by: Nathan Krueger <nathan.krueger@intel.com>
2022-03-22 18:00:32 -04:00
Tomasz Bursztyka 1d3dbd49e1 arch/x86: Initialize early serial a tiny bit later
In the case of EFI, efi_init must be called before initializing early
serial: if the latter has X86_SOC_EARLY_SERIAL_PCIDEV defined, its pcie
access will try to initialise pcie mmio access, which in turn will try
to find an ACPI table. At that point, calling the ACPI API prior to
initializing EFI means the RSDP lookup has already happened... and since
it cannot be found without EFI being initialized first, ACPI is then
broken.

Just move early serial initialization to after multiboot/EFI setup.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka abf079ce86 arch/x86: Get ACPI RSDP from EFI
EFI may have provided that pointer already, so let's get it there first.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka f78a4ab7cf zefi: Add an EFI boot argument passing ACPI RSDP info
If such a table pointer is present in the EFI system table, this will
speed up ACPI initialization later on.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka b51a5d3d7c zefi: Adding status code to header
This will be useful when calling EFI functions.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka c7090c5ee6 zefi: Expose EFI configuration lookup function
This will be useful to get various information such as the ACPI table
pointer, etc.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka 27df16ea8e arch/x86: Prepare EFI support
As for Multiboot, let prep_c be aware of EFI boot.
In the future, EFI will pass an argument to it.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka f19f9db8df arch/x86: Expand cpu boot argument
In order to determine at runtime whether it booted via Multiboot or EFI,
let's introduce a dedicated x86 cpu argument structure which holds the
type and the actual pointer delivered by the method (multiboot_info or
efi_system_table).
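
Roughly along these lines (field and constant names are illustrative):

  struct x86_boot_arg {
          int boot_type;  /* e.g. MULTIBOOT_INFO or EFI_SYSTEM_TABLE */
          void *arg;      /* struct multiboot_info * or efi_system_table * */
  };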

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka 9fb80d04b4 arch/x86: Expose multiboot init function even when disabled
Just a dummy function will do.

When enabled, the code does not need the #ifdef, as cmake already
handles this properly. The wrong CONFIG_ option was also being used
there anyway.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka dd7e012458 zefi: Improve generic EFI header
This will prove useful for better EFI support.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Jaxson Han a7a8a64e9c arch32: Fix incorrect exc_exit sequence
The incorrect sequence makes it impossible to abort a thread in the
ISR context. The following test case failed:
tests/kernel/fatal/exception/kernel.common.stack_sentinel.

The stack sentinel detects the stack overflow as normal during a timer
ISR exit. Note that, currently, the stack overflow detection is behind
the context switch checking, and then the detection will call svc to
raise a fatal error, resulting in incrementing the nested counter (+1).
At this point, a context switch is needed to finally abort the thread.
However, after the fatal error handling, the program cannot do a context
switch either during the svc exit[1], or during the timer ISR exit[2].

[1] is because the svc context is in an interrupt nested state (the
nested counter is 2).
[2] is because the current point (after svc context pop out) is right
behind the switch checking.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-21 07:31:29 -04:00
Nicolas Pitre c8bfc2afda riscv: make arch_is_user_context() SMP compatible
This is painful. There is no way for u-mode code to know if we're
currently executing in u-mode without generating a fault, besides
stealing a general purpose register away from the standard ABI
that is. And a global variable doesn't work on SMP as this must be
per-CPU and we could be migrated to another CPU just at the right
moment to peek at the wrong CPU variable (and u-mode can't disable
preemption either).

So, given that we'll have to pay the price of an exception entry
anyway, let's at least make it free to privileged threads by using
the mscratch register as the non-user context indicator (it must
be zero in m-mode for exception entry to work properly). In the
case of u-mode we'll simulate a proper return value in the
exception trap code. Let's settle on the return value in t0
and omit the volatile to give the compiler a chance to cache
the result.
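
In C terms, the privileged fast path is essentially this sketch (the
helper name is hypothetical, and the exact emulated u-mode value is
defined by the trap code):

  static inline bool in_kernel_context(void)
  {
          ulong_t ms;

          /* non-zero mscratch means privileged context; from u-mode the
           * csrr traps and the trap handler emulates the result instead
           */
          __asm__ ("csrr %0, mscratch" : "=r" (ms));
          return ms != 0;
  }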

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre af2d875c5d riscv: isr.S: compute _current_cpu using CPU number on SMP
To do so efficiently on systems without the mul instruction, we use
shifts and adds which is faster and sometimes smaller than a plain loop.
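
For example, assuming a hypothetical 40-byte per-CPU structure:

  /* n * 40 == n * 32 + n * 8, i.e. two shifts and an add, no mul */
  offset = (n << 5) + (n << 3);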

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 4f5374854e riscv: isr.S: dedicate a register to &current_cpu
Stop using &_kernel as this is not SMP friendly. Let's use s0 (after
preserving its content) to hold &current_cpu instead so it won't have
to be reloaded each time it is needed. This will be even more relevant
when SMP support is added.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 69d06a901c riscv: isr.S: optimize FP regs save/restore decision
Rely on mstatus rather than thread->base.user_options since it is always
up to date (updated by z_riscv_switch) to simplify the code and be SMP
proof. Also carry over SF_INIT to the mstatus being restored in case
it was changed in the meantime.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre ce8dabfe9e riscv: implement arch_switch()
The move to arch_switch() is a prerequisite for SMP support.

Make it optimal without the need for an ECALL roundtrip on every
context switch. Performance numbers from tests/benchmarks/sched:

Before:
unpend  107 ready  102 switch  188 pend  218 tot  615 (avg  615)

After:
unpend  107 ready  102 switch  170 pend  217 tot  596 (avg  595)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 247d2c8e3b riscv: move the tp register from caller-saved to callee-saved
This is a per-thread register that gets updated only when context
switching. No need to load and save it on every exception entry.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 50c0df1bd2 riscv: align struct __esf properly
The minimum stack alignment is 16. Therefore, the stack space to store
a struct __esf object must be rounded up to the next 16-byte boundary.

It is not sufficient to do the rounding on the __z_arch_esf_t_SIZEOF
definition. When the stack is constructed in arch_new_thread() it is
also necessary to do the rounding there too.

Let's make the structure itself carry the alignment attribute instead to
make it work in all cases.
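
That is, roughly (a sketch of the approach):

  struct __esf {
          /* ... saved register slots ... */
  } __aligned(16);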

While at it, remove the unused _K_THREAD_NO_FLOAT_SIZEOF definition.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre df852a0b77 riscv: implement CONFIG_IRQ_OFFLOAD_NESTED
It can easily be done now, so why not. It suffices to increment the
nested count, as with actual IRQs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre cb5221c087 riscv: irq_offload: simpler implementation
Get rid of all those global variables and IRQ locking.
Use the regular IRQ exit path to let tests validate preemption properly.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre a50c433012 riscv: exception code mega simplification and optimization
Complete revamp of the exception entry code, including syscall handling.
Proper syscall frame exception trigger. Many correctness fixes, hacks
removal, etc. etc.

I tried to make this into several commits, but this stuff is all
inter-related and a pain to split.

The diffstat summary:

 14 files changed, 250 insertions(+), 802 deletions(-)

Binary size (before):

   text	   data	    bss	    dec	    hex	filename
   1104	      0	      0	   1104	    450	isr.S.obj
     64	      0	      0	     64	     40	userspace.S.obj

Binary size (after):

   text	   data	    bss	    dec	    hex	filename
    600	      0	      0	    600	    258	isr.S.obj
     36	      0	      0	     36	     24	userspace.S.obj

Run of samples/userspace/syscall_perf (before):

*** Booting Zephyr OS build zephyr-v3.0.0-325-g3748accae018  ***
Main Thread started; qemu_riscv32
Supervisor thread started
User thread started
Supervisor thread(0x80010048):       384 cycles	     509 instructions
User thread(0x80010140):           77312 cycles	   77437 instructions

Run of samples/userspace/syscall_perf (after):

*** Booting Zephyr OS build zephyr-v3.0.0-326-g4c877a2753b3  ***
Main Thread started; qemu_riscv32
Supervisor thread started
User thread started
Supervisor thread(0x80010048):       384 cycles	     509 instructions
User thread(0x80010138):            7040 cycles     7165 instructions

Yes, that's more than a 10x speed-up!

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre bfb7919ed0 riscv: better abstraction for register-wide FP load/store opcodes
Same rationale as preceding commit. Let's create pseudo-instructions in
assembly scope to make the code more uniform and readable.

Furthermore the definition of COPY_ESF_FP() was wrong as the width of
floating point registers vary not according to CONFIG_64BIT but
CONFIG_CPU_HAS_FPU_DOUBLE_PRECISION. It is therefore wrong to use
lr/sr (previously RV_OP_LOADREG/RV_OP_STOREREG) and a regular temporary
register to transfer such content.

Note: There are far more efficient ways to copy FP context around but
      such optimisations will come separately.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 1fd79b3ef4 riscv: better abstraction for register-wide load/store opcodes
Those are prominent enough that having RV_OP_LOADREG and RV_OP_STOREREG
shouting at you all over the place is rather unpleasant and bad taste.

Let's create pseudo-instructions of our own with assembler macros
rather than preprocessor defines and only in assembly scope.
This makes the asm code way more uniform and readable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 94f39e5a80 riscv: fix wrong access width in assembly code
The thread->base.user_options field is a uint8_t. Access it using lb.
A "copy" of it is made into __esf.fp_state. Make that field a uint8_t
too and access it with lb/sb.

_callee_saved.fcsr is a uint32_t. Access it with lw/sw.
Ditto for is_user_mode.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 9ed17943b9 riscv: use simplest asm expression when possible
Let's take advantage of assembler pseudoinstructions:

- convert `addi rd, rs, 0` to `mv rd, rs`
- convert `jal x0, somewhere` to `j somewhere`
- convert `csrrs x0, csrreg, rs` to `csrs csrreg, rs`
- convert `fscsr x0, rs` to `fscsr rs`

And simplify zero offsets to simply 0.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre f2bb937547 Revert "arch/riscv: Get current CPU properly instead of assuming single CPU"
This reverts commit 8686ab5472.

The purpose of this commit will be reintroduced later on top of
a cleaner codebase.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 442ab22bdc Revert "arch/riscv: Use arch_switch() for context swap"
This reverts commit be28de692c.

The purpose of this commit will be reintroduced later on top of
a cleaner codebase.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 13a7047ea9 Revert "arch/riscv: Do not use irq_lock() on arch_irq_offload"
This reverts commit b0458201cc.

The purpose of this commit will be reintroduced later on top of
a cleaner codebase.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-21 07:28:05 -04:00
Nicolas Pitre 47e4a4487f arm64: simplify the code around the call to z_get_next_switch_handle()
Remove the special SMP workaround and the extra wrapper.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-18 13:32:49 -04:00
Nazar Kazakov f483b1bc4c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-18 13:24:08 -04:00
Julien Massot 1e538607b8 arch: arm: aarch32: Do not relocate vector table on ARMv8-R
ARMv8-R allows to set the vector table address using VBAR
register, so there is no need to relocate it.

Move away vector_table setting from reset.S and move it to
relocate vector table function as it's done for Cortex-M
CPU.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2022-03-17 15:57:15 -05:00
Jaxson Han 7ea0591d30 arm64: v8r: Enable AARCH64_IMAGE_HEADER by default
Enable AARCH64_IMAGE_HEADER by default and fix the relevant warning

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-16 09:19:44 -05:00
Corey Wharton 72afd96c9b arch: riscv: ensure fcsr is cleared on thread start or FPU enable
Ensure fcsr is always initially cleared for FPU enabled threads.

Signed-off-by: Corey Wharton <xodus7@cwharton.com>
2022-03-16 10:25:50 +01:00
Nicolas Pitre 2ef47509c3 arm64: simplify user mode transition code
It is not necessary to go through the full exception exit code.
This is simpler, smaller and faster.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre 8affac64a7 arm64: improved arch_switch() implementation
Make it optimal without the need for an SVC/exception roundtrip on
every context switch. Performance numbers from tests/benchmarks/sched:

Before:
unpend   85 ready   58 switch  258 pend  231 tot  632 (avg  699)

After:
unpend   85 ready   59 switch  115 pend  138 tot  397 (avg  478)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-15 22:24:22 -04:00
Nicolas Pitre bd941bcc68 arm64: implement CONFIG_IRQ_OFFLOAD_NESTED
It can easily be done now, so why not. It suffices to increment the
nested count, as with actual IRQs.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Nicolas Pitre 90fcef4254 arm64: irq_offload: simpler implementation
Get rid of all those global variables and scheduler locking.
Use the regular IRQ exit path to let tests properly validate preemption.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Nicolas Pitre 9d0bcfa884 arm64: isr_wrapper.S: tiny assembly optimization
Save one instruction in the ISR hot path.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 22:03:05 -04:00
Nazar Kazakov 9713f0d47c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-14 20:22:24 -04:00
Jaxson Han 65d7e64e06 board: arm64: fvp_baser_aemv8r: Fix misc SMP issues
Add CONFIG_SMP to fvp_baser_aemv8r_smp board.
Fix compile warnings by adding missing header file in arm_mpu.c.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-11 11:00:05 +01:00
Jaxson Han 3122b9ed10 arm64: smp: Fix broadcast_ipi issue
This commit mainly fixes the broadcast_ipi issue when one core
broadcasts an ipi to the other cores using gic_raise_sgi. The issue
doesn't affect the functionality of Zephyr SMP but will happen when
Zephyr runs on Xen. For example, suppose Xen provides 4 CPUs to the
Zephyr guest: when cpu0 broadcasts an ipi to the rest of the cores, the
mask should be 0xE (0b1110), but for now Zephyr will send 0xFFFE. So Xen
will receive a target list containing many invalid CPUs which don't
exist. The solution is to generate the target list from the online CPUs.
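
A sketch of the idea (variable and helper names are illustrative):

  /* build the SGI target list from CPUs that actually came online */
  uint32_t ipi_mask(unsigned int curr_cpu, unsigned int num_online)
  {
          uint32_t mask = 0;

          for (unsigned int i = 0; i < num_online; i++) {
                  if (i != curr_cpu) {
                          mask |= BIT(i);
                  }
          }
          return mask;
  }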

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2022-03-11 11:00:05 +01:00
Julien Massot 7a510245c9 arch: arm: cortex_a_r: Add support to start in HYP mode
The ARMv8-R processors always boot into Hyp mode (EL2).

To enter EL1:
- Program the HACTLR register, because it defaults to only allowing EL2
  accesses. HACTLR controls whether EL1 can access memory region
  registers and CPUACTLR.
- Program the SPSR before entering EL1. Other registers default to
  allowing accesses at EL1 from reset.
- Set VBAR to the correct location for the vector table.
- Set ELR to point to the entry point of the EL1 code and call ERET.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2022-03-11 10:59:48 +01:00
Julien Massot 59aae63f51 arch: arm: Add support for Cortex-R52
Cortex-R52 is an ARMv8-R processor with AArch32 profile.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2022-03-11 10:59:48 +01:00
Fabio Baltieri cfa0205c6f arm: cortex-m: add an option to trap unaligned access
Cortex-M mainline cores have an option to generate a fault on word and
halfword unaligned access [1], this patch adds a Kconfig option for
enabling the feature.

[1] https://developer.arm.com/documentation/dui0552/a/cortex-m3-peripherals/system-control-block/configuration-and-control-register
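
With CMSIS, enabling the trap boils down to setting CCR.UNALIGN_TRP:

  /* fault on word/halfword unaligned accesses */
  SCB->CCR |= SCB_CCR_UNALIGN_TRP_Msk;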

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2022-03-10 13:47:41 -05:00
Gerard Marull-Paretas a87c811ec9 arch: x86: use DEVICE_DT_GET_ONE
Improve code by using DEVICE_DT_GET_ONE instead of device_get_binding,
since the intel_vt_d device instance can be obtained at compile time.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-10 13:45:59 -05:00
Gerard Marull-Paretas dffaf5375c kconfig: tweak Kconfig prompts
Tweak some Kconfig prompts after the removal of "Enable...".

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-09 15:35:54 +01:00
Gerard Marull-Paretas 95fb0ded6b kconfig: remove Enable from boolean prompts
According to Kconfig guidelines, boolean prompts must not start with
"Enable...". The following command has been used to automate the changes
in this patch:

sed -i "s/bool \"[Ee]nables\? \(\w\)/bool \"\U\1/g" **/Kconfig*

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-09 15:35:54 +01:00
Jaxson Han fd231e32e9 arm64: Fix booting issue with FVP V8R >= 11.16.16
In the Armv8R AArch64 profile[1], the Armv8R AArch64 is always in secure
mode. But the FVP_BaseR_AEMv8R before version 11.16.16 doesn't strictly
follow this rule. It still has some non-secure registers
(e.g. CNTHP_CTL_EL2).

Since version 11.16.16, the FVP_BaseR_AEMv8R has fixed this issue. The
CNTHP_XXX_EL2 registers have been changed to CNTHPS_XXX_EL2. So the
FVP_BaseR_AEMv8R (version >= 11.16.16) cannot boot Zephyr. This patch
will fix it.

[1] https://developer.arm.com/documentation/ddi0600/latest/

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
Change-Id: If986f34dc080ae7a8b226bba589b6fe616a4260b
2022-03-08 11:09:13 +01:00
Krzysztof Chruscinski 47ae656cc1 all: Deprecate UTIL_LISTIFY and replace with LISTIFY
UTIL_LISTIFY is deprecated. Replacing it with LISTIFY.
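
For reference, a before/after sketch with a hypothetical macro F:

  /* deprecated: UTIL_LISTIFY(3, F) expands to F(0) F(1) F(2) */
  /* replacement, with the separator given explicitly: */
  LISTIFY(3, F, (;));  /* expands to F(0); F(1); F(2) */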

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2022-03-08 11:03:30 +01:00
Ederson de Souza 2aab236c12 arch/riscv: Add IPI support
Use CLINT to send interrupts to another CPU. SMP support is kinda
incomplete without it.

This patch only enables it for riscv-privilege platforms - specifically,
"virt" one.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Ederson de Souza b0458201cc arch/riscv: Do not use irq_lock() on arch_irq_offload
With SMP, it's the wrong with to do, according to
3b145c0d4b.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Ederson de Souza d9ab35577b arch/riscv: Boot secondary CPUs for SMP support
Secondary CPUs are now initialised and made available to the system. If
the system has more CPUs than configured via CONFIG_MP_NUM_CPUS, those
are still left looping as before.

Some implementations of `soc_interrupt_init` also changed to use
`arch_irq_lock` instead of `irq_lock`.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Ederson de Souza be28de692c arch/riscv: Use arch_switch() for context swap
Enable `arch_switch()` as preparation for SMP support. This patch
doesn't try to keep support for old style context swap - only switch
based swap is supported, to keep things simple.

A fair amount of refactoring was done in this patch, specially regarding
the code that decides what to do about the ISR. In RISC-V, ECALL
instructions are used to signalize several events, such as user space
system calls, forced syscall, IRQ offload, return from syscall and
context switch. All those handled by the ISR - which also handles
interrupts. After refactor, this "dispatching" step is done at the
beginning of ISR (just after saving generic registers).

As with other platforms, the thread object itself is used as the thread
"switch handle" for the context swap.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Ederson de Souza 8686ab5472 arch/riscv: Get current CPU properly instead of assuming single CPU
isr.S code currently gets CPU information from global `_kernel` assuming
there's only one CPU. In order to prepare for upcoming SMP support,
change code to actually get current CPU information.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Ederson de Souza fdf7c96994 arch/riscv: Implement arch_curr_cpu()
Implement function that will be necessary for upcoming SMP support.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-02-25 19:13:50 -05:00
Bradley Bolen c0dd594d4d arch: arm: aarch32: Change CPU_CORTEX_R kconfig option
Change the CPU_CORTEX_R kconfig option to CPU_AARCH32_CORTEX_R to
distinguish the armv7 version from the armv8 version of Cortex-R.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-02-23 08:14:15 -06:00
Tomasz Bursztyka 0c9ce49d2a arch/x86: Fix MSI MAP destination
When Zephyr runs directly on actual hardware, it always directs MSI
messages to the BSP (BootStrap Processor). This was fine until Zephyr
could be run on a virtualizer that may NOT run it on the BSP.

So direct MSI messages to the current processor instead. If Zephyr runs
on actual hardware, that will be the BSP, since such setup is always
made at boot time by the BSP. In other use cases it will be whatever
processor is relevant at that time.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-02-22 10:35:39 -05:00
Tomasz Bursztyka 0affb29572 arch/x86: Add a CPUID function to get initial APIC ID
Depending on whether X2APIC is enabled or not, it will be safer to grab
such ID from the right place.
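
Without X2APIC, the initial APIC ID comes from CPUID leaf 1; a sketch:

  #include <cpuid.h>

  unsigned int initial_apic_id(void)
  {
          unsigned int eax, ebx, ecx, edx;

          /* CPUID.01H: initial APIC ID is in EBX[31:24]; with X2APIC
           * enabled, leaf 0BH (EDX) is the right source instead
           */
          __get_cpuid(1, &eax, &ebx, &ecx, &edx);
          return ebx >> 24;
  }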

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-02-22 10:35:39 -05:00
Tomasz Bursztyka 7ea9b169f7 arch/x86: Have a dedicated place for CPUID related functions
This will centralize CPUID related accessors. There was no need for it
so far, but this is going to change.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-02-22 10:35:39 -05:00
Carles Cufi e83a13aabf kconfig: Rename the TEST_EXTRA stack size option to align with the rest
All stack sizes should end with STACK_SIZE.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-02-22 08:23:05 -05:00
Carlo Caione 240c975ad4 core: z_data_copy does not depend on CONFIG_XIP
When XIP is not enabled, z_data_copy() already falls back to an empty
function. No need to ifdef it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-02-22 10:22:53 +01:00
Andy Ross 73453a39d1 arch: Add IRQ_OFFSET_NESTED feature
The x86 and xtensa implementations of irq_offload() invoke synchronous
interrupts on the local CPU, and are therefore safe to use from within
an interrupt context.  This is a cheap and portable way to exercise
nested interrupts, which are otherwise highly platform-dependent to
test.  Add a kconfig to signal the capability.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-02-21 22:10:03 -05:00
Andy Ross c174ade4a1 arch/xtensa: Rework irq_offload: automatic config, SMP-safe
The Xtensa implementation of arch_irq_offload() required that the user
select the correct interrupt manually, and would race with itself if
invoked from separate CPUs (it was saved here by the main
irq_offload() function which has a semaphore to serialize access).

Use the new gen_zsr.py script to automatically detect the highest
available software interrupt, and keep a per-CPU set of
callback/parameter pointers.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-02-21 22:10:03 -05:00
Hou Zhiqiang 1fca05b7f8 arm64: cache: Fix data corruption issue on DCACHE range invalidation
Currently, the DCACHE range invalidation can cause data corruption when
the ends of the given range are not aligned to a full cache line.
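
The usual fix is to clean+invalidate the partially covered lines at both
ends instead of plainly invalidating them; as a sketch (the helper names
and line size are illustrative):

  #define LINE 64 /* hypothetical cache line size */

  /* clean+invalidate edge lines so dirty neighbour data isn't lost */
  if (start % LINE) {
          clean_and_inv_line(start & ~(uintptr_t)(LINE - 1));
  }
  if (end % LINE) {
          clean_and_inv_line(end & ~(uintptr_t)(LINE - 1));
  }
  /* plain invalidate for the fully covered middle portion */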

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2022-02-21 22:00:16 -05:00
Nicolas Pitre 34d425fbe5 arm64: switch to the IRQ stack during ISR execution
Avoid executing ISRs using the thread stack as it might not be sized
for that. Plus, we do have IRQ stacks already set up for us.

The non-nested IRQ context is still (and has to be) saved on the thread
stack as the thread could be preempted.

The irq_offload case is never nested and always invoked with the
sched_lock held so it can be simplified a bit.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:53:23 -05:00
Nicolas Pitre 6381ee7391 arm64: update _current_cpu->nested properly
This is a uint32_t so the proper register width must be used, otherwise
the adjacent structure member will be overwritten (didn't happen in
practice because of struct member alignment but still). This makes the
inc_nest_counter and dec_nest_counter macros rather unwieldy, especially
with upcoming changes, so let's just remove them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:53:23 -05:00
Nicolas Pitre fa8c851993 arm64: simple memcpy/memset alternatives to be used during early boot
Let's provide our own z_early_memset() and z_early_memcpy() rather than
making our own .bss clearing function that risks missing out on updates
to the main version.
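
Such an early memset is deliberately trivial, e.g. (a sketch; the
actual signature is assumed to mirror memset):

  void z_early_memset(void *dst, int c, size_t n)
  {
          unsigned char *d = dst;

          while (n--) {
                  *d++ = (unsigned char)c;
          }
  }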

Also remove extra stuff already provided by kernel_internal.h.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:00:12 -05:00
Bradley Bolen 48333e612a arch: arm: core: aarch32: Fix Cortex-M userspace regression
This was introduced when trying to fix a previous merge conflict.  It
broke userspace tests on nucleo_l073rz.

Fixes #42627

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-02-10 08:40:45 -05:00
Bradley Bolen 643084de0b arch: arm: core: aarch32: Use cmsis functions
These functions help the code to be more self-documenting.  Use them to
make the code's intent clearer.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-02-08 07:35:43 -05:00
Bradley Bolen 4704f598b8 arch: arm: core: aarch32: Change Cortex-R config check
Replace CONFIG_CPU_CORTEX_R with CONFIG_ARMV7_R since it is clearer with
respect to the difference between v7 and v8 Cortex-R.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-02-08 07:35:43 -05:00
Bradley Bolen 2a357e5dfd arch: arm: core: aarch32: Fix the syscall design for Cortex-R
When calling a syscall, the SVC routine will now elevate the thread to
privileged mode and exit the SVC setting the return address to the
syscall handler.  When the thread is swapped back in, it will be running
z_do_arm_syscall in system mode. That function will run the syscall and
then automatically return the thread to user mode.

This allows running the syscall in sys mode on a thread so that we can
use syscalls that sleep without doing unnatural things.  The previous
implementation would enable interrupts while still in the SVC call and
do weird things with the nesting count.  An interrupt could happen
during this time when the syscall was still in the exception state, but
the nested count had been decremented too soon.  Correctness of the
nested count is important for future floating point unit work.

The Cortex-R behavior now matches that of Cortex-M.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-02-08 07:35:43 -05:00
Henry Hsieh 58d50a0e97 riscv: fix non-standard assembly of RISC-V
Non-standard `jalr rd, rs` pseudo-instructions were used. This commit
changes them to the standard `ret` return pseudo-instruction, or to
`jalr rd, rs, 0` for a jump-and-link register with no offset.

Fixes #41100.

Signed-off-by: Henry Hsieh <r901042004@yahoo.com.tw>
2022-02-04 11:23:39 +01:00
Daniel Leung 35c1d3615f xtensa: xcc: add a dummy atexit()
Some XCC toolchains do not provide atexit(), which results
in an undefined reference error. So add a weak dummy atexit()
for this situation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-25 21:16:32 -05:00
Andy Ross 50a9c29d08 arch/xtensa: Fix xcc regression with ZSR
Turns out that xt-xcc will bail when faced with a real core-isa.h (it
wants you to rely on the builtins in the compiler).  Undefine __XCC__
to force it to actually parse and emit declarations for its own
header.

(Also adds a newline to the generated one-line C file to silence a warning)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 14:37:13 -05:00
Andy Ross d175c18cbb arch/xtensa: Use ZSR assignments for interrupt return
We had a similar sequence for interrupt return, where we were
selecting (actually only for the benefit of qemu) the highest priority
EPCn/EPSn registers for our RFI instruction.  That works much better
in Python than in the preprocessor.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 12:58:00 -05:00
Andy Ross 642fc7ad54 arch/xtensa: Use ZSR assignments for stack flush markers
The kernel coherence cache flush code was using a scratch register to
mark the top of the stack.  Likewise a good candidate for ZSR use.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 12:58:00 -05:00
Andy Ross 3c7905b916 arch/xtensa: Use ZSR assignments for the alloca exception
This is actually Cadence-authored code, but its use of EXCSAVE1 as a
sideband input to the exception handler is very much in the same
family of tricks.  Use ZSR assignments here too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 12:58:00 -05:00
Andy Ross ca7024e1d6 arch/xtensa: Use ZSR assignments for the CPU pointer
Use the zsr.h assignments for the special register containing the
current CPU pointer.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 12:58:00 -05:00
Andy Ross 82071be443 arch/xtensa: Add special register allocation generator
Zephyr likes to use the various Xtensa scratch registers for its own
purposes in several places.  Unfortunately, owing to the
configurability of the architecture, we have to use different
registers for different platforms.  This has been done so far with a
collection of different tricks, some... less elegant than others.

Put it all in one place.  This is a Python script that emits a
"zsr.h" header with register assignments for all the existing users.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-20 12:58:00 -05:00
Antony Pavlov 9175ed8244 timer: add support for MIPS CP0 timer
This commit adds a kernel device driver for the MIPS CP0 timer.

Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
2022-01-19 13:48:21 -05:00
Antony Pavlov 0369998e61 arch: add MIPS architecture support
MIPS (Microprocessor without Interlocked Pipelined Stages) is an
instruction set architecture (ISA) developed by MIPS Computer
Systems, now MIPS Technologies.

This commit provides MIPS architecture support to Zephyr. It is
compatible with the MIPS32 Release 1 specification.

Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
2022-01-19 13:48:21 -05:00
Daniel Leung 2e5501a3fe kernel: move CONFIG_MMU into kernel Kconfig
This moves CONFIG_MMU and its children from arch/Kconfig into
kernel/Kconfig. These are used to enable kernel support of MMU
so they should be under kernel/.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-18 19:18:30 -05:00
Jim Shu fd2c07682e arch: riscv: pmp: Fix is_user_mode in RV64
Currently, is_user_mode is 8 bytes in riscv64, which breaks the 4-byte
PMP region protecting it. Because is_user_mode is a single flag, we can
just fix its size to 4 bytes in both riscv32 and riscv64.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
2022-01-18 13:11:36 -05:00
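The gist of the fix, sketched (the original 8-byte type shown is an
assumption; the surrounding declaration context may differ):

    /* before: 8 bytes on riscv64, spilling past the 4-byte PMP region */
    ulong_t is_user_mode;

    /* after: a single flag fits a fixed 4 bytes on both RV32 and RV64 */
    uint32_t is_user_mode;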
Jim Shu 10e618ff33 arch: riscv: pmp: Fix RV64 compatibility of register size
In RV64, all general-purpose registers and pmpcfg CSR are 64-bit
instead of 32-bit. Fix these registers and related C variables/literals
to be 32/64-bit compatible.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
2022-01-18 13:11:36 -05:00
Jim Shu 595b01fc1d arch: riscv: pmp: Fix 64-bit compatibility of pointer size
Fix 64-bit compatibility of pointer size of RISC-V PMP/userspace code.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
2022-01-18 13:11:36 -05:00
Carlo Caione a74dac89ba kernel: Reset the switch_handler only in the arch code
Avoid setting the switch_handler in the z_get_next_switch_handle() code
when the context is not fully saved yet to avoid a race against other
cores waiting on wait_for_switch().

See issue #40795 and discussion in #41840

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2022-01-18 10:41:35 -05:00
Daniel Leung aa20e081d2 arm: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung de9f396854 arc: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung 25f87aac87 x86: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung e2e40862c1 xtensa: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung bb16e162a7 sparc: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung 7f794db27b posix: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung ceca27cd44 nios2: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung 61d0c3cfe7 riscv: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
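As a hypothetical illustration of the pattern these commits remove (the
function name is invented for the example):

    /**
     * @brief Disable the floating point unit.
     *
     * @return N/A
     */
    void z_arch_fp_disable(void);

Dropping the `@return N/A` line silences Doxygen's "documented empty
return type" warning while losing no information.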
Andy Ross 97ada8bc04 arch/xtensa: Promote adsp RPO/cache utilities to an arch API
This trick (mapping RAM twice so you can use alternate Region
Protection Option addresses to control cacheability) is something any
Xtensa hardware designer might productively choose to do.  And as it
works really well, we should encourage that by making this a generic
architecture feature for Zephyr.

Now everything works by setting two kconfig values at the soc level
defining the cached and uncached regions.  As long as these are
correct, you can then use the new arch_xtensa_un/cached_ptr() APIs to
convert between them, and an ARCH_XTENSA_SET_RPO_TLB() macro that
provides much smaller initialization code (in C!) than the HAL
assembly macros.  The conversion routines have been generalized to
support conversion between any two regions.

Note that full KERNEL_COHERENCE still requires support from the
platform linker script, that can't be made generic given the way
Zephyr does linkage.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-11 11:53:53 +01:00
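For a feel of how such a region-based conversion works, a minimal sketch
(the region mask and base are hypothetical; the real
arch_xtensa_cached_ptr()/arch_xtensa_uncached_ptr() helpers derive them
from the soc-level kconfig values):

    #include <stdint.h>

    #define RPO_REGION_MASK  0xE0000000U  /* top bits select the alias   */
    #define UNCACHED_BASE    0xC0000000U  /* hypothetical uncached alias */

    /* Re-point an address at the uncached alias of the same RAM. */
    static inline void *uncached_ptr(void *p)
    {
            uintptr_t a = (uintptr_t)p & ~(uintptr_t)RPO_REGION_MASK;

            return (void *)(a | UNCACHED_BASE);
    }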
Jim Shu 76c8c6ed79 arch: riscv: pmp: add PMP protection of code and rodata
This commit enables PMP-based memory protection of code and rodata
instead of relying on non-writable real HW (e.g. flash). Use a static
PMP region with the PMP lock bit to protect them in both user and
supervisor mode.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu df166ddda1 arch: riscv: pmp: change mechanism of arch_buffer_validate()
Implement a new mechanism for arch_buffer_validate() that supports
checking static PMP regions. This is a preparation patch for code/rodata
protection via RISC-V PMP.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu 35ef71f7c0 arch: riscv: pmp: simplify thread initialization
Thread init related to PMP & userspace contains 5 parts:

1. User/supervisor threads clear the PMP context
2. A user thread clears its own context
3. User/supervisor threads are assigned different entry points
4. A supervisor thread sets mstatus.MPRV for M-mode PMP protection
5. User/supervisor threads set up PMP regions for the stack guard if
   enabled

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu 9683c9e71c arch: riscv: pmp: reorder function definitions
Reorder the memory domain async functions to:
  arch_mem_domain_partition_add()
  arch_mem_domain_partition_remove()
  arch_mem_domain_thread_add()
  arch_mem_domain_thread_remove()

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu b13dd54fb4 arch: riscv: pmp: simplify pmp region number computation
Simplify the multiple ifdef cases in computing the region number. Also
move these macros to core_pmp.c because they are only used in one file.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu e3c8b4cae4 arch: riscv: pmp: introduce riscv_pmp_region structure
Use struct riscv_pmp_region to modularize PMP CSR handling, including
PMP NAPOT/TOR mode handling. This patch makes it easier to add/remove
RISC-V PMP regions without having to deal with register handling.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
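The structure itself is not quoted in the log; a plausible minimal shape
(field names are assumptions) groups a region's parameters so the CSR
write logic can be shared:

    struct riscv_pmp_region {
            ulong_t start;  /* region base address          */
            ulong_t size;   /* region size in bytes         */
            uint8_t perm;   /* PMP R/W/X/L permission bits  */
    };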
Jim Shu e4c5d96a8b arch: riscv: pmp: enable MPU log module for debugging
Clean up the logging API in core_pmp.c. Remove the old printf-based
debugging API and move PMP logging to its own MPU log module.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Jim Shu 5fc5beabe2 arch: riscv: pmp: fix IRQ handling of PMP stack guard
This commit adds 2 minor fixes of IRQ handling:

1. Save caller registers before calling z_riscv_configure_stack_guard()
in RISC-V assembly.

2. The reschedule and no_reschedule code paths use different interrupt
return paths after the addition of CONFIG_PMP_STACK_GUARD. The
back-to-back interrupt check is in the reschedule code path, so it
should jump to the interrupt return path of reschedule.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
2022-01-11 11:47:03 +01:00
Jim Shu e0329a5525 arch: riscv: pmp: fix return value of arch_mem_domain_partition_remove()
If no thread uses this memory domain, there aren't any user PMP regions
translated from the domain's memory partitions. In this case, memory
partition removal doesn't need to remove a user PMP region, and
arch_mem_domain_partition_remove() can simply return success.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
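A sketch of the early-return shape (the clearing helper is hypothetical;
the real partition logic lives in core_pmp.c):

    int arch_mem_domain_partition_remove(struct k_mem_domain *domain,
                                         uint32_t partition_id)
    {
            if (sys_dlist_is_empty(&domain->mem_domain_q)) {
                    /* No thread uses this domain, so no user PMP region
                     * was ever programmed for it: nothing to remove.
                     */
                    return 0;
            }

            return pmp_clear_user_partition(domain, partition_id);
    }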
Jim Shu fd1e5aebc0 arch: riscv: fix sp of supervisor thread in _Fault function.
Even when CONFIG_USERSPACE is enabled, there are supervisor threads
that don't have a privileged stack for the exception handler to use.
Only let user threads switch to the privileged stack in the exception
handler.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2022-01-11 11:47:03 +01:00
Tomasz Bursztyka 4090962386 drivers/interrupt_controller: Add source id to VT-D interrupt remap
Change the API and apply that change where relevant.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 345e122dd2 arch/x86: Add a function to retrieve ID from ACPI's DMAR
This will be necessary to get IOAPIC and HPET source ids for VT-D.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 1012e254cc arch/x86: PCIE MSI address and data may be out of remapping
In fact, when VT-D is enabled, it will need to get an address and
data for its own MSI-based interrupts, which cannot be remapped
(i.e.: they go directly to the relevant APIC).

This is necessary to get the Fault event supported in VT-D.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 1a1bc0d242 drivers/interrupt_controller: Make VT-D remap generic and handle flags
This will not only be used by MSI remapping but by all relevant
interrupts.

Fix also IRTE settings:
- handle x2apic for destination id
- destination mode is always logical (as for IOAPIC)

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 4047b793c8 drivers/interrupt_controller: Generate proper MSI address on VT-D
SHV bit depends on the number of vectors allocated.
If it's facing a multi-vector MSI array, it will set the bit.
If not, the bit must be 0.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 6ed593f861 drivers/pcie: Extending parameters to pcie_msi_map
The n_vector parameter will actually be necessary for VT-D.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 25b8df0bdb drivers/pcie: Even single MSI based interrupt needs to be remapped
Refactor to handle this case. This is valid only when the MSI
multi-vector feature is enabled.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka fa34b135f5 arch/x86: Make sure PCIE allocated IRTEs are tied to irq/vector
As all interrupts need to go through VT-D, the VT-D remap call happens
at a lower level as seen next, so make sure all PCIe-related
irq/vector pairs get tied to their respective allocated IRTEs.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka 84319db9fe arch/x86: All dynamic IRQ connection need to be remapped
Allocate an IRTE for every IRQ connected through
arch_irq_connect_dynamic(). This will be mandatory since VT-D expects to
filter all interrupts (except the ones it generates, as we will see
later).

Taking into account CONFIG_INTEL_VTD_ICTL_XAPIC_PASSTHROUGH, which could
help for debugging.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka ad8ab01488 arch/x86: On irq remapping, all PCIE MSI/MSI-X need to be remapped
There is no need to differentiate between multi-vector or not, nor MSI
vs MSI-X: all need to be remapped if Intel VT-D is on.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Tomasz Bursztyka f0a7f250a0 arch/x86: Fixing MSI vector allocation
Fix an out-of-bounds issue.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-01-07 10:47:27 -05:00
Mark Holden 7803a4e590 arch: riscv: ARCH_EXCEPT macro
Enable the ARCH_EXCEPT macro for the non-usermode scenario on RISC-V.
The macro will now raise an illegal instruction exception so that mepc
holds the expected value in the exception handler, and the generated
coredump can reconstruct the failing stack.

Coredump tests running on renode (for RISC-V) can now exercise the
fatal error path through k_panic.

Signed-off-by: Mark Holden <mholden@fb.com>
2022-01-01 07:38:20 -05:00
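A hedged sketch of what such a macro can look like (the register used to
carry the reason code is an assumption):

    /* Park the reason code in a0, then trap via an illegal instruction
     * so that mepc points at a real faulting PC for the coredump.
     */
    #define ARCH_EXCEPT(reason_p) do {                                \
            __asm__ volatile("mv a0, %0; unimp"                       \
                             :: "r" (reason_p) : "a0", "memory");     \
    } while (false)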
Tomasz Bursztyka 2623315802 arch/x86: PCIE MSI vector allocator can use arch IRQ allocator
Instead of messing with the PCI bus.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-12-22 12:16:52 +01:00
Tomasz Bursztyka 88bac5d0b5 arch/x86: Implement the IRQ allocation and usage interfaces for intel 64
This is the only architecture making use of this at the moment.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-12-22 12:16:52 +01:00
Tomasz Bursztyka c76651b9ab arch/x86: Do not call irq controller on dedicated irq/vector function
MSI/MSI-X interrupts do not need any interrupt controller handling
(ioapic/loapic).

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-12-22 12:16:52 +01:00
TOKITA Hiroshi 2de3133a05 riscv: Add an option for configuring mcause exception mask
The GD32V processor core uses a non-standard bitmask for the mcause
register. Add an option to configure the bitmask to support GD32V.

Signed-off-by: TOKITA Hiroshi <tokita.hiroshi@gmail.com>
2021-12-20 17:51:30 +01:00
Andy Ross 1a2fecec6d soc/intel_adsp: Unify Xtensa CPU reset between cores
Startup on these devices was sort of a mess, with multiple variants of
Xtensa and platform initialization code from multiple ancestries being
invoked at different places for different purposes.  Just use one code
path for everyone.

Bootloader entry starts with a minimal assembly stub that simply sets
WINDOW{START,BASE}, PS and a stack pointer and then jumps to C code.
That then uses the cpu_early_init() implementation from cAVS 2.5's
secondary cores to finish Xtensa initialization, and then flows
directly into the pre-existing bootloader C code to initialize cache
and memory and copy the HP-SRAM image, then it invokes Zephyr via a
simple C function call to z_cstart().

Likewise, remove the "reset vector" from Zephyr.  This was never a
reset vector, reset on these devices goes to a fixed address in a ROM.
CPU initialization is handled explicitly and completely in the
bootloader now, in a way that can be unified between the main and
secondary cores.  Entry from the bootloader now goes directly into
z_cstart() via a C call (via a single jump instruction placed at the
entry point address -- that's going away soon too once we're using a
unified link).

Now that vector table initialization happens in a uniform way, there's
no need to copy the VECBASE value during arch_start_cpu().

Finally note that this also reverts the
CONFIG_RESET_VECTOR_IN_BOOTLOADER kconfig variable added for these
platforms, because it's no longer a tunable; it is always true.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-12-14 18:43:05 -06:00
Lauren Murphy c1711997bc debug: coredump: add xtensa coredump
Adds Xtensa as supported architecture for coredump. Fixes
a few typos in documentation, Kconfig and a C file. Dumps
minimal set of registers shown by 'info registers' in GDB
for the sample_controller and ESP32 SOCs. Updates tests.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-12-14 07:40:55 -05:00
Carles Cufi 4f64ae383d x86: acpi: Fix address-of-packed-mem warning
The warning below appears once -Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c: In function
'z_acpi_find_table':
/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c:190:24: warning:
taking address of packed member of 'struct acpi_xsdt' may result in an
unaligned pointer value [-Waddress-of-packed-member]
  190 |    for (uint64_t *tp = &xsdt->table_ptrs[0]; tp < end; tp++) {

To avoid the warning, use an intermediate void * variable.

More info in #16587.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
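The workaround pattern, sketched (the loop body is invented for the
example):

    /* Launder the pointer through void * so the compiler no longer sees
     * the address of a packed member being taken directly.
     */
    void *tp_base = &xsdt->table_ptrs[0];

    for (uint64_t *tp = tp_base; tp < end; tp++) {
            check_table_entry(*tp);  /* hypothetical per-entry work */
    }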
Sebastian Bøe 1f87642f08 arch: cortex_m: Fix dwt cyccnt assert
Fix the assert that checks for existence of a cycle counter.

The field is named NOCYCCNT, so when it is 1, there is no cycle
counter. But we were asserting the opposite.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2021-12-10 12:27:49 +01:00
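Expressed against the CMSIS register names, the corrected check
presumably reads:

    /* NOCYCCNT == 1 means no cycle counter is implemented, so require 0 */
    __ASSERT((DWT->CTRL & DWT_CTRL_NOCYCCNT_Msk) == 0,
             "DWT implements no cycle counter");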
Mark Holden 1a697ccf59 coredump: add support for RISC-V
This adds the necessary bits in arch code, and Python scripts,
to enable coredump support for RISC-V.

Signed-off-by: Mark Holden <mholden@fb.com>
2021-12-08 08:54:32 -05:00
Dmytro Firsov 01a9b117fe xenvm: arm64: add Xen Enlighten and event channel support
This commit adds support for the Xen Enlighten page and initial support
for Xen event channels. It is needed for future Xen PV driver
implementations.

The Enlighten page is now mapped to the prepared memory area at the
PRE_KERNEL_1 stage. On success, the event channel logic gets
initialized and can be used right after Zephyr starts. The current
implementation allows using only pre-defined event channels (PV
console/XenBus) and works only in single-CPU mode (without
VCPUOP_register_vcpu_info). Event channel allocation will be
implemented in future versions.

Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2021-12-07 12:15:38 -05:00
Gerard Marull-Paretas 7d1bfb51ae drivers: timer: cortex_m_systick: improve ISR installation
A Cortex-M specific function (sys_clock_isr()) was defined as a weak
function, so in practice it was always available when the system clock
was enabled, even if no Cortex-M SysTick was available. This patch
introduces an auxiliary Kconfig option that, when selected, installs
the ISR function. External SysTick drivers can also make use of
this function, thus achieving the same functionality offered today but
in a cleaner way.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-12-04 07:34:53 -05:00
Yuguo Zou abeaf94855 soc: arc: fix ARC_HAS_ACCL_REGS settings
ARC_HAS_ACCL_REGS should be set to y to protect the ACCL and ACCH
registers during IRQs. These registers can be used as GPRs by compilers
and therefore need to be saved/restored during IRQ handling.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-12-02 11:32:14 -06:00
Daniel Leung dc34f6c84d xtensa: introduce support for GDB stub
This adds basic support for GDB stub on Xtensa. Note that
this only provides the common bits on the architecture side.
SoC support is also required to fully enable GDB stub on
each Xtensa SoC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung 650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Storing the state where this is the first GDB break can be done
in the main GDB stub code. There is no need to store the state
in the architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung e1180c8cee x86: gdbstub: add arch-specific funcs to read/write registers
This adds some architecture-specific functions to read/write
registers for the GDB stub. This is in preparation for the actual
introduction of these functions in the core GDB stub code to
avoid breaking the build in between commits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung 1cd7cccbb1 kernel: mem_domain: arch_mem_domain functions to return errors
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.

Note that:
() For assertions where it can bail out early without side
   effects, these are converted to CHECKIF(). (Usually means
   that updating of page tables or translation tables has not
   been started yet.)
() Other assertions are retained to signal fatal errors during
   development.
() The additional CHECKIF() are structured so that it will bail
   early if possible. If errors are encountered inside a loop,
   it will still continue with the loop, so it works as it did before
   this change when assertions are disabled.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
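A sketch of the CHECKIF() shape described above (the mapping helper is
hypothetical; real call sites update page or translation tables):

    #include <sys/check.h>

    int arch_mem_domain_thread_add(struct k_thread *thread)
    {
            CHECKIF(thread->mem_domain_info.mem_domain == NULL) {
                    /* bail out before any tables have been touched */
                    return -EINVAL;
            }

            return map_thread_partitions(thread);  /* hypothetical */
    }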
Julien Massot 36f116b47f scripts/arch: remove usage of deprecated LooseVersion
Replace it with version.parse from the packaging module.

prevent this warning message:
DeprecationWarning: The distutils package is deprecated
and slated for removal in Python 3.12. Use setuptools or
check PEP 632 for potential alternatives

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2021-11-19 19:16:11 -05:00
Flavio Ceolin 7dd4297214 pm: Remove unused parameter
The ticks parameter of z_pm_save_idle_exit() is not used, so there is
no need to have it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-17 11:15:49 -05:00
Michel Haber 9d815e5251 timing: use runtime cycles for cortex-m systick
Use sys_clock_hw_cycles_per_sec() instead of
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC to determine clock cycles.

Signed-off-by: Michel Haber <michel-haber@hotmail.com>
2021-11-16 10:43:18 +01:00
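The substance of the change is one call swap, sketched:

    uint32_t freq;

    /* before: frozen at build time */
    freq = CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC;
    /* after: reflects the clock rate actually in effect at runtime */
    freq = sys_clock_hw_cycles_per_sec();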
Andy Ross 1238410914 arch/x86_64: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

This has to go into the assembly immediately before the callback-based
dispatch.  Note that the dispatch code was putting the vector number
in RCX, which was unfortunate as that's a caller-saved register.
Would be nice to clean this up in the future so it lives in a
preserved register but it's mildly complicated to make work with the
way we do the stack layout right now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 76b848e38c arch/sparc: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross c815996606 arch/arc: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 35af02fe3d arch/arm64: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

This is pretty much exactly where we want it, just after the context
saving steps (which we can't elide since the hook is in C) and
immediately before the tracing hook for ISR entry.  And as I'm reading
the code, this is purely for Zephyr-registered interrupts, meaning
that software exceptions will be accounted for (correctly) as part of
the excepting thread.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 884f1bf39d arch/xtensa: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

Note that this hook is after the register save and stack swap has
happened, so it still includes some overhead.  But calling out from
the interrupted stack on Xtensa gets really, really hairy due to the
weird intermediate states we leverage (once we've saved enough context
to make a C call safely, we've lost the ability to use register
windows per the C ABI!).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Torsten Rasmussen 9c74027a7b cmake: CMake linker script generator pass handling
To prepare for linker script creation with a flexible number of linker
passes depending on system configuration, the Zephyr CMake linker
script generator has been updated to use pass names instead of pass
numbers.

This allows greater flexibility as a section can now be active based on
the settings on the pass and not the linking pass index number.

As part of this, the `PASS` processing in `linker_script_common.cmake`
has been adjusted so that it properly handles when a linking pass is
handling multiple settings, such as both `LINKER_APP_SMEM_UNALIGNED`
and `DEVICE_HANDLES_PASS1` in same linking pass.

As the number of linking passes is now more flexible, the PASS
argument in `zephyr_linker_section()` and
`zephyr_linker_section_configure()` has been updated to also support
a `NOT <name>` argument, for example: `PASS NOT LINKER_ZEPHYR_FINAL`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-11-08 20:45:07 +01:00
Nikolai Kondrashov 533b8c971a arch: arm: aarch32: Fix spelling of "want"
Fix spelling of "want" in a comment in _arch_isr_direct_mp().

Signed-off-by: Nikolai Kondrashov <spbnick@gmail.com>
2021-11-02 10:46:00 +01:00
Dmytro Firsov c4ab278688 arm64: xenvm: Add Xen hypercall interface for arm64
This commit adds Xen hypervisor call interface for arm64 architecture.
This is needed for further development of Xen features in Zephyr.

Signed-off-by: Dmytro Firsov <dmytro_firsov@epam.com>
2021-10-29 15:23:33 +02:00
Immo Birnbaum c6141c49c1 arch: arm: core: aarch32: enable ARMv7-R/Cortex-R code for ARMv7-A/Cortex-A
Modify #ifdefs so that any code that is compiled if CONFIG_ARMV7_R is
set is also compiled if CONFIG_ARMV7_A is set.
Modify #ifdefs so that any code that is compiled if CONFIG_CPU_CORTEX_R
is set is also compiled if CONFIG_CPU_AARCH32_CORTEX_A is set.
Modify source dir inclusion in CMakeLists.txt accordingly.

Brief file descriptions have been updated to include Cortex-A wherever
only Cortex-M and Cortex-R were mentioned so far.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Immo Birnbaum 70c403c215 arch: arm: core: aarch32: introduce basic ARMv7 MMU support
An initial implementation for memory management using the ARMv7 MMU.
A single L1 translation table for the whole 4 GB address space is
always present; a configurable number of L2 page tables are linked to
the L1 table based on the static memory area configuration at boot
time, or whenever arch_mem_map/arch_mem_unmap are called at run-time.

Currently, a CPU with the Multiprocessor Extensions and execution at
PL1 are always assumed. Userspace-related features and thread stack
guard pages are not yet supported. Neither are LPAE, PXN or TEX
remapping. All mappings are currently assigned to the same domain.
Regarding the permissions model, access permissions are specified
using the AP[2:1] model rather than the older AP[2:0] model, which,
according to ARM's documentation, is deprecated and should no longer
be used. The newer model adds some complexity when it comes to mapping
pages as inaccessible (the AP[2:1] model doesn't support explicit
specification of "no R, no W" permissions, it's always at least "RO");
this is accomplished by invalidating the ID bits of the respective
page's PTE.

Includes sources, Kconfig integration, adjusted CMakeLists and the
modified linker command file (proper section alignment!).

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Immo Birnbaum eac90eeb52 arch: arm: core: aarch32: limit ACTLR register access to Cortex-R
The configuration bits ATCMPCEN, B0TCMPCEN and B1TCMPCEN in the ACTLR
register referenced in the function z_arm_tcm_disable_ecc are only
defined for Cortex-R CPUs. For Cortex-A CPUs, those bits are declared
as reserved.

Comp.: https://arm-software.github.io/CMSIS_5/Core_A/html/group__CMSIS__ACTLR.html

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Immo Birnbaum 85f53376dc arch: arm: core: aarch32: Updated brief file description
Updated brief file description so that it also mentions the aarch32
Cortex-A CPUs.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Immo Birnbaum 38dc87d4d9 arch: arm: core: aarch32: Add ARMv7-A/Cortex-A(9) related Kconfig items
Add the ARMV7_A, CPU_AARCH32_CORTEX_A and CPU_CORTEX_A9 configuration
items.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Immo Birnbaum 305550a775 arch: arm: core: aarch32: Updated brief file description
Updated brief file description so that it also mentions the aarch32
Cortex-A CPUs.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2021-10-28 15:26:50 +02:00
Keith Packard 177f95464e arm: Use correct macro for z_interrupt_stacks declaration in stack.h
There are two macros for declaring stack arrays:

K_KERNEL_STACK_ARRAY_DEFINE:

	Defines the array, allocating storage and setting the section name

K_KERNEL_STACK_ARRAY_EXTERN

	Declares the name of a stack array allowing code to reference
	the array which must be defined elsewhere

arch/arm/include/aarch32/cortex_m/stack.h was mis-using
K_KERNEL_STACK_ARRAY_DEFINE to declare z_interrupt_stacks by sticking
'extern' in front of the macro use. However, since this macro also sets
the object file section for the symbol, having two of those caused a
conflict in the compiler due to the automatic unique-name mechanism used
for sections to allow unused symbols to be discarded during linking.

This patch makes the header use the correct macro.

Signed-off-by: Keith Packard <keithp@keithp.com>
2021-10-21 07:34:56 -04:00
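The fix boils down to the following (the array parameters shown are
assumptions for the example):

    /* before: 'extern' bolted onto the defining macro still emits the
     * section attribute, which then collides with the real definition
     */
    extern K_KERNEL_STACK_ARRAY_DEFINE(z_interrupt_stacks,
                                       CONFIG_MP_NUM_CPUS,
                                       CONFIG_ISR_STACK_SIZE);

    /* after: the declaration-only macro */
    K_KERNEL_STACK_ARRAY_EXTERN(z_interrupt_stacks,
                                CONFIG_MP_NUM_CPUS,
                                CONFIG_ISR_STACK_SIZE);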
Jiafei Pan 799f37b421 arm64: add nocache memory segment support
In some drivers, non-cached memory needs to be used for DMA-coherent
memory, so add nocache memory segment mapping and support for ARM64
platforms.

The following variable definitions show two ways to place data in
nocache memory:
   int var1 __nocache;
   int var2 __attribute__((__section__(".nocache")));

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-10-20 08:56:40 -05:00
Chris Reed 6d2b91461b arm: cortex-m: initialise ptr_esf in get_esf() in fault.c.
This was producing a -Wsometimes-uninitialized warning.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-17 10:57:03 -04:00
Evgeniy Paltsev a7d07cb62c ARC: forbid FIRQ or multiple register banks w/ 1 IRQ priority level
Don't allow enabling multiple register banks / fast
interrupts if we have only one interrupt priority level.

NOTE: we duplicate some checks by adding dependencies to the ARC
Kconfig and adding build-time checks in C code. We do it
intentionally, as for some reason we can violate dependencies
in the architecture-level Kconfig by adding an incorrect default in
the SoC-level Kconfig. Such a violation happens without any
warnings / errors from Kconfig.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-10-13 20:41:29 -04:00
Neil Armstrong c24d0c8405 arm64: mmu: implement arch_virt_region_align()
Add the arm64 MMU arch_virt_region_align() implementation used
to return a possible virtual address alignment in order to
optimize the MMU table layout, possibly avoiding L3 tables by
using some L1 & L2 blocks instead for most of the mapping.

Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-11 21:00:28 -04:00
Daniel Leung 88ccb5f8f0 Revert "xtensa: remove unused script"
This reverts commit 67d290540e.

The script is actually used to generate the _soc_inthandlers.h
file when introducing a new Xtensa SoC. So restore it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-10-07 16:04:11 -04:00
Kumar Gala a6355cb475 arm: aarch32: mpu: Fix build issue with assert
The assert log of z_priv_stacks_ram_start failed to build due to passing
&z_priv_stacks_ram_start instead of just z_priv_stacks_ram_start.

Fixes #39190

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-10-07 10:53:09 -05:00
Neil Armstrong 866840e4e8 arm64: mmu: don't use a Level block if PA is not aligned
When mapping the following:
device_map(&base0, DEVA_BASE, DEVA_SIZE, K_MEM_CACHE_NONE);
device_map(&base1, DEVB_BASE , DEVB_SIZE, K_MEM_CACHE_NONE);

with:
- DEVA_SIZE not multiple of a 4KB granule L2 block size (0x200000)
- DEVB_SIZE more than 2 x 4KB granule L2 block size

The mmu code will fill the first device_map() into an L3 table, then
on the second mapping the mmu code will complete the previous L3
table.
At the end of this table, the code will select an L2 block
instead of a table because the *virtual address* is a multiple of
the L2 block size.

But if the physical address is not, the virtual block offset will
be ORed into the physical address instead of added.

This leads to a weird scenario where virtual memory is duplicated
as a result of the addresses being ORed rather than added.

Example:
device_map(&base0, DEVA_BASE, 0x20000, K_MEM_CACHE_NONE);
device_map(&base1, 0x44000000 , 0x400000, K_MEM_CACHE_NONE);

First will result in VA 0x5ffe0000 and second in VA 0x5fbe0000.

The MMU code will use a table to map 0x5ffe0000 to 0x5fbfffff.

For 0x5fc00000 to 0x5fdfffff, since the VA is a multiple of the L2
block size, the L3 table is not used.

But the L2 block descriptor entry address is 0x44060000, meaning
that for each access in this L2 block, the following will be done:

0x44060000 | (VA & 0x1FFFFF)

This works for the 0x5fc40000 to 0x5fc5ffff accesses, but for the
0x5fc60000 (0x5fbe0000 + 0x80000) access the PA gets calculated as:

0x44060000 | (0x5fc60000 & 0x1FFFFF) = 0x44060000 | 0x60000 = 0x44060000

Instead of the expected 0x44080000.

The solution is to check whether the PA descriptor is aligned with the
level's block size; if not, move to the next level.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-07 10:54:28 +02:00
Daniel Leung 1ec2dbd662 xtensa: fix implicit declaration of _xtensa_handle_one_int*
Some Xtensa SoCs may not have that many levels of interrupts.
So limit the call to DEF_INT_C_HANDLER() to only supported
levels to avoid calling non-existent functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-28 20:33:56 -04:00
Jaxson Han 207926c479 arm64: Kconfig: Enable userspace feature
Enable userspace for Armv8R aarch64

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han 27ed237f6d arm64: arm_mpu: Add userspace
Add dynamic_areas_init. It marks an MPU region as a dynamic region
area. The dynamic region areas are designed to be the background
regions, so that the system can re-program the thread regions on
the background regions.

Add configure_dynamic_mpu_regions to re-program the thread regions on
the background regions. The configure_dynamic_mpu_regions function is
the core function of implementing userspace for the MPU. This
function is used in thread creation and context switch.

During context switch, the previous thread's regions should be
disabled, and the new thread's regions re-programmed. Since the
thread's stack region is also switched, there is a window before the
new thread's regions are re-programmed in which the new thread's
stack is already in use. To avoid the exception that would be
generated by using the new thread's still-unprogrammed stack region,
I disable the MPU first, before flush_dynamic_regions_to_mpu, and
then enable it.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han ac0c0a61d5 include: arm64: Refine the mem alignment macros
Add a new macro MEM_DOMAIN_ALIGN_AND_SIZE for MMU and MPU memory
alignment.
MEM_DOMAIN_ALIGN_AND_SIZE is
  - CONFIG_MMU_PAGE_SIZE, when the MMU is enabled.
  - CONFIG_ARM_MPU_REGION_MIN_ALIGN_AND_SIZE, when the MPU is enabled.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han d282d86d7e arm64: Create common mmu and mpu interfaces
Include the newly introduced include/arch/arm64/mm.h instead of the
arm_mmu.h or arm_mpu.h.

Unify function names z_arm64_thread_pt_init/z_arm64_swap_ptables with
z_arm64_thread_mem_domains_init/z_arm64_swap_mem_domains for mmu and
mpu, because:
1. mmu and mpu have almost the same logic.
2. mpu doesn't have ptables.
3. using these function names helps reduce "#if defined" macros.

Similarly, change z_arm64_ptable_ipi to z_arm64_domain_sync_ipi

And fix a log bug in arm_mmu.c.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Jaxson Han 34d6c7caa7 arm64: cortex_r: Move mpu code to a better place
This patch mainly moves mpu related code from
arch/arm64/core/cortex_r/mpu/ to arch/arm64/core/cortex_r/ and moves
the mpu header files from include/arch/arm64/cortex_r/mpu/ to
include/arch/arm64/cortex_r/

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-09-28 20:06:06 -04:00
Neil Armstrong d55991b98e arm64: isr_wrapper: ignore Special INTIDs between 1020..1023
Referring to the Arm Generic Interrupt Controller Architecture
Specification, GIC architecture version 3 and version 4 document
(see the 2.2.1 Special INTIDs paragraph), these INTIDs are reserved
for special purposes and should be ignored for now.

For the ITS implementation, the INTID 1023 must be ignored since this
special INTID will trigger after an LPI acknowledge, thus triggering
the spurious interrupt handler.

The GICv3 Linux implementation ignores these INTIDs the same way.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-09-28 19:45:29 -04:00
Neil Armstrong 078113982f arm64: isr_wrapper: fixup out of bounds for large number of irqs
In case we enable a large number of IRQs, like when enabling LPIs using
interrupts > 8192, we hit an assembler error where the immediate value
is too large.

Copy the IRQ number into x1 to permit using a large IRQ number.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-09-28 19:45:29 -04:00
Iuliana Prodan a6364da1a3 arch: xtensa: add workaround for small vector table entries
For some platforms, like NXP's IMX8 or Mediatek's MT8195,
the size of an interrupt vector table entry is 0x1C bytes,
less than usual (0x30 for Intel's platforms).
So, the interrupt handlers don't fit in the vector table
entries.

I've added a small indirection to bypass this size
constraint and moved the default handlers to the end
of the vector table, renaming them to
_Level\LVL\()VectorHelper.
For this, I've added a generic configuration -
XTENSA_SMALL_VECTOR_TABLE_ENTRY.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
2021-09-10 10:59:44 -04:00
Alexandre Bourdiol 23c0e16782 arch: arm: core: aarch32: fix regression introduced with Cortex-R
Regression introduced on ARMV6_M_ARMV8_M_BASELINE by Cortex-R PR #28231
Fixes #38421

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2021-09-09 19:49:37 -04:00
Wolfgang Reißnegger 535fc38fe7 riscv: Don't reschedule on back-to-back interrupts
In some cases the 'reschedule' code path is executed when the current
thread is the same as the next thread in the ready queue. If this
happens, the swap_return_value of the thread is falsely reset to
-EAGAIN.

This commit prevents the rescheduling code from running if the current
thread is the same as the thread in the ready queue.

Signed-off-by: Wolfgang Reißnegger <gnagflow@fb.com>
2021-09-03 12:20:03 -04:00
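In C terms, the guard amounts to the following sketch (`_current` is the
kernel-internal current-thread accessor; the actual fix lives in the
RISC-V assembly path):

    static inline bool need_reschedule(const struct k_thread *next_thread)
    {
            /* Swapping to ourselves would only clobber swap_return_value
             * with -EAGAIN for no benefit, so skip the reschedule path.
             */
            return next_thread != _current;
    }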
Daniel Leung d33017b458 x86: x86-64: add arch_float_en-/dis-able() functions
This adds arch_float_enable() and arch_float_disable() to x86-64.
As x86-64 always has FP/SSE enabled, these operations are basically
no-ops. They are added just for the completeness of the arch interface.

Fixes #38022

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
Andy Ross 37bbe7aeea arch/xtensa: Add arch_cpu_idle() workarounds
A simple WAITI isn't sufficient in all cases.  The cAVS 2.5 hardware
uses WAITI as the entry state for per-core power gating, which is very
difficult to debug.  Provide a fallback that simply spins in the idle
loop waiting for interrupts to provide a stable system while this
feature stabilizes.

Also, the SOF code for those platforms references a known bug with the
Xtensa LX6 core IP (or at least some versions), and will prefix the
WAITI instruction with 128 NOP.N's followed by an ISYNC and EXTW.  This
bug hasn't been seen under Zephyr yet, and details are sketchy.  But
the code is simple enough to import and works correctly.

Place both workarounds under new kconfig variables and select them both
(even though they're actually mutually exclusive -- if you select both
CPU_IDLE_SPIN overrides) for cavs_v25.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Andy Ross b76bc6c80d arch/xtensa: Fix outgoing stack flush for dummy threads
On CPU startup, when we reach the cache flush code in arch_switch(),
the outgoing thread is a dummy.  The behavior of the existing code was
to leave the existing value in the SR unchanged (probably NULL at
startup).  Then the context switch would walk from that address up to
the top of the outgoing stack, flushing everything in between.  That's
wrong, because the outgoing stack is a real pointer (generally the
interrupt stack of the current CPU), and we're flushing everything in
memory underneath it.

This also reverts commit 29abc8adc0 ("xtensa: fix booting secondary
cores on the dummy thread"), which appears to have been an early
attempt to address this issue.  It worked (modulo all the extra and
potentially incorrect flushing) on cavs v1.5/1.8 because of the way
the entry code worked there.  But on 2.5 we now hit the first context
switch in a case where those extra lines are in address space already
marked unwritable by the CPU, so the flush explodes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Evgeniy Paltsev 60fdec616b ARC: MWDT: get rid of MWDT startup libs
The __cxa_atexit implementation provided by the MWDT startup code calls
malloc, which isn't supported right now. As we don't support
calling static destructors in Zephyr, let's provide our own
__cxa_atexit stub and get rid of the MWDT startup libs
entirely.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-09-01 17:08:32 -04:00
Stephanos Ioannidis 41fd6e003c arch: arm: aarch32: Add half-precision floating-point configs
This commit adds the half-precision (16-bit) floating-point
configurations to the ARM AArch32 architectures.

Enabling CONFIG_FP16 has the effect of specifying `-mfp16-format`
option (in case of GCC) which allows using the half-precision floating
point types such as `__fp16` and `_Float16`.

Note that this configuration can be used regardless of whether a
hardware FPU is available or supports half-precision operations.

When an FP16-capable FPU is not available, the compiler will
automatically provide the software emulations.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-08-30 18:17:47 +02:00
Torsten Rasmussen 94a010107a arch: linker: specify intList section in the IDT_LIST region
This commit specifies the intList section in the IDT_LIST region in the
arch/common CMakeLists.txt file.

It uses zephyr_linker_section to set up the intList section for the
first pass linker file and configures the section to hold the irq_info
and intList input sections.

For the second pass linker file, the irq_info and intList input sections
are placed in the /DISCARD/ section.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Torsten Rasmussen 38040292c3 cmake: linker: converter arm and common ld scripts into CMake code
Converted existing ld script templates into CMake files.

This commit takes the common-ram.ld, common-rom.ld, debug-sections.ld,
and thread-local-storage.ld and creates corresponding CMake files for
the linker script generator.

The CMake files uses the new Zephyr CMake functions:
- zephyr_linker_section()
- zephyr_linker_section_configure()
- zephyr_linker_section_obj_level()

to generate the same linker result as the existing C preprocessor based
scheme.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Torsten Rasmussen da926f6855 asm: .eabi_attribute Tag_ABI_align_preserved, 1
Tell armlink that the files have ensured proper stack alignment.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Iuliana Prodan f9810ccbe1 arch: xtensa: modify asm for interrupt sections
For IMX, the interrupt handler executed for the timer
interrupt was not the correct one, because the handlers
were not at the expected addresses.
For IMX, the size constraint of an interrupt vector
table entry is 0x1C bytes of code, less than usual.

I've added a small indirection to bypass this size
constraint and moved the default handlers to the end
of the vector table, renaming them to
_Level\LVL\()VectorHelper.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
2021-08-28 23:27:02 -04:00
Torsten Rasmussen 302fd804ce interrupts: safeguard isr_wrapper and isr_install
The ld linker will only resolve undefined symbols inside functions that
are actually being called.

However, not all linkers behave this way. Certain linkers, for example
armlink, resolve all undefined symbols even if the function will be
pruned at a later stage of linking.

Therefore `ifdef CONFIG_GEN_ISR_TABLES` has been placed to safeguard
functions that will call undefined symbols when CONFIG_GEN_ISR_TABLES=y.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen f57483664b arch: arm: swap_helper.S: safe guarding GTEXT(z_arm_do_syscall)
z_arm_do_syscall is only defined and used when CONFIG_USERSPACE=y.

Defining the symbol z_arm_do_syscall in assembly without a corresponding
implementation is fine for GNU ld as long as the function is not
actively called, but armlink fails to link in such cases.

Safeguard GTEXT(z_arm_do_syscall) so the symbol is only referenced when
actively used, that is when CONFIG_USERSPACE=y.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 3d82c7c828 linker: align _image_text_start/end/size linker symbols name
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.

The symbols _image_text_start and _image_text_end sometimes include
linker/kobject-text.ld. This means there must be both the regular
__text_start and __text_end symbols for the pure text section, as well
as <group>_start and <group>_end symbols.

The symbols describing the text region which covers more than just the
text section itself will thus be changed to:
_image_text_start -> __text_region_start
_image_text_end   -> __text_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen c6aded2dcb linker: align _image_rodata and _image_rom start/end/size linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.

The symbols _image_rom_start and _image_rom_end corresponds to the group
ROMABLE_REGION defined in the ld linker scripts.

The symbols _image_rodata_start and _image_rodata_end are not placed as
an independent group but cover common-rom.ld, thread-local-storage.ld,
kobject-rom.ld and snippets-rodata.ld.

This commit align those names and prepares for generation of groups in
linker scripts.

The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end   -> __rom_region_end

The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end   -> __rodata_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 510d7dbfb6 linker: align _ramfunc_ram/rom_start/size linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols uses the following pattern, as:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and to ensure consistent naming
of symbols, the following pattern is introduced consistently, allowing
for cleaner linker script generation.
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the _ramfunc_ram/rom symbols with the other symbols
in such a way that they follow the consistent pattern, which allows for
linker script and scatter file generation.

The symbols are named according to the section name they describe.
Section name is `ramfunc`

The following symbols are aligned in this commit:
-  _ramfunc_ram_start  -> __ramfunc_start
-  _ramfunc_ram_end    -> __ramfunc_end
-  _ramfunc_ram_size   -> __ramfunc_size
-  _ramfunc_rom_start  -> __ramfunc_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Yuguo Zou eb14e21d18 arch: arc: add support of mpu v6
Add support for ARC MPU v6:
* minimal region size down to 32 bytes
* maximal region number up to 32
* no support for uncacheable regions or volatile uncached regions
* clean up MPU code for better readability

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-08-27 11:45:43 -04:00
Yuguo Zou 333501e871 arch: arc: add support of mpu v3
Add support for ARC MPU version 3, which can have region sizes down to
32 bytes.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-08-27 11:45:43 -04:00
Daniel Leung c2a01af003 x86: pin z_x86_set_stack_guard()
This function should be pinned in memory instead of simply
putting it in the boot section, as this function will be
used when new threads are created at runtime.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 7605619c1e x86: userspace: page in stack before starting user thread
If the generic sections are not present at boot, the thread stack
may not be in physical memory. Unconditionally page in the stack
instead of relying on page faults, to speed up thread startup
a little bit.

Also, this prevents a double fault during thread setup when
setting up stack permission in z_x86_userspace_enter().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung ea0f9474f7 x86: gen_mmu: don't force extra map argument to be base 16
When converting the address and size arguments for extra mappings,
the script assumes they are always base 16. This is not always
the case. So let Python's own int() decide how to interpret
the values, as it supports the "0x" prefix as well.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung c11ad59ed6 x86: mmu: don't mark generic sections as present if desired
With demand paging, it is possible for data pages to not be
present in physical memory. The gen_mmu.py script is updated
so that, if so desired, the generic sections are marked
non-present so the paging mechanism can bring them in
if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 30e5968d34 x86: don't clear BSS if not in physical memory at boot
If the BSS section is not present in physical memory at boot,
do not zero the section, or else page faults would occur.
The zeroing of BSS will be done once the paging mechanism
has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 2dfae4a0f7 kernel: demand_paging: allow reserving page frames
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Jim Shu 073cfa9cdf arch: riscv: introduce global pointer relative addressing support
Enable RISC-V GP-relative addressing by linker relaxation to reduce
the code size. It optimizes addressing of globals in the small data
section (.sdata).

The gp initialization at program start needs per-SoC support. Also,
if a RISC-V SoC has a custom linker script, the SoC should provide
the __global_pointer$ symbol in its linker script.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-20 18:53:23 -04:00
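The canonical gp set-up sequence a SoC's startup code adds looks like
this sketch; relaxation must be disabled around it so the load of gp is
not itself relaxed against gp:

    __asm__ volatile(
            ".option push\n"
            ".option norelax\n"
            "la gp, __global_pointer$\n"
            ".option pop\n");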
Manuel Argüelles 9ff6282089 arch: arm64: invalidate TLBs after ptables swap
This prevents the new thread from attempting to access cached ptable
entries which are no longer valid.

Signed-off-by: Manuel Argüelles <manuel.arguelles@coredumplabs.com>
2021-08-20 06:26:05 -04:00
Maureen Helm 9b6122d5ac arch: riscv: Increase default CONFIG_TEST_EXTRA_STACKSIZE for 32-bit
Increases the default CONFIG_TEST_EXTRA_STACKSIZE for the 32-bit RISC-V
architecture. This fixes the portability.posix.fs test on the
qemu_riscv32 platform.

Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
2021-08-18 20:54:46 -04:00
Jim Shu 97fa203330 Revert "arch: riscv: added support for custom initialization of gp register"
This reverts commit 7b09d031fa. Because
context save of the GP register has been removed, we don't need to
initialize GP at thread init. GP is a constant value, so it only
needs to be initialized at program start.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Jim Shu e3fe63a221 arch: riscv: remove unneeded context switch to gp register
The RISC-V global pointer (GP) register is neither a caller- nor a
callee-saved register, and it's a constant value within a single ELF
file. Thus, we don't need to save/restore GP at ISR entry/exit.
Remove it to optimize context switch performance.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Jim Shu e1c7333dc7 arch: riscv: fix typo of context switch macro
Fix typo of LOAD_CALLER/CALLEE macros.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Phil Erwin 78ba3ddbc5 arch: arm: mpu: Put a lock around MPU buffer validate
Related to GitHub #22290.  Getting an interrupt during MPU buffer
validation corrupts the index register.  As was done for ARC, the fix
is to disable interrupts during the buffer validate operation.

Signed-off-by: Phil Erwin <phil.erwin@lexmark.com>
2021-08-17 06:06:33 -04:00
Bradley Bolen 046f93627c arch: arm: cortex_r: Support nested exception detection
Cortex-A/R does not have hardware-supported nested interrupts, but they
are easily emulated using the nesting level stored in the kernel
structure.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
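A sketch of the emulation, relying on the kernel's per-CPU nesting
counter rather than hardware state (the function name is illustrative):

    bool z_arm_exception_is_nested(void)
    {
            /* nested > 1 means this exception preempted another one */
            return arch_curr_cpu()->nested > 1U;
    }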
Bradley Bolen 1e153b5091 arch: arm: cortex_r: Add support for recoverable data abort
Add functionality based on Cortex-M that enables recovery from a data
abort using Zephyr's exception recovery framework.  If there is a
registered z_exc_handle for a function, then use its fixup address if
that function aborts.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
Bradley Bolen ff1a5e7858 arch: arm: cortex_r: Add ARCH_EXCEPT macro
With the addition of userspace support, Cortex-R needs to use SVC calls
to handle oops exceptions.  Add that support by defining ARCH_EXCEPT to
do an SVC call.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
Bradley Bolen 65dcab81d0 arch: arm: cortex_r: Do not use user stack in svc/isr modes
The user thread cannot be trusted so do not use the stack pointer it
passes in.  Use the thread's privilege stack when in privileged modes to
make sure a user thread does not trick the svc/isr handlers into writing
to memory it should not.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
Phil Erwin e0bed3b989 arch: arm: cortex_r: Add MPU and USERSPACE support
Use Cortex-M code as a basis for adding MPU support for the Cortex-R.

Signed-off-by: Phil Erwin <phil.erwin@lexmark.com>
2021-08-17 06:06:33 -04:00
Daniel Leung 7862724c50 arm64: smp: arm64_smp_init to be done at PRE_KERNEL_2
arm64_smp_init() is at the same initialization level
and priority as the GICv3 interrupt controller. This means
that arm64_smp_init() can be called before the interrupt
controller driver has been initialized if the linker decides
to put the driver init entry later. This would result in
faults when arm64_smp_init() tries to connect interrupts.
So move arm64_smp_init() to PRE_KERNEL_2 instead. SMP
initialization is called later in the boot process, so
this should not affect SMP operations.

This is in preparation for making interrupt controller
drivers be built as a static library. The linking order
is going to change, which would result in this being
initialized before the interrupt controller driver.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-17 06:06:03 -04:00
Stephanos Ioannidis 6df8f7e435 arch: arm: cortex_m: Add ARMv8.1-M MVE configs
This commit adds the ARMv8.1-M M-Profile Vector Extension (MVE)
configurations as well as the compiler flags to enable it.

The M-Profile Vector Extension consists of the MVE-I and MVE-F
instruction sets which are integer and floating-point vector
instruction sets, respectively.

The MVE-I instruction set is a superset of the ARM DSP instruction
set (ARMv7E-M) and therefore depends on ARMV8_M_DSP, and the MVE-F
instruction set is a superset of the ARM MVE-I instruction set and
therefore depends on ARMV8_1_M_MVEI.

The SoCs that implement the MVE instruction set should select the
following configurations:

  select ARMV8_M_DSP
  select ARMV8_1_M_MVEI
  select ARMV8_1_M_MVEF (if floating-point MVE is supported)

The GCC compiler flags for the MVE instruction set are specified
through the `-mcpu` flag.

In case of the Cortex-M55 (the only supported processor type for
ARMv8.1-M at the time of writing), the `-mcpu=cortex-m55` flag, by
default, enables all the supported extensions which are DSP, MVE-I and
MVE-F.

The extensions that are not supported can be specified by appending
`+no(ext)` to the `-mcpu=cortex-m55` flag:

  -mcpu=cortex-m55           Cortex-M55 with DSP + MVE-I + MVE-F
  -mcpu=cortex-m55+nomve.fp  Cortex-M55 with DSP + MVE-I
  -mcpu=cortex-m55+nomve     Cortex-M55 with DSP
  -mcpu=cortex-m55+nodsp     Cortex-M55 without any extensions

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-08-14 20:29:57 -04:00
Evgeniy Paltsev 44e53eeacf ARC: MWDT: fix SMP build for MWDT toolchain
The MetaWare assembler doesn't accept the '@' symbol at the beginning
of a symbol name the way GNU does.

Drop the superfluous '@' from the _curr_cpu symbol.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-10 07:36:25 -04:00
Evgeniy Paltsev 7ca190c20f ARC: 64BIT: Kconfig increase stack sizes for 64-bit platforms
Increase the default stack sizes for 64-bit platforms where required.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-07 20:36:23 -04:00
Evgeniy Paltsev 5ed232b62c ARC: ARCv3 64: adopt ARC SMP code for ARCv3 64 bit
Rewrite ARC SMP code with ASM-compat macros so it can be
used for ARCv3 64 bit.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-07 20:36:23 -04:00
Daniel Leung c661765f1d arm: cortex-m: setup TLS pointer before switching to main
The TLS global pointer is only set during context switch, so for the
first switch to the main thread, the TLS pointer is NULL, which would
cause an access violation when accessing any thread-local variables in
the main thread. Fix this by setting the pointer before entering the
main thread.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
Ioannis Glaropoulos ca5623d288 arm: swap: cleanup an #ifdef statement in swap routine
Cleanup an #ifdef statement in swap_helper.S; use
ARMV6_M_ARMV8_M_BASELINE instead of listing all
Cortex-M baseline implementation variants. This
fixes an issue with Cortex-M23 whose Kconfig
define was not included in the original list.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos f795672743 arm: cortex-m: enhance information dump during HardFault escalation
When inside an escalated HardFault, we would like to get more
information about the reason for the escalation. We first check whether
the reason for the escalation is an SVC that occurs within a priority
level that does not allow it to trigger (e.g. inside a fault or another
SVC). If this is true, we set the error reason according to the
provided argument.

Only when the HardFault was not caused by a synchronous SVC do we check
the other reasons for escalation (e.g. a BusFault inside a previous
BusFault).

We also add a case for a debug event, to complete going through the
available flags in HFSR.

Finally, we ASSERT if we cannot find the reason for the escalation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 7930829826 arm: cortex-m: move synchronous SVC assessment in a separate function
Move the assessment of a synchronous SVC error into a
separate function. This commit does not introduce any
behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos a8d6c14d30 arm: cortex-m: clean up some more hard-coded constants in swap_helper
Clean up a few more hard-coded constants
in swap_helper.S and replace them with
CMSIS-like defines in cpu.h. No behavioral
changes in this commit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 03c4bcd920 arm: use BASEPRI_MAX instead of BASEPRI to mask interrupts
When locking interrupts in a critical section, it is safer to do MSR
BASEPRI_MAX instead of MSR BASEPRI. The rationale is that a write to
BASEPRI_MAX is conditional: it is only applied if it changes the
masking to a higher priority level. This commit replaces BASEPRI with
BASEPRI_MAX in operations that aim to lock specific interrupts:
- irq_lock()
- masking out PendSV
So, for example, it is not possible to accidentally unmask any
interrupts by doing an irq_lock operation. The commit does not
introduce behavioral changes; however, it makes irq_lock() more robust
against future changes to the IRQ locking mechanism.
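
The difference is visible in a two-line sketch:

  /* Unconditional write: could accidentally *unmask* interrupts */
  __asm__ volatile ("msr BASEPRI, %0" :: "r" (level) : "memory");

  /* Conditional write: only applied if it raises the masking level */
  __asm__ volatile ("msr BASEPRI_MAX, %0" :: "r" (level) : "memory");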

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 7156183985 arm: fix the VTOR alignment requirement for Baseline Cortex-M
Baseline Cortex-M requires VTOR to be aligned on a 64-word boundary,
because bit 7 of VTOR is also RAZ/WI. This commit updates the vector
table section alignment for Baseline Cortex-M to reflect that
implementation constraint.
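
For illustration, the constraint can be checked when programming VTOR
(a sketch; the vector table symbol name is illustrative):

  /* On Baseline, VTOR bits [7:0] are effectively RAZ/WI, so the
   * table must sit on a 256-byte (64-word) boundary.
   */
  __ASSERT(((uint32_t)_vector_table & 0xFFU) == 0U,
           "vector table must be 64-word aligned");
  SCB->VTOR = (uint32_t)_vector_table;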

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos ebcd5de596 arm: cortex_a_r: rename z_platform_init to z_arm_platform_init
Platform-specific initialization during early boot has been a feature
supported only by Cortex-M; the Kconfig symbol is defined in the
arch/arm Kconfig space. We rename the z_platform_init() function to
z_arm_platform_init() to indicate more clearly that this is an
internal, private ARM-only API.

This commit does not introduce behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 1706b4dfaa arm: rename z_platform_init to z_arm_platform_init
Platform-specific initialization during early boot has been a feature
supported only by Cortex-M; the Kconfig symbol is defined in the
arch/arm Kconfig space. We rename the z_platform_init() function to
z_arm_platform_init() to indicate more clearly that this is an
internal, private ARM-only API.

This commit does not introduce behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 70984a1587 arm: set DebugMonitor IRQ unconditionally during initialization
If the DebugMonitor extension is implemented by the core, the interrupt
may be pended and become active even if it is not enabled. Set the
priority level of DebugMonitor upon system initialization to the
intended value unconditionally, so we do not end up with undefined
behavior if the exception is accidentally pended. Since the priority
level is set at init, we can remove the priority reset in the DWT
driver initialization.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 6981b84550 arm: ensure SysTick IRQ level is set unconditionally
When the SoC implements SysTick but the system does not use it as the
driver for system timing, we still need to set its interrupt priority.
This is because the SysTick IRQ is always enabled, so we must ensure
the interrupt priority is set to a level lower than the kernel
interrupts (for the assert mechanism to work properly) in case the
SysTick interrupt is accidentally raised.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 28a59f67b9 arm: route PendSV to spurious IRQ handler if it is unused
If the PendSV interrupt is not used by Zephyr (this is
the case when we build with single-thread support) we
route the interrupt to z_arm_exc_spurious, instead of
assigning 0 to the vector table entry. This is because
the interrupt is always enabled and always exists, so
it is safer to always get the proper error report, in
case we accidentally pend the PendSV, for any reason.

We also add a comment in the PendSV priority setting,
explaining why it has to be assigned a priority level
even if it is not used.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos 41d3d38aec arm: aarch32: sort the source files lists alphabetically
Re-organize the library sources list so the files
are sorted alphabetically.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Bradley Bolen 379bb70728 arch: aarch32: cortex_m/r: Add arch exception helper
Create z_arm_preempted_thread_in_user_mode to abstract the
implementation differences between Cortex-M and R to determine if an
exception came from userspace.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-07-28 21:08:09 -04:00
Bradley Bolen 50a6dafdc5 arch: aarch32: cortex_m/r: Add arch helper function
Create z_arm_thread_is_user_mode to abstract the implementation
differences between Cortex-M and R to determine if the current thread is
in user or kernel mode.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-07-28 21:08:09 -04:00
Chen Peng1 fbe13b7bc2 cmake: oneAPI: add oneAPI support on Windows.
Add the .S file extension suffix to CMAKE_ASM_SOURCE_FILE_EXTENSIONS,
because clang from oneAPI can't recognize such files as assembly files
on Windows, so they won't be added to the build system.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-07-27 07:20:12 -04:00
Dong Wang a6800cefb1 x86/cache: fix issues in arch dcache flush function
Correct the wrong operand of the clflush instruction. The old operand
pointed to a location on the stack and didn't work; the new one,
written with the Linux kernel code as a reference, works correctly.

The end address, rather than the size, should be rounded up.

Also add a Kconfig option to disable the use of the mfence instruction
for SoCs that support clflush but not mfence.
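
The corrected flush loop looks roughly like this (a sketch; the
line-size constant is illustrative):

  /* Flush [addr, addr + size): round the start down and the end up
   * to cache line boundaries. Note the operand is the memory
   * location itself, not the pointer variable holding its address.
   */
  uintptr_t start = ROUND_DOWN((uintptr_t)addr, CACHE_LINE_SIZE);
  uintptr_t end = ROUND_UP((uintptr_t)addr + size, CACHE_LINE_SIZE);

  for (uintptr_t line = start; line < end; line += CACHE_LINE_SIZE) {
      __asm__ volatile ("clflush %0" :: "m" (*(volatile char *)line));
  }
  __asm__ volatile ("mfence" ::: "memory");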

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
2021-07-23 16:22:07 -04:00
Martin Åberg a1d1a5f547 SPARC: Keep interrupts disabled during kernel init
This commit avoids enabling interrupts during Zephyr init.

Details:
Interrupts will be enabled only when the first thread starts or if
arch_irq_unlock() is called before that.

The logic is now:
1. Enable traps, disable interrupts globally
2. Initialize bss
3. Call _PrepC

Use an in-place memset() to avoid register window overflow and
underflow traps. That is perhaps not the common scenario, but it could
happen with a memset() implementation that contains SAVE instructions
on a system with few register windows.

The second, and more important, item this commit addresses is that it
increases the processor interrupt level (priority) to the highest. That
is, it enters _PrepC with all maskable interrupt levels disabled.

This fixes some cases where interrupts could be taken after
z_clock_driver_init() while the system was still initializing. That
seems to have occurred when clearing large thread stacks.

The third thing is that we now start out with current window pointer
0 (PSR.CWP=0) instead of 1. It has no practical implication except
for preparing for possible future support for systems with only
two windows.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-07-22 10:25:53 -04:00
Dominik Ermel 86a1252556 arch/Kconfig: Remove stray tab from USERSPACE help
This commit removes a stray tab from the help text.

Signed-off-by: Dominik Ermel <dominik.ermel@nordicsemi.no>
2021-07-15 22:58:28 +03:00
Dylan Hung b61ea62b6f arch: give the choice "Cache type" a name
Give the choice a name so that the soc/board developers can change the
default selection in their Kconfig.*.

For example:
choice CACHE_TYPE
	default HAS_EXTERNAL_CACHE
endchoice

A similar issue has been discussed previously:
https://github.com/zephyrproject-rtos/zephyr/issues/6948

Signed-off-by: Dylan Hung <dylan_hung@aspeedtech.com>
Change-Id: I07c3e78a5243b30912f8e44fa3181fa163016318
2021-07-14 10:54:59 +03:00
Huifeng Zhang 0eab654b13 arch: arm64: select SCHED_IPI_SUPPORTED for Armv8_R
Armv8-R supports IPI.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2021-07-13 09:30:29 -04:00
Huifeng Zhang c34960bc87 arch: arm64: Unify the initialization of MMU and MPU
MMU and MPU should not be enabled together, and they provide the same
functionality, so unify their initialization.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2021-07-13 09:30:29 -04:00
Felipe Neves 7b09d031fa arch: riscv: added support for custom initialization of gp register
Also add an implementation for the esp32c3 SoC.

Signed-off-by: Felipe Neves <ryukokki.felipe@gmail.com>
Signed-off-by: Felipe Neves <felipe.neves@espressif.com>
2021-07-07 20:58:50 -04:00
Evgeniy Paltsev fbc9fbf92f ARC: save/restore accumulator registers on all ARCv2 HS CPUs by default
Accumulator registers (ACCL, ACCH) are used on HS CPUs not only when
the FPU is in use but also when MPY is in use. We enabled MPY for all
ARCv2 HS in commit
18a24c3f6 ARC: gcc-m-cpu: use -mcpu=archs as a default for ARCv2 HS
but we didn't enable accumulator register management.

Let's enable accumulator register save/restore on all ARCv2 HS CPUs by
default.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-07-06 15:17:26 -05:00
Watson Zeng c6fcdc24ac arch: arc: update ARConnect ICD select mask when a new CPU comes online
The ARConnect Inter-core Debug Unit (ICD) provides additional debug
assist features in multi-core scenarios; it is useful to halt other
cores when one core is halted.

Previously we programmed the ICD in the master core's (core 0) initial
stage, adding all cores to the mask, so we had to make sure the other
(slave) cores had launched and were running before enabling the ICD on
the master core. That does not work if the master core launches first
and then brings up the slave cores conditionally.

Let's have each slave core update the ARConnect debug (ICD) select mask
itself when it comes online, instead of using a hardcoded select mask.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-07-06 15:10:39 -05:00
Maksim Masalski 466c5d9dea arch: x86: core: remove order eval of 'z_x86_check_stack_bounds' args
The code depended on the order of evaluation of the
'z_x86_check_stack_bounds' function arguments. The fix is to introduce
two local variables, assign them the values of _df_esf.esp and
_df_esf.cs, and then call the function with those two locals as
arguments.

Found as a coding guideline violation (MISRA R13.2) by a static code
scanning tool.

Also change "int reason" to "unsigned reason" as in other functions.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-23 07:10:18 -04:00
Maksim Masalski cbfd33f2ec arch: add comments to empty default case, add default LOG_ERR
According to the Zephyr Coding Guideline all switch statements
shall be well-formed.
Add a comment to the empty default case.
Add a LOG_ERR to the default case.

Found as a coding guideline violation (MISRA R16.1) by a static code
scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-22 08:23:43 -04:00
Andy Ross b651aa9f7d arch/x86/zefi: Fix entry-nop hack for EFI entry
commit 5e9c583c24 ("arch/x86_64: Terrible, awful hackery to
bootstrap entry") introduced a terrible trick which begins execution
at the bottom of .locore with a jump, which then gets replaced with
NOP instructions for the benefit of 16 bit real mode startup of the
other CPUs later on.

But I forgot that EFI enters in 64 bit code natively, and so never
hits that path.  And moving it to the 64 bit setup code doesn't work,
because at that point when we are NOT loaded from EFI, we already have
the Zephyr page tables in place that disallow writes to .locore.

So do it in the EFI loader, which while sort of a weird place, has the
benefit of being in C instead of assembly.

Really all this code needs to go away.  A proper x86 entry
architecture would enter somewhere in the main blob, and .locore
should be a tiny stub we copy in at runtime.

Fixes #36107

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-14 08:22:34 -04:00
Shih-Wei Teng d109805cb2 RISC-V: Round up pre-populated stack frame to arch stack alignment
The stack frame size used for context switch is rounded up to 16-byte
alignment. Therefore, we need to round down the pointer to the top of
the pre-populated stack frame so that the preserved stack frame size is
also a multiple of 16 bytes.
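
In code form, the adjustment is essentially (a sketch; 'stack_top' and
the frame struct name stand in for the real ones):

  /* Rounding the frame pointer down keeps the stack pointer 16-byte
   * aligned once the pre-populated frame is in place.
   */
  struct __esf *frame = (struct __esf *)
      ROUND_DOWN(stack_top - sizeof(struct __esf), 16);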

Fixes #29535

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-06-11 16:13:01 +02:00
Daniel Leung 253314aabe x86: reduce VM size if ACPI to 1GB
Since physical memory is no longer wholly identity mapped, there is no
need to set the VM size larger than the physical memory size. The VM
size was 2GB (the max physical memory size of x86 boards) + 1GB (for
memory mappings). So simply shrink the size to 1GB, as the kernel is
small and we still have a large chunk of space for memory mapping.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Daniel Leung 39ba281686 x86: acpi: no need to map all physical memory
With ACPI doing dynamic memory mapping and unmapping
to access ACPI tables, there is no need to identity
map all the physical memory anymore. So remove
the "select" statement in ACPI kconfig.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Daniel Leung 454522430f x86: acpi: use memory mapping/unmapping to access ACPI tables
Instead of accessing ACPI tables through physical address, do
memory mapping/unmapping so they can be accessed via virtual
addresses. This allows us to avoid identity mapping all
physical memory, and thus no need for a page table large enough
to map everything.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Daniel Leung a3e817700f x86: acpi: limit search on where EBDA can be
This limits the search for the Extended BIOS Data Area (EBDA) to the
range 0x80000-0x100000, as this is usually where it resides. If
0000:040e holds an address outside this area, it is probably invalid
and should not be dereferenced, to avoid a fault.
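
A sketch of the sanity check:

  /* The BDA word at 0000:040e holds the EBDA segment; shift it left
   * by 4 to form a physical address, then range-check it.
   */
  uintptr_t ebda = (uintptr_t)(*(uint16_t *)0x040eUL) << 4;

  if ((ebda < 0x80000UL) || (ebda >= 0x100000UL)) {
      return NULL;   /* implausible pointer: don't dereference it */
  }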

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Jeremy Bettis 2de4a902de cmake: Support coverage flags on all archs
Most arch's CMakeLists.txt contain rules to add compiler and linker
flags for coverage if CONFIG_COVERAGE is enabled, but 4 of them were
missing this.

Instead, set the coverage flags in arch/common/CMakeLists.txt which
affects all archs.

Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
2021-06-10 18:01:36 -04:00
Maksim Masalski e96df40004 arch: x86: cast to the same size composite expression
The essential type of the RHS operand (64-bit) is wider than the
essential type of the composite expression in the LHS operand (32-bit):
entry_val on the LHS is 32-bit, while (phys + offset) on the RHS is
64-bit. Cast the RHS composite expression to the (pentry_t) type.

Found as a coding guideline violation (MISRA R10.7) by a static code
scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-10 17:17:23 -04:00
Jaxson Han 0c03a0572b arch: arm64: mpu: Fix mpu init assertion fail
During MPU init, we check MSA_frac (bits [55:52]) and MSA (bits
[51:48]) of the ID_AA64MMFR0_EL1 register. Currently we only allow 1F
to pass the check, but according to the Armv8-R AArch64 manual [1],
both 1F and 2F indicate that the processor supports the MPU. This
commit fixes that.

[1]: https://developer.arm.com/documentation/ddi0600/latest/
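
A sketch of the relaxed check:

  /* MSA is ID_AA64MMFR0_EL1 bits [51:48] and MSA_frac is bits
   * [55:52]; combined values 0x1F and 0x2F both indicate MPU
   * support.
   */
  uint64_t mmfr0;

  __asm__ volatile ("mrs %0, id_aa64mmfr0_el1" : "=r" (mmfr0));

  uint64_t msa = (mmfr0 >> 48) & 0xFFU;

  __ASSERT(msa == 0x1FU || msa == 0x2FU, "MPU not supported");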

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-06-09 23:40:03 -05:00
Jaxson Han cd536ae8ff arch: arm64: Refine the assertion in arch_start_cpu
When SMP is enabled, the primary core calls arch_start_cpu() to start
the secondary CPUs. There is an assertion checking the core mpid to
make sure it is called by the primary core.

But the check is bogus: after the first secondary core is brought up,
arm64_cpu_boot_params.mpid is changed, which makes the assertion fail.

The current solution restores arm64_cpu_boot_params.mpid. However,
using arch_curr_cpu()->id == 0 as the assertion is better.

_current_cpu->id would always fail the assertion inside this macro
(__ASSERT_NO_MSG(!z_smp_cpu_mobile())), so arch_curr_cpu()->id is used
instead.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-06-09 05:42:00 -05:00
Jim Shu 1b4dad433f arch: riscv: enable FPU of threads in unshared FP mode
In unshared FP mode, only one thread can use the FPU, but the kernel
doesn't know which one, so the riscv arch enables the FPU for every
thread.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-06-08 11:47:02 -05:00
Øyvind Rønningstad 382bbacb0a tfm: Put saving of FPU context into its own file so it can be reused
Also, this eases readability.

The new API can be used any time all FP registers must be manually
saved and restored for an operation.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-06-07 15:23:22 +02:00
Bradley Bolen 131af7648f arch: arm: cortex_r: Use assembler macros for exceptions
Most of the code for the three exception functions is identical so use
macros to make things easier to read.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-06-04 16:18:01 -05:00
Bradley Bolen 90e76bd891 arch: arm: cortex_r: Use macro for svc call
Use the context switch macro for z_arm_cortex_r_svc to be clearer about
the SVC call being executed.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-06-04 16:18:01 -05:00
Andy Ross 9cb8dcbf84 arch/x86_64: Use modern CR0 assembly
The 16 bit bootstrap code for SMP CPUs was using the 286-era "lmsw"
instruction (load machine status word) to set the protected bit in CR0
(which is the modern evolution of the same register), presumably
because this is 16 bit code and we can't move a dword into CR0.

But that's wrong, because the full instruction set *is* available in
real mode on a 386, you just have to use a operand size prefix to get
to it, which the assembler emits for you automatically when you use
the .code16 directive.

Write this conventionally and use modern (e.g. 1986-era) instructions.
It also has the advantage of not confusing much more modern
hypervisors like ACRN by issuing instructions they (and I!) never knew
existed.

Fixes #35076

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Andy Ross 5e9c583c24 arch/x86_64: Terrible, awful hackery to bootstrap entry
Because of a historical misunderstanding, by default the ACRN
hypervisor wants to load Zephyr at address 0x1000 and enter the binary
at that same address.  This entry point corresponds to the __start
symbol of the build they were given, which is a 1-cpu non-SMP
configuration.  Unfortunately, when we build with
CONFIG_MP_NUM_CPUS=1, the code in locore.S #if's out the 16 bit entry
point for the auxiliary CPUs at the start of the section.  So in the
build ACRN received, the start address happened to be 0x7000, the same
address we need to launch the AP processors from.

That's right: under ACRN, the SAME ADDRESS used to enter the OS in 32
bit mode needs to be used later to boot CPUs running in 16 bit real
mode!

The solution, such as it is, is to put a 32 bit jump at the entry
address which hops to the 32 bit OS entry code, and then scribble NOP
instructions over that jump once we get there so that the next time we
reach that address (in real mode) we fall through to the correct
entry.

This patch should be considered a temporary workaround.  While it
works on all x86 hardware, it's not really needed.  A much better
solution would be to eliminate the locore linker region entirely
(which causes other headaches) and enter the Zephyr binary in a 32 bit
address somewhere in the contiguous high memory area.  All that locore
is needed for is the 16 bit bootstrap code for SMP processors, which
is ~6 instructions and can be copied in from the kernel at runtime.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need be implemented by backing
store outside kernel. Promote them from z_* so these can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need
to be implemented by eviction algorithm and application
outside kernel. Promote them from z_* so these can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Martin Åberg aa0a90d09c SPARC: add the Flush windows software trap
This commit implements the SPARC V8 ABI "Flush windows" software trap.
It enables support for C++ exceptions and longjmp().

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-05-28 06:32:36 -05:00
Henrik Brix Andersen 2b0a481291 arch: arm: cortex-m: add support for clearing NXP MPU regions at boot
Clear NXP MPU regions at boot if CONFIG_INIT_ARCH_HW_AT_BOOT is
enabled.

Fixes: #34045

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2021-05-26 18:14:03 -05:00
Ioannis Glaropoulos b3b36f69a6 arm: cortex-m: shrink hidden option for null-pointer detection
Shrink the name of the hidden cortex-m option for the
null-pointer dereference detection feature.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 12:30:05 -05:00
Ioannis Glaropoulos d105a2b76c arm: shrink names for null-pointer exception detection Kconfigs
Reduce the length of the Kconfig defines related to
null-pointed dereference detection in Cortex-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 12:30:05 -05:00
Ioannis Glaropoulos 4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Watson Zeng 8414e86b42 arch: arc: _reset and _start section fix
SECTION_FUNC allows only one function to reside in a sub-section,
whereas SECTION_SUBSEC_FUNC allows multiple functions to reside in a
sub-section. We should use SECTION_SUBSEC_FUNC for _reset and _start.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-26 04:43:06 -05:00
Huifeng Zhang cb32268fad arch: arm64: Fix the assertion failure when MP_NUM_CPUS >= 3
"arm64_cpu_boot_params.mpid" should be reset to "master_core_mpid"
after a secondary CPU core comes up.

"arm64_cpu_boot_params.mpid" is used to check that the next CPU core to
come up has the expected mpid. After the expected CPU core is up,
"arm64_cpu_boot_params.mpid" is not restored to the primary CPU core's
mpid, so the primary CPU core's attempt to bring up the third CPU core
crashes in the assertion.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2021-05-26 04:42:49 -05:00
Watson Zeng 5516b02d53 arch: archs: using ATOMIC_OPERATIONS_BUILTIN
The ATOMIC_OPERATIONS_BUILTIN issue (internal Jira number
P10019563-43273) has been fixed in the new MWDT 2021.03 release, so we
can use the builtin atomics. This commit reverts PR #28528.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-25 12:55:48 -05:00
Johan Hedberg 8341a136d6 x86: multiboot: Fix NULL pointer dereferences
From the point of checking the info pointer value all code in the
z_multiboot_init() function depends on it being non-NULL. Therefore,
simply return from the function if it's NULL.
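
The guard is a one-liner at the top of the function (sketch):

  void z_multiboot_init(struct multiboot_info *info)
  {
      if (info == NULL) {
          return;   /* everything below dereferences info */
      }
      /* ... existing table parsing ... */
  }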

Fixes #33084

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2021-05-25 13:37:19 -04:00
Nicolas Pitre b8d24ffb45 arm64: mitigate FPU-in-exception usage side effects
Every va_start() currently triggers a FPU access trap if FPU is not
already used. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.

This is made even worse with exception code (syscalls, IRQ handlers,
etc.) where the exception code has to be resumed with interrupts
disabled upon FPU access as there is no provision for preserving an
interrupted exception mode's FPU context.

Fix those issues by simply simulating the sequence of STR instructions
that the va_start() generates without actually granting FPU access.
We limit ourselves only to exception context to keep changes to a
minimum for now.

This also allows for reverting the ARM64 exception in the nested IRQ
test as it now works properly even if FPU_SHARING is enabled.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-21 04:52:44 -05:00
Aurelien Jarno be49df628f arch: arm: cortex_m: z_arm_mpu_init: fix D-Cache invalidation
In case CONFIG_NOCACHE_MEMORY=y, the D-Cache needs to be cleaned and
invalidated before enabling the MPU, to make sure no data from a
__nocache__ region is present in the D-Cache.

If the D-Cache is disabled, SCB_CleanInvalidateDCache() shall not be
used, as the cache might contain random data for random addresses, and
cleaning it might just create a bus fault.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
Aurelien Jarno 1a583e44ba arch: arm: cortex_m: fix D-Cache reset with CONFIG_INIT_ARCH_HW_AT_BOOT
On reset we know neither the status of the D-Cache nor its content.

If it is disabled, do not try to clean it, as it might contain random
data for random addresses, and cleaning it might just create a bus
fault. Invalidating it is enough.

If it is enabled, its content is not random. SCB_InvalidateDCache()
will clean it, invalidate it and disable it.
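
Using the CMSIS helpers, the reset-time logic is roughly (a sketch,
assuming CCR.DC reflects the enable state):

  if (SCB->CCR & SCB_CCR_DC_Msk) {
      /* Enabled: contents are meaningful, so clean & invalidate
       * while turning it off.
       */
      SCB_DisableDCache();
  } else {
      /* Disabled: contents may be random; only invalidate, since
       * cleaning could trigger a bus fault.
       */
      SCB_InvalidateDCache();
  }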

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
Andy Ross 41e885947e arch/x86: Correct multiboot interpretation when building for EFI
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet).  Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.

The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way.  And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.

Fixes #33545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-15 15:30:02 -04:00
Daniel Leung 2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so they are going to
reside in boot and pinned regions. This is a step to enable
demand paging for whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 45d1fc9cc2 x86: gen_mmu: add support for boot and pinned regions
Both boot and pinned regions need to be mapped and permissions
set correctly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung af49ec0277 linker: remove TEXT_START macro
There is exactly one function being defined with TEXT_START
macro so the x86-32 __start can appear at the beginning of
text section. Since no one else is using it, better remove
TEXT_START to simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Nicolas Pitre 5f6e257b0b arm64: provide an optimized arch_page_phys_get()
The AT instruction gives the corresponding physical address directly.
Much faster than the default implementation.
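
A sketch of the fast path:

  uint64_t par;

  /* Stage-1 EL1 read translation; the result lands in PAR_EL1 */
  __asm__ volatile ("at S1E1R, %0" :: "r" (virt));
  __asm__ volatile ("isb");
  __asm__ volatile ("mrs %0, PAR_EL1" : "=r" (par));

  if (par & 1ULL) {
      return -EFAULT;   /* PAR_EL1.F set: no valid mapping */
  }
  *phys = (par & 0x0000FFFFFFFFF000ULL) | ((uint64_t)virt & 0xFFFULL);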

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-08 17:06:58 -04:00
Carlo Caione f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr is assuming that the cache
controller is always on-core thus managed at the arch level. This is not
always the case because many SoCs rely on external cache controllers as
a peripheral external to the core (for example PL310 cache controller
and the L2Cxxx family). In some cases you also want a single driver to
control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Evgeniy Paltsev 93bf5f58e7 ARC: add TLS support for ARCv3
For ARCv3 the register is fixed to r30, so we don't need to
configure it at compile-time.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 9a3d925860 ARC: boost default stacks in case of 64BIT
Increase the stacks required for the ARCv3 64-bit CI to pass. The CMSIS
stacks are for the programs in samples/portability.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 8048b14135 ARC: allow building code for processors without ZOL
ARCv3 64-bit processors don't have the Zero Delay Loop (also named Zero
Overhead Loop, ZOL) mechanism. Add a Kconfig option to remove the ZOL
register save/restore so the code can be built for both ARCv2 and
ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 3f12ca57b8 ARC: make vector table bit agnostic
ARCv2 32-bit and ARCv3 64-bit share the same vector table structure but
with different vector entry sizes (32 and 64 bit), so we can easily
make the vector table bit agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 0d859796be ARC: make variables with regs and addresses bit agnostic
Make the variables where we store CPU register values and memory
addresses bit agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev ab17a59ba5 ARC: mark accesses which are 32 bit regardless of platform bitness
Mark the places where we intentionally use st instead of STR in code
common to ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 9d309d300a ARC: work around bloated structure access in ASM with _st_huge_offset
When accessing a bloated structure member we can exceed the u9 operand
limit of the store instruction, so use the _st32_huge_offset macro
instead for 32-bit accesses.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev c2b61dfe72 ARC: rewrite ASM code with asm-compat macros
Rewrite the ARC assembler code with asm-compat macros, so the same code
can be used for both ARCv2 (GNU and MWDT assemblers) and ARCv3 (GNU
assembler).

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 8cb122ea5d ARC: reuse headers for both ARCv2 and ARCv3 where possible
Reuse ARCv2 headers for ARCv3 where possible. In this commit we simply
allow them to be used for ARCv3; we'll move them to the proper folder
and rename them where required in an upcoming cleanup patch.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev 6afe7c5fd2 ARC: prepare for building for ARCv3 HS6x
Do basic preparations for building code for ARCv3 HS6x
* add ISA_ARCV3 and CPU_HS6X config options
* add off_t type support for __ARC64__
* use elf64-littlearc format for linking
* use arc64 mcpu for CPU_HS6X

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Daniel Leung 18aad13d76 x86: mmu: implement arch_page_phys_get()
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung 786cf641dc x86: mmu: implement arch_mem_unmap()
This implements arch_mem_unmap() as counterpart to
arch_mem_map().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung c481fd412e x86: mmu: don't decrement z_free_page_count in reserving code
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung 783b20712e arch: implement brute force find_lsb_set()
On RISC-V 64-bit, GCC complains about undefined reference
to 'ffs' via __builtin_ffs(). So implement a brute force
way to do it. Once the toolchain has __builtin_ffs(),
this can be reverted.
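
The brute force version is along these lines (sketch):

  /* Return the 1-based position of the least significant bit set,
   * or 0 if the operand is zero -- the __builtin_ffs() contract.
   */
  static ALWAYS_INLINE unsigned int find_lsb_set(uint32_t op)
  {
      unsigned int bit;

      if (op == 0U) {
          return 0;
      }
      for (bit = 1U; (op & 1U) == 0U; bit++) {
          op >>= 1;
      }
      return bit;
  }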

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Bradley Bolen 95a7e71661 arch: arm: aarch32: Move mpu code up a level
Move the mpu code to the common aarch32 directory in preparation for
Cortex-R mpu support

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-05-06 19:39:09 +02:00
Daniel Leung 37672958ac x86: mmu: relax KERNEL_VM_OFFSET == SRAM_OFFSET
There was a restriction that KERNEL_VM_OFFSET must equal SRAM_OFFSET so
that the page directory pointer (PDP) or page directory (PD) can be
reused. This is not very practical in the real world due to various
hardware designs, especially those where SRAM is not aligned to the PDP
or PD. So rework those bits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-05 19:42:25 -04:00
Gerard Marull-Paretas 280ca7a632 arch: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
used by the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Øyvind Rønningstad a2cfb8431d arch: arm: Add code for swapping threads between secure and non-secure
This adds code to swap_helper.S which does special handling of LR when
the interrupt came from secure. The LR value is stored to memory, and
put back into LR when swapping back to the relevant thread.

Also, add special handling of FP state when switching from secure to
non-secure, since we don't know whether the original non-secure thread
(which called a secure service) was using FP registers, so we always
store them, just in case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Ioannis Glaropoulos ad808354d2 arch: arm: Add config for non-blocking secure calls
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Nicolas Pitre 76494f8589 arm64: optimize offsets in z_arm64_context_switch
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-04 22:41:32 -04:00
Mahesh Mahadevan d6b50233ac arch: arm: Setup Static MPU regions earlier in boot flow
Set up the static MPU regions before the PRE_KERNEL_1 and PRE_KERNEL_2
functions are invoked. This sets up the MPU for SRAM regions in case
code relocated to SRAM is invoked from any of these functions.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan 1b36c6c00e arch: arm: Create a MPU entry for relocated code
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should be allowed
to execute from SRAM.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan 64e973fdcd Kconfig: Add a new config CODE_DATA_RELOCATION_SRAM
1. This will help us identify whether the relocation is to SRAM, which
   is needed when setting up the MPU entry for the SRAM region where
   code is relocated.
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific folder.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Nicolas Pitre 35c9ed6a4b arm64: don't create a section for z_arm64_exit_exc_fpu_done
Both z_arm64_exit_exc and z_arm64_exit_exc_fpu_done must be within
the same section as execution falls through here.

If z_arm64_exit_exc_fpu_done creates a section of its own then the
linker is free to disjoint the code and we absolutely don't want that.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 19:56:26 -04:00
Guennadi Liakhovetski 29abc8adc0 xtensa: fix booting secondary cores on the dummy thread
When secondary cores are booted, they use the dummy thread and the IRQ
stack until they switch over to a real thread. Therefore dummy threads
shouldn't be skipped when cohering the outgoing thread stack; only
threads with zero stack size should be skipped.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Nicolas Pitre f1f63dda17 arm64: FPU context switching support
This adds FPU sharing support with a lazy context switching algorithm.

Every thread is allowed to use the FPU/SIMD registers. In fact, the
compiler may insert FPU register accesses in any context to optimize
even non-FP code, unless the -mgeneral-regs-only compiler flag is used,
but Zephyr currently doesn't support such a build.

It is therefore possible with this patch to do FP accesses in ISRs as
well, although IRQs are then disabled to prevent nested IRQs in such
cases.

Because the thread object grows in size, some tests have to be adjusted.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Nicolas Pitre a82fff04ff arm64: implement exception depth count
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.

The IN_EL0 flag is now updated unconditionally even if userspace is
not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Nicolas Pitre 949ef7c660 Kconfig: clean up FPU and FPU_SHARING entries
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU being selected by those archs as a dependency
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise for not affecting existing config, let's
move the default selection local to those configs that care, again to
avoid a growing list of conditionals here. Adjust the help text which
applies to more than just Cortex-M.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Jiafei Pan 4a87c08606 arm64: cache: fix arch_dcache_all()
Add data barriers before and after the dcache flush or clean, and
restore the data cache level to 0 after all operations.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
Jiafei Pan 12b9b5aacc arm64: cache: refine arch_dcache_range()
Move all the assembly code to C. Fix arch_dcache_line_size_get() to get
the dcache line size using "4 << dminline", without considering CWG,
following the sample code in the Cortex-A Series Programmer's Guide for
Armv8-A.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
Daniel Leung 54283efcce x86: mmu: allow page table extra mappings to have cache disabled
This adds the bits to the gen_mmu.py script so that extra mappings
can be added with caching disabled. This is useful for mapping
MMIO regions where caching is not desired.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 21:17:24 -04:00
Daniel Leung 43f0726985 arm: aarch32: timing: fix potential divide by zero if DWT
There is a possibility that the DWT frequency calculation divides by
zero. Fix the issue by repeatedly sampling the delta clock cycles and
delta DWT cycles until both are non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
Daniel Leung d6cbdace78 x86: timing: fix potential divide by zero
There is a possibility that the TSC frequency calculation divides by
zero. Fix the issue by repeatedly sampling the delta clock cycles and
delta TSC cycles until both are non-zero.
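
Conceptually the retry loop is (a sketch; the TSC read helper name is
illustrative):

  uint32_t dcyc, dtsc;

  do {
      uint32_t cyc_start = k_cycle_get_32();
      uint64_t tsc_start = read_tsc();

      k_busy_wait(10U * USEC_PER_MSEC);

      dcyc = k_cycle_get_32() - cyc_start;
      dtsc = (uint32_t)(read_tsc() - tsc_start);
  } while ((dcyc == 0U) || (dtsc == 0U));   /* never divide by zero */

  uint64_t tsc_freq = ((uint64_t)dtsc *
                       sys_clock_hw_cycles_per_sec()) / dcyc;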

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
Jennifer Williams 3e28a570c2 arch: x86: core: pcie: rephrase use of ain't
Rephrasing away from ain't, which is informal, uncommon, and can
be viewed as substandard or 'slang'.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-29 07:15:50 -04:00
Gerard Marull-Paretas f163bdb280 power: move reboot functionality to os lib
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-28 20:34:00 -04:00
Jennifer Williams 734c65ad23 arch: arm: core: aarch32: cortex_m: fault: fix if...else ifs
bus_fault() and hard_fault() were missing the final else statement in
their if ... else if constructs. This commit adds a non-empty else {}
to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams a5c27d69b5 arch: arm: core: aarch32: cortex_m: debug: remove if...else if construct
z_arm_debug_monitor_event_error_check() was missing the final else
statement in its if ... else if construct, violating guideline 15.7.
This commit removes the else if in favor of the limited early-exit
conditions, rather than adding an empty final else {}, to comply.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Gerard Marull-Paretas 6c7c9e2b99 arch: x86: remove usage of device_pm_control_nop
If device PM is not implemented just use NULL.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Hou Zhiqiang 50d263d138 arm64: Do not try to bring up the cores disabled in DT node
The macro DT_FOREACH_CHILD iterates over all child nodes, ignoring the
status property. This patch switches to DT_FOREACH_CHILD_STATUS_OKAY,
which iterates only over the enabled child nodes, to avoid trying to
bring up disabled cores.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:55 -04:00
Hou Zhiqiang 9681034875 arm64: Fix MPID load instruction for secondary cores
Change the loading of the MPID for secondary cores to use the offset
macro BOOT_PARAM_MPID_OFFSET.

Currently the code loads the MPID for secondary cores from offset 0x0
of struct arm64_cpu_boot_params. This works because the macro
BOOT_PARAM_MPID_OFFSET currently has the value 0x0, but if the location
of the "mpid" member changes, SMP booting would fail and the build
assert would not emit any warning.
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:18 -04:00
Daniel Leung 1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty on the size of these blocks, more space is
being reserved which could result in wasted space. Though, this
retains the use of hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Morten Priess a0dd44c5e0 arch: select HAS_DTS for SPARC
With PR#34449, architectures that use DTS must select the HAS_DTS
configuration.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2021-04-26 13:42:10 +02:00
Jiafei Pan a89cb1cc13 arm64: mmu: invalidate all data caches before enabling them
Data in the data caches may be stale before the caches are enabled, so
we need to invalidate all data caches first, before enabling them.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-26 13:39:39 +02:00
Jiafei Pan 7b7035231f arm64: cache: add arch_dcache_all()
Add a cache function, arch_dcache_all(), which can clean, invalidate,
or clean & invalidate all data caches.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-26 13:39:39 +02:00
Ioannis Glaropoulos fdb4df26d3 arm: cortex-m: minor doc updates in swap_helper.S
Inline some minor clarifications regarding the
Lazy Stacking feature in the cortex-m pendSV
handler, for ease of understanding. Also, fix
some minor style issues in comments.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-23 15:18:16 -05:00
Carlo Caione 256ca55476 arm64: Rework stack usage
The ARM64 port is currently using SP_EL0 for everything: kernel threads,
user threads and exceptions. In addition when taking an exception the
exception code is still using the thread SP without relying on any
interrupt stack.

On one hand this makes the context switch really quick, because the
thread context is already on the thread stack and we only have to save
one register (SP) for the whole context. On the other hand, the major
limitation introduced by this choice is that if for some reason the
thread SP is corrupted or points to an inaccessible location (for
example in case of stack overflow), the exception code is unable to
recover or even deal with it.

The usual way of dealing with this kind of problems is to use a
dedicated interrupt stack on SP_EL1 when servicing the exceptions. The
real drawback of this is that, in case of context switch, all the
context must be copied from the shared interrupt stack into a
thread-specific stack or structure, so it is really slow.

We use here a hybrid approach, sacrificing a bit of stack space for a
quicker context switch. While nothing really changes for kernel
threads, for user threads we now use the privileged stack (already
present to service syscalls) as the interrupt stack.

When an exception arrives the code now switches to use SP_EL1 that for
user threads is always pointing inside the privileged portion of the
stack of the current running thread. This achieves two things: (1)
isolate exceptions and syscall code to use a stack that is isolated,
privileged and not accessible to user threads and (2) the thread SP is
not touched at all during exceptions, so it can be invalid or corrupted
without any direct consequence.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-23 06:32:20 -04:00
Mahesh Mahadevan a9397e3b3a arm: cortex_m: Update get DWT frequency for NXP SoCs
Get the DWT cycle count frequency for NXP devices from the CMSIS
SystemCoreClock symbol.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-04-21 20:40:24 -04:00
Carlo Caione 1ceff68ea1 arm64: Fix maybe-uninitialized error
Fix:
 arch/arm64/core/smp.c:98:3: error: 'cpu_mpid' may be used uninitialized
 in this function [-Werror=maybe-uninitialized]

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-20 15:51:22 -04:00
Bradley Bolen 92a3209c5c arch: arm: aarch32: cortex_a_r: Dump callee saved registers on fault
Some of these registers may contain nuggets of information that would be
beneficial when debugging, so include them in the fault dump.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen c96ae584bf arch: arm: aarch32: cortex_a_r: Correct syntax for srs
The writeback specification should be after the register, not after the
mode according to the documentation at

Link: https://developer.arm.com/documentation/dui0489/h/arm-and-thumb-instructions/srs

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen 18ec84803c arch: arm: aarch32: Use ARRAY_SIZE in for loop
Do not hardcode the array size in the loop for printing out the floating
point registers of the exception stack frame.  The size of this array
will change when Cortex-R support is added.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Krzysztof Chruscinski ae4adea463 arch: arm: cortex_m: z_arm_pendsv in vector table when multithreading
When CONFIG_MULTITHREADING=n, the kernel-specific PendSV handler is not
used. Remove it from the vector table.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-20 16:00:39 +02:00
Bradley Bolen 6734c6e874 arch: arm: aarch32: Fix spurious interrupt handling
The GIC can return 0x3ff to indicate a spurious interrupt.  Other
interrupt controllers could return something different.  Check that the
pending interrupt is valid in order to avoid indexing past the end of
the isr_table.

This fixes #30465 and is based on the aarch64 fix in 9dd2731d.
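
The check is a simple bound test before indexing (a sketch using the
Zephyr GIC driver API and the sw ISR table):

  uint32_t irq = arm_gic_get_active();

  if (irq < CONFIG_NUM_IRQS) {
      struct _isr_table_entry *ite = &_sw_isr_table[irq];

      ite->isr(ite->arg);
      arm_gic_eoi(irq);
  } else {
      /* Spurious (e.g. the GIC returns 0x3ff): don't index past
       * the end of the table.
       */
  }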

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 08:30:41 -04:00
Nicolas Pitre 29c8e9bf66 arm64: decrustify and extend SMP boot code
The SMP boot code depends on physical CPU #0 to be first to boot and
subsequent CPUs to follow suit in a linear fashion. Let's decouple
physical and logical numbering so that any physical CPU can be the
boot CPU. This is based on a prior code proposal from
Jiafei Pan <Jiafei.Pan@nxp.com>.

This, however, was about to turn the boot code into some hairy mess.
So let's clean things up and simplify the code as well while at it.
Both the extension and the clean up aren't separate commits because
they actually depend on each other.

The BOOT_PARAM_*_OFFSET defines are locally hardcoded as there is no
point exposing the related structure widely. Build time assertions
ensure they don't go out of sync with the struct definition. And
vector_table.h is repurposed into boot.h to gather boot related
definitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-19 11:00:05 -04:00
Jiafei Pan 7889364771 arm64: refine the code for primary core checking
We can tell whether the caller of z_arm64_mmu_init() is on the primary
core or not, so there is no need to check the MPIDR; just add a
function parameter.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-19 11:00:05 -04:00
Carlo Caione cd4b01e0bb arm64: Use syscall frame and fix bad syscall handling
This patch is fixing three related problems:

1. When calling a syscall, the marshalling function uses the ssf
   parameter as the value to be saved in _current->syscall_frame to
   mark the beginning and the end of the syscall. This ssf value is not
   currently being explicitly set; instead the syscall code uses
   whatever value is stored in x6 when the syscall is called. If it
   happens that x6 is 0 at the time the syscall is called, the
   z_is_in_user_syscall() function fails. Fix this by passing the ESF
   as the value for ssf.

2. Given that the ssf now contains the ESF, we can fix
   arch_syscall_oops() to use the ESF to print a more detailed error
   message with a register dump.

3. When a wrong syscall number is used, handler_bad_syscall() is called.
   This function expects the ID number as its first parameter in order
   to print the error message; fix this.
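
A hedged sketch of the marshalling idea from point 1 (z_mrsh_example
and z_impl_example are hypothetical names, not code from this commit):

    static uintptr_t z_mrsh_example(uintptr_t arg1, void *ssf)
    {
        uintptr_t ret;

        _current->syscall_frame = ssf;  /* mark syscall entry */
        ret = z_impl_example(arg1);
        _current->syscall_frame = NULL; /* mark syscall exit */
        return ret;
    }

Passing the ESF as ssf guarantees the marker is a real, non-NULL
pointer instead of whatever happened to be in x6.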

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-18 22:04:52 -04:00
Carlo Caione 04df0ddc88 arm64: Set AARCH64_IMAGE_HEADER and BUILD_OUTPUT_BIN to y
It doesn't hurt always having the image header and generating the binary
output. I find myself constantly setting those to 'y', so make it
definitive.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-15 12:58:22 +02:00
Carlo Caione 013c8273ca arm64: gic: Enable access to ICC_* registers
The GICv3 driver configures the controller by accessing the ICC_*
system registers. To be able to do that without trapping we have to
explicitly set, at boot in EL3, the value of the ICC_SRE_EL3 register,
which is architecturally set to an UNKNOWN value on warm reset.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-15 12:26:39 +02:00
Nicolas Pitre 88477906f0 arm64: hold curr_cpu instance in tpidrro_el0
Let's fully exploit tpidrro_el0 by storing in it the current CPU's
struct _cpu instance alongside the userspace mode flag bit. This
greatly simplifies the code needed to get at the cpu structure, and
this paves the way to much simpler multi cluster support, as there
is no longer the need to decode MPIDR all the time.

The same code is used in the !SMP case as there are benefits there too
such as avoiding the literal pool, and it looks cleaner.

The tpidrro_el0 value is no longer stored in the exception stack frame.
Instead, we simply restore the user mode flag based on the SPSR value.
This way, more flag bits could be used independently in the future.
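
A sketch of what fetching the CPU struct can now look like (the flag
macro name and bit position are assumptions):

    #define TPIDRROEL0_USER_FLAG 1UL    /* assumed: bit 0 */

    static inline _cpu_t *arch_curr_cpu(void)
    {
        uint64_t tp;

        __asm__ volatile ("mrs %0, tpidrro_el0" : "=r" (tp));
        return (_cpu_t *)(tp & ~TPIDRROEL0_USER_FLAG);
    }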

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-14 15:06:21 -04:00
Jaxson Han f249544f48 arch: arm64: Add MPU drivers to the build system
When ARM_MPU is defined, the MPU drivers will be built into the final
zephyr target.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Jaxson Han 30ed92c218 arch: arm64: Armv8-R AArch64 MPU implementation
The Armv8-R AArch64 MPU can support a maximum of 16 memory regions, and
the actual region count can be retrieved from the system register
(MPUIR) during MPU initialization.
The current MPU driver only supports EL1.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Jaxson Han ad1da08f4f arch: arm64: Add Cortex-R82 config
Add Cortex-R82 config to support the Cortex-R82 processor.
Introduce the new CPU_CORTEX_R_AARCH64 config for the Cortex-R 64-bit
processor.

Since the current CPU_CORTEX_R config has already been bound for
AArch32 in many test cases, we therefore add a new CPU_AARCH64_CORTEX_R
to distinguish from the Cortex-R 32-bit processor.
We do not use CPU_CORTEX_R64 because this name will lead to ambiguity
with processor name like Cortex-R82.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Evgeniy Paltsev d4081fd07f ARC: allow to configure the RGF_NUM_BANKS only if ARC_FIRQ is enabled
As of today we use the second register bank only if fast interrupts are
enabled. So don't show the 'number of register banks' configuration
option if fast interrupts are disabled, to avoid user confusion.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-04-13 06:59:20 -04:00
Nicolas Pitre 790794ce84 arm64: improve CONFIG_MAX_XLAT_TABLES default value
The typical number of needed translation tables depends on memory
domain usage and userspace support, but also on the virtual address
space width due to the number of translation levels involved.
Reflect that in the default value.

Also fix a related comment where values were off by 1.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 22:13:38 -04:00
Nicolas Pitre fc8c53ff0e arm64: a few alignment fixes
The structure for the arm64_cpu_init array has to carry the cache
alignment on the whole structure and not on some internal padding
to achieve the desired effect.

And align struct __esf to a 16-byte boundary which will also align
its size accordingly. This structure is allocated on the stack on
exception entry and the ABI prescribed 16-byte stack alignment
should be preserved.
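
A sketch of the fix (field names are illustrative, not the actual
struct):

    /* the attribute must go on the whole struct so that each
     * arm64_cpu_init array element starts on its own cache line
     */
    struct cpu_init_params {
        uint64_t arg;
        void (*fn)(uint64_t arg);
    } __aligned(64);    /* assumed 64-byte cache lines */

The same applies to struct __esf, which gets __aligned(16) so that its
size, and therefore the stack pointer after allocation, stays 16-byte
aligned.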

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 11:47:41 -04:00
Krzysztof Chruscinski 8bee027ec4 arch: arm: Unconditionally compile IRQ_ZERO_LATENCY flag
The flag was present only when ZLI was enabled. That resulted in
additional ifdefs being needed whenever code supports both ZLI and
non-ZLI modes.

Removed the ifdefs, and added a build assert to IRQ connections to fail
at compile time if IRQ_ZERO_LATENCY is set but ZLI is disabled.
Additional cleanup was done as a result of removing the ifdef.
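
A sketch of the compile-time guard (the macro shape is an assumption;
the flag and Kconfig option names are real):

    #define CHECK_ZLI_FLAGS(flags)                              \
        BUILD_ASSERT(!((flags) & IRQ_ZERO_LATENCY) ||           \
                     IS_ENABLED(CONFIG_ZERO_LATENCY_IRQS),      \
                     "ZLI flag set but ZLI support is disabled")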

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-12 07:33:27 -04:00
Flavio Ceolin 9285b4c6bf arch: nios2: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
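
An illustration of the rule (not taken from the patch itself):

    static uint32_t example(uint32_t u, int32_t s)
    {
        /* non-compliant: u + s mixes unsigned and signed operands */

        /* compliant: make the conversion explicit so both operands
         * share the same essential type category
         */
        return u + (uint32_t)s;
    }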

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin b7d04487e1 arch: riscv: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin bfd9d0069b arch: sparc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 49f0c74a9e arch: common: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 4f5460ad6a arch: arm: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin abb1bbe6b1 arch: xtensa: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 4b55ee27d4 arch: arc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin 03544f0b77 arch: x86: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Ioannis Glaropoulos d307bd2fdd arm: add note explaining why Hard ABI is disabled for tfm builds
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Øyvind Rønningstad 80a351e22d arch: arm: Disallow FP_HARDABI when building with TFM
When building with TFM, the app is linked with libraries built by the
TFM build system. TFM is always built with -msoft-float which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard
which gives errors when linking with the libs from TFM since they are
built with a different ABI.

Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Nicolas Pitre 69a0fd3a6a aarch64: smp: make the cross-CPU swap_ptables call use its own IPI
Let's disentangle this from arch_sched_ipi() with an SGI for its
own purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:13 -04:00
Nicolas Pitre 28dc807f50 aarch64: mmu: don't touch the lock before the MMU is on
We can't do atomic memory operations before the MMU is on. Let's create
a code path to set up MMU page tables without any lock. There are
obviously no concurrency issues at this stage.
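
A sketch of the shape of such a code path, assuming Zephyr's spinlock
API and hypothetical names:

    static struct k_spinlock xlat_lock;

    static void add_map(uintptr_t virt, uintptr_t phys, size_t size,
                        bool may_lock)
    {
        k_spinlock_key_t key = {0};

        if (may_lock) {     /* false during early boot, MMU still off */
            key = k_spin_lock(&xlat_lock);
        }

        /* ... walk and populate the page tables ... */

        if (may_lock) {
            k_spin_unlock(&xlat_lock, key);
        }
    }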

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:00 -04:00
Carlo Caione 9dd2731d15 aarch64: Remove comparison with GIC-specific intid
GIC_INTID_SPURIOUS is a GIC-specific intid so it's not valid for custom
interrupt controllers. Rework a bit the logic by comparing the intid to
the maximum intid possible instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione 23a0c8c2ec aarch64: Do not save garbage on the stack
No need to save useless values on the stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione 64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access
the struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Jiafei Pan 56db1ee66d arch: arm64: add 40 bits physical and virtual address
Add support for 40-bit physical and virtual address spaces.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-09 13:25:15 +02:00
Kumar Gala be0a19757c riscv: MTVAL CSR not supported on OpenISA RV32M1
Don't report MTVAL on the OpenISA RV32M1 SoC as this CSR isn't
supported on the SoC.

Fixes: #34014

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-04-08 14:22:54 +02:00
Daniel Leung 09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 1eba3545c1 x86: timing: allow userspace to convert cycles to ns
The variable tsc_freq is not accessible from user threads,
which prevents user threads from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so conversion is possible.
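
With tsc_freq readable from user mode, the conversion is a one-liner
(sketch; the variable's exact type is assumed):

    extern uint64_t tsc_freq;   /* TSC frequency in Hz */

    uint64_t cycles_to_ns(uint64_t cycles)
    {
        /* beware of overflow for very large cycle counts */
        return (cycles * NSEC_PER_SEC) / tsc_freq;
    }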

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Flavio Ceolin 85b2bd63c1 arch: x86: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Flavio Ceolin 95cd021cea arch: arm: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Daniel Leung 64e99dfcf6 xtensa: change CONFIG_ATOMIC_OPERATIONS_ARCH to imply
Xtensa cores are highly configurable so each SoC may not have
the needed instructions for the hardware assisted atomic
operations. So instead of selecting the arch-specific atomic
operations kconfig, do a "imply" instead. So SoC or board
configs can disable this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-02 07:23:33 -04:00
Nicolas Pitre d18ab4af49 arm64: get rid of the mmu directory
Turns out that we could flatten the tree further as there are not
that many files to warrant a whole directory for this.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-01 15:47:04 -05:00
Anas Nashif 0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Carlo Caione a43f3bade8 arm/arm64: Fix misc and trivials for ARM/ARM64 split
Fix the header guards, comments, github labeler, CODEOWNERS and
MAINTAINERS files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Carlo Caione 3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Daniel Leung 7a27509d6f x86: gen_mmu: allow script to take extra arguments
This extends the cmake build script to take in extra arguments
for gen_mmu.py.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 4b477a9864 x86: mmu: allow copying page directory entries with large pages
This changes the assert when a large page is encountered to
copying the page directory entry to the new page directory.
This is needed when a large page entry is generated by
gen_mmu.py. Note that this still asserts when there are large page
entries at a higher level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 51263f73aa x86: gen_mmu: allow specifying extra mappings
This extends gen_mmu.py to accept additional mappings passed via
command line.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 0886a73df8 x86: gen_mmu: fail if reserved page table space is too small
This makes the gen_mmu.py script error out if the reserved space
for the page table in zephyr_prebuilt.elf is not large enough to
accommodate the generated page table. Let's catch this at build time
instead of hitting mysterious hangs when loading the page table at boot.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung 3ebcd8307e x86: mmu: add kconfig CONFIG_X86_EXTRA_PAGE_TABLE_PAGES
The whole page table is pre-allocated at build time and is
dependent on the range of address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) to
the page table so that gen_mmu.py can make use of these
extra pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Flavio Ceolin 3a04cc2210 riscv: core: Remove invalid comparison
An unsigned int can never be less than 0.
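
The removed pattern, for illustration:

    static void example(unsigned int val)
    {
        if (val < 0) {  /* always false: unsigned is never negative */
            /* dead code */
        }
    }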

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-26 07:13:13 -04:00
Martin Åberg 83f733ce59 SPARC: improve fatal log
The fatal log now contains
- Trap type in human readable representation
- Integer registers visible to the program when trap was taken
- Special register values such as PC and PSR
- Backtrace with PC and SP

If CONFIG_EXTRA_EXCEPTION_INFO is enabled, then all the above is
logged. If not, only the special registers are logged.

The format is inspired by the GRMON debug monitor and TSIM simulator.
A quick guide on how to use the values is in fatal.c.

It now looks like this:

E: tt = 0x02, illegal_instruction
E:
E:       INS        LOCALS     OUTS       GLOBALS
E:   0:  00000000   f3900fc0   40007c50   00000000
E:   1:  00000000   40004bf0   40008d30   40008c00
E:   2:  00000000   40004bf4   40008000   00000003
E:   3:  40009158   00000000   40009000   00000002
E:   4:  40008fa8   40003c00   40008fa8   00000008
E:   5:  40009000   f3400fc0   00000000   00000080
E:   6:  4000a1f8   40000050   4000a190   00000000
E:   7:  40002308   00000000   40001fb8   000000c1
E:
E: psr: f30000c7   wim: 00000008   tbr: 40000020   y: 00000000
E:  pc: 4000a1f4   npc: 4000a1f8
E:
E:       pc         sp
E:  #0   4000a1f4   4000a190
E:  #1   40002308   4000a1f8
E:  #2   40003b24   4000a258

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Martin Åberg c2b1e8d2f5 SPARC: implement ARCH_EXCEPT()
Introduce a new software trap 15 which is generated by the
ARCH_EXCEPT() function macro.

The handler for this software trap calls z_sparc_fatal_error() and
finally z_fatal_error() with "reason" and ESF as arguments.
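
A hedged sketch of the macro; which register carries the reason code
is an assumption:

    #define ARCH_EXCEPT(reason_p) do {                              \
        register uint32_t r __asm__("o0") = (uint32_t)(reason_p);   \
        __asm__ volatile ("ta 15" : : "r" (r) : "memory");          \
    } while (false)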

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Martin Åberg 9da5a786a1 SPARC: catch unexpected software traps
Unexpected software traps ("ta" instruction) are now handled by the
fatal exception handler and eventually end up in z_fatal_error().

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Kumar Gala 520ebe4d76 arch: arm: remove compat headers
These compat headers have been moved since at least v2.4.0 release so we
can now remove them.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-25 16:40:25 +01:00
Katsuhiro Suzuki 19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread, as the
counterpart of the existing k_float_disable() API. It also adds an empty
arch_float_enable() to each architecture that has arch_float_disable().
The arc and riscv architectures already implement arch_float_enable(),
so those are left untouched.

Motivation: The current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating point
programs. Users can use the FPU more easily if they can enable it on
running threads.
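
Intended usage from a running system thread (sketch):

    void start_fp_work(void)
    {
        int ret = k_float_enable(k_current_get(), 0);

        if (ret == 0) {
            /* FPU context is now saved/restored for this thread */
        }
    }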

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Carlo Caione 807991e15f AArch64: Do not use CONFIG_GEN_PRIV_STACKS
We are setting CONFIG_GEN_PRIV_STACKS when AArch64 actually uses a
statically allocated privileged stack.

This error was not captured by the tests because we only verify whether
a read/write to a privileged stack is failing, but it can fail for a lot
of reasons including when the pointer to the privileged stack is not
initialized at all, like in this case.

With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-25 07:23:19 -04:00
Eugeniy Paltsev 1b41da2630 ARC: Kconfig: rename CPU_ARCV2 option to ISA_ARCV2
* Rename CPU_ARCV2 to ISA_ARCV2. That helps to avoid conflicts between
  CPU family naming and ISA naming, and aligns this option
  with other ARC OSS projects.

* Generalize ARCV2 check to ARC check where it is required.

NOTE: we add ISA_ARCV2 option in a choice list as a preparation
for ISA_ARCV3 addition.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Eugeniy Paltsev 8311d27afc ARC: Kconfig: cleanup CPU_ARCEM / CPU_ARCHS options usage
Don't allow the user to choose the CPU_ARCEM / CPU_ARCHS options;
instead, select them when the exact CPU type (e.g. EM4 / EM6 / HS3X)
is chosen.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Kumar Gala 95e4b3eb2c arch: arm: Add initial support for Cortex-M55 Core
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M‑profile Vector Extension (MVE).

The support is based on the Cortex-M33 support that already exists in
Zephyr.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-23 13:13:32 -05:00
Jim Shu 8db3683820 arch: riscv: improve exception messages
Add exception descriptions for mcause IDs 6~15. Also print the mtval CSR
for memory access fault and illegal instruction exceptions.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-03-22 15:47:09 -04:00
Watson Zeng 0da8ec70dc arch: arc: enable divide zero exception
STATUS32.DZ (bit 13) is the EV_DivZero exception enable bit, and it's
not enabled by default. We need to set it explicitly to enable the
divide-by-zero exception during early boot and in each thread's setup.

The DZ bit is ignored on write and read as zero when there is no
hardware division configured. So we can simply set DZ bit even if
there is no hardware division configured.
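
A minimal sketch; the macro name is assumed, the bit position comes
from the text above:

    #define _ARC_V2_STATUS32_DZ (1U << 13)

    static inline uint32_t initial_status32(uint32_t base)
    {
        /* safe without hardware divide: DZ is then ignored on
         * write and reads as zero
         */
        return base | _ARC_V2_STATUS32_DZ;
    }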

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-19 13:56:59 -04:00
Anas Nashif fe0872c0ab clocks: rename z_tick_get -> sys_clock_tick_get
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif 771cc9705c clock: z_clock_isr -> sys_clock_isr
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Carlo Caione f3d11cccf4 aarch64: userspace: Enable userspace
Add ARCH_HAS_USERSPACE to enable userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 2936998591 aarch64: GCC10: Add -mno-outline-atomics
GCC10 introduced by default calls to out-of-line helpers to implement
atomic operations with the '-moutline-atomics' option. This is breaking
several tests because the embedded calls are trying to access the
zephyr_data region from userspace that is declared as MT_P_RW_U_NA,
triggering a memory fault.

Since there is currently no support for MT_P_RW_U_RO (and probably never
will be), disable the out-of-line helpers by disabling the GCC option.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 8cbd9c7d8e aarch64: userspace: Add missing entries in vector table
To support exceptions taken in EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione 1347fdbca7 aarch64: userspace: Increase KOBJECT_TEXT_AREA
This is needed to have some tests run successfully.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre 2b5b054b0b aarch64: userspace: bump the global number of available page tables
Each memory domain requires a few pages for itself.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione b52f769908 aarch64: mmu: Fix MMU permissions for zephyr code and data
User threads still need to access the code and the RO data. Fix the
permissions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre a74f378cdc aarch64: mmu: apply domain switching on all CPUs if SMP
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.

All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione ec70b2bc7a aarch64: userspace: Add support for page tables swapping
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Kumar Gala 7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed.  So remove
arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Carles Cufi 59a51f0e09 debug: Clean up thread awareness data sections
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-03-17 14:43:01 -05:00
Daniel Leung 0c540126c0 x86: gen_mmu: unify size display in hex
This unifies the display of all region sizes in hex.
Some of them are there to aid in figuring out the end of
a memory region, so it is easier if they are already in hex.

This also fixes the display of the address range, where the end
is off by one and should be (base + size - 1).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung c650721a0f x86: ia32: use virtual address for interrupt stack at boot
After the page table is loaded, we should be executing in the virtual
address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 9109fbb1a2 x86: ia32: load GDT in virtual memory after loading page table
This reverts commit d40e8ede8e.

This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andrew Boie 348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate instruction pointer
  transition, with logic to clean this up once completed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 03b413712a x86: gen_mmu: double map physical/virtual memory at top level
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to the next level page directory table
(PAE=y) or page tables (PAE=n) to identity map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings which are only used at boot. Following
patches will have code to actually unmap physical memory
during the boot process, so this avoids wasting some
memory.

Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung dd0748a979 x86: gen_mmu: use constants to refer to page level...
...instead of magic numbers. Makes it a tiny bit easier to
read code.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung b95b8fb075 x86: gen_mmu: allow more verbose messages
This allows specifying a second --verbose on the command line to
enable more messages. Two new ones have been added to aid
in debugging the code for mapping and setting permissions on
a single page.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 4bab992e80 x86: gen_mmu: consolidate map() and identity_map()
Consolidate map() and identity_map() as they are mostly
the same.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung e211d3a999 kernel: remove CONFIG_KERNEL_LINK_IN_VIRT
There actually is no need for a separate kconfig here, as
the kernel VM address and SRAM address can be used to figure
out if the kernel is linked in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 273a5e670b x86: remove usage of CONFIG_KERNEL_LINK_IN_VIRT
There is no need to use this kconfig, as the phys-to-virt
offset is enough to figure out if the kernel is linked in
virtual address space in gen_mmu.py.

For code, use Z_VM_KERNEL instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung d39012a590 x86: use Z_MEM_*_ADDR instead of Z_X86_*_ADDR
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Nicolas Pitre f062490c7e aarch64: mmu: add TLB flushing on mapping changes
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if more fine-grained TLB flushing is worth
the added complexity given this ought to be a relatively rare event.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Carlo Caione a010651c65 aarch64: mmu: Add initial support for memory domains
Introduce the basic support code for memory domains. To each domain
is associated a top page table which is a copy of the global kernel
one. When a partition is added, the corresponding memory range is made
private before its mapping is adjusted.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre c77ffebb24 aarch64: mmu: apply proper locking
We need to protect against concurrent modifications to page tables and
their use counts.

It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre e4cd3d4292 aarch64: mmu: code to split/combine page tables
Two scenarios are possible.

privatize_page_range:

Affected pages are made private if they're not already. This means a whole
new page branch starting from the top may be allocated and content
shared with the reference page tables, except for the private range
where content is duplicated.

globalize_page_range:

That's the reverse operation, where pages for the given range are shared
with the reference page tables and no-longer-needed pages are freed.

When changing a domain mapping the range needs to be privatized first.

When changing a global mapping the range needs to be globalized last.

This way page table sharing across domains is maximized and memory
usage remains optimal.
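
A sketch of the pair of operations (names from the text above,
signatures assumed):

    /* make [virt, virt+size) private to this domain's tables,
     * duplicating only the affected branch of the tree
     */
    static int privatize_page_range(struct arm_mmu_ptables *ptables,
                                    uintptr_t virt, size_t size);

    /* share [virt, virt+size) with the reference tables again,
     * freeing the now-unneeded private tables
     */
    static void globalize_page_range(struct arm_mmu_ptables *ptables,
                                     uintptr_t virt, size_t size);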

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre 402636153d aarch64: mmu: factor out table expansion code
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Eugeniy Paltsev 8165f3ad80 ARC: cleanup instruction cache initialization
As of today, during Zephyr startup we
 - invalidate I$
 - disable I$
 - enable I$

Given that we don't need to have the I$ disabled during any
initialization period, and ARC processors have caches enabled
after reset, the I$ disabling/enabling is excessive, so we can
drop it.

By that we also align the I$ initialization on ARC with other
projects like U-Boot and the Linux kernel.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-12 18:29:07 -05:00
Watson Zeng 5c3e7e3cb7 arch: arc: remove ARCH_HAS_STACK_PROTECTION for ARC_MPU_VER 2
As we have removed MPU_STACK_GUARD for ARC_MPU_VER 2, we also
need to remove ARCH_HAS_STACK_PROTECTION for boards with
ARC_MPU_VER 2 and no hardware stack checking; see the related
commit (arch: arc: remove MPU_STACK_GUARD for ARC_MPU_VER 2)
in pull request #24021.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-11 08:57:01 -05:00
Daniel Leung 6cac92ad52 x86: remove CONFIG_CPU_MINUTEIA
Since the removal of Quark-based boards, there are no users of
Minute-IA. Also, the generic x86 SoC is not exactly Minute-IA,
so change it to use the fairly safe CPU_ATOM.

Fixes #14442

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-11 06:37:02 -05:00
Peng Fan b4f5b9e237 aarch64: reset: initialize CNTFRQ_EL0 in the highest EL
This register can only be written at the highest Exception level implemented.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.

Also move z_arm64_el_highest_plat_init to be called when is_el_highest is true.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-11 12:24:18 +01:00
Carlo Caione dacd176991 aarch64: userspace: Implement syscalls
This patch adds the code managing the syscalls. The privileged stack
is set up before jumping into the real syscall.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Nicolas Pitre f2995bcca2 aarch64: arch_buffer_validate() implementation
This leverages the AT (address translation) instruction to test for
given access permission. The result is then provided in the PAR_EL1
register.

Thanks to @jharris-intel for the suggestion.
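
A sketch of the probing idea (an unprivileged-read check only; the
real routine also covers write access and other permutations):

    static bool user_can_read(const void *addr)
    {
        uint64_t par;

        __asm__ volatile ("at s1e0r, %0" : : "r" (addr));
        __asm__ volatile ("isb");
        __asm__ volatile ("mrs %0, par_el1" : "=r" (par));

        return (par & 1) == 0;  /* PAR_EL1.F == 0: translation OK */
    }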

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 9ec1c1a793 aarch64: userspace: Introduce arch_user_string_nlen
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7a3e800bf aarch64: fatal: Restrict oops-es when in user-mode
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 6978160427 aarch64: userspace: Introduce arch_is_user_context
The arch_is_user_context() function is relying on the content of the
tpidrro_el0 register to determine whether we are in user context or not.

This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.
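
A sketch following the description above (the flag polarity is taken
from the text):

    static inline bool arch_is_user_context(void)
    {
        uint64_t tp;

        __asm__ volatile ("mrs %0, tpidrro_el0" : "=r" (tp));
        return tp == 0;     /* '0' while user threads run in EL0 */
    }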

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione 6cf0d000e8 aarch64: userspace: Introduce skeleton code for user-threads
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione a7d3d2e0b1 aarch64: fatal: Add arch_syscall_oops hook
Add the arch_syscall_oops hook for the AArch64.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
James Harris 4e1926d508 arch: aarch64: do EL2 init in EL3 if necessary
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.

This in particular causes problems when different
cores have different virtual timer offsets.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-10 06:50:36 -05:00
Carlo Caione 8388794c9b aarch64: Rename z_arm64_get_cpu_id macro
z_arm64_* prefix should not be used for macros. Rename it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Carlo Caione bdbe33b795 aarch64: Rework {inc,dec}_nest_counter
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.

The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.

The second problem is that, being a macro, the clobbered registers
should be specified at the calling site; this is not possible given the
current implementation.

To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Erwan Gouriou 19314514e6 arch/arm: cortex_m: Disable DWT based null-pointer exception detection
Null-pointer exception detection using DWT is currently incompatible
with the current openocd runner default implementation, which leaves
debug mode on by default.
As a consequence, on all targets that use the openocd runner,
null-pointer exception detection using DWT will generate an assert,
causing all tests to fail on such platforms.

Disable this until openocd behavior is fixed (#32984) and enable
the MPU based solution for now.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2021-03-08 19:19:14 -05:00
Andy Ross ae4f7a1a06 arch/xtensa: Remember to spill windows in arch_cohere_stacks()
When we reach this code in interrupt context, our upper GPRs contain a
cross-stack call that may still include some registers from the
interrupted thread.  Those need to go out to memory before we can do
our cache coherence dance here.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross b28da4a3b7 arch/xtensa: Invalidate bottom of outbound stacks
Both new thread creation and context switch had the same mistake in
cache management: the bottom of the stack (the "unused" region between
the lower memory bound and the live stack pointer) needs to be
invalidated before we switch, because otherwise any dirty lines we
might have left over can get flushed out on top of the same thread on
another CPU that is putting live data there.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 64cf33952d arch/xtensa: Add non-HAL caching primitives
The Xtensa L1 cache layer has straightforward semantics accessible via
single-instructions that operate on cache lines via physical
addresses.  These are very amenable to inlining.

Unfortunately the Xtensa HAL layer requires function calls to do this,
leading to significant code waste at the calling site, an extra frame
on the stack and needless runtime instructions for situations where
the call is over a constant region that could elide the loop.  This is
made even worse because the HAL library is not built with
-ffunction-sections, so pulling in even one of these tiny cache
functions has the effect of importing a 1500-byte object file into the
link!

Add our own tiny cache layer to include/arch/xtensa/cache.h and use
that instead.
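
A sketch of one such inlinable primitive (a writeback over a range;
the 32-byte line size is an assumption for illustration):

    #define DCACHE_LINE 32

    static inline void dcache_writeback(const void *addr, size_t size)
    {
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(DCACHE_LINE - 1);
        uintptr_t end = (uintptr_t)addr + size;

        for (; p < end; p += DCACHE_LINE) {
            __asm__ volatile ("dhwb %0, 0" : : "r" (p) : "memory");
        }
    }

For a constant region the compiler can fully unroll or even elide the
loop, which a HAL function call can never do.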

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross d0c538e9a2 arch/xtensa: Add an arch-internal README on register windows
Back when I started work on this stuff, I had a set of notes on
register windows that slowly evolved into something that looks like
formal documentation.  There really isn't any overview-style
documentation of this stuff on the public internet, so it couldn't
hurt to commit it here for posterity.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross a230fafde5 arch/xtensa: soc/intel_adsp: Rework MP code entry
Instead of passing the crt1 _start function as the entry code for
auxiliary CPUs, use a tiny assembly stub which can avoid the
runtime testing needed to skip the work in _start.  All the crt1 code
was doing was clearing BSS (which must not happen on a second CPU) and
setting the stack pointer (which is wrong on the second CPU).

This allows us to clean out the SMP code in crt1.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 613594e68c soc/intel_adsp: Use the correct MP stack pointer
The kernel passes the CPU's interrupt stack expecting that the CPU
will start on it, so do that. Pass the initial stack pointer from the
SOC layer in the variable "z_mp_stack_top" and set it in the assembly
startup before calling z_mp_entry().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross 820c94e5dd arch/xtensa: Inline atomics
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions.  That's needlessly slow, given that the low
level primitives are a two-instruction sequence.  Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.

There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap.  That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).

Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.
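
A sketch of the primitive pair (SCOMPARE1 holds the compare value;
s32c1i stores only on match and always returns the old memory value):

    static inline int32_t cas_sketch(int32_t *addr, int32_t cmp,
                                     int32_t newval)
    {
        __asm__ volatile ("wsr %1, SCOMPARE1; s32c1i %0, %2, 0"
                          : "+r" (newval)
                          : "r" (cmp), "r" (addr)
                          : "memory");
        return newval;  /* equals cmp iff the swap happened */
    }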

This patch also contains a little bit of refactoring: the scheme has
been split out into a separate header for each implementation, and the
ATOMIC_OPERATIONS_CUSTOM kconfig has been renamed to
ATOMIC_OPERATIONS_ARCH to better capture what it means.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross eb1ef50b6b arch/xtensa: General cleanup, remove dead code
There was a bunch of dead historical cruft floating around in the
arch/xtensa tree, left over from older code versions.  It's time to do
a cleanup pass.  This is entirely refactoring and size optimization,
no behavior changes on any in-tree devices should be present.

Among the more notable changes:

+ xtensa_context.h offered an elaborate API to deal with a stack frame
  and context layout that we no longer use.

+ xtensa_rtos.h was entirely dead code

+ xtensa_timer.h was a parallel abstraction layer implementing in the
  architecture layer what we're already doing in our timer driver.

+ The architecture thread structs (_callee_saved and _thread_arch)
  aren't used by current code, and had dead fields that were removed.
  Unfortunately for standards compliance and C++ compatibility it's
  not possible to leave an empty struct here, so they have a single
  byte field.

+ xtensa_api.h was really just some interrupt management inlines used
  by irq.h, so fold that code into the outer header.

+ Remove the stale assembly offsets.  This architecture doesn't use
  that facility.

All told, more than a thousand lines have been removed.  Not bad.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Peng Fan e27c9c7c52 arch: arm64: select SCHED_IPI_SUPPORTED when SMP enabled
Select SCHED_IPI_SUPPORTED when SMP enabled.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan a2ea20dd6d arch: arm: aarch64: add SMP support
With timer/gic/cache support added, we can add SMP support and
bring up the secondary cores.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 14b9b752be arch: arm: aarch64: add arch_dcache_range
Add arch_dcache_range to support flush and invalidate

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan e10d9364d0 arch: arm64: irq/switch: accessing nested using _cpu_t
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first cpu. Since we are going to support SMP, we need
to access nested on a per-cpu basis.

To get the current cpu, introduce z_arm64_curr_cpu for asm usage,
because arch_curr_cpu cannot be used from asm code.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 251b1d39ac arch: arm: aarch64: export z_arm64_mmu_init for SMP
Export z_arm64_mmu_init for SMP usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan 6182330fc3 arm: core: aarch64: save switch_handle
Save old_thread to switch_handle for wait_for_thread usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Ioannis Glaropoulos 191c3088af arm: cortex_m: fix arguments to dwt_init() function
Fix the call to z_arm_dwt_init(), remove the NULL argument.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-05 18:13:22 -06:00
Katsuhiro Suzuki e58e2767f8 arch: riscv: add common stub reboot function
This patch adds a weak sys_arch_reboot() function to avoid a build
error with CONFIG_REBOOT=y. Some SoCs already have their own reboot
function, but others (e.g. qemu boards) faced a build error.

- openisa_rv32m1: no change
- riscv-ite: do nothing; remove it and use the arch/riscv function

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-04 11:09:51 -06:00
Carlo Caione 9d908c78fa aarch64: Rewrite reset code using C
There is no strict reason to use assembly for the reset routine. Move as
much code as possible to C code using the proper helpers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione bba7abe975 aarch64: Use helpers instead of inline assembly
No need to rely on inline assembly when helpers are available.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione a2226f5200 aarch64: Fix registers naming in cpu.h
The name for registers and bit-field in the cpu.h file is incoherent and
messy. Refactor the whole file using the proper suffixes for bits,
shifts and masks.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Daniel Leung 90722ad548 x86: gen_idt: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 685e6aa2e4 x86: gen_mmu: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 8cfdd91d54 x86: ia32/fatal: be explicit on pointer math with _df_tss.cr3
For some unknown reason, the pagetable address for _df_tss.cr3
did not get translated from virtual to physical. However,
the translation is done if the pointer to the pagetable is obtained
through a reference to the first array element (instead of simply
through the name of the array). Without CR3 pointing to the page
table via its physical address, double fault handling does not work.
So fix this by being explicit with the page table pointer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung fa6d7cecb5 x86: mmu/mem_domain: don't translate address before null check
When adding a new thread to a memory domain, there is a NULL check
to figure out if a thread is being migrated to another memory
domain. However, the NULL check is AFTER the physical-to-virtual
address translation, which means (NULL + offset) != NULL anymore.
This results in calling reset_region() with an invalid page table
pointer. Fix this by doing the NULL check before address
translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung ee3d345c09 x86: mmu: reserve more space for page table if linking in virt
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for the page table
to accommodate this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 9ce77abf23 x86: ia32: jump to virtual address before calling z_x86_prep_c
We have been having the assumption that the physical memory
is identity-mapped to virtual address space. However, with
the ability to set CONFIG_KERNEL_VM_BASE separately from
CONFIG_SRAM_BASE_ADDRESS, the assumption is no longer valid.
This changes the boot code in x86 32-bit, so that once
the page table is loaded, we can proceed with executing in
the virtual address space. So do a long jump to virtual
address just before calling z_x86_prep_c. From this point on,
code execution is in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 991300e1ba x86: gen_mmu: also map SRAM if linking in virtual address space
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So identity map the kernel in SRAM.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung d40e8ede8e x86: gen_gdt: add address translation if needed
When the kernel is mapped into a virtual address space
that is different from the physical address space,
the dynamic GDT generation uses the virtual addresses.
However, the GDT table is required at boot before the
page table is loaded, when the virtual addresses are
invalid. So make sure GDT generation is using
physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 7a51aab397 x86: gen_mmu: add address translation if needed
There is an assumption made in the page table generation code
that the kernel would occupy the same physical and virtual
addresses. However, we may want to map the kernel into
a virtual address space which differs from kernel's physical
address space. For example, with demand paging enabled on
kernel code and data, we can accommodate kernel that is
larger than physical memory size, and may want to utilize
a bigger virtual address space. So add address translation
in the gen_mmu.py script for this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung a1afe9be5e x86: ia32: do virtual address translation at boot
This adds virtual address translation to a few variables
used in crt0.S. This is needed as they are linked at
virtual addresses but before page table is loaded,
they are not available at virtual addresses and must be
referred via physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung bbe4b39f8d x86: mmu: cast to uintptr_t for page table using Z_X86_PHYS_ADDR
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler would complain about an array
subscript out of bounds when linking in virtual address space. So cast it
into uintptr_t first before translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Nicolas Pitre 0c45b548e2 aarch64: rationalize exception entry/exit code
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routines, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.

To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as this is the most logical location to go with its
z_arm64_enter_exc counterpart.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-03 16:26:40 +03:00
Shubham Kulkarni 26efef4f94 arch: xtensa: Fix backtrace from ISR
a0 is used as a scratch register. Restore the value of a0 (the return
address) from the stack frame before spilling registers onto the stack.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-03-03 13:02:57 +01:00
Ioannis Glaropoulos f1a27a8189 arm: cortex_m: assert if DebugMonitor exc is enabled in debug mode
Assert if the null pointer de-referencing detection (via DWT) is
enabled when the processor is in debug mode, because the debug
monitor exception can not be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core be in normal mode.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 77c76a3b79 arm: cortex_m: build time assert for null-pointer exception page size
We introduce build time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to catch that the user-supplied value has, as requested
by the Kconfig symbol specification, a power of 2 value.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions,
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.
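
A sketch of the power-of-2 build-time check (the exact macro and
message differ):

    #define NP_PAGE CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE

    BUILD_ASSERT((NP_PAGE & (NP_PAGE - 1)) == 0,
                 "the null-pointer page size must be a power of 2");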

We also add a run-time ASSERT for the MPU-based
implementation in ARMv8-M platforms, which require
that the null pointer exception detection page is
already mapped by the MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 1db78aae73 arm: cortex_m: ensure DebugMonitor is targeting Secure domain
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non-Secure domain.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 1b22f6b8c8 arm: cortex_m: enable null-pointer exception detection in the tests
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on qemu_cortex_m3
board, as DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU
based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but the QEMU still permits
read access.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos d86d2c6f65 arm: cortex_m: implement null pointer exception detection with MPU
Implementation for null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming an MPU to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with a
No-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution can
not be used (ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 66ef96fded arm: cortex_m: add vector table padding for null pointer detection
Padding inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 0bac92db96 arm: cortex-m: null pointer detection additions for ARMv8-M
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 3054c1351a arm: cortex_m: null-pointer exception detection via DWT
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos f97ccd940c arm: cortex-m: build debug.c for null-pointer detection feature
When we enable the null pointer exception feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace unit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos c42a8d9d24 arm: cortex_m: fault: hook up debug monitor exception handler
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
  is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
  exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos 712a7951db arm: cortex_m: move static inline DWT functions in internal header
Move the DWT utility functions, currently present in timing.c,
into an internal Cortex-M header.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos b3cd5065eb arm: cortex_m: Kconfig symbols for null pointer detection feature
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Carlo Caione eb72b2d72a aarch64: smccc: Retrieve up to 8 64-bit values
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from a
SMC64 call from AArch64 state.

Extend the number of possible return values from 4 to 8.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
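Conceptually the result block grows from four to eight 64-bit slots, one per return register x0-x7. A sketch with illustrative names, not necessarily Zephyr's exact struct:

    #include <stdint.h>

    /* one slot per AArch64 return register x0..x7 */
    struct smccc_result {
        uint64_t a0, a1, a2, a3;   /* the original four return values */
        uint64_t a4, a5, a6, a7;   /* the additional returns TF-A allows */
    };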
Carlo Caione bc7cb75a82 aarch64: smccc: Use offset macros
Instead of relying on hardcoded offsets in the assembly code, introduce
offset macros to make the code clearer.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
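One common way to generate such offsets is offsetof() on the result structure, so the assembly stores through named constants instead of magic numbers. A sketch with illustrative names:

    #include <stddef.h>
    #include <stdint.h>

    struct smccc_result {
        uint64_t a0, a1, a2, a3, a4, a5, a6, a7;
    };

    /* named offsets for the assembly, instead of hardcoded #0x10 etc. */
    #define SMCCC_RES_A0_OFFSET offsetof(struct smccc_result, a0)
    #define SMCCC_RES_A2_OFFSET offsetof(struct smccc_result, a2)
    #define SMCCC_RES_A4_OFFSET offsetof(struct smccc_result, a4)
    #define SMCCC_RES_A6_OFFSET offsetof(struct smccc_result, a6)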
Carlo Caione 998856bacb aarch64: smccc: Update specs link
The link points to an outdated version. Update it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione 90859c6bf3 aarch64: smccc: Decouple PSCI from SMCCC
The current code is assuming that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be made available regardless of using PSCI or not.

For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc.

This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helpers support and export that to drivers that require it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Nicolas Pitre 443e3f519e arm64: mmu: initialize early
This is fundamental enough that it better be initialized ASAP.
Many other things get initialized soon afterwards assuming the MMU
is already operational.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 9461600c86 aarch64: mmu: rationalize debugging output
Make it into a generic call that can be used in various places.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b40a2fdb8b aarch64: mmu: fix common MMU mapping
The location of __kernel_ram_start is too far in, so the _app_smem
and .bss areas are not covered. Use _image_ram_start instead.

The location of __kernel_ram_end is also way too far. We should stop
at _image_ram_end, where the expected unmapped area starts.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre fb3de16f0c aarch64: mmu: use a range (start..end) for common MMU mapping
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre cb49e4b789 aarch64: mmu: invert the MT_OVERWRITE flag
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.

One such case is platform provided mappings. Apply them after the
common kernel mappings and use the MT_NO_OVERWRITE on them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 56c77118d3 aarch64: mmu: factor out the phys argument out of set_mapping()
Minor cleanup.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre f53bd24a4d aarch64: mmu: move get_region_desc() closer to usage points
Simple code tidiness.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre b696090bb7 aarch64: mmu: make page table pool global
There is no real reason for keeping page tables in separate pools.
Make the pool global, which allows for more efficient memory usage and
simplifies the code.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre 459bfed9ea aarch64: mmu: dynamic mapping support
Introduce a remove_map() to ... remove a mapping.

Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
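A sketch of the use-counted pool idea, with illustrative sizes and names; a table whose count drops back to zero becomes available to the allocator again:

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_TABLES     16
    #define ENTRIES_PER_PT 512   /* one 4 KiB table of 64-bit entries */

    static uint64_t pt_pool[NUM_TABLES][ENTRIES_PER_PT]
            __attribute__((aligned(4096)));
    static uint16_t pt_refcount[NUM_TABLES];

    static uint64_t *pt_alloc(void)
    {
        for (size_t i = 0; i < NUM_TABLES; i++) {
            if (pt_refcount[i] == 0U) {
                pt_refcount[i] = 1U;   /* the table itself counts as one */
                return pt_pool[i];
            }
        }
        return NULL;   /* pool exhausted */
    }

    static void pt_deref(uint64_t *table)
    {
        size_t i = ((uintptr_t)table - (uintptr_t)pt_pool)
                   / sizeof(pt_pool[0]);

        /* dropping back to zero frees the table for reuse by pt_alloc() */
        pt_refcount[i]--;
    }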
Nicolas Pitre 861f6ce2c8 aarch64: a few trivial assembly optimizations
Removed some instructions when possible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-25 10:35:37 -05:00
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Shih-Wei Teng 8912f549ce arch: riscv: Update the description of CONFIG_PMP_STACK_GUARD_MIN_SIZE
Update the units to bytes instead of words in its description, to
avoid confusion.

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-02-24 10:37:03 -05:00
Yuguo Zou a8b6936c7d arch: arc: fix mpu version number
The ARC MPU version used a wrong number, 3, which could cause
conflicts in the future. This commit fixes the version number to 4.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-02-24 08:57:35 -05:00
Ioannis Glaropoulos 8289b8c877 arch: arm: cortex_m: fix ASSERT expression in MemManage handler
We need to form the ASSERT expression inside the MemManage
fault handler, for the case where we are building without USERSPACE
and STACK GUARD support, in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support; that
is, we only assert if we came across a stacking error.

Data access violations can still occur even without user
mode or guards, e.g. when trying to write to Read-only
memory (such as the code region).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-23 11:29:49 +01:00
Andrei Emeltchenko 377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro to the
correct header, kernel_internal.h.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
Daniel Leung 2816c17a09 x86: allow linking in virtual address space
This adds the pieces to allow the kernel to be linked
in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung d340afd456 x86: use CONFIG_SRAM_OFFSET instead of CONFIG_X86_KERNEL_OFFSET
This changes x86 to use CONFIG_SRAM_OFFSET instead of the
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macros Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into a
virtual address space that does not have the same starting
physical address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
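With both offsets in place, the boot-time translation becomes symmetric. A sketch, with illustrative constants standing in for the Kconfig values:

    #include <stdint.h>

    /* illustrative stand-ins for the Kconfig values */
    #define SRAM_BASE        0x00000000UL
    #define SRAM_OFFSET      0x00100000UL   /* e.g. skip the first 1MB */
    #define KERNEL_VM_BASE   0xC0000000UL
    #define KERNEL_VM_OFFSET SRAM_OFFSET    /* keep both layouts congruent */

    /* distance between where the kernel runs and where it is loaded */
    #define VM_DELTA ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) \
                      - (SRAM_BASE + SRAM_OFFSET))

    #define BOOT_VIRT_TO_PHYS(v) ((uintptr_t)(v) - VM_DELTA)
    #define BOOT_PHYS_TO_VIRT(p) ((uintptr_t)(p) + VM_DELTA)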
Daniel Leung c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions. This simplifies the code
a bit, and we won't miss any future additions to these two
functions under x86 (as x86_64 was actually not clearing the
gcov bss area).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
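The two helpers amount to a memset() over .bss and a memcpy() of .data from its load address. A sketch, assuming illustrative linker-symbol names:

    #include <string.h>

    /* boundary symbols provided by the linker script; names illustrative */
    extern char __bss_start[], __bss_end[];
    extern char __data_load_start[], __data_start[], __data_end[];

    void early_bss_zero(void)
    {
        (void)memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));
    }

    void early_data_copy(void)
    {
        (void)memcpy(__data_start, __data_load_start,
                     (size_t)(__data_end - __data_start));
    }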
Daniel Leung dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung 78837c769a soc: x86: add Lakemont SoC
This adds a very basic SoC configuration for Intel Lakemont SoC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung 9a189da03b x86: add kconfig CONFIG_X86_MEMMAP
This adds a new kconfig to enable the use of a memory map.
This map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y or can be manually defined
via x86_memmap[].

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung c027494dba x86: add kconfig CONFIG_X86_PC_COMPATIBLE
This is a hidden option to indicate we are building for
PC-compatible devices (which come with BIOS, ACPI, etc.
as standard).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Carlo Caione 3f055058dc aarch64: Remove QEMU 'wfi' issue workaround
The problem is not reproducible when CONFIG_QEMU_ICOUNT=n. We can now
revert the commit aebb9d8a45.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-19 16:26:38 +03:00
Nicolas Pitre 7a91cf0176 Revert "lib/os/heap: introduce option to force big heap mode"
This reverts commit b6b6d39bb6.

With both commit 4690b8d5ec ("libc/minimal: fix malloc() allocated
memory alignment") and commit c822e0abbd ("libc/minimal: fix
realloc() allocated memory alignment") in place, there is no longer
a need for enforcing the big heap mode on every allocation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-19 07:32:22 -05:00
Martin Åberg 88f478108d sparc: write through switched_from in arch_switch()
Write through switched_from in arch_switch() as required by the
switch protocol.

Also restructure the implementation to better match the template in
kernel_arch_interface.h, by removing a wrapper routine and instead
use CONTAINER_OF().

Fixes #32197

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-02-17 06:35:03 -05:00
Carlo Caione fadbe9d2f2 arch: aarch64: Add XIP support
Add the missing pieces to enable XIP for AArch64. XIP can be
simulated in QEMU using the '-bios' parameter.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-17 14:13:10 +03:00
Daniel Leung 32b70bb7b5 x86: multiboot: map memory before accessing if necessary
Before accessing the multiboot data passed by the bootloader,
we need to map the memory first. This adds the code to map
the memory if necessary.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-16 19:08:55 -05:00
Tomasz Bursztyka 5e4e0298e9 arch/x86: Generalize cache manipulation functions
We assume that all x86 CPUs have the clflush instruction,
and the cache line size is now provided through DTS.

So detecting the clflush instruction as well as the cache line size
at runtime is no longer required, and that code is removed.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-02-15 09:43:30 -05:00
Daniel Leung 5c649921de x86: add kconfigs and compiler flags for MMX and SSE*
This adds kconfigs and compiler flags to support MMX and SSE*
instructions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung ce44048d46 x86: rename CONFIG_SSE* to CONFIG_X86_SSE*
This adds the X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there, marked as
deprecated.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung 23a9a3234b x86: correct compiler flags for SSE
It is possible to enable SSE without using SSE for floating
point, so fix the compiler flags.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Carlo Caione b27bca4b45 aarch64: mmu: Remove SRAM memory region
Now that the arch_mem_map() is actually working correctly we can remove
the big SRAM region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-15 08:07:55 -05:00
Andy Ross 746c65acb7 soc/intel_adsp: Move KERNEL_COHERENCE to cavs15
Only the CAVS 1.5 linker script has full support for the coherence
features, so don't advertise it on the other SoCs yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Anas Nashif 5d1c535fc8 license: add missing SPDX headers
Add an SPDX header to files with an existing license.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif 1cea902fad license: add missing SPDX headers
Add missing SPDX header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif 67d290540e xtensa: remove unused script
While fixing license headers, this script was identified as an orphan
that is not used anywhere, so remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Carlo Caione d6316aae27 aarch64: Fix corrupted IRQ state when tracing enabled
The call to sys_trace_idle() is potentially clobbering x0 resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
2021-02-10 10:16:03 -05:00
Daniel Leung 4daa2cb6cf x86: mark page frame as reserved according to memory map
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping, so we mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions, which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot-provided data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 11:42:28 -05:00
Ioannis Glaropoulos 8bc242ebb5 arm: cortex-m: add extra stack size for test build with FPU_SHARING
Additional stack for tests when building with FPU_SHARING
enabled is required, because the option may increase ESF
stacking requirements for threads.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-05 11:41:25 -05:00
Daniel Leung 5a11caba33 xtensa: fix rsr/wsr assembly for XCC
XCC doesn't like the "rsr.<reg name>" style of assembly,
so change it to the other style.

Also, XCC doesn't like _CONCAT() with the EPC/EPS
registers, so we need to spell them all out.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 07:45:07 -05:00
Daniel Leung 92c93b1b7f xtensa: fix hard-coded interrupt value for PS register
There is a hard-coded value of PS_INTLEVEL(15) to set the PS
register. The correct way is actually to use XCHAL_EXCM_LEVEL
with PS_INTLEVEL() to set up the register. So fix it.

Fixes #31858

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-04 20:58:56 -05:00
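The fix boils down to deriving the initial PS value from the core's configured EXCM level rather than hardcoding 15. A sketch, with the PS bit encodings per the Xtensa ISA and an illustrative EXCM level (the real one comes from the per-core HAL):

    #include <stdint.h>

    /* Xtensa PS register fields (per the ISA) */
    #define PS_INTLEVEL(x) ((x) & 0xFu)   /* bits 3..0 */
    #define PS_UM          (1u << 5)      /* user vector mode */
    #define PS_WOE         (1u << 18)     /* window overflow enable */

    /* normally supplied by the Xtensa HAL; 3 is purely illustrative */
    #define XCHAL_EXCM_LEVEL 3

    /* before the fix: PS_INTLEVEL(15) masked every interrupt level */
    static const uint32_t ps_init =
            PS_INTLEVEL(XCHAL_EXCM_LEVEL) | PS_UM | PS_WOE;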
Andy Ross cce5ff1510 arch/x86: Fix stack alignment for user threads
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code.  That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes.  The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.

The end result was a misaligned stack, which is surprisingly robust
for most use.  But recent toolchains have started doing some more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.

Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases.  We should see the raw stack boundaries where we are
setting up RSP values.  Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.

Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.

Fixes #31018

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-03 18:45:48 -05:00
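The invariant being restored: on entry to an ABI-compatible C function, RSP % 16 == 8, because the CALL that got there pushed an 8-byte return address onto a 16-byte-aligned stack. A sketch of preparing such an entry stack (helper name illustrative):

    #include <stdint.h>

    /* return an initial RSP such that, as if a CALL had just happened,
     * RSP % 16 == 8 at the function's first instruction */
    static inline uintptr_t entry_stack_pointer(uintptr_t stack_top)
    {
        stack_top &= ~(uintptr_t)15;   /* 16-byte align the boundary... */
        return stack_top - 8;          /* ...then mimic the pushed
                                        * 8-byte return address */
    }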
Ioannis Glaropoulos ef926e714b arm: cortex_m: fix vector table relocation in non-XIP builds
When VTOR is implemented on the Cortex-M SoC, we can
basically use any address (properly aligned) for the
vector table starting address. We fix the setting of
VTOR in prep_c.c for non-XIP images, in this commit,
so we do not need to always have the vector table be
present at the start of RAM (CONFIG_SRAM_BASE_ADDRESS)
and allow for extra linker sections being placed before
the vector table section.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 10:44:17 -05:00
Ioannis Glaropoulos 73288490f6 arm: cortex_m: log EXC_RETURN value in fatal.c
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos cafe04558c arm: cortex_m: make lazy FP stacking enabling dynamic
Under FPU sharing mode, any thread is allowed to generate
a floating point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with the K_FP_REGS
option when they are created.

When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.

If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.

To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS, and have
not generated an FP context, use a default MPU guard and disable
lazy stacking. As long as the threads do not have an active FP
context, they won't stack FP registers anyway on ISRs and
exceptions, while they will benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, where the threads that have an active FP
context will be tagged with K_FP_REGS, enable lazy stacking,
and program a wide MPU guard.

The implementation is a tradeoff between performance (ISR
latency) and memory consumption.

Note that when MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
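The dynamic part of this boils down to flipping FPCCR.LSPEN at context switch time. A sketch against the architectural FPCCR address and LSPEN bit, with an illustrative helper name:

    #include <stdbool.h>
    #include <stdint.h>

    #define FPCCR       (*(volatile uint32_t *)0xE000EF34)
    #define FPCCR_LSPEN (1u << 30)   /* lazy state preservation enable */

    /* called at context switch: threads tagged K_FP_REGS get lazy
     * stacking plus the wide MPU guard; the rest get the small guard */
    static void fp_lazy_stacking_set(bool enable)
    {
        if (enable) {
            FPCCR |= FPCCR_LSPEN;
        } else {
            FPCCR &= ~FPCCR_LSPEN;
        }
    }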
Ioannis Glaropoulos 86c1b57103 arm: cortex_m: select by default FP sharing mode when using the FPU
For applications that make use of the FPU on Cortex-M,
we enforce the FPU shared-registers mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create an FP context in any thread,
so the unshared-registers mode is not practically
supported.

In addition to that we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside the normal multi-threaded builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 2642eb28bf arm: cortex_m: force FP context stacking by default
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos 56dd787627 arm: cortex_m: skip clearing CONTROL if this is done at boot
If clearing the CONTROL register is done in reset.S, we can skip
clearing CONTROL.FPCA when enabling the floating point
support, to save a few instructions. The CONTROL
register is cleared right after boot, if the symbol
CONFIG_INIT_ARCH_HW_AT_BOOT is enabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Daniel Leung 92c12d1f82 toolchain: add GEN_ABSOLUTE_SYM_KCONFIG()
This adds a new GEN_ABSOLUTE_SYM_KCONFIG() specifically for
generating absolute symbols in assembly for kconfig values.
This is needed as the existing GEN_ABSOLUTE_SYM() with
constraints in extended assembly parses the "value" as a
signed 32-bit integer. An unsigned 32-bit integer with the
MSB set results in a negative number in the final binary.
This also prevents integers larger than 32 bits. So this
new macro simply puts the value inline within the assembly
instruction instead of having it as a parameter.

Fixes #31562

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-02 09:23:45 -05:00
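The trick is to stringify the value into the asm template itself, so the assembler sees a literal rather than a signed 32-bit operand. A sketch with illustrative macro names, not the exact Zephyr macro:

    /* two-level expansion so Kconfig macros are expanded before # */
    #define STRINGIFY_(x) #x
    #define STRINGIFY(x)  STRINGIFY_(x)

    /* emit an absolute assembler symbol with the value spelled out
     * inline, so it is never parsed as a signed 32-bit operand */
    #define GEN_ABS_SYM(name, value)                  \
            __asm__(".globl " #name "\n"              \
                    ".set " #name ", " STRINGIFY(value))

    /* e.g. GEN_ABS_SYM(K_EXAMPLE_SYM, 0x80000000); keeps the MSB intact */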
Daniel Leung dea8fccfb3 x86: clear GS at boot for x86_64
On Intel processors, if GS is not zero and is set to
zero, GS_BASE is also set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid accidentally
clearing GS_BASE, simply set GS to 0 at boot, so any subsequent
clearing of GS will not clear GS_BASE.

The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the tests
hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly 5,000+ times without hanging.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-01 21:38:28 -05:00
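The boot-time fix itself is a single segment-register write; a sketch in GNU inline assembly (helper name illustrative):

    /* zero GS once at boot; afterwards, writing 0 to an already-zero GS
     * cannot silently zero GS_BASE on Intel processors */
    static inline void clear_gs(void)
    {
        __asm__ volatile ("movw %w0, %%gs" :: "r"(0) : "memory");
    }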
Nicolas Pitre 7fcf5519d0 aarch64: mmu: cleanups and fixes
Major changes:

- move related functions together
- optimize add_map() not to walk the page tables *twice* on
  every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails

and make the code clearer overall.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-28 20:24:30 -05:00
Daniel Leung 0d099bdd54 linker: remove asterisk from IRQ/ISR section name macro
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with an asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in the linker script, and keep
the original names asterisk-free.

Fixes #29936

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-26 16:24:11 -05:00
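The split leaves C code with a plain section name while only the linker script uses the globbed form. A sketch with illustrative names:

    /* plain name, safe in __attribute__((section(...))) on old GCC */
    #define IRQ_VECTOR_TABLE_SECTION_NAME ".irq_vector_table"

    /* globbed form, for the linker script only */
    #define IRQ_VECTOR_TABLE_SECTION_PATTERN \
            IRQ_VECTOR_TABLE_SECTION_NAME "*"

    /* example placement from C code */
    static const void *vt_entry
            __attribute__((section(IRQ_VECTOR_TABLE_SECTION_NAME))) = 0;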
Andrew Boie 6b58e2c0a3 x86: use large VM size if ACPI
We've already enabled full RAM mapping if ACPI is enabled; also
set a large 3GB address space size. These systems are not RAM-
constrained (they are PC platforms) and they have large MMIO
config spaces for PCIe.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-26 16:21:50 -05:00
Stephanos Ioannidis f769a03081 arch: arm: aarch32: Fix interrupt nesting
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.

This commit fixes the aforementioned bug by storing the lr_svc register
in the stack at the ISR entry, and restoring its value before exiting
the ISR.

For more details, refer to the issue #30517.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis c00169daba arch: arm: aarch32: Fix exception exit failures
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:

1. Invalid return address when calling `z_arm_pendsv` from the
   exception-specific mode

2. Caller-saved register is referenced after a call to `z_arm_pendsv`

For more details, refer to the issue #31511.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis d86fdb2154 arch: arm: aarch32: Update stale references to _IntExit
This commit updates the stale references to the `_IntExit` function in
the in-line documentation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Volodymyr Babchuk cd86ec2655 aarch64: add ability to generate image header
The image header is compatible with the Linux AArch64 boot protocol,
so Zephyr can be booted by U-Boot or the Xen loader.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
2021-01-24 13:59:55 -05:00
Yonatan Schachter 7191b64c6f gen_isr_tables: Added check of the IRQ num before accessing the vt
In its current state, the script tries to access the vector table
list without first checking that the index is valid. This can
cause the script to crash without a descriptive message.
The index can be invalid if an IRQ number larger than
the maximum allowed by the SoC is used.
This PR adds a check of that index, which exits with an error
message if the index is invalid.

Fixes #29809

Signed-off-by: Yonatan Schachter <yonatan.schachter@gmail.com>
2021-01-24 10:12:54 -05:00
Martin Åberg b6b6d39bb6 lib/os/heap: introduce option to force big heap mode
This option allows forcing big heap mode. Useful for getting 8-byte
aligned blocks on 32-bit machines.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-24 10:11:11 -05:00
Andrew Boie 77861037d9 x86: map all RAM if ACPI
ACPI tables can lurk anywhere. Map all memory so they can be
read.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie ed22064e27 x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 56a9e7b91e arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie b0b7756756 x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 5c47bbc501 x86: only map the kernel image
The policy is changed and we no longer map all page frames.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 893822fbda arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie f3e9b61a91 x86: reserve the first megabyte
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 69355d13a8 arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Shubham Kulkarni 8b7da334d5 arch: xtensa: Print backtrace from panic handler
This change uses the stack frame to print a backtrace once an
exception occurs. Printing a backtrace helps to identify the
cause of the exception.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-01-23 08:43:10 -05:00
Kumar Gala 895277f909 x86: Fix zefi.py creating valid images
When zefi.py was changed to pass the compiler and objcopy, the flag
to objcopy for the EFI target was dropped.  This is because the current
SDK (0.12.1) doesn't support that target type for objcopy.  However,
the target is necessary for the images to be created correctly and boot.

Switch back to using the host objcopy as a stop-gap fix, until the SDK
can support the EFI target.

Fixes: #31517

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-01-22 12:41:27 -05:00
Daniel Leung 4e8abfcba7 x86: use TSC for timing information
This changes the timing functions to use the TSC to gather
timing information, instead of the timer used for scheduling,
as the TSC provides higher resolution.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-22 11:05:30 -05:00
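The underlying primitive is the rdtsc instruction, which returns the 64-bit cycle count split across EDX:EAX. A sketch (helper name illustrative):

    #include <stdint.h>

    /* read the 64-bit time stamp counter */
    static inline uint64_t tsc_read(void)
    {
        uint32_t lo, hi;

        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }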
Anas Nashif 6f61663695 Revert "arch: add KERNEL_VM_OFFSET"
This reverts commit fd2434edbd.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 4422b1d376 Revert "x86: reserve the first megabyte"
This reverts commit 51e3c9efa5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 34e9c09330 Revert "arch: remove KERNEL_RAM_SIZE"
This reverts commit 73561be500.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 83d15d96e3 Revert "x86: only map the kernel image"
This reverts commit 3660040e22.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif e980848ba7 Revert "x86: pre-allocate address space"
This reverts commit 64f05d443a.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 0f24e09bcf Revert "arch: add CONFIG_DEMAND_PAGING"
This reverts commit 48cc63b4a3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif adff757c72 Revert "x86: implement demand paging APIs"
This reverts commit 7711c9a82d.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Daniel Leung d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Andrew Boie 7711c9a82d x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 48cc63b4a3 arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00