Commit graph

5846 commits

Iuliana Prodan
a6364da1a3 arch: xtensa: add workaround for small vector table entries
For some platforms, like NXP's IMX8 or Mediatek's MT8195,
the size of an interrupt vector table entry is 0x1C bytes,
less than usual (0x30 for Intel's platforms).
So, the interrupt handlers don't fit in the vector table
entries.

I've added a small indirection to bypass this size
constraint and moved the default handlers to the end
of the vector table, renaming them to
_Level\LVL\()VectorHelper.
For this, I've added a generic configuration -
XTENSA_SMALL_VECTOR_TABLE_ENTRY.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
2021-09-10 10:59:44 -04:00
Alexandre Bourdiol
23c0e16782 arch: arm: core: aarch32: fix regression introduced with Cortex-R
Regression introduced on ARMV6_M_ARMV8_M_BASELINE by Cortex-R PR #28231
Fixes #38421

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2021-09-09 19:49:37 -04:00
Wolfgang Reißnegger
535fc38fe7 riscv: Don't reschedule on back-to-back interrupts
In some cases the 'reschedule' code path is executed when the current
thread is the same as the next thread in the ready Q. If this happens,
the swap_return_value of the thread is falsely being reset to -EAGAIN.

This commit prevents the rescheduling code from running if the current
thread is the same as the next thread in the ready Q.

Signed-off-by: Wolfgang Reißnegger <gnagflow@fb.com>
2021-09-03 12:20:03 -04:00
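A minimal C sketch of the guard this change describes; the thread type and helper name below are illustrative placeholders, not the actual RISC-V assembly.

  #include <errno.h>

  /* Illustrative stand-in for the thread structure; the real fix operates
   * on Zephyr's struct k_thread from RISC-V assembly. */
  struct thread {
          int swap_return_value;
  };

  /* Only run the reschedule path when the next ready thread differs from
   * the current one; otherwise swap_return_value would be falsely reset. */
  void maybe_reschedule(struct thread *current, struct thread *next)
  {
          if (next == current) {
                  return; /* back-to-back interrupt: keep swap_return_value */
          }

          current->swap_return_value = -EAGAIN; /* default for a real switch */
          /* ... perform the actual context switch ... */
  }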
Daniel Leung
d33017b458 x86: x86-64: add arch_float_en-/dis-able() functions
This adds arch_float_enable() and arch_float_disable() to x86-64.
As x86-64 always has FP/SSE enabled, these operations are basically
no-ops. They are added just for the completeness of the arch interface.

Fixes #38022

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
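A hedged sketch of what such always-on no-op implementations can look like; the bodies (and the success return values) are illustrative, not the exact x86-64 port code.

  struct k_thread; /* opaque here; the real definition lives in the kernel */

  /* FP/SSE is always enabled on this arch, so both calls are no-ops kept
   * only so the arch interface is complete. */
  int arch_float_enable(struct k_thread *thread, unsigned int options)
  {
          (void)thread;
          (void)options;
          return 0;
  }

  int arch_float_disable(struct k_thread *thread)
  {
          (void)thread;
          return 0;
  }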
Andy Ross
37bbe7aeea arch/xtensa: Add arch_cpu_idle() workarounds
A simple WAITI isn't sufficient in all cases.  The cAVS 2.5 hardware
uses WAITI as the entry state for per-core power gating, which is very
difficult to debug.  Provide a fallback that simply spins in the idle
loop waiting for interrupts to provide a stable system while this
feature stabilizes.

Also, the SOF code for those platforms references a known bug with the
Xtensa LX6 core IP (or at least some versions), and will prefix the
WAIT instruction with 128 NOP.N's followed by an ISYNC and EXTW.  This
bug hasn't been seen under Zephyr yet, and details are sketchy.  But
the code is simple enough to import and works correctly.

Place both workarounds under new kconfig variables and select them both
for cavs_v25 (even though they're actually mutually exclusive -- if you
select both, CPU_IDLE_SPIN overrides).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Andy Ross
b76bc6c80d arch/xtensa: Fix outgoing stack flush for dummy threads
On CPU startup, when we reach the cache flush code in arch_switch(),
the outgoing thread is a dummy.  The behavior of the existing code was
to leave the existing value in the SR unchanged (probably NULL at
startup).  Then the context switch would walk from that address up to
the top of the outgoing stack, flushing everything in between.  That's
wrong, because the outgoing stack is a real pointer (generally the
interrupt stack of the current CPU), and we're flushing everything in
memory underneath it.

This also reverts commit 29abc8adc0 ("xtensa: fix booting secondary
cores on the dummy thread"), which appears to have been an early
attempt to address this issue.  It worked (modulo all the extra and
potentially incorrect flushing) on cavs v1.5/1.8 because of the way
the entry code worked there.  But on 2.5 we now hit the first context
switch in a case where those extra lines are in address space already
marked unwritable by the CPU, so the flush explodes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Evgeniy Paltsev
60fdec616b ARC: MWDT: get rid of MWDT startup libs
The __cxa_atexit implementation provided by the MWDT startup code calls
malloc, which isn't supported right now. As we don't support
calling static destructors in Zephyr, let's provide our own
__cxa_atexit stub and get rid of the MWDT startup libs
entirely.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-09-01 17:08:32 -04:00
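A minimal sketch of such a stub, assuming the standard __cxa_atexit signature; since static destructors are never run, registration can simply succeed without recording anything.

  int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
  {
          (void)destructor;
          (void)objptr;
          (void)dso;
          return 0; /* report success; the destructor is never invoked */
  }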
Stephanos Ioannidis
41fd6e003c arch: arm: aarch32: Add half-precision floating-point configs
This commit adds the half-precision (16-bit) floating-point
configurations to the ARM AArch32 architectures.

Enabling CONFIG_FP16 has the effect of specifying `-mfp16-format`
option (in case of GCC) which allows using the half-precision floating
point types such as `__fp16` and `_Float16`.

Note that this configuration can be used regardless of whether a
hardware FPU is available or supports half-precision operations.

When an FP16-capable FPU is not available, the compiler will
automatically provide software emulation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-08-30 18:17:47 +02:00
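A small usage illustration of the half-precision type this option unlocks, assuming a GCC or Clang toolchain built with __fp16 support; arithmetic on __fp16 values is promoted to float.

  #include <stdio.h>

  int main(void)
  {
          __fp16 half = 1.5f;       /* storage-only half-precision value */
          float sum = half + 2.25f; /* promoted to float for the addition */

          printf("sum = %f\n", (double)sum);
          return 0;
  }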
Torsten Rasmussen
94a010107a arch: linker: specify intList section in the IDT_LIST region
This commit specifies the intList section in the IDT_LIST region in the
arch/common CMakeLists.txt file.

It uses zephyr_linker_section to set up the intList section for the
first-pass linker file and configures the section to hold the irq_info
and intList input sections.

For the second-pass linker file, the irq_info and intList input sections
are placed in the /DISCARD/ section.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Torsten Rasmussen
38040292c3 cmake: linker: convert arm and common ld scripts into CMake code
Converted existing ld script templates into CMake files.

This commit takes the common-ram.ld, common-rom.ld, debug-sections.ld,
and thread-local-storage.ld and creates corresponding CMake files for
the linker script generator.

The CMake files use the new Zephyr CMake functions:
- zephyr_linker_section()
- zephyr_linker_section_configure()
- zephyr_linker_section_obj_level()

to generate the same linker result as the existing C preprocessor based
scheme.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Torsten Rasmussen
da926f6855 asm: .eabi_attribute Tag_ABI_align_preserved, 1
Tell armlink that the files have ensured proper stack alignment.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-30 08:54:23 -04:00
Iuliana Prodan
f9810ccbe1 arch: xtensa: modify asm for interrupt sections
For IMX, the timer interrupt handler executed was not
the correct one, because the handlers were not at the
expected addresses.
For IMX, the size constraint of an interrupt vector
table entry is 0x1C bytes of code, less than usual.

I've added a small indirection to bypass this size
constraint and moved the default handlers to the end
of the vector table, renaming them to
_Level\LVL\()VectorHelper.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
2021-08-28 23:27:02 -04:00
Torsten Rasmussen
302fd804ce interrupts: safeguard isr_wrapper and isr_install
The ld linker will only resolve undefined symbols inside functions that
are actually being called.

However, not all linkers behave this way. Certain linkers, for example
armlink, resolve all undefined symbols even if the function is later
pruned during linking.

Therefore `ifdef CONFIG_GEN_ISR_TABLES` has been placed to safeguard
functions that will call undefined symbols when CONFIG_GEN_ISR_TABLES=y.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
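The actual safeguard lives in assembly, but the pattern can be illustrated in C: only emit the reference when the option that provides the symbol is enabled, so an eager linker such as armlink never sees an unresolvable reference. The wrapper macro below is hypothetical.

  #ifdef CONFIG_GEN_ISR_TABLES
  extern void z_isr_install(unsigned int irq,
                            void (*routine)(const void *),
                            const void *param);
  #define INSTALL_ISR(irq, fn, arg) z_isr_install((irq), (fn), (arg))
  #else
  /* No reference at all when the tables are not generated, so an eager
   * linker never sees an unresolved symbol. */
  #define INSTALL_ISR(irq, fn, arg) do { } while (0)
  #endif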
Torsten Rasmussen
f57483664b arch: arm: swap_helper.S: safeguard GTEXT(z_arm_do_syscall)
z_arm_do_syscall is only defined and used when CONFIG_USERSPACE=y.

Defining the symbol z_arm_do_syscall in assembly without a corresponding
implementation is fine for GNU ld as long as the function is not
actively called, but armlink fails to link in such cases.

Safeguard GTEXT(z_arm_do_syscall) so the symbol is only referenced when
actively used, that is, when CONFIG_USERSPACE=y.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen
3d82c7c828 linker: align _image_text_start/end/size linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each of the
larger areas in the linker script.

The symbols _image_text_start and _image_text_end sometimes include
linker/kobject-text.ld. This means there must be both the regular
__text_start and __text_end symbols for the pure text section, as well
as <group>_start and <group>_end symbols.

The symbols describing the text region which covers more than just the
text section itself will thus be changed to:
_image_text_start -> __text_region_start
_image_text_end   -> __text_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
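For reference, a hedged example of how such renamed region symbols are typically consumed from C code; the helper function is illustrative.

  #include <stddef.h>

  extern char __text_region_start[]; /* provided by the linker script */
  extern char __text_region_end[];

  static size_t text_region_size(void)
  {
          return (size_t)(__text_region_end - __text_region_start);
  }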
Torsten Rasmussen
c6aded2dcb linker: align _image_rodata and _image_rom start/end/size linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each of the
larger areas in the linker script.

The symbols _image_rom_start and _image_rom_end correspond to the group
ROMABLE_REGION defined in the ld linker scripts.

The symbols _image_rodata_start and _image_rodata_end are not placed as
an independent group but cover common-rom.ld, thread-local-storage.ld,
kobject-rom.ld and snippets-rodata.ld.

This commit aligns those names and prepares for generation of groups in
linker scripts.

The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end   -> __rom_region_end

The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end   -> __rodata_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen
510d7dbfb6 linker: align _ramfunc_ram/rom_start/size linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and ensure consistent naming of
symbols, the following pattern is introduced:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the _ramfunc_ram/rom symbols with the other symbols
so that they follow the consistent pattern, which allows for linker
script and scatter file generation.

The symbols are named according to the section name they describe;
the section name is `ramfunc`.

The following symbols are aligned in this commit:
-  _ramfunc_ram_start  -> __ramfunc_start
-  _ramfunc_ram_end    -> __ramfunc_end
-  _ramfunc_ram_size   -> __ramfunc_size
-  _ramfunc_rom_start  -> __ramfunc_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
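A hedged sketch of the usual consumer of these symbols: startup code that copies the ramfunc section from its load address in ROM to its run address in RAM. The helper name is illustrative.

  #include <string.h>

  extern char __ramfunc_start[];      /* run (VMA) address in RAM */
  extern char __ramfunc_end[];
  extern char __ramfunc_load_start[]; /* load (LMA) address in ROM */

  static void copy_ramfunc_to_ram(void)
  {
          (void)memcpy(__ramfunc_start, __ramfunc_load_start,
                       (size_t)(__ramfunc_end - __ramfunc_start));
  }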
Yuguo Zou
eb14e21d18 arch: arc: add support of mpu v6
Add support for ARC MPU v6:
* minimum region size down to 32 bytes
* maximum number of regions up to 32
* no support for uncacheable regions and volatile uncached regions
* clean up MPU code for better readability

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-08-27 11:45:43 -04:00
Yuguo Zou
333501e871 arch: arc: add support of mpu v3
Add support for ARC MPU version 3, which can have a region size down
to 32 bytes.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-08-27 11:45:43 -04:00
Daniel Leung
c2a01af003 x86: pin z_x86_set_stack_guard()
This function should be pinned in memory instead of simply being put
in the boot section, as it will be used when new threads are created
at runtime.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung
7605619c1e x86: userspace: page in stack before starting user thread
If the generic sections are not present at boot, the thread stack
may not be in physical memory. Unconditionally page in the stack
instead of relying on page faults, to speed up thread startup
a little.

Also, this prevents a double fault during thread setup when
setting up stack permission in z_x86_userspace_enter().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
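A hedged sketch of the idea, assuming the demand-paging call k_mem_page_in() and the thread's stack_info bookkeeping; the helper name is illustrative and header locations differ between Zephyr versions.

  #include <kernel.h> /* k_thread, k_mem_page_in(); path varies by version */

  /* Page the whole stack in before entering user mode, instead of taking
   * page faults (or a double fault) while the stack is being set up. */
  static void page_in_thread_stack(struct k_thread *thread)
  {
          k_mem_page_in((void *)thread->stack_info.start,
                        thread->stack_info.size);
  }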
Daniel Leung
ea0f9474f7 x86: gen_mmu: don't force extra map argument to be base 16
When converting the address and size arguments for extra mappings,
the script assumes they are always base 16. This is not always
the case. So let Python's own int() decide how to interpret
the values, as it also supports the "0x" prefix.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung
c11ad59ed6 x86: mmu: don't mark generic sections as present if desired
With demand paging, it is possible for data pages to not be
present in physical memory. The gen_mmu.py script is updated
so that, if so desired, the generic sections are marked
non-present so the paging mechanism can bring them in
if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung
30e5968d34 x86: don't clear BSS if not in physical memory at boot
If the BSS section is not present in physical memory at boot,
do not zero the section, or else page faults would occur.
The zeroing of BSS will be done once the paging mechanism
has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung
2dfae4a0f7 kernel: demand_paging: allow reserving page frames
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Otherwise, it would be possible to exhaust all page frames via
anonymous memory mappings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Jim Shu
073cfa9cdf arch: riscv: introduce global pointer relative addressing support
Enable RISC-V GP relative addressing by linker relaxation to reduce
the code size. It optimizes addressing of globals in small data section
(.sdata).

The gp initialization at program start needs support from each SoC.
Also, if a RISC-V SoC has a custom linker script, the SoC should
provide the __global_pointer$ symbol in its linker script.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-20 18:53:23 -04:00
Manuel Argüelles
9ff6282089 arch: arm64: invalidate TLBs after ptables swap
This prevents the new thread from attempting to access cached ptable
entries which are no longer valid.

Signed-off-by: Manuel Argüelles <manuel.arguelles@coredumplabs.com>
2021-08-20 06:26:05 -04:00
Maureen Helm
9b6122d5ac arch: riscv: Increase default CONFIG_TEST_EXTRA_STACKSIZE for 32-bit
Increases the default CONFIG_TEST_EXTRA_STACKSIZE for the 32-bit RISC-V
architecture. This fixes the portability.posix.fs test on the
qemu_riscv32 platform.

Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
2021-08-18 20:54:46 -04:00
Jim Shu
97fa203330 Revert "arch: riscv: added support for custom initialization of gp register"
This reverts commit 7b09d031fa. Because
context saving of the GP register has been removed, we don't need to
initialize GP at thread init. GP will be a constant value, so it only
needs to be initialized at program start.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Jim Shu
e3fe63a221 arch: riscv: remove unneeded context switch to gp register
The RISC-V global pointer (GP) register is neither a caller- nor a
callee-saved register, and it's a constant value within a single ELF
file. Thus, we don't need to save/restore GP at ISR enter/exit. Remove
it to optimize context switch performance.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Jim Shu
e1c7333dc7 arch: riscv: fix typo of context switch macro
Fix typo of LOAD_CALLER/CALLEE macros.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-08-18 05:18:55 -04:00
Phil Erwin
78ba3ddbc5 arch: arm: mpu: Put a lock around MPU buffer validate
Related to GitHub issue #22290. Getting an interrupt during MPU buffer
validation corrupts the index register. The fix applied to ARC is to
disable interrupts during the buffer validate operation.

Signed-off-by: Phil Erwin <phil.erwin@lexmark.com>
2021-08-17 06:06:33 -04:00
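A hedged C sketch of the locking pattern described above; mpu_probe_buffer() is a placeholder for the actual region-probe sequence whose index register must not be clobbered.

  #include <kernel.h> /* irq_lock()/irq_unlock(); path varies by version */

  /* Placeholder for the real MPU region probe whose state must not be
   * clobbered by an interrupt. */
  int mpu_probe_buffer(const void *addr, size_t size, int write);

  int buffer_validate_atomically(const void *addr, size_t size, int write)
  {
          unsigned int key = irq_lock(); /* no interrupt may interleave */
          int rc = mpu_probe_buffer(addr, size, write);

          irq_unlock(key);
          return rc;
  }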
Bradley Bolen
046f93627c arch: arm: cortex_r: Support nested exception detection
Cortex-A/R does not have hardware-supported nested interrupts, but they
are easily emulated using the nesting level stored in the kernel
structure.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
Bradley Bolen
1e153b5091 arch: arm: cortex_r: Add support for recoverable data abort
Add functionality based on Cortex-M that enables recovery from a data
abort using Zephyr's exception recovery framework.  If there is a
registered z_exc_handle for a function, then use its fixup address if
that function aborts.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
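A hedged sketch of the recovery mechanism: if the aborting PC falls inside a registered range, execution resumes at that range's fixup address. The structure layout and names here are illustrative, not Zephyr's actual z_exc_handle.

  struct exc_range {
          unsigned long start; /* first address covered by the handler */
          unsigned long end;   /* one past the last covered address */
          unsigned long fixup; /* where to resume if a covered address aborts */
  };

  /* Returns the fixup address to resume at, or 0 if the abort is fatal. */
  static unsigned long find_fixup(const struct exc_range *table, int count,
                                  unsigned long fault_pc)
  {
          for (int i = 0; i < count; i++) {
                  if (fault_pc >= table[i].start && fault_pc < table[i].end) {
                          return table[i].fixup;
                  }
          }
          return 0UL;
  }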
Bradley Bolen
ff1a5e7858 arch: arm: cortex_r: Add ARCH_EXCEPT macro
With the addition of userspace support, Cortex-R needs to use SVC calls
to handle oops exceptions.  Add that support by defining ARCH_EXCEPT to
do an SVC call.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
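A hedged sketch of an ARCH_EXCEPT-style macro that raises the oops through an SVC, passing the reason code in r0; the SVC number used here is illustrative, not the kernel's actual runtime-exception ID.

  #define ARCH_EXCEPT_SKETCH(reason)                                   \
          do {                                                         \
                  register unsigned int r0 __asm__("r0") = (reason);   \
                  __asm__ volatile ("svc #2" : : "r" (r0) : "memory"); \
          } while (0)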
Bradley Bolen
65dcab81d0 arch: arm: cortex_r: Do not use user stack in svc/isr modes
The user thread cannot be trusted so do not use the stack pointer it
passes in.  Use the thread's privilege stack when in privileged modes to
make sure a user thread does not trick the svc/isr handlers into writing
to memory it should not.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-08-17 06:06:33 -04:00
Phil Erwin
e0bed3b989 arch: arm: cortex_r: Add MPU and USERSPACE support
Use Cortex-M code as a basis for adding MPU support for the Cortex-R.

Signed-off-by: Phil Erwin <phil.erwin@lexmark.com>
2021-08-17 06:06:33 -04:00
Daniel Leung
7862724c50 arm64: smp: arm64_smp_init to be done at PRE_KERNEL_2
arm64_smp_init() is at the same initialization level
and priority as the GICv3 interrupt controller. This means
that arm64_smp_init() can be called before the interrupt
controller driver has been initialized if the linker decides
to put the driver's init entry later. This would result in
faults when arm64_smp_init() tries to connect interrupts.
So move arm64_smp_init() to PRE_KERNEL_2 instead. SMP
initialization is called later in the boot process, so
this should not affect SMP operations.

This is in preparation for building interrupt controller
drivers as a static library. The linking order is going to
change, which would otherwise result in this being
initialized before the interrupt controller driver.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-17 06:06:03 -04:00
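A hedged sketch of the relocation in terms of Zephyr's SYS_INIT registration; the init-function signature and priority value shown follow that era's conventions and are illustrative.

  #include <device.h>
  #include <init.h>

  static int arm64_smp_init(const struct device *unused)
  {
          (void)unused;
          /* connect and enable the cross-CPU (SGI) interrupts here */
          return 0;
  }

  /* PRE_KERNEL_2 runs after the interrupt controller's PRE_KERNEL_1 init. */
  SYS_INIT(arm64_smp_init, PRE_KERNEL_2, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);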
Stephanos Ioannidis
6df8f7e435 arch: arm: cortex_m: Add ARMv8.1-M MVE configs
This commit adds the ARMv8.1-M M-Profile Vector Extension (MVE)
configurations as well as the compiler flags to enable it.

The M-Profile Vector Extension consists of the MVE-I and MVE-F
instruction sets which are integer and floating-point vector
instruction sets, respectively.

The MVE-I instruction set is a superset of the ARM DSP instruction
set (ARMv7E-M) and therefore depends on ARMV8_M_DSP, and the MVE-F
instruction set is a superset of the ARM MVE-I instruction set and
therefore depends on ARMV8_1_M_MVEI.

The SoCs that implement the MVE instruction set should select the
following configurations:

  select ARMV8_M_DSP
  select ARMV8_1_M_MVEI
  select ARMV8_1_M_MVEF (if floating-point MVE is supported)

The GCC compiler flags for the MVE instruction set are specified
through the `-mcpu` flag.

In case of the Cortex-M55 (the only supported processor type for
ARMv8.1-M at the time of writing), the `-mcpu=cortex-m55` flag, by
default, enables all the supported extensions which are DSP, MVE-I and
MVE-F.

The extensions that are not supported can be specified by appending
`+no(ext)` to the `-mcpu=cortex-m55` flag:

  -mcpu=cortex-m55           Cortex-M55 with DSP + MVE-I + MVE-F
  -mcpu=cortex-m55+nomve.fp  Cortex-M55 with DSP + MVE-I
  -mcpu=cortex-m55+nomve     Cortex-M55 with DSP
  -mcpu=cortex-m55+nodsp     Cortex-M55 without any extensions

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-08-14 20:29:57 -04:00
Evgeniy Paltsev
44e53eeacf ARC: MWDT: fix SMP build for MWDT toolchain
The MetaWare assembler doesn't accept the '@' symbol at the beginning
of a symbol name the way GNU as does.

Drop the excessive '@' from the _curr_cpu symbol.

Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-10 07:36:25 -04:00
Evgeniy Paltsev
7ca190c20f ARC: 64BIT: Kconfig increase stacks sizes for 64bit platforms
Increase the default stack sizes for 64-bit platforms where
required.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-07 20:36:23 -04:00
Evgeniy Paltsev
5ed232b62c ARC: ARCv3 64: adopt ARC SMP code for ARCv3 64 bit
Rewrite ARC SMP code with ASM-compat macros so it can be
used for ARCv3 64 bit.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-07 20:36:23 -04:00
Daniel Leung
c661765f1d arm: cortex-m: setup TLS pointer before switching to main
The TLS global pointer is only set during context switch.
So for the first switch to the main thread, the TLS pointer
is NULL, which would cause an access violation when trying
to access any thread-local variables in the main thread.
Fix this by setting it before switching to the main thread.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
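A heavily hedged sketch of the fix's shape: point the global TLS pointer at the main thread's TLS area before the first context switch. Both names below (z_arm_tls_ptr and the tls field) are assumptions about that era's implementation, not confirmed identifiers.

  #include <kernel.h> /* struct k_thread; path varies by version */

  /* Assumed: z_arm_tls_ptr is the global consulted by __aeabi_read_tp and
   * 'tls' is the thread's TLS area pointer; both names are guesses here. */
  extern uintptr_t z_arm_tls_ptr;

  static void set_main_thread_tls(struct k_thread *main_thread)
  {
          z_arm_tls_ptr = main_thread->tls;
  }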
Ioannis Glaropoulos
ca5623d288 arm: swap: cleanup an #ifdef statement in swap routine
Clean up an #ifdef statement in swap_helper.S: use
ARMV6_M_ARMV8_M_BASELINE instead of listing all
Cortex-M Baseline implementation variants. This
fixes an issue with Cortex-M23, whose Kconfig
define was not included in the original list.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos
f795672743 arm: cortex-m: enhance information dump during HardFault escalation
When inside an escalated HardFault, we would like to get
more information about the reason for this escalation. We
first check if the reason for this escalation is an SVC
which occurs within a priority level that does not allow
it to trigger (e.g. a fault or another SVC). If this is true,
we set the error reason according to the provided argument.

Only when the HF was not caused by a synchronous SVC
do we check the other reasons for HF escalation (e.g. a BF
inside a previous BF).

We also add a case for a debug event, to complete going through
the available flags in HFSR.

Finally we ASSERT if we cannot find the reason for the escalation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos
7930829826 arm: cortex-m: move synchronous SVC assessment in a separate function
Move the assessment of a synchronous SVC error into a
separate function. This commit does not introduce any
behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos
a8d6c14d30 arm: cortex-m: clean up some more hard-coded constants in swap_helper
Clean up a few more hard-coded constants
in swap_helper.S and replace them with
CMSIS-like defines in cpu.h. No behavioral
changes in this commit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos
03c4bcd920 arm: use BASEPRI_MAX instead of BASEPRI to mask interrupts
When locking interrupts in a critical section, it is
safer to do MSR BASEPRI_MAX instead of BASEPRI. The
rationale is that a write to BASEPRI_MAX is conditional,
and is only applied if the change is to a higher
priority level. This commit
replaces BASEPRI with BASEPRI_MAX in operations that
aim to lock some specific interrupts:
- irq_lock()
- masking out PendSV
So, for example, it is not possible to actually
unmask any interrupts by doing an irq_lock operation.
The commit does not introduce behavioral changes.
However, it makes irq_lock() more robust against
future changes to the IRQ locking mechanism.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
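A hedged illustration using the CMSIS intrinsics: a write through BASEPRI_MAX only takes effect when it raises the masking priority, which is exactly the property the locking operations rely on. The header inclusion and helper name are illustrative.

  #include <cmsis_gcc.h> /* normally pulled in via the CMSIS core header */

  static inline unsigned int lock_up_to(unsigned int prio)
  {
          unsigned int old = __get_BASEPRI();

          /* Conditional write: only applied if 'prio' is a higher
           * (numerically lower) priority than the current BASEPRI,
           * so nothing can be accidentally unmasked. */
          __set_BASEPRI_MAX(prio);
          return old;
  }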
Ioannis Glaropoulos
7156183985 arm: fix the VTOR alignment requirement for Baseline Cortex-M
Baseline Cortex-M requires VTOR to be aligned on a 64-word
boundary. That is because bit-7 of VTOR is also RAZ/WI.
The commit updates the vector table section alignment for
Baseline Cortex-M to reflect the implementation constraint.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00
Ioannis Glaropoulos
ebcd5de596 arm: cortex_a_r: rename z_platform_init to z_arm_platform_init
Platform-specific initialization during early boot
has been a feature supported only by Cortex-M; the
Kconfig symbol is defined in the arch/arm Kconfig space.
We rename the z_platform_init() function to
z_arm_platform_init() to indicate more clearly that
this is an internal, private ARM-only API.

This commit does not introduce behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-07-28 21:08:18 -04:00