Commit graph

5555 commits

Ioannis Glaropoulos
4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Watson Zeng
8414e86b42 arch: arc: _reset and _start section fix
SECTION_FUNC allows only one function to reside in a sub-section, whereas
SECTION_SUBSEC_FUNC allows multiple functions to reside in a sub-section.
We should therefore use SECTION_SUBSEC_FUNC for _reset and _start.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-26 04:43:06 -05:00
Huifeng Zhang
cb32268fad arch: arm64: Fix the assertion failure when MP_NUM_CPUS >= 3
"arm64_cpu_boot_params.mpid" should be set back to "master_core_mpid"
after a secondary CPU core comes up.

"arm64_cpu_boot_params.mpid" is used to check that the mpid of the next
CPU core to come up is the expected mpid. After the expected CPU core
comes up, "arm64_cpu_boot_params.mpid" is not restored to the primary
CPU core's mpid, so when the primary CPU core tries to bring up a third
CPU core it crashes on the assertion.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2021-05-26 04:42:49 -05:00
Watson Zeng
5516b02d53 arch: archs: using ATOMIC_OPERATIONS_BUILTIN
The ATOMIC_OPERATIONS_BUILTIN issue (internal Jira number: P10019563-43273)
has been fixed in the new MWDT 2021.03 release, so we can use the built-in
atomics again. This commit reverts PR #28528.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-25 12:55:48 -05:00
Johan Hedberg
8341a136d6 x86: multiboot: Fix NULL pointer dereferences
From the point of checking the info pointer value all code in the
z_multiboot_init() function depends on it being non-NULL. Therefore,
simply return from the function if it's NULL.

Fixes #33084

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2021-05-25 13:37:19 -04:00
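
A minimal sketch of the guard described above, with an opaque struct and a
placeholder function name (the real z_multiboot_init() signature may differ):

    #include <stddef.h>

    struct multiboot_info;  /* opaque here; the real definition lives in the multiboot headers */

    /* Hypothetical sketch: bail out early so no later code dereferences NULL. */
    static void multiboot_init_sketch(const struct multiboot_info *info)
    {
            if (info == NULL) {
                    return; /* nothing to parse */
            }
            /* ... all further parsing can safely assume info != NULL ... */
    }
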
Nicolas Pitre
b8d24ffb45 arm64: mitigate FPU-in-exception usage side effects
Every va_start() currently triggers a FPU access trap if FPU is not
already used. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.

This is made even worse with exception code (syscalls, IRQ handlers,
etc.) where the exception code has to be resumed with interrupts
disabled upon FPU access as there is no provision for preserving an
interrupted exception mode's FPU context.

Fix those issues by simply simulating the sequence of STR instructions
that the va_start() generates without actually granting FPU access.
We limit ourselves only to exception context to keep changes to a
minimum for now.

This also allows for reverting the ARM64 exception in the nested IRQ
test as it now works properly even if FPU_SHARING is enabled.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-21 04:52:44 -05:00
Aurelien Jarno
be49df628f arch: arm: cortex_m: z_arm_mpu_init: fix D-Cache invalidation
In case CONFIG_NOCACHE_MEMORY=y, the D-Cache needs to be cleaned and
invalidated before enabling the MPU to make sure no data from a
__nocache__ region is present in the D-Cache.

If the D-Cache is disabled, SCB_CleanInvalidateDCache() shall not be
used as the cache might contain random data for random addresses, and
this might just create a bus fault.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
Aurelien Jarno
1a583e44ba arch: arm: cortex_m: fix D-Cache reset with CONFIG_INIT_ARCH_HW_AT_BOOT
On reset we do not know the status of the D-Cache, nor its contents.

If it is disabled, do not try to clean it, as it might contain random
data for random addresses, and this might just create a bus fault.
Invalidating it is enough.

If it is enabled, it means its contents are not random.
SCB_DisableDCache() will clean it, invalidate it and disable it.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
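
A hedged sketch of the reset-time policy described in the two commits above,
assuming the standard CMSIS SCB cache helpers are available (the real code
lives in the Cortex-M init and MPU paths):

    /* Assumes a CMSIS device header has been included so that SCB,
     * SCB_CCR_DC_Msk, SCB_DisableDCache() and SCB_InvalidateDCache()
     * are available.
     */
    static void dcache_reset_sketch(void)
    {
            if (SCB->CCR & SCB_CCR_DC_Msk) {
                    /* D-Cache enabled: contents are valid, so clean,
                     * invalidate and disable it.
                     */
                    SCB_DisableDCache();
            } else {
                    /* D-Cache disabled: contents may be random, so never
                     * clean it; invalidating alone is enough.
                     */
                    SCB_InvalidateDCache();
            }
    }
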
Andy Ross
41e885947e arch/x86: Correct multiboot interpretation when building for EFI
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet).  Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.

The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way.  And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.

Fixes #33545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-15 15:30:02 -04:00
Daniel Leung
2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so they are going to
reside in the boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
45d1fc9cc2 x86: gen_mmu: add support for boot and pinned regions
Both boot and pinned regions need to be mapped and permissions
set correctly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
af49ec0277 linker: remove TEXT_START macro
There is exactly one function being defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning of the
text section. Since no one else is using it, it is better to remove
TEXT_START to simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Nicolas Pitre
5f6e257b0b arm64: provide an optimized arch_page_phys_get()
The AT instruction gives the corresponding physical address directly.
Much faster than the default implementation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-08 17:06:58 -04:00
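
A simplified sketch of the AT-based translation mentioned above, using GCC
inline assembly (the real arch_page_phys_get() handles more PA bits and
error cases):

    #include <errno.h>
    #include <stdint.h>

    static int page_phys_get_sketch(void *virt, uintptr_t *phys)
    {
            uint64_t par;

            /* Stage-1 EL1 read translation; the result lands in PAR_EL1 */
            __asm__ volatile ("at s1e1r, %0" : : "r" (virt) : "memory");
            __asm__ volatile ("isb; mrs %0, PAR_EL1" : "=r" (par));

            if (par & 1ULL) {
                    return -EFAULT; /* PAR_EL1.F set: translation aborted */
            }

            /* PA[47:12] from PAR_EL1, page offset from the VA itself */
            *phys = (par & 0x0000fffffffff000ULL) | ((uintptr_t)virt & 0xfffULL);

            return 0;
    }
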
Carlo Caione
f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione
e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is
not always the case, because many SoCs rely on external cache controllers
as a peripheral external to the core (for example the PL310 cache
controller and the L2Cxxx family). In some cases you also want a single
driver to control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Evgeniy Paltsev
93bf5f58e7 ARC: add TLS support for ARCv3
For ARCv3 the register is fixed to r30, so we don't need to
configure it at compile-time.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9a3d925860 ARC: boost default stacks in case of 64BIT
Increase stacks required for ARCv3 64-bit CI to pass.
The CMSIS stacks are for programs in samples/portability

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8048b14135 ARC: allow to build code for processors without ZOL
ARCv3 64-bit processors don't have the Zero Delay Loop
(also named Zero Overhead Loop, ZOL) mechanism. Add a Kconfig
option to remove the ZOL register save/restore so the code
can be built for both ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
3f12ca57b8 ARC: make vector table bit agnostic
ARCv2 (32-bit) and ARCv3 (64-bit) share the same vector table
structure but with different vector entry sizes (32 and 64 bit),
so we can easily make the vector table bitness-agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
0d859796be ARC: make variables with regs and addresses bit agnostic
Make the variables where we store CPU register values and
memory addresses bitness-agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
ab17a59ba5 ARC: mark accesses which are 32 bit regardless of platform bitness
Mark the places where we intentionally use st instead of STR in
code common to ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9d309d300a ARC: workaround bloated structure access in ASM with _st_huge_offset
When accessing a bloated structure member we can exceed the u9 offset
operand of the store instruction, so we use the _st32_huge_offset macro
instead for 32-bit accesses.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
c2b61dfe72 ARC: rewrite ASM code with asm-compat macros
Rewrite the ARC assembler code with asm-compat macros, so the same
code can be used for both ARCv2 (GNU and MWDT assemblers) and
ARCv3 (GNU assembler).

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8cb122ea5d ARC: reuse headers for both ARCv2 and ARCv3 if possible
Reuse ARCv2 headers [where it is possible] for ARCv3.
In this commit we simply allow them to be used for ARCv3; we'll
move them to the proper folder and rename them [where it is required]
in the upcoming cleanup patch.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
6afe7c5fd2 ARC: prepare for building for ARCv3 HS6x
Do basic preparations for building code for ARCv3 HS6x
* add ISA_ARCV3 and CPU_HS6X config options
* add off_t type support for __ARC64__
* use elf64-littlearc format for linking
* use arc64 mcpu for CPU_HS6X

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Daniel Leung
18aad13d76 x86: mmu: implement arch_page_phys_get()
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
786cf641dc x86: mmu: implement arch_mem_unmap()
This implements arch_mem_unmap() as counterpart to
arch_mem_map().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
c481fd412e x86: mmu: don't decrement z_free_page_count in reserving code
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
783b20712e arch: implement brute force find_lsb_set()
On RISC-V 64-bit, GCC complains about undefined reference
to 'ffs' via __builtin_ffs(). So implement a brute force
way to do it. Once the toolchain has __builtin_ffs(),
this can be reverted.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
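
A minimal brute-force sketch of what such a fallback can look like, following
the ffs() convention of a 1-based bit index with 0 meaning "no bit set"
(illustrative only, not the exact Zephyr code):

    #include <stdint.h>

    static inline unsigned int find_lsb_set_sketch(uint32_t op)
    {
            for (unsigned int i = 0; i < 32; i++) {
                    if (op & (UINT32_C(1) << i)) {
                            return i + 1;   /* 1-based index of the lowest set bit */
                    }
            }

            return 0;       /* no bit set */
    }
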
Bradley Bolen
95a7e71661 arch: arm: aarch32: Move mpu code up a level
Move the mpu code to the common aarch32 directory in preparation for
Cortex-R mpu support

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-05-06 19:39:09 +02:00
Daniel Leung
37672958ac x86: mmu: relax KERNEL_VM_OFFSET == SRAM_OFFSET
There was a restriction that KERNEL_VM_OFFSET must be equal to
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in the
real world due to various hardware designs, especially those
where SRAM is not aligned to PDP or PD. So rework those bits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-05 19:42:25 -04:00
Gerard Marull-Paretas
280ca7a632 arch: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams
ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
used by the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Øyvind Rønningstad
a2cfb8431d arch: arm: Add code for swapping threads between secure and non-secure
This adds code to swap_helper.S which does special handling of LR when
the interrupt came from secure. The LR value is stored to memory, and
put back into LR when swapping back to the relevant thread.

Also, add special handling of FP state when switching from secure to
non-secure, since we don't know whether the original non-secure thread
(which called a secure service) was using FP registers, so we always
store them, just in case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Ioannis Glaropoulos
ad808354d2 arch: arm: Add config for non-blocking secure calls
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Nicolas Pitre
76494f8589 arm64: optimize offsets in z_arm64_context_switch
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-04 22:41:32 -04:00
Mahesh Mahadevan
d6b50233ac arch: arm: Setup Static MPU regions earlier in boot flow
Setup the static MPU regions before PRE_KERNEL_1 and
PRE_KERNEL_2 functions are invoked. This will setup
the MPU for SRAM regions in case code relocated to SRAM
is invoked from any of these functions.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan
1b36c6c00e arch: arm: Create a MPU entry for relocated code
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should
be allowed to execute from SRAM

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan
64e973fdcd Kconfig: Add a new config CODE_DATA_RELOCATION_SRAM
1. This will help us identify whether the relocation is to
SRAM, which is used when setting up the MPU entry
for the SRAM region where the code is relocated.
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific
folder.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Nicolas Pitre
35c9ed6a4b arm64: don't create a section for z_arm64_exit_exc_fpu_done
Both z_arm64_exit_exc and z_arm64_exit_exc_fpu_done must be within
the same section as execution falls through here.

If z_arm64_exit_exc_fpu_done creates a section of its own then the
linker is free to disjoint the code and we absolutely don't want that.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 19:56:26 -04:00
Guennadi Liakhovetski
29abc8adc0 xtensa: fix booting secondary cores on the dummy thread
When secondary cores are booted, they use the dummy thread and
the IRQ stack until they switch over to a real thread. Therefore
dummy threads shouldn't be skipped when cohering outgoing thread
stack, only threads with zero stack size should be skipped.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Nicolas Pitre
f1f63dda17 arm64: FPU context switching support
This adds FPU sharing support with a lazy context switching algorithm.

Every thread is allowed to use FPU/SIMD registers. In fact, the compiler
may insert FPU reg accesses in any context to optimize even non-FP code
unless the -mgeneral-regs-only compiler flag is used, but Zephyr
currently doesn't support such a build.

It is therefore possible to do FP accesses in ISRs as well with this
patch, although IRQs are then disabled to prevent nested IRQs in such
cases.

Because the thread object grows in size, some tests have to be adjusted.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Nicolas Pitre
a82fff04ff arm64: implement exception depth count
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.

The IN_EL0 flag is now updated unconditionally even if userspace is
not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Nicolas Pitre
949ef7c660 Kconfig: clean up FPU and FPU_SHARING entries
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU being selected by those archs as a dependency
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise for not affecting existing config, let's
move the default selection local to those configs that care, again to
avoid a growing list of conditionals here. Adjust the help text which
applies to more than just Cortex-M.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Jiafei Pan
4a87c08606 arm64: cache: fix arch_dcache_all()
Add a data barrier before and after the dcache flush or clean,
and restore the data cache level to 0 after all ops.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
Jiafei Pan
12b9b5aacc arm64: cache: refine arch_dcache_range()
Moved all assembly code to C code. Fixed arch_dcache_line_size_get()
to get the dcache line size by using "4 << dminline", without
considering CWG, according to sample code in the Cortex-A (ARMv8-A)
programmer's guide.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
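
A sketch of the "4 << dminline" derivation mentioned above: CTR_EL0.DminLine
(bits [19:16]) is the log2 of the smallest D-cache line size in words, so
multiplying by the 4-byte word size gives bytes. The inline-asm register read
is an assumption here; Zephyr has its own sysreg helpers.

    #include <stddef.h>
    #include <stdint.h>

    static size_t dcache_line_size_sketch(void)
    {
            uint64_t ctr;

            __asm__ volatile ("mrs %0, ctr_el0" : "=r" (ctr));

            uint32_t dminline = (ctr >> 16) & 0xf;

            return (size_t)4 << dminline;   /* line size in bytes */
    }
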
Daniel Leung
54283efcce x86: mmu: allow page table extra mappings to have cache disabled
This adds the bits to the gen_mmu.py script so that extra mappings
can be added with caching disabled. This is useful for mapping
MMIO regions where caching is not desired.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 21:17:24 -04:00
Daniel Leung
43f0726985 arm: aarch32: timing: fix potential divide by zero if DWT
There is a possibility that the DWT frequency calculation
divides by zero. So this fixes the issue by repeatedly
trying to get the delta clock cycles and delta DWT cycles
until both are non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
Daniel Leung
d6cbdace78 x86: timing: fix potential divide by zero
There is a possibility that the TSC frequency calculation
divides by zero. So this fixes the issue by repeatedly
trying to get the delta clock cycles and delta TSC cycles
until both are non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
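
Both timing fixes above follow the same pattern; a hedged sketch with
hypothetical sampling helpers (not the actual Zephyr APIs):

    #include <stdint.h>

    extern uint64_t sample_sys_clock(void);   /* hypothetical */
    extern uint64_t sample_hw_cycles(void);   /* hypothetical: TSC or DWT */

    static uint64_t hw_cycles_per_sys_tick_sketch(void)
    {
            uint64_t dclock, dcycles;

            do {
                    uint64_t clock_start = sample_sys_clock();
                    uint64_t cycles_start = sample_hw_cycles();

                    /* ... small busy wait ... */

                    dclock = sample_sys_clock() - clock_start;
                    dcycles = sample_hw_cycles() - cycles_start;
            } while ((dclock == 0U) || (dcycles == 0U));

            return dcycles / dclock;  /* denominator is guaranteed non-zero */
    }
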
Jennifer Williams
3e28a570c2 arch: x86: core: pcie: rephrase use of ain't
Rephrasing away from ain't, which is informal, uncommon, and can
be viewed as substandard or 'slang'.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-29 07:15:50 -04:00
Gerard Marull-Paretas
f163bdb280 power: move reboot functionality to os lib
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-28 20:34:00 -04:00
Jennifer Williams
734c65ad23 arch: arm: core: aarch32: cortex_m: fault: fix if...else ifs
bus_fault() and hard_fault() were missing the final else statement
in their if ... else if constructs. This commit adds a (non-empty)
final else {} to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
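
For illustration, the shape required by guideline 15.7 (every if ... else if
chain is terminated by an else, even if it only carries a comment):

    void fault_dispatch_sketch(int kind)
    {
            if (kind == 1) {
                    /* handle the first condition */
            } else if (kind == 2) {
                    /* handle the second condition */
            } else {
                    /* intentionally empty: no other value is expected */
            }
    }
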
Jennifer Williams
a5c27d69b5 arch: arm: core: aarch32: cortex_m: debug: remove if...else if construct
z_arm_debug_monitor_event_error_check() was missing the final
else statement in its if ... else if construct, so it violated guideline
15.7. This commit removes the else if, for symmetry in the limited
early-exit conditions, rather than adding an empty final else {}, to comply.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Gerard Marull-Paretas
6c7c9e2b99 arch: x86: remove usage of device_pm_control_nop
If device PM is not implemented just use NULL.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Hou Zhiqiang
50d263d138 arm64: Do not try to bring up the cores disabled in DT node
The macro DT_FOREACH_CHILD iterates over all child nodes, ignoring the
status property. This patch changes the code to use
DT_FOREACH_CHILD_STATUS_OKAY, which only iterates over the enabled
child nodes, to avoid trying to bring up disabled cores.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:55 -04:00
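
An illustrative sketch of the macro swap, assuming a /cpus parent node and a
hypothetical per-core start helper:

    #include <devicetree.h>

    extern void start_core_sketch(unsigned long reg_addr);  /* hypothetical */

    #define BRING_UP_CORE(node_id) start_core_sketch(DT_REG_ADDR(node_id));

    static void start_secondary_cores_sketch(void)
    {
            /* Old: DT_FOREACH_CHILD(DT_PATH(cpus), BRING_UP_CORE) also visited
             * children with status = "disabled". The fix only iterates the
             * enabled ones:
             */
            DT_FOREACH_CHILD_STATUS_OKAY(DT_PATH(cpus), BRING_UP_CORE)
    }
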
Hou Zhiqiang
9681034875 arm64: Fix MPID load instruction for secondary cores
Change the load of the MPID for secondary cores to add the offset macro
BOOT_PARAM_MPID_OFFSET.

Currently the code loads the MPID for secondary cores from offset 0x0
of struct arm64_cpu_boot_params. This works because the macro
BOOT_PARAM_MPID_OFFSET currently has the value 0x0, but if the
location of the "mpid" member changes it can result in SMP boot
failure, and the build assert won't emit any warning.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:18 -04:00
Daniel Leung
1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty on the size of these blocks, more space is
being reserved which could result in wasted space. Though, this
retains the use of hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Morten Priess
a0dd44c5e0 arch: select HAS_DTS for SPARC
With PR#34449, architectures that use DTS must select the HAS_DTS
configuration.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2021-04-26 13:42:10 +02:00
Jiafei Pan
a89cb1cc13 arm64: mmu: invalidate all data caches before enable them
Data in the data caches may be dirty before the data caches are
enabled, so we need to invalidate all data caches first, before
enabling them.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-26 13:39:39 +02:00
Jiafei Pan
7b7035231f arm64: cache: add arch_dcache_all()
Add a cache function, arch_dcache_all(), which can clean,
invalidate, or clean & invalidate all data caches.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-26 13:39:39 +02:00
Ioannis Glaropoulos
fdb4df26d3 arm: cortex-m: minor doc updates in swap_helper.S
Inline some minor clarifications regarding the
Lazy Stacking feature in the cortex-m pendSV
handler, for ease of understanding. Also, fix
some minor style issues in comments.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-23 15:18:16 -05:00
Carlo Caione
256ca55476 arm64: Rework stack usage
The ARM64 port is currently using SP_EL0 for everything: kernel threads,
user threads and exceptions. In addition when taking an exception the
exception code is still using the thread SP without relying on any
interrupt stack.

If on one hand this makes the context switch really quick, because the
thread context is already on the thread stack so we only have to save
one register (SP) for the whole context, on the other hand the major
limitation introduced by this choice is that if for some reason the
thread SP is corrupted or pointing to some inaccessible location (for
example in case of stack overflow), the exception code is unable to
recover or even deal with it.

The usual way of dealing with this kind of problems is to use a
dedicated interrupt stack on SP_EL1 when servicing the exceptions. The
real drawback of this is that, in case of context switch, all the
context must be copied from the shared interrupt stack into a
thread-specific stack or structure, so it is really slow.

We use here a hybrid approach, sacrificing a bit of stack space for a
quicker context switch. While nothing really changes for kernel threads,
for user threads we now use the privileged stack (already present to
service syscalls) as interrupt stack.

When an exception arrives the code now switches to use SP_EL1 that for
user threads is always pointing inside the privileged portion of the
stack of the current running thread. This achieves two things: (1)
isolate exceptions and syscall code to use a stack that is isolated,
privileged and not accessible to user threads and (2) the thread SP is
not touched at all during exceptions, so it can be invalid or corrupted
without any direct consequence.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-23 06:32:20 -04:00
Mahesh Mahadevan
a9397e3b3a arm: cortex_m: Update get DWT frequency for NXP SoC's
Get the DWT cycle count frequency for NXP devices from
CMSIS SystemCoreClock symbol

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-04-21 20:40:24 -04:00
Carlo Caione
1ceff68ea1 arm64: Fix maybe-uninitialized error
Fix:
 arch/arm64/core/smp.c:98:3: error: 'cpu_mpid' may be used uninitialized
 in this function [-Werror=maybe-uninitialized]

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-20 15:51:22 -04:00
Bradley Bolen
92a3209c5c arch: arm: aarch32: cortex_a_r: Dump callee saved registers on fault
Some of these registers may contain nuggets of information that would be
beneficial when debugging, so include them in the fault dump.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen
c96ae584bf arch: arm: aarch32: cortex_a_r: Correct syntax for srs
The writeback specification should be after the register, not after the
mode according to the documentation at

Link: https://developer.arm.com/documentation/dui0489/h/arm-and-thumb-instructions/srs

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
Bradley Bolen
18ec84803c arch: arm: aarch32: Use ARRAY_SIZE in for loop
Do not hardcode the array size in the loop for printing out the floating
point registers of the exception stack frame.  The size of this array
will change when Cortex-R support is added.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 17:20:15 +02:00
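
A short sketch of the pattern, with a simplified placeholder for the exception
stack frame layout (Zephyr 2.x include paths assumed):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/printk.h>
    #include <sys/util.h>   /* ARRAY_SIZE() */

    struct esf_fp_sketch {
            uint32_t s[16]; /* placeholder; the real count depends on the core */
    };

    static void dump_fp_regs_sketch(const struct esf_fp_sketch *esf)
    {
            /* No hardcoded 16: the loop bound follows the array definition */
            for (size_t i = 0; i < ARRAY_SIZE(esf->s); i++) {
                    printk("s[%u]: 0x%08x\n", (unsigned int)i, esf->s[i]);
            }
    }
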
Krzysztof Chruscinski
ae4adea463 arch: arm: cortex_m: z_arm_pendsv in vector table when multithreading
When CONFIG_MULTITHREADING=n, the kernel-specific PendSV handler is not
used. Remove it from the vector table.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-20 16:00:39 +02:00
Bradley Bolen
6734c6e874 arch: arm: aarch32: Fix spurious interrupt handling
The GIC can return 0x3ff to indicate a spurious interrupt.  Other
interrupt controllers could return something different.  Check that the
pending interrupt is valid in order to avoid indexing past the end of
the isr_table.

This fixes #30465 and is based on the aarch64 fix in 9dd2731d.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-04-20 08:30:41 -04:00
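
A hedged sketch of the bounds check, with placeholder names standing in for
Zephyr's generated ISR table:

    struct isr_entry_sketch {
            void (*isr)(const void *arg);
            const void *arg;
    };

    extern struct isr_entry_sketch isr_table_sketch[];   /* hypothetical */
    extern const unsigned int isr_table_size_sketch;     /* hypothetical */

    static void irq_dispatch_sketch(unsigned int intid)
    {
            if (intid < isr_table_size_sketch) {
                    isr_table_sketch[intid].isr(isr_table_sketch[intid].arg);
            }
            /* else: spurious (e.g. the GIC's 0x3ff) or out of range, ignore */
    }
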
Nicolas Pitre
29c8e9bf66 arm64: decrustify and extend SMP boot code
The SMP boot code depends on physical CPU #0 to be first to boot and
subsequent CPUs to follow suit in a linear fashion. Let's decouple
physical and logical numbering so that any physical CPU can be the
boot CPU. This is based on a prior code proposal from
Jiafei Pan <Jiafei.Pan@nxp.com>.

This, however, was about to turn the boot code into some hairy mess.
So let's clean things up and simplify the code as well while at it.
Both the extension and the clean up aren't separate commits because
they actually depend on each other.

The BOOT_PARAM_*_OFFSET defines are locally hardcoded as there is no
point exposing the related structure widely. Build time assertions
ensure they don't go out of sync with the struct definition. And
vector_table.h is repurposed into boot.h to gather boot related
definitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-19 11:00:05 -04:00
Jiafei Pan
7889364771 arm64: refine the code for primary core checking
We can tell whether the caller of z_arm64_mmu_init() is on the primary
core or not, so there is no need to check mpidr; just add a
function parameter.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-19 11:00:05 -04:00
Carlo Caione
cd4b01e0bb arm64: Use syscall frame and fix bad syscall handling
This patch is fixing three related problems:

1. When calling a syscall the marshalling function is using the ssf
   parameter as value to be saved in _current->syscall_frame to mark the
   beginning and the end of the syscall. This ssf value is not currently
   being explicitly set and instead the syscall code is using whatever
   value is stored in x6 when the syscall is called. If it happens that
   x6 is 0 at the time the syscall is called, this causes the
   z_is_in_user_syscall() function to fail. Fix this passing the ESF as
   value for ssf.

2. Given that in the ssf is now present the ESF, we can fix
   arch_syscall_oops() using the ESF to print a more detailed error
   message with registers dump.

3. When a wrong syscall number is used, handler_bad_syscall() is called.
   This function expects the ID number as first parameter to print the
   error message, fix this.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-18 22:04:52 -04:00
Carlo Caione
04df0ddc88 arm64: Set AARCH64_IMAGE_HEADER and BUILD_OUTPUT_BIN to y
It doesn't hurt always having the image header and generating the binary
output. I find myself constantly setting those to 'y', so make it
definitive.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-15 12:58:22 +02:00
Carlo Caione
013c8273ca arm64: gic: Enable access to ICC_* registers
The GICv3 driver configures the controller by accessing the ICC_*
system registers. To be able to do that without trapping we have to
explicitly set at boot, in EL3, the value of the ICC_SRE_EL3 register,
which is architecturally set to an UNKNOWN value on warm reset.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-15 12:26:39 +02:00
Nicolas Pitre
88477906f0 arm64: hold curr_cpu instance in tpidrro_el0
Let's fully exploit tpidrro_el0 by storing in it the current CPU's
struct _cpu instance alongside the userspace mode flag bit. This
greatly simplifies the code needed to get at the cpu structure, and
this paves the way to much simpler multi cluster support, as there
is no longer the need to decode MPIDR all the time.

The same code is used in the !SMP case as there are benefits there too
such as avoiding the literal pool, and it looks cleaner.

The tpidrro_el0 value is no longer stored in the exception stack frame.
Instead, we simply restore the user mode flag based on the SPSR value.
This way, more flag bits could be used independently in the future.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-14 15:06:21 -04:00
Jaxson Han
f249544f48 arch: arm64: Add MPU drivers to the build system
When ARM_MPU is defined, the MPU drivers will be built into the final
zephyr target.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Jaxson Han
30ed92c218 arch: arm64: Armv8-R AArch64 MPU implementation
Armv8-R AArch64 MPU can support a maximum of 16 memory regions, and the
actual region number can be retrieved from the system register (MPUIR)
during MPU initialization.
The current MPU driver only supports EL1.

Signed-off-by: Haibo Xu <haibo.xu@arm.com>
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Jaxson Han
ad1da08f4f arch: arm64: Add Cortex-R82 config
Add Cortex-R82 config to support the Cortex-R82 processor.
Introduce the new CPU_CORTEX_R_AARCH64 config for the Cortex-R 64-bit
processor.

Since the current CPU_CORTEX_R config has already been bound for
AArch32 in many test cases, we therefore add a new CPU_AARCH64_CORTEX_R
to distinguish from the Cortex-R 32-bit processor.
We do not use CPU_CORTEX_R64 because this name will lead to ambiguity
with processor name like Cortex-R82.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-04-13 07:47:44 -04:00
Evgeniy Paltsev
d4081fd07f ARC: allow to configure the RGF_NUM_BANKS only if ARC_FIRQ is enabled
As of today we use the second register bank only if fast interrupts are
enabled. So don't show the 'number of register banks' configuration
option if fast interrupts are disabled, to avoid user confusion.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-04-13 06:59:20 -04:00
Nicolas Pitre
790794ce84 arm64: improve CONFIG_MAX_XLAT_TABLES default value
The typical number of needed translation tables depends on memory
domain usage and userspace support, but also on the virtual address
space width due to the number of translation levels involved.
Reflect that in the default value.

Also fix a related comment where values were off by 1.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 22:13:38 -04:00
Nicolas Pitre
fc8c53ff0e arm64: a few alignment fixes
The structure for the arm64_cpu_init array has to carry the cache
alignment on the whole structure and not on some internal padding
to achieve the desired effect.

And align struct __esf to a 16-byte boundary which will also align
its size accordingly. This structure is allocated on the stack on
exception entry and the ABI prescribed 16-byte stack alignment
should be preserved.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-12 11:47:41 -04:00
Krzysztof Chruscinski
8bee027ec4 arch: arm: Unconditionally compile IRQ_ZERO_LATENCY flag
The flag was present only when ZLI was enabled. That resulted in
additional ifdefs being needed whenever code supports both ZLI and
non-ZLI modes.

Removed the ifdefs and added a build assert to IRQ connections to fail
at compile time if IRQ_ZERO_LATENCY is set but ZLI is disabled.
Additional clean-up was made which resulted from removing the ifdef.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-12 07:33:27 -04:00
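
A simplified sketch of the compile-time guard, assuming Zephyr's BUILD_ASSERT()
and IS_ENABLED() helpers; the macro name and wiring are illustrative only:

    #include <sys/util.h>   /* IS_ENABLED(), BIT() */
    #include <toolchain.h>  /* BUILD_ASSERT() */

    #define IRQ_ZERO_LATENCY_SKETCH BIT(1)  /* placeholder for the real flag */

    #define IRQ_CONNECT_SKETCH(irq, prio, handler, param, flags)                  \
            do {                                                                  \
                    BUILD_ASSERT(!((flags) & IRQ_ZERO_LATENCY_SKETCH) ||          \
                                 IS_ENABLED(CONFIG_ZERO_LATENCY_IRQS),            \
                                 "ZLI flag used but CONFIG_ZERO_LATENCY_IRQS=n"); \
                    /* ... the actual IRQ_CONNECT() wiring would go here ... */   \
            } while (false)
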
Flavio Ceolin
9285b4c6bf arch: nios2: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
b7d04487e1 arch: riscv: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
bfd9d0069b arch: sparc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
49f0c74a9e arch: common: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
4f5460ad6a arch: arm: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
abb1bbe6b1 arch: xtensa: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
4b55ee27d4 arch: arc: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Flavio Ceolin
03544f0b77 arch: x86: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Ioannis Glaropoulos
d307bd2fdd arm: add note explaining why Hard ABI is disabled for tfm builds
Add a note in the Kconfig help text that explains why Hard ABI
is not possible on builds with TF-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Øyvind Rønningstad
80a351e22d arch: arm: Disallow FP_HARDABI when building with TFM
When building with TFM, the app is linked with libraries built by the
TFM build system. TFM is always built with -msoft-float which is
equivalent to -mfloat-abi=soft. FP_HARDABI adds -mfloat-abi=hard
which gives errors when linking with the libs from TFM since they are
built with a different ABI.

Fixes https://github.com/zephyrproject-rtos/zephyr/issues/33956

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-04-09 11:48:55 -05:00
Nicolas Pitre
69a0fd3a6a aarch64: smp: make the cross-CPU swap_ptables call use its own IPI
Let's disentangle this from arch_sched_ipi() with an SGI for its
own purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:13 -04:00
Nicolas Pitre
28dc807f50 aarch64: mmu: don't touch the lock before the MMU is on
We can't do atomic memory operations before the MMU is on. Let's create
a code path to set up MMU page tables without any lock. There is
obviously no concurrency issues at this stage.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-09 11:55:00 -04:00
Carlo Caione
9dd2731d15 aarch64: Remove comparison with GIC-specific intid
GIC_INTID_SPURIOUS is a GIC-specific intid so it's not valid for custom
interrupt controllers. Rework a bit the logic by comparing the intid to
the maximum intid possible instead.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione
23a0c8c2ec aarch64: Do not save garbage on the stack
No need to save useless values on the stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:28:21 -04:00
Carlo Caione
64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access the
struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Jiafei Pan
56db1ee66d arch: arm64: add 40 bits physical and virtual address
Add support for 40-bit physical and virtual address spaces.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-04-09 13:25:15 +02:00
Kumar Gala
be0a19757c riscv: MTVAL CSR not supported on OpenISA RV32M1
Don't report MTVAL on the OpenISA RV32M1 SoC as this CSR isn't
supported on the SoC.

Fixes: #34014

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-04-08 14:22:54 +02:00
Daniel Leung
09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
1eba3545c1 x86: timing: allow userspace to convert cycles to ns
The variable tsc_freq is not accessible in user threads,
which prevents user threads from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so the conversion is possible.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Flavio Ceolin
85b2bd63c1 arch: x86: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Flavio Ceolin
95cd021cea arch: arm: Fix 14.4 guideline violation
The controlling expression of an if statement has to be an
essentially boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Daniel Leung
64e99dfcf6 xtensa: change CONFIG_ATOMIC_OPERATIONS_ARCH to imply
Xtensa cores are highly configurable, so each SoC may not have
the needed instructions for the hardware-assisted atomic
operations. So instead of selecting the arch-specific atomic
operations kconfig, do an "imply" instead. So SoC or board
configs can disable this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-02 07:23:33 -04:00
Nicolas Pitre
d18ab4af49 arm64: get rid of the mmu directory
Turns out that we could flatten the tree further as there is not
that many files to warrant a whole directory for this.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-01 15:47:04 -05:00
Anas Nashif
0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif
25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Carlo Caione
a43f3bade8 arm/arm64: Fix misc and trivials for ARM/ARM64 split
Fix the header guards, comments, github labeler, CODEOWNERS and
MAINTAINERS files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Carlo Caione
3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved in the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Daniel Leung
7a27509d6f x86: gen_mmu: allow script to take extra arguments
This extends the cmake build script to take in extra arguments
for gen_mmu.py.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung
4b477a9864 x86: mmu: allow copying page directory entries with large pages
This changes the assert when a large page is encountered to
copying the page directory entry to the new page directory.
This is needed when a large page entry is generated by
gen_mmu.py. Note that this still asserts when there are entries
of large page at higher level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung
51263f73aa x86: gen_mmu: allow specifying extra mappings
This extends gen_mmu.py to accept additional mappings passed via
command line.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung
0886a73df8 x86: gen_mmu: fail if reserved page table space is too small
This makes the gen_mmu.py script error out if the reserved space
for the page table in zephyr_prebuilt.elf is not large enough to
accommodate the generated page table. Let's catch this at build time
instead of getting mysterious hangs when loading the page table at boot.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung
3ebcd8307e x86: mmu: add kconfig CONFIG_X86_EXTRA_PAGE_TABLE_PAGES
The whole page table is pre-allocated at build time and is
dependent on the range of address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) to
the page table so that gen_mmu.py can make use of these
extra pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Flavio Ceolin
3a04cc2210 riscv: core: Remove invalid comparison
An unsigned int will never be less than 0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-26 07:13:13 -04:00
Martin Åberg
83f733ce59 SPARC: improve fatal log
The fatal log now contains
- Trap type in human readable representation
- Integer registers visible to the program when trap was taken
- Special register values such as PC and PSR
- Backtrace with PC and SP

If CONFIG_EXTRA_EXCEPTION_INFO is enabled, then all the above is
logged. If not, only the special registers are logged.

The format is inspired by the GRMON debug monitor and TSIM simulator.
A quick guide on how to use the values is in fatal.c.

It now looks like this:

E: tt = 0x02, illegal_instruction
E:
E:       INS        LOCALS     OUTS       GLOBALS
E:   0:  00000000   f3900fc0   40007c50   00000000
E:   1:  00000000   40004bf0   40008d30   40008c00
E:   2:  00000000   40004bf4   40008000   00000003
E:   3:  40009158   00000000   40009000   00000002
E:   4:  40008fa8   40003c00   40008fa8   00000008
E:   5:  40009000   f3400fc0   00000000   00000080
E:   6:  4000a1f8   40000050   4000a190   00000000
E:   7:  40002308   00000000   40001fb8   000000c1
E:
E: psr: f30000c7   wim: 00000008   tbr: 40000020   y: 00000000
E:  pc: 4000a1f4   npc: 4000a1f8
E:
E:       pc         sp
E:  #0   4000a1f4   4000a190
E:  #1   40002308   4000a1f8
E:  #2   40003b24   4000a258

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Martin Åberg
c2b1e8d2f5 SPARC: implement ARCH_EXCEPT()
Introduce a new software trap 15 which is generated by the
ARCH_EXCEPT() function macro.

The handler for this software trap calls z_sparc_fatal_error() and
finally z_fatal_error() with "reason" and ESF as arguments.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Martin Åberg
9da5a786a1 SPARC: catch unexpected software traps
Unexpected software traps ("ta" instruction) are now handled by the
fatal exception handler and eventually end up in z_fatal_error().

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Kumar Gala
520ebe4d76 arch: arm: remove compat headers
These compat headers have been moved since at least v2.4.0 release so we
can now remove them.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-25 16:40:25 +01:00
Katsuhiro Suzuki
19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Katsuhiro Suzuki
59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread, as a
pair to the existing k_float_disable() API. It also adds an empty
arch_float_enable() to each architecture that has arch_float_disable().
ARC and RISC-V already implement arch_float_enable(), so those
implementations are not touched.

Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread and other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating-point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Carlo Caione
807991e15f AArch64: Do not use CONFIG_GEN_PRIV_STACKS
We are setting CONFIG_GEN_PRIV_STACKS when AArch64 actually uses a
statically allocated privileged stack.

This error was not captured by the tests because we only verify whether
a read/write to a privileged stack is failing, but it can fail for a lot
of reasons including when the pointer to the privileged stack is not
initialized at all, like in this case.

With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-25 07:23:19 -04:00
Eugeniy Paltsev
1b41da2630 ARC: Kconfig: rename CPU_ARCV2 option to ISA_ARCV2
* Rename CPU_ARCV2 to ISA_ARCV2. That helps to avoid conflict between
  CPU families naming and ISAs naming and aligns this options
  with other ARC OSS projects.

* Generalize ARCV2 check to ARC check where it is required.

NOTE: we add ISA_ARCV2 option in a choice list as a preparation
for ISA_ARCV3 addition.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Eugeniy Paltsev
8311d27afc ARC: Kconfig: cleanup CPU_ARCEM / CPU_ARCHS options usage
Don't allow the user to choose the CPU_ARCEM / CPU_ARCHS options;
instead, select them when the exact CPU type (i.e. EM4 / EM6 / HS3X /
etc.) is chosen.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-25 07:23:02 -04:00
Kumar Gala
95e4b3eb2c arch: arm: Add initial support for Cortex-M55 Core
Add initial support for the Cortex-M55 Core which is an implementation
of the Armv8.1-M mainline architecture and includes support for the
M‑profile Vector Extension (MVE).

The support is based on the Cortex-M33 support that already exists in
Zephyr.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-23 13:13:32 -05:00
Jim Shu
8db3683820 arch: riscv: improve exception messages
Add exception descriptions of mcause id 6~15. Also print mtval CSR for
memory access fault & illegal instruction exceptions.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-03-22 15:47:09 -04:00
Watson Zeng
0da8ec70dc arch: arc: enable divide zero exception
STATUS32.DZ (bit 13) is the EV_DivZero exception enable bit, and it's
not enabled by default. We need to set it explicitly to enable the
divide-by-zero exception at early boot and in each thread's setup.

The DZ bit is ignored on write and read as zero when there is no
hardware division configured. So we can simply set DZ bit even if
there is no hardware division configured.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-19 13:56:59 -04:00
Anas Nashif
fe0872c0ab clocks: rename z_tick_get -> sys_clock_tick_get
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif
771cc9705c clock: z_clock_isr -> sys_clock_isr
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Carlo Caione
f3d11cccf4 aarch64: userspace: Enable userspace
Add ARCH_HAS_USERSPACE to enable userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione
2936998591 aarch64: GCC10: Add -mno-outline-atomics
GCC10 introduced, by default, calls to out-of-line helpers to implement
atomic operations with the '-moutline-atomics' option. This is breaking
several tests because the embedded calls are trying to access the
zephyr_data region from userspace that is declared as MT_P_RW_U_NA,
triggering a memory fault.

Since there is currently no support for MT_P_RW_U_RO (and probably never
will be), disable the out-of-line helpers by disabling the GCC option.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione
8cbd9c7d8e aarch64: userspace: Add missing entries in vector table
To support exceptions taken in EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione
1347fdbca7 aarch64: userspace: Increase KOBJECT_TEXT_AREA
This is needed to have some tests run successfully.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre
2b5b054b0b aarch64: userspace: bump the global number of available page tables
Each memory domain requires a few pages for itself.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione
b52f769908 aarch64: mmu: Fix MMU permissions for zephyr code and data
User threads still need to access the code and the RO data. Fix the
permissions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Nicolas Pitre
a74f378cdc aarch64: mmu: apply domain switching on all CPUs if SMP
It is apparently possible for one CPU to change the memory domain
of a thread already being executed on another CPU.

All CPUs must ensure they're using the appropriate mapping after a
thread is newly added to a domain.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-18 19:33:59 -04:00
Carlo Caione
ec70b2bc7a aarch64: userspace: Add support for page tables swapping
Introduce the necessary routines to have the user thread stack correctly
mapped and the functions to swap page tables on context switch.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-18 19:33:59 -04:00
Kumar Gala
7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed.  So remove
arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Carles Cufi
59a51f0e09 debug: Clean up thread awareness data sections
There's no need to duplicate the linker section for each architecture.
Instead, move the section declaration to common-rom.ld.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-03-17 14:43:01 -05:00
Daniel Leung
0c540126c0 x86: gen_mmu: unify size display in hex
This unifies all the display of region size in hex.
Some of them are there to aid in figuring out the end of
a memory region so it is easier if they are already in hex.

This also fixes the display of address range where the end
is off by one and should be (base + size - 1).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
c650721a0f x86: ia32: use virtual address for interrupt stack at boot
After the page table is loaded, we should be executing in the virtual
address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
9109fbb1a2 x86: ia32: load GDT in virtual memory after loading page table
This reverts commit d40e8ede8e.

This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andrew Boie
348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate instruction pointer
  transition, with logic to clean this up after completed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
03b413712a x86: gen_mmu: double map physical/virtual memory at top level
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to the next-level page directory table
(PAE=y) or page tables (PAE=n) to identity-map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings, which are only used at boot. Following
patches will add code to actually unmap physical memory
during the boot process, so this avoids wasting some
memory.

Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
dd0748a979 x86: gen_mmu: use constants to refer to page level...
...instead of magic numbers. Makes it a tiny bit easier to
read code.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
b95b8fb075 x86: gen_mmu: allow more verbose messages
This allows specifying a second --verbose on the command line to
enable more messages. Two new ones have been added to aid
in debugging code for mapping and setting permissions on
a single page.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
4bab992e80 x86: gen_mmu: consolidate map() and identity_map()
Consolidate map() and identity_map() as they are mostly
the same.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
e211d3a999 kernel: remove CONFIG_KERNEL_LINK_IN_VIRT
There actually is no need for a separate kconfig here, as
the kernel VM address and SRAM address can be used to figure
out if the kernel is linked in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
273a5e670b x86: remove usage of CONFIG_KERNEL_LINK_IN_VIRT
There is no need to use this kconfig, as the phys-to-virt
offset is enough to figure out if the kernel is linked in
virtual address space in gen_mmu.py.

For code, use Z_VM_KERNEL instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
d39012a590 x86: use Z_MEM_*_ADDR instead of Z_X86_*_ADDR
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Nicolas Pitre
f062490c7e aarch64: mmu: add TLB flushing on mapping changes
Pretty crude for now, as we always invalidate the entire set.
It remains to be seen if more fine-grained TLB flushing is worth
the added complexity, given this ought to be a relatively rare event.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Carlo Caione
a010651c65 aarch64: mmu: Add initial support for memory domains
Introduce the basic support code for memory domains. Each domain
is associated with a top page table which is a copy of the global
kernel one. When a partition is added, the corresponding memory range
is made private before its mapping is adjusted.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre
c77ffebb24 aarch64: mmu: apply proper locking
We need to protect against concurrent modifications to page tables and
their use counts.

It would have been nice to have one lock per domain, but we heavily
share page tables across domains. Hence the global lock.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre
e4cd3d4292 aarch64: mmu: code to split/combine page tables
Two scenarios are possible.

privatize_page_range:

Affected pages are made private if they're not already. This means a
whole new page table branch starting from the top may be allocated,
with content shared with the reference page tables, except for the
private range where content is duplicated.

globalize_page_range:

That's the reverse operation, where pages for the given range are
shared with the reference page tables and pages no longer needed are freed.

When changing a domain mapping the range needs to be privatized first.

When changing a global mapping the range needs to be globalized last.

This way page table sharing across domains is maximized and memory
usage remains optimal.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Nicolas Pitre
402636153d aarch64: mmu: factor out table expansion code
Make the allocation, population and linking of a new table into
a function of its own for easier code reuse.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-16 08:43:19 -04:00
Eugeniy Paltsev
8165f3ad80 ARC: cleanup instruction cache initialization
As of today, during Zephyr startup we
 - invalidate the I$
 - disable the I$
 - enable the I$

Given that we don't need to have the I$ disabled during any
initialization period, and ARC processors have caches enabled
after reset, the I$ disabling/enabling is excessive, so we can
drop it.

This also aligns the I$ initialization on ARC with other
projects like U-Boot and the Linux kernel.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-03-12 18:29:07 -05:00
Watson Zeng
5c3e7e3cb7 arch: arc: remove ARCH_HAS_STACK_PROTECTION for ARC_MPU_VER 2
As we have removed MPU_STACK_GUARD for ARC_MPU_VER 2, we also
need to remove ARCH_HAS_STACK_PROTECTION for boards with
ARC_MPU_VER 2 and no hardware stack checking; related commit:
commit(arch: arc: remove MPU_STACK_GUARD for ARC_MPU_VER 2)
in pull request #24021

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-03-11 08:57:01 -05:00
Daniel Leung
6cac92ad52 x86: remove CONFIG_CPU_MINUTEIA
Since the removal of Quark-based boards, there are no users of
Minute-IA. Also, the generic x86 SoC is not exactly Minute-IA,
so change it to use the fairly safe CPU_ATOM.

Fixes #14442

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-11 06:37:02 -05:00
Peng Fan
b4f5b9e237 aarch64: reset: initialize CNTFRQ_EL0 in the highest EL
Can only be written at the highest Exception level implemented.
For example, if EL3 is the highest implemented Exception level,
CNTFRQ_EL0 can only be written at EL3.

Also move z_arm64_el_highest_plat_init to be called only when running at the highest EL.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-11 12:24:18 +01:00
Carlo Caione
dacd176991 aarch64: userspace: Implement syscalls
This patch adds the code managing the syscalls. The privileged stack
is set up before jumping into the real syscall.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Nicolas Pitre
f2995bcca2 aarch64: arch_buffer_validate() implementation
This leverages the AT (address translation) instruction to test for
a given access permission. The result is then provided in the PAR_EL1
register.

Thanks to @jharris-intel for the suggestion.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
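To illustrate the mechanism described above, here is a rough sketch of probing a single address for EL0 read access with the AT instruction; the function name and surrounding flow are illustrative, not the commit's code (a full arch_buffer_validate() would also walk the buffer page by page and use AT S1E0W for write checks):

    #include <stdbool.h>
    #include <stdint.h>

    /* sketch: test whether user (EL0) code may read the given address */
    static bool user_can_read(uintptr_t addr)
    {
        uint64_t par;

        __asm__ volatile ("at S1E0R, %0" : : "r" (addr)); /* stage-1 EL0 read probe */
        __asm__ volatile ("isb");                         /* make sure PAR_EL1 is updated */
        __asm__ volatile ("mrs %0, PAR_EL1" : "=r" (par));

        return (par & 1UL) == 0UL;  /* PAR_EL1.F == 0 means the translation succeeded */
    }
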
Carlo Caione
9ec1c1a793 aarch64: userspace: Introduce arch_user_string_nlen
Introduce the arch_user_string_nlen() assembly routine and the necessary
C code bits.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione
a7a3e800bf aarch64: fatal: Restrict oops-es when in user-mode
User mode is only allowed to induce oopses and stack check failures via
software-triggered system fatal exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione
6978160427 aarch64: userspace: Introduce arch_is_user_context
The arch_is_user_context() function relies on the content of the
tpidrro_el0 register to determine whether we are in user context or not.

This register is set to '1' when in EL1 and set back to '0' when user
threads are running in userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione
6cf0d000e8 aarch64: userspace: Introduce skeleton code for user-threads
Introduce the first pieces needed to schedule user threads by defining
two different code paths for kernel and user threads.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
Carlo Caione
a7d3d2e0b1 aarch64: fatal: Add arch_syscall_oops hook
Add the arch_syscall_oops hook for the AArch64.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-10 14:52:50 -05:00
James Harris
4e1926d508 arch: aarch64: do EL2 init in EL3 if necessary
If EL2 is implemented but we're skipping EL2, we should still
do EL2 init. Otherwise we end up with a bunch of things still
at their (unknown) reset values.

This in particular causes problems when different
cores have different virtual timer offsets.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-10 06:50:36 -05:00
Carlo Caione
8388794c9b aarch64: Rename z_arm64_get_cpu_id macro
z_arm64_* prefix should not be used for macros. Rename it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Carlo Caione
bdbe33b795 aarch64: Rework {inc,dec}_nest_counter
There are several issues with the current implementation of the
{inc,dec}_nest_counter macros.

The first problem is that it's internally using a call to a misplaced
function called z_arm64_curr_cpu() (for some unknown reason hosted in
irq_manage.c) that could potentially clobber the caller-saved registers
without any notice to the user of the macro.

The second problem is that, being a macro, the clobbered registers
should be specified at the calling site; this is not possible given
the current implementation.

To fix these issues and make the call quicker, this patch rewrites the
code in assembly leveraging the availability of the _curr_cpu array. It
now clobbers only two registers passed from the calling site.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-09 04:52:40 -05:00
Erwan Gouriou
19314514e6 arch/arm: cortex_m: Disable DWT based null-pointer exception detection
Null-pointer exception detection using DWT is currently incompatible
with the current openocd runner default implementation, which leaves
debug mode on by default.
As a consequence, on all targets that use the openocd runner,
null-pointer exception detection using DWT will generate an assert.
As a result, all tests are failing on such platforms.

Disable this until openocd behavior is fixed (#32984) and enable
the MPU based solution for now.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2021-03-08 19:19:14 -05:00
Andy Ross
ae4f7a1a06 arch/xtensa: Remember to spill windows in arch_cohere_stacks()
When we reach this code in interrupt context, our upper GPRs contain a
cross-stack call that may still include some registers from the
interrupted thread.  Those need to go out to memory before we can do
our cache coherence dance here.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross
b28da4a3b7 arch/xtensa: Invalidate bottom of outbound stacks
Both new thread creation and context switch had the same mistake in
cache management: the bottom of the stack (the "unused" region between
the lower memory bound and the live stack pointer) needs to be
invalidated before we switch, because otherwise any dirty lines we
might have left over can get flushed out on top of the same thread on
another CPU that is putting live data there.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross
64cf33952d arch/xtensa: Add non-HAL caching primitives
The Xtensa L1 cache layer has straightforward semantics accessible via
single instructions that operate on cache lines via physical
addresses.  These are very amenable to inlining.

Unfortunately the Xtensa HAL layer requires function calls to do this,
leading to significant code waste at the calling site, an extra frame
on the stack and needless runtime instructions for situations where
the call is over a constant region that could elide the loop.  This is
made even worse because the HAL library is not built with
-ffunction-sections, so pulling in even one of these tiny cache
functions has the effect of importing a 1500-byte object file into the
link!

Add our own tiny cache layer to include/arch/xtensa/cache.h and use
that instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
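As a rough sketch of the kind of inlinable, per-line operation described above (the function name is made up and this is not the header added by the commit; XCHAL_DCACHE_LINESIZE comes from the core's HAL configuration headers):

    #include <stdint.h>
    #include <stddef.h>
    #include <xtensa/config/core-isa.h>  /* XCHAL_DCACHE_LINESIZE */

    /* write back and invalidate every D-cache line covering [addr, addr + size) */
    static inline void dcache_flush_inv(const void *addr, size_t size)
    {
        uintptr_t line = (uintptr_t)addr & ~(uintptr_t)(XCHAL_DCACHE_LINESIZE - 1);
        uintptr_t end = (uintptr_t)addr + size;

        for (; line < end; line += XCHAL_DCACHE_LINESIZE) {
            /* data cache hit-writeback-invalidate on one line */
            __asm__ volatile ("dhwbi %0, 0" :: "r" (line));
        }
    }

Because the loop bounds are often compile-time constants, the compiler can unroll or elide it entirely, which is exactly what the out-of-line HAL calls prevent.
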
Andy Ross
d0c538e9a2 arch/xtensa: Add an arch-internal README on register windows
Back when I started work on this stuff, I had a set of notes on
register windows that slowly evolved into something that looks like
formal documentation.  There really isn't any overview-style
documentation of this stuff on the public internet, so it couldn't
hurt to commit it here for posterity.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross
a230fafde5 arch/xtensa: soc/intel_adsp: Rework MP code entry
Instead of passing the crt1 _start function as the entry code for
auxiliary CPUs, use a tiny assembly stub instead which can avoid the
runtime testing needed to skip the work in _start.  All the crt1 code
was doing was clearing BSS (which must not happen on a second CPU) and
setting the stack pointer (which is wrong on the second CPU).

This allows us to clean out the SMP code in crt1.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross
613594e68c soc/intel_adsp: Use the correct MP stack pointer
The kernel passes the CPU's interrupt stack expecting that it will
start on that, so do it.  Pass the initial stack pointer from the SOC
layer in the variable "z_mp_stack_top" and set it in the assembly
startup before calling z_mp_entry().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross
820c94e5dd arch/xtensa: Inline atomics
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions.  That's needlessly slow, given that the low
level primitives are a two-instruction sequence.  Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.

There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap.  That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).

Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.

This patch also contains a little bit of refactoring: the
scheme has been split out into a separate header for each, and the
ATOMIC_OPERATIONS_CUSTOM kconfig has been renamed to
ATOMIC_OPERATIONS_ARCH to better capture what it means.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
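For reference, a compare-and-swap built on that two-instruction sequence (a WSR to SCOMPARE1 followed by S32C1I) looks roughly like this; the function name and exact operand constraints are illustrative, not the in-tree implementation:

    #include <stdbool.h>

    /* sketch: atomically store 'desired' at *addr iff *addr == expected */
    static inline bool cas32(volatile int *addr, int expected, int desired)
    {
        /* s32c1i both supplies the new value and returns the observed old one */
        int observed = desired;

        __asm__ volatile ("wsr %1, scompare1\n\t"
                          "s32c1i %0, %2, 0"
                          : "+a" (observed)
                          : "a" (expected), "a" (addr)
                          : "memory");

        return observed == expected;
    }

Returning the observed value synchronously is what the old out-of-line atomic_cas got wrong by looping internally, as noted above.
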
Andy Ross
eb1ef50b6b arch/xtensa: General cleanup, remove dead code
There was a bunch of dead historical cruft floating around in the
arch/xtensa tree, left over from older code versions.  It's time to do
a cleanup pass.  This is entirely refactoring and size optimization,
no behavior changes on any in-tree devices should be present.

Among the more notable changes:

+ xtensa_context.h offered an elaborate API to deal with a stack frame
  and context layout that we no longer use.

+ xtensa_rtos.h was entirely dead code

+ xtensa_timer.h was a parallel abstraction layer implementing in the
  architecture layer what we're already doing in our timer driver.

+ The architecture thread structs (_callee_saved and _thread_arch)
  aren't used by current code, and had dead fields that were removed.
  Unfortunately for standards compliance and C++ compatibility it's
  not possible to leave an empty struct here, so they have a single
  byte field.

+ xtensa_api.h was really just some interrupt management inlines used
  by irq.h, so fold that code into the outer header.

+ Remove the stale assembly offsets.  This architecture doesn't use
  that facility.

All told, more than a thousand lines have been removed.  Not bad.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Peng Fan
e27c9c7c52 arch: arm64: select SCHED_IPI_SUPPORTED when SMP enabled
Select SCHED_IPI_SUPPORTED when SMP enabled.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan
a2ea20dd6d arch: arm: aarch64: add SMP support
With timer/gic/cache support added, we can add the SMP support and
bring up the cores.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan
14b9b752be arch: arm: aarch64: add arch_dcache_range
Add arch_dcache_range to support flush and invalidate

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan
e10d9364d0 arch: arm64: irq/switch: accessing nested using _cpu_t
With _kernel_offset_to_nested, we are only able to access the nested
counter of the first CPU. Since we are going to support SMP, we need
to access the nested counter of each CPU.

To get the current CPU, introduce z_arm64_curr_cpu for asm usage,
because arch_curr_cpu cannot be used from asm code.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan
251b1d39ac arch: arm: aarch64: export z_arm64_mmu_init for SMP
Export z_arm64_mmu_init for SMP usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Peng Fan
6182330fc3 arm: core: aarch64: save switch_handle
Save old_thread to switch_handle for wait_for_thread usage

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2021-03-06 07:36:37 -05:00
Ioannis Glaropoulos
191c3088af arm: cortex_m: fix arguments to dwt_init() function
Fix the call to z_arm_dwt_init(), remove the NULL argument.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-05 18:13:22 -06:00
Katsuhiro Suzuki
e58e2767f8 arch: riscv: add common stub reboot function
This patch adds a weak sys_arch_reboot() function to avoid a build
error with CONFIG_REBOOT=y. Some SoCs already have their own reboot
function, but others (e.g. QEMU boards) faced a build error.

- openisa_rv32m1: no change
- riscv-ite: does nothing; remove it and use the arch/riscv function

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-04 11:09:51 -06:00
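A weak no-op stub of this kind would look roughly as follows (the header paths reflect the tree layout of that era and are assumptions, not a quote of the commit):

    #include <kernel.h>
    #include <sys/reboot.h>

    /* Default stub: SoCs with a real reset path override this weak symbol. */
    void __weak sys_arch_reboot(int type)
    {
        ARG_UNUSED(type);
    }
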
Carlo Caione
9d908c78fa aarch64: Rewrite reset code using C
There is no strict reason to use assembly for the reset routine. Move as
much code as possible to C code using the proper helpers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione
bba7abe975 aarch64: Use helpers instead of inline assembly
No need to rely on inline assembly when helpers are available.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Carlo Caione
a2226f5200 aarch64: Fix registers naming in cpu.h
The names for registers and bit-fields in the cpu.h file are incoherent and
messy. Refactor the whole file using the proper suffixes for bits,
shifts and masks.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-04 06:51:48 -05:00
Daniel Leung
90722ad548 x86: gen_idt: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
685e6aa2e4 x86: gen_mmu: fix some pylint issues
Fixes some issues identified by pylint.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
8cfdd91d54 x86: ia32/fatal: be explicit on pointer math with _df_tss.cr3
For some unknown reason, the pagetable address for _df_tss.cr3
did not get translated from virtual to physical. However,
the translation is done if the pointer to the page table is obtained
through a reference to the first array element (instead of simply
through the name of the array). Without CR3 pointing to the page
table via a physical address, double fault handling does not work. So
fix this by being explicit with the page table pointer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
fa6d7cecb5 x86: mmu/mem_domain: don't translate address before null check
When adding a new thread to a memory domain, there is a NULL check
to figure out if a thread is being migrated to another memory
domain. However, the NULL check is AFTER the physical-to-virtual
address translation, which means (NULL + offset) != NULL anymore.
This results in calling reset_region() with an invalid page table
pointer. Fix this by doing the NULL check before the address
translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
ee3d345c09 x86: mmu: reserve more space for page table if linking in virt
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for the page table
to accommodate this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
9ce77abf23 x86: ia32: jump to virtual address before calling z_x86_prep_c
We have been assuming that the physical memory
is identity-mapped to the virtual address space. However, with
the ability to set CONFIG_KERNEL_VM_BASE separately from
CONFIG_SRAM_BASE_ADDRESS, the assumption is no longer valid.
This changes the boot code in x86 32-bit, so that once
the page table is loaded, we can proceed with executing in
the virtual address space. So do a long jump to the virtual
address just before calling z_x86_prep_c. From this point on,
code execution is in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
991300e1ba x86: gen_mmu: also map SRAM if linking in virtual address space
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So identity-map the kernel in SRAM.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
d40e8ede8e x86: gen_gdt: add address translation if needed
When the kernel is mapped into a virtual address space
that is different from the physical address space,
the dynamic GDT generation uses the virtual addresses.
However, the GDT is required at boot, before the
page table is loaded, when the virtual addresses are
invalid. So make sure GDT generation is using
physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
7a51aab397 x86: gen_mmu: add address translation if needed
There is an assumption made in the page table generation code
that the kernel would occupy the same physical and virtual
addresses. However, we may want to map the kernel into
a virtual address space which differs from the kernel's physical
address space. For example, with demand paging enabled on
kernel code and data, we can accommodate a kernel that is
larger than the physical memory size, and may want to utilize
a bigger virtual address space. So add address translation
in the gen_mmu.py script for this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
a1afe9be5e x86: ia32: do virtual address translation at boot
This adds virtual address translation to a few variables
used in crt0.S. This is needed as they are linked at
virtual addresses, but before the page table is loaded
they are not available at their virtual addresses and must be
referenced via physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
bbe4b39f8d x86: mmu: cast to uintptr_t for page table using Z_X86_PHYS_ADDR
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler would complain about an array
subscript out of bounds if linking in virtual address space. So cast
it into uintptr_t first before translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Nicolas Pitre
0c45b548e2 aarch64: rationalize exception entry/exit code
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routine, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.

To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as this is the most logical location to go with its
z_arm64_enter_exc counterpart.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-03 16:26:40 +03:00
Shubham Kulkarni
26efef4f94 arch: xtensa: Fix backtrace from ISR
a0 is used as a scratch register. Restore the value of a0 (the return
address) from the stack frame before spilling registers onto the stack.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-03-03 13:02:57 +01:00
Ioannis Glaropoulos
f1a27a8189 arm: cortex_m: assert if DebugMonitor exc is enabled in debug mode
Assert if the null pointer de-referencing detection (via DWT) is
enabled when the processor is in debug mode, because the debug
monitor exception can not be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core be in normal mode.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
77c76a3b79 arm: cortex_m: build time assert for null-pointer exception page size
We introduce build-time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to verify that the user-supplied value is, as required
by the Kconfig symbol specification, a power of 2.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions,
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.

We also add a run-time ASSERT for the MPU-based
implementation on ARMv8-M platforms, which requires
that the null-pointer exception detection page is
already mapped by the MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
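The power-of-two check itself can be expressed with the usual bit trick; a minimal sketch (the Kconfig symbol name comes from the message above, the helper define and header choice are assumptions):

    #include <toolchain.h>  /* BUILD_ASSERT() */

    #define NULL_PTR_PAGE_SIZE CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE

    /* a power of 2 has exactly one bit set, so x & (x - 1) must be zero */
    BUILD_ASSERT((NULL_PTR_PAGE_SIZE & (NULL_PTR_PAGE_SIZE - 1)) == 0,
                 "the null-pointer detection page size must be a power of 2");
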
Ioannis Glaropoulos
1db78aae73 arm: cortex_m: ensure DebugMonitor is targeting Secure domain
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non Secure domain.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
1b22f6b8c8 arm: cortex_m: enable null-pointer exception detection in the tests
Enable the null-pointer dereferencing detection by default
throughout the test-suite. Explicitly disable this for the
gen_isr_table test which needs to perform vector table reads.
Disable null-pointer exception detection on the qemu_cortex_m3
board, as DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as DWT is not present and the MPU-
based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped, but QEMU still permits
read access.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
d86d2c6f65 arm: cortex_m: implement null pointer exception detection with MPU
Implement the null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming an MPU region to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with a
No-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution can
not be used (ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
66ef96fded arm: cortex_m: add vector table padding for null pointer detection
Padding inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area which we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this is for a corner case,
when we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
0bac92db96 arm: cortex-m: null pointer detection additions for ARMv8-M
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
3054c1351a arm: cortex_m: null-pointer exception detection via DWT
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
f97ccd940c arm: cortex-m: build debug.c for null-pointer detection feature
When we enable the null pointer exception feature (using DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace unit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
c42a8d9d24 arm: cortex_m: fault: hook up debug monitor exception handler
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
  is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
  exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
712a7951db arm: cortex_m: move static inline DWT functions in internal header
Move the DWT utility functions, present in timing.c,
into an internal Cortex-M header.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
b3cd5065eb arm: cortex_m: Kconfig symbols for null pointer detection feature
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Carlo Caione
eb72b2d72a aarch64: smccc: Retrieve up to 8 64-bit values
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from an
SMC64 call from AArch64 state.

Extend the number of possible return values from 4 to 8.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
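Conceptually the change widens the result block from 4 to 8 registers, along these lines (the struct and field names here are placeholders, not the in-tree API):

    /* result of an SMC64/HVC64 call: x0-x3 before, x0-x7 after this change */
    struct smc_result {
        unsigned long a0, a1, a2, a3;
        unsigned long a4, a5, a6, a7;  /* additional return values */
    };
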
Carlo Caione
bc7cb75a82 aarch64: smccc: Use offset macros
Instead of relying on hardcoded offsets in the assembly code, introduce
offset macros to make the code clearer.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione
998856bacb aarch64: smccc: Update specs link
The link points to an outdated version. Update it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione
90859c6bf3 aarch64: smccc: Decouple PSCI from SMCCC
The current code assumes that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be made available regardless of whether PSCI is
used or not.

For example several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc...

This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helpers support and export that to drivers that require it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Nicolas Pitre
443e3f519e arm64: mmu: initialize early
This is fundamental enough that it better be initialized ASAP.
Many other things get initialized soon afterwards assuming the MMU
is already operational.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
9461600c86 aarch64: mmu: rationalize debugging output
Make it into a generic call that can be used in various places.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
b40a2fdb8b aarch64: mmu: fix common MMU mapping
The location of __kernel_ram_start is too far, and the _app_smem and
.bss areas are not covered. Use _image_ram_start instead.

The location of __kernel_ram_end is also way too far. We should stop
at _image_ram_end, where the expected unmapped area starts.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
fb3de16f0c aarch64: mmu: use a range (start..end) for common MMU mapping
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
cb49e4b789 aarch64: mmu: invert the MT_OVERWRITE flag
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.

One such case is platform-provided mappings. Apply them after the
common kernel mappings and use MT_NO_OVERWRITE on them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
56c77118d3 aarch64: mmu: factor out the phys argument out of set_mapping()
Minor cleanup.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
f53bd24a4d aarch64: mmu: move get_region_desc() closer to usage points
Simple code tidiness.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
b696090bb7 aarch64: mmu: make page table pool global
There is no real reason for keeping page tables in separate pools.
Make the pool global, which allows for more efficient memory usage and
simplifies the code.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
459bfed9ea aarch64: mmu: dynamic mapping support
Introduce a remove_map() to ... remove a mapping.

Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
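A minimal sketch of such a use-counted table pool (array sizes, names, and the omitted locking and alignment are all illustrative):

    #include <stdint.h>
    #include <stddef.h>

    #define ENTRIES_PER_TABLE 512U  /* 4 KB tables of 8-byte descriptors */
    #define NUM_TABLES        16U   /* pool size; in practice a configurable value */

    static uint64_t tables[NUM_TABLES][ENTRIES_PER_TABLE];
    static uint16_t use_count[NUM_TABLES];

    static uint64_t *table_alloc(void)
    {
        for (unsigned int i = 0; i < NUM_TABLES; i++) {
            if (use_count[i] == 0U) {
                use_count[i] = 1U;      /* becomes free again when it drops to 0 */
                return tables[i];
            }
        }
        return NULL;  /* pool exhausted */
    }

    static void table_put(uint64_t *table)
    {
        unsigned int i = (unsigned int)(table - &tables[0][0]) / ENTRIES_PER_TABLE;

        if (use_count[i] > 0U) {
            use_count[i]--;  /* deallocated tables are reusable once unreferenced */
        }
    }
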
Nicolas Pitre
861f6ce2c8 aarch64: a few trivial assembly optimizations
Removed some instructions when possible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-25 10:35:37 -05:00
Andy Ross
6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Shih-Wei Teng
8912f549ce arch: riscv: Update the description of CONFIG_PMP_STACK_GUARD_MIN_SIZE
Update the units to bytes instead of words in its description. This
avoids confusion.

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-02-24 10:37:03 -05:00
Yuguo Zou
a8b6936c7d arch: arc: fix mpu version number
The ARC MPU version used a wrong number, 3, which could cause conflicts
in the future. This commit fixes the version number to 4.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-02-24 08:57:35 -05:00
Ioannis Glaropoulos
8289b8c877 arch: arm: cortex_m: fix ASSERT expression in MemManage handler
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we are building without USERSPACE
and STACK GUARD support, in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support; that
is, we only assert if we came across a stacking error.

Data access violations can still occur even without user
mode or guards, e.g. when trying to write to Read-only
memory (such as the code region).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-23 11:29:49 +01:00
Andrei Emeltchenko
377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro to the
correct kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
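For context, the LOCKED() pattern being moved is a for-loop wrapper around a spinlock, roughly of this shape (paraphrased; not guaranteed to match the header verbatim):

    #include <spinlock.h>

    /* execute the following block with lck held, releasing it on exit */
    #define LOCKED(lck) \
        for (k_spinlock_key_t __i = {}, __key = k_spin_lock(lck); \
             !__i.key; \
             k_spin_unlock(lck, __key), __i.key = 1)

Used as LOCKED(&some_lock) { ... } around a critical section.
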
Daniel Leung
2816c17a09 x86: allow linking in virtual address space
This adds the pieces to allow the kernel to be linked
in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
d340afd456 x86: use CONFIG_SRAM_OFFSET instead of CONFIG_X86_KERNEL_OFFSET
This changes x86 to use CONFIG_SRAM_OFFSET instead of
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macro Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into
virtual address space that does not have the same starting
physical address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
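The role of CONFIG_SRAM_OFFSET in the boot-time translation can be sketched as follows (the macro names here are illustrative; the in-tree helpers are the Z_BOOT_VIRT_TO_PHYS()/Z_BOOT_PHYS_TO_VIRT() macros mentioned above):

    /* where the image actually sits in RAM vs. where it is linked */
    #define BOOT_PHYS_START (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET)
    #define BOOT_VIRT_START (CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET)

    #define BOOT_VIRT_TO_PHYS(virt) \
        ((uintptr_t)(virt) - BOOT_VIRT_START + BOOT_PHYS_START)
    #define BOOT_PHYS_TO_VIRT(phys) \
        ((uintptr_t)(phys) - BOOT_PHYS_START + BOOT_VIRT_START)
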
Daniel Leung
c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions instead. This simplifies the code
a bit, and we won't miss any additions to these two functions
(if any) under x86 in the future (as x86_64 was actually not
clearing the gcov bss area).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung
dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung
78837c769a soc: x86: add Lakemont SoC
This adds a very basic SoC configuration for Intel Lakemont SoC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung
9a189da03b x86: add kconfig CONFIG_X86_MEMMAP
This adds a new kconfig to enable the use of a memory map.
This map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y or can be manually defined
via x86_memmap[].

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung
c027494dba x86: add kconfig CONFIG_X86_PC_COMPATIBLE
This is a hidden option to indicate we are building for
PC-compatible devices (where there are BIOS, ACPI, etc.,
which are standard on such devices).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Carlo Caione
3f055058dc aarch64: Remove QEMU 'wfi' issue workaround
The problem is not reproducible when CONFIG_QEMU_ICOUNT=n. We can now
revert the commit aebb9d8a45.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-19 16:26:38 +03:00
Nicolas Pitre
7a91cf0176 Revert "lib/os/heap: introduce option to force big heap mode"
This reverts commit b6b6d39bb6.

With both commit 4690b8d5ec ("libc/minimal: fix malloc() allocated
memory alignment") and commit c822e0abbd ("libc/minimal: fix
realloc() allocated memory alignment") in place, there is no longer
a need for enforcing the big heap mode on every allocations.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-19 07:32:22 -05:00
Martin Åberg
88f478108d sparc: write through switched_from in arch_switch()
Write through switched_from in arch_switch() as required by the
switch protocol.

Also restructure the implementation to better match the template in
kernel_arch_interface.h, by removing a wrapper routine and instead
using CONTAINER_OF().

Fixes #32197

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-02-17 06:35:03 -05:00
Carlo Caione
fadbe9d2f2 arch: aarch64: Add XIP support
Add the missing pieces to enable XIP for AArch64. XIP can be
simulated with QEMU using the '-bios' parameter.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-17 14:13:10 +03:00
Daniel Leung
32b70bb7b5 x86: multiboot: map memory before accessing if necessary
Before accessing the multiboot data passed by the bootloader,
we need to map the memory first. This adds the code to map
the memory if necessary.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-16 19:08:55 -05:00
Tomasz Bursztyka
5e4e0298e9 arch/x86: Generalize cache manipulation functions
We assume that all x86 CPUs do have the clflush instruction,
and the cache line size is now provided through DTS.

So detecting the clflush instruction as well as the cache line size is
no longer required at runtime, and this detection is thus removed.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-02-15 09:43:30 -05:00
Daniel Leung
5c649921de x86: add kconfigs and compiler flags for MMX and SSE*
This adds kconfigs and compiler flags to support MMX and SSE*
instructions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung
ce44048d46 x86: rename CONFIG_SSE* to CONFIG_X86_SSE*
This adds X86 keyword to the kconfigs to indicate these are
for x86. The old options are still there marked as
deprecated.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Daniel Leung
23a9a3234b x86: correct compiler flags for SSE
It is possible to enable SSE without using SSE for floating
point, so fix the compiler flags.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
Carlo Caione
b27bca4b45 aarch64: mmu: Remove SRAM memory region
Now that arch_mem_map() is actually working correctly, we can remove
the big SRAM region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-15 08:07:55 -05:00
Andy Ross
746c65acb7 soc/intel_adsp: Move KERNEL_COHERENCE to cavs15
Only the CAVS 1.5 linker script has full support for the coherence
features, so don't advertise it on the other SoCs yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Anas Nashif
5d1c535fc8 license: add missing SPDX headers
Add SPDX header to files with existing license.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif
1cea902fad license: add missing SPDX headers
Add missing SPDX header.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Anas Nashif
67d290540e xtensa: remove unused script
While fixing license headers, this script was identified as an orphan
that is not used anywhere, so remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-11 08:05:16 -05:00
Carlo Caione
d6316aae27 aarch64: Fix corrupted IRQ state when tracing enabled
The call to sys_trace_idle() is potentially clobbering x0, resulting in a
wrong value being used by the following code. Save and restore x0 before
and after the call to sys_trace_idle() to avoid any issue.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Suggested-by: James Harris <james.harris@intel.com>
2021-02-10 10:16:03 -05:00
Daniel Leung
4daa2cb6cf x86: mark page frame as reserved according to memory map
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping, so mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions, which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot provided data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 11:42:28 -05:00
Ioannis Glaropoulos
8bc242ebb5 arm: cortex-m: add extra stack size for test build with FPU_SHARING
Additional stack is required for tests when building with FPU_SHARING
enabled, because the option may increase the ESF
stacking requirements for threads.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-05 11:41:25 -05:00
Daniel Leung
5a11caba33 xtensa: fix rsr/wsr assembly for XCC
XCC doesn't like the "rsr.<reg name>" style assembly,
so fix that to use the other style.

Also, XCC doesn't like _CONCAT() with the EPC/EPS
registers, so we need to spell them all out.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-05 07:45:07 -05:00
Daniel Leung
92c93b1b7f xtensa: fix hard-coded interrupt value for PS register
There is a hard-coded value of PS_INTLEVEL(15) to set the PS
register. The correct way is actually to use XCHAL_EXCM_LEVEL
with PS_INTLEVEL() to set up the register. So fix it.

Fixes #31858

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-04 20:58:56 -05:00
Andy Ross
cce5ff1510 arch/x86: Fix stack alignment for user threads
The x86_64 SysV ABI requires 16 byte alignment for the stack pointer
during execution of normal code.  That means that on entry to an
ABI-compatible C function (which is reached via a CALL instruction
that pushes the return address) the RSP register must be MISaligned by
exactly 8 bytes.  The kernel mode thread setup got this right, but we
missed the equivalent condition in userspace entry.

The end result was a misaligned stack, which is surprisingly robust
for most use.  But recent toolchains have started doing some more
elaborate vectorization, and the resulting SSE instructions started
failing in userspace on the misaligned loads.

Note that there's a comment about optimization: we're doing the stack
alignment in the "wrong place" and are needlessly wasting bytes in
some cases.  We should see the raw stack boundaries where we are
setting up RSP values.  Add a FIXME to this effect, but don't touch
anything as this patch is a targeted bugfix.

Also fix a somewhat embarrassing 32-bit-ism that would have truncated
the address of a userspace stack that we tried to put above 4G.

Fixes #31018

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-03 18:45:48 -05:00
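The alignment rule being fixed can be sketched as follows (stack_top and the helper are placeholders, not the actual thread-setup code):

    #include <stdint.h>

    static uintptr_t align_initial_user_rsp(uintptr_t stack_top)
    {
        uintptr_t rsp = stack_top & ~(uintptr_t)15;  /* 16-byte aligned per the SysV ABI */

        /* mimic the CALL push: on function entry RSP must be misaligned by exactly 8 */
        return rsp - 8;
    }
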
Ioannis Glaropoulos
ef926e714b arm: cortex_m: fix vector table relocation in non-XIP builds
When VTOR is implemented on the Cortex-M SoC, we can
basically use any address (properly aligned) for the
vector table starting address. We fix the setting of
VTOR in prep_c.c for non-XIP images, in this commit,
so we do not always need the vector table to be
present at the start of RAM (CONFIG_SRAM_BASE_ADDRESS),
and allow extra linker sections to be placed before
the vector table section.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 10:44:17 -05:00
Ioannis Glaropoulos
73288490f6 arm: cortex_m: log EXC_RETURN value in fatal.c
If CONFIG_EXTRA_EXCEPTION_INFO is enabled, log
the value of EXC_RETURN in the fault handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos
cafe04558c arm: cortex_m: make lazy FP stacking enabling dynamic
Under FPU sharing mode, any thread is allowed to generate
a Floating Point context (use FP registers in FP instructions),
regardless of whether threads are pre-tagged with K_FP_REGS
option when they are created.

When building with MPU stack guard feature enabled,
a large MPU stack guard is required to catch stack
overflows, if lazy FP stacking is enabled. When lazy
FP stacking is not enabled, a default 32 byte guard is
sufficient.

If lazy stacking is enabled by default, all threads may
potentially generate FP context, so they would need to
program a large MPU guard, carved out of their reserved
stack memory.

To avoid this memory waste, we modify the behavior, and make
lazy stacking a dynamically enabled feature, implemented as
follows:
- threads that are not pre-tagged with K_FP_REGS, and have
not generated an FP context use a default MPU guard and disable
lazy stacking. As long as the threads do not have an active FP
context, they won't stack FP registers, anyway, on ISRs and
exceptions, while they will benefit from reserving a small
MPU guard size
- as soon as a thread starts using FP registers, ISRs might
temporarily experience some increased latency due to lazy
stacking being disabled. This will be the case until the next
context switch, where the threads that have an active FP context
will be tagged with K_FP_REGS, enable lazy stacking, and
program a wide MPU guard.

The implementation is a tradeoff between performance (ISR
latency) and memory consumption.

Note that when MPU STACK GUARD feature is not enabled, lazy
FP stacking is always activated.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
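In CMSIS terms, the dynamic part of this policy boils down to toggling FPCCR.LSPEN per thread; a rough sketch (the register and bit names are CMSIS, the surrounding function and logic are illustrative, not the commit's code):

    #include <kernel.h>
    /* assumes the CMSIS core header for the target is already included */

    static void configure_lazy_stacking(const struct k_thread *thread)
    {
        if ((thread->base.user_options & K_FP_REGS) != 0U) {
            /* thread has an FP context: lazy stacking on, wide MPU guard needed */
            FPU->FPCCR |= FPU_FPCCR_LSPEN_Msk;
        } else {
            /* no FP context yet: lazy stacking off, the default 32-byte guard suffices */
            FPU->FPCCR &= ~FPU_FPCCR_LSPEN_Msk;
        }
    }
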
Ioannis Glaropoulos
86c1b57103 arm: cortex_m: select by default FP sharing mode when using the FPU
For applications that make use of the FPU in Cortex-M,
we enforce the FPU sharing registers mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create an FP context in any thread,
so the unshared registers mode is not practically
supported.

In addition to that we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside the normal multi-threaded builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos
2642eb28bf arm: cortex_m: force FP context stacking by default
For the standard multi-threading builds, we will
enforce FP context stacking only when FPU_SHARING
is set. For the single-threading use case we enable
context stacking by default.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Ioannis Glaropoulos
56dd787627 arm: cortex_m: skip clearing CONTROL if this is done at boot
If clearing of the CONTROL register is done in reset.S, we can skip
clearing the FPCA bit when enabling the floating point
support, to save a few instructions. The CONTROL
register is cleared right after boot, if the symbol
CONFIG_INIT_ARCH_HW_AT_BOOT is enabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Daniel Leung
92c12d1f82 toolchain: add GEN_ABSOLUTE_SYM_KCONFIG()
This adds a new GEN_ABSOLUTE_SYM_KCONFIG() specifically for
generating absolute symbols in assembly for kconfig values.
This is needed as the existing GEN_ABSOLUTE_SYM() with
constraints in extended assembly parses the "value" as
a signed 32-bit integer. An unsigned 32-bit integer with the
MSB set results in a negative number in the final binary.
This also prevents integers larger than 32 bits. So this
new macro simply puts the value inline within the assembly
instruction instead of having it as a parameter.

Fixes #31562

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-02 09:23:45 -05:00
Daniel Leung
dea8fccfb3 x86: clear GS at boot for x86_64
On Intel processors, if GS is not zero and is being set to
zero, GS_BASE is also set to zero. This would interfere
with the actual use of GS_BASE for userspace. To avoid accidentally
clearing GS_BASE, simply set GS to 0 at boot, so any subsequent
clearing of GS will not clear GS_BASE.

The clearing of GS_BASE was discovered while trying to figure out
why the mem_protect test would hang within 10-20 repeated runs.
GDB revealed that both GS and GS_BASE were set to zero when the tests
hung. After setting GS to zero at boot, the mem_protect tests
ran repeatedly for 5,000+ runs without hanging.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-01 21:38:28 -05:00
Nicolas Pitre
7fcf5519d0 aarch64: mmu: cleanups and fixes
Major changes:

- move related functions together
- optimize add_map() not to walk the page tables *twice* on
  every loop
- properly handle leftover size when a range is already mapped
- don't overwrite existing mappings by default
- return an error when the mapping fails

and make the code clearer overall.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-28 20:24:30 -05:00
Daniel Leung
0d099bdd54 linker: remove asterisk from IRQ/ISR section name macro
Both _IRQ_VECTOR_TABLE_SECTION_NAME and _SW_ISR_TABLE_SECTION_NAME
are defined with an asterisk at the end in an attempt to include
all related symbols in the linker script. However, these two
macros are also being used in the source code to specify
the destination sections for variables. Asterisks in the name
result in older GCC (4.x) complaining about those asterisks.
So create new macros for use in linker scripts, and keep
the names asterisk-free.

Fixes #29936

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-26 16:24:11 -05:00
Andrew Boie
6b58e2c0a3 x86: use large VM size if ACPI
We've already enabled full RAM mapping if ACPI is enabled; also
set a large 3GB address space size. These systems are not RAM-
constrained (they are PC platforms) and they have large MMIO
config spaces for PCIe.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-26 16:21:50 -05:00
Stephanos Ioannidis
f769a03081 arch: arm: aarch32: Fix interrupt nesting
In the current interrupt nesting implementation, if an ISR is
interrupted while executing inside a branch, the lr_svc register will
be corrupted, and the branch of the interrupted ISR will exit to the
return address of the final branch of the interrupting ISR, which may
or may not correspond to the intended return address.

This commit fixes the aforementioned bug by storing the lr_svc register
on the stack at ISR entry, and restoring its value before exiting
the ISR.

For more details, refer to the issue #30517.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis
c00169daba arch: arm: aarch32: Fix exception exit failures
This commit fixes the following bugs in the AArch32 z_arm_exc_exit
routine:

1. Invalid return address when calling `z_arm_pendsv` from the
   exception-specific mode

2. Caller-saved register is referenced after a call to `z_arm_pendsv`

For more details, refer to the issue #31511.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Stephanos Ioannidis
d86fdb2154 arch: arm: aarch32: Update stale references to _IntExit
This commit updates the stale references to the `_IntExit` function in
the in-line documentation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-01-26 06:17:15 -05:00
Volodymyr Babchuk
cd86ec2655 aarch64: add ability to generate image header
The image header is compatible with the Linux AArch64 boot protocol,
so Zephyr can be booted with U-Boot or the Xen loader.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
2021-01-24 13:59:55 -05:00
Yonatan Schachter
7191b64c6f gen_isr_tables: Added check of the IRQ num before accessing the vt
In its current state, the script tries to access the vector table
list without first checking that the index is valid. This can
cause the script to crash without a descriptive message.
The index can be invalid if an IRQ number larger than
the maximum number allowed by the SoC is used.
This PR adds a check of that index and exits with an error
message if the index is invalid.

Fixes #29809

Signed-off-by: Yonatan Schachter <yonatan.schachter@gmail.com>
2021-01-24 10:12:54 -05:00
Martin Åberg
b6b6d39bb6 lib/os/heap: introduce option to force big heap mode
This option allows forcing big heap mode. Useful for getting 8-byte
aligned blocks on 32-bit machines.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-24 10:11:11 -05:00
Andrew Boie
77861037d9 x86: map all RAM if ACPI
ACPI tables can lurk anywhere. Map all memory so they can be
read.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
ed22064e27 x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
56a9e7b91e arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
b0b7756756 x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
5c47bbc501 x86: only map the kernel image
The policy is changed and we no longer map all page frames.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
893822fbda arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
f3e9b61a91 x86: reserve the first megabyte
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
69355d13a8 arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Shubham Kulkarni
8b7da334d5 arch: xtensa: Print backtrace from panic handler
This change uses the stack frame to print a backtrace once an exception
occurs. Printing a backtrace helps to identify the cause of the exception.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-01-23 08:43:10 -05:00
Kumar Gala
895277f909 x86: Fix zefi.py creating valid images
When zefi.py was changed to pass the compiler and objcopy, the flag
selecting the EFI target for objcopy was dropped.  This is because the
current SDK (0.12.1) doesn't support that target type for objcopy.
However, the target is necessary for the images to be created correctly
and to boot.

Switch back to using the host objcopy as a stop-gap fix, until the SDK
can support the target for EFI.

Fixes: #31517

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-01-22 12:41:27 -05:00
Daniel Leung
4e8abfcba7 x86: use TSC for timing information
This changes the timing functions to use the TSC to gather
timing information instead of the timer used for
scheduling, as the TSC provides higher resolution for timing
information.
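
A minimal sketch of how the TSC is typically read on x86 (general
pattern only, not necessarily the exact helper used by the port):

#include <stdint.h>

static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;

    /* RDTSC returns the 64-bit timestamp counter in EDX:EAX */
    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}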

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-22 11:05:30 -05:00
Anas Nashif
6f61663695 Revert "arch: add KERNEL_VM_OFFSET"
This reverts commit fd2434edbd.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
4422b1d376 Revert "x86: reserve the first megabyte"
This reverts commit 51e3c9efa5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
34e9c09330 Revert "arch: remove KERNEL_RAM_SIZE"
This reverts commit 73561be500.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
83d15d96e3 Revert "x86: only map the kernel image"
This reverts commit 3660040e22.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
e980848ba7 Revert "x86: pre-allocate address space"
This reverts commit 64f05d443a.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
0f24e09bcf Revert "arch: add CONFIG_DEMAND_PAGING"
This reverts commit 48cc63b4a3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
adff757c72 Revert "x86: implement demand paging APIs"
This reverts commit 7711c9a82d.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Daniel Leung
d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Andrew Boie
7711c9a82d x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
48cc63b4a3 arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
64f05d443a x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
3660040e22 x86: only map the kernel image
The policy is changed and we no longer map all page frames.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
73561be500 arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
51e3c9efa5 x86: reserve the first megabyte
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really to just prevent QEMU from crashing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
9d2ebfff58 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
fd2434edbd arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Daniel Leung
ff720cd9b3 x86: enable soft float support for Zephyr SDK
This adds the correct compiler and linker flags to
support software floating point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).

Note that software floating point needs to be turned
on for Newlib. This is due to Newlib having floating
point numbers in its various printf() functions, which
results in floating point instructions being emitted
by the toolchain. These instructions are placed very
early in the functions, which results in them being
executed even though the format string contains
no floating point conversions. Without using CONFIG_FPU
to enable hardware floating point support, any calls to
printf()-like functions will result in exceptions
complaining that the FPU is not available. Forcing
CONFIG_FPU=y with Newlib is an option, but because
the OS doesn't know which threads would call these
printf() functions, Zephyr would have to assume all
threads use the FPU, incurring a performance penalty as
every context switch would then need to save FPU registers.
A compromise here is to use soft float instead. Newlib
with soft float enabled does not emit floating point
instructions and yet can still support its printf()-like
functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-20 16:45:31 -05:00
Anas Nashif
8b3d9e7285 sparc: do not set cmsis api Kconfigs
Those should be set by applications, not architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-20 16:45:31 -05:00
Spoorthy Priya Yerabolu
513b9f598f zefi.py: Use cross compiler while building zephyr
Currently, zefi.py uses the host GCC and OBJCOPY by
default. Fix the script to use CMAKE_C_COMPILER
and CMAKE_OBJCOPY.

Fixes: #27047

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-01-20 16:38:12 -05:00
Carlo Caione
dc4cc5565b x86: cache: Use new cache APIs
Add a helper to correctly use the new cache APIs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
42386e48b3 arc: cache: Use new cache APIs
Add a helper to correctly use the new cache APIs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
e77c841023 cache: Expand the APIs for cache flushing
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().

This is quite restrictive because for some architectures we also want to
control the i-cache, and in general we want finer control over what can be
flushed, invalidated or cleaned. To address these needs this patch
expands the set of operations that can be performed on data and
instruction caches, adding hooks for the operations on the whole cache,
a specific level or a specific address range.
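
A hypothetical sketch of what such a finer-grained set of hooks can look
like; the enum and function names below are illustrative assumptions,
not the exact API introduced by this patch:

#include <stddef.h>

/* operations applied to a cache */
enum cache_op {
    CACHE_OP_CLEAN,            /* write back dirty lines */
    CACHE_OP_INVALIDATE,       /* drop lines */
    CACHE_OP_CLEAN_INVALIDATE, /* write back, then drop */
};

/* hooks on the whole cache, a specific level, or an address range */
int arch_dcache_all(enum cache_op op);
int arch_dcache_level(int level, enum cache_op op);
int arch_dcache_range(void *addr, size_t size, enum cache_op op);

int arch_icache_all(enum cache_op op);
int arch_icache_range(void *addr, size_t size, enum cache_op op);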

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
20f59c8f1e cache: Rename CACHE_FLUSHING to CACHE_MANAGEMENT
The new APIs are not only dealing with cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
923b3be890 kconfig: Unify CACHE_* options
The kconfig options to configure the cache flushing framework are
currently living in the arch-specific kconfigs of ARC and X86 (32-bit)
architectures even though these are defining the same things.

Move the common symbols in one place accessible by all the architectures
and create a menu for those.

Leave the default values in the arch-specific locations.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
57f7e31017 drivers: PSCI: Add driver and subsystem
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.

It is needed for virtualization: it is used to coordinate OSes and
hypervisors, and it provides the functions used for SMP bring-up such as
CPU_ON and CPU_OFF.

A new PSCI driver is introduced to setup a proper subsystem used to
communicate with the PSCI firmware, implementing the basic operations:
get_version, cpu_on, cpu_off and affinity_info.

The current implementation only supports PSCI 0.2 and PSCI 1.0

The PSCI conduit (SMC or HVC) is setup reading the corresponding
property in the DTS node.
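
To illustrate how such a call travels through the SMC conduit, here is a
minimal sketch; the function IDs are the standard PSCI 0.2 values, while
the wrapper itself is an assumption and not the driver's actual code:

#include <stdint.h>

#define PSCI_FN_PSCI_VERSION 0x84000000UL /* PSCI_VERSION (32-bit calls) */
#define PSCI_FN64_CPU_ON     0xC4000003UL /* CPU_ON (64-bit calls) */

/* issue an SMC with up to three arguments; the result comes back in x0.
 * A full implementation would also clobber the other SMCCC scratch regs.
 */
static inline unsigned long psci_smc(unsigned long fid, unsigned long a1,
                                     unsigned long a2, unsigned long a3)
{
    register unsigned long x0 __asm__("x0") = fid;
    register unsigned long x1 __asm__("x1") = a1;
    register unsigned long x2 __asm__("x2") = a2;
    register unsigned long x3 __asm__("x3") = a3;

    __asm__ volatile("smc #0"
                     : "+r"(x0)
                     : "r"(x1), "r"(x2), "r"(x3)
                     : "memory");
    return x0;
}

/* e.g. power up a secondary core at 'entry' with a context id of 0 */
static inline unsigned long psci_cpu_on(unsigned long mpidr,
                                        unsigned long entry)
{
    return psci_smc(PSCI_FN64_CPU_ON, mpidr, entry, 0);
}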

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-18 19:06:53 +01:00
Martin Åberg
9d1dd9a760 arch/riscv: boost default stacks
Increase the stacks required for the RISC-V 64-bit CI to pass. Most of
these were caught by the kernel stack sentinel.

The CMSIS stacks are for programs in samples/portability.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-15 13:06:33 -05:00
Martin Åberg
d2e9568543 riscv: make COMPRESSED_ISA independent of FLOAT_HARD
The CONFIG_FLOAT_HARD config previously enabled the C (compressed)
ISA extensions (CONFIG_COMPRESSED_ISA). This commit removes that
dependency.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-15 13:06:33 -05:00
Martin Åberg
a0a875026b arch/riscv: semantic ESF printer
Present ESF registers as columns of argument registers and temporary
registers. Also print 64-bit values in full width.

It now looks like this:

E:  mcause: 2, Illegal instruction
E:      a0: 00000000    t0: 12345678
E:      a1: 00000000    t1: 00000000
E:      a2: 00000000    t2: 00000000
E:      a3: 00000000    t3: 0badc0de
E:      a4: 00000000    t4: 00000000
E:      a5: 8000733c    t5: deadbeef
E:      a6: 00000000    t6: 00000000
E:      a7: 00000000
E:                      tp: 00000000
E:      ra: 80000d9a    gp: 00000000
E:    mepc: 8000733c
E: mstatus: 00001880

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-01-15 13:06:33 -05:00
Johan Hedberg
c88dae574c x86: early_serial: Suppress output attempts prior to init
Until now, any attempt to call printk prior to early serial init has
caused page faults due to the device not being mapped yet. Add a static
variable to track the pre-init status, and instead of page faulting
just suppress the characters and log a warning right after init to
give an indication that output characters have been lost.
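
A sketch of the guarding pattern described above (variable and function
names are illustrative, not the actual driver code):

#include <stdbool.h>
#include <sys/printk.h>

static bool early_serial_ready;  /* set once the UART is mapped */
static unsigned int lost_chars;  /* characters dropped before init */

static void serial_putc(unsigned char c)
{
    if (!early_serial_ready) {
        /* device not mapped yet: drop the character instead of faulting */
        lost_chars++;
        return;
    }
    /* ... write 'c' to the UART data register ... */
}

static void early_serial_init(void)
{
    /* ... map and configure the device ... */
    early_serial_ready = true;

    if (lost_chars > 0) {
        printk("WARNING: %u characters lost before early serial init\n",
               lost_chars);
    }
}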

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2021-01-15 11:01:23 -05:00
Carlo Caione
c5b898743a aarch64: Fix alignment fault on z_bss_zero()
Using newlibc with AArch64 is causing an alignment fault in
z_bss_zero() when the code is run on real hardware (on QEMU the problem
is not reproducible).

The main cause is that the memset() function exported by newlibc is
using 'DC ZVA' to zero out memory.

While this is often a nice optimization, this is causing the issue on
AArch64 because memset() is being used before the MMU is enabled, and
when the MMU is disabled all data accesses will be treated as
Device_nGnRnE.

This is a problem because quoting from the ARM reference manual: "If the
memory region being zeroed is any type of Device memory, then DC ZVA
generates an Alignment fault which is prioritized in the same way as
other alignment faults that are determined by the memory type".

newlibc tries to be a bit smart about this, reading the DCZID_EL0
register before deciding whether to use 'DC ZVA' or not. While this is a
good idea for code running in EL0, currently the Zephyr kernel is
running in EL1. This means that the value of the DCZID_EL0 register is
actually retrieved from the HCR_EL2.TDZ bit, that is always 0 because
EL2 is not currently supported / enabled. So the 'DC ZVA' instruction is
unconditionally used in the newlibc memset() implementation.

The "standard" solution for this case is usually to use a different
memset routine to be specifically used for two cases: (1) against IO
memory or (2) against normal memory but with MMU disabled (which means
all memory is considered device memory for data accesses).

To fix this issue in Zephyr we avoid calling memset() when clearing the
bss, and instead we use a simple loop to zero the memory region.
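
A minimal sketch of the shape of the fix, assuming the usual
linker-provided __bss_start/__bss_end symbols; plain byte stores are
safe with the MMU off, unlike memset()'s DC ZVA path:

#include <stdint.h>

extern uint8_t __bss_start[];
extern uint8_t __bss_end[];

static void bss_zero(void)
{
    /* volatile stops the compiler from turning the loop back into memset() */
    for (volatile uint8_t *p = __bss_start; p < __bss_end; p++) {
        *p = 0U;
    }
}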

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-14 13:37:47 -08:00
Ioannis Glaropoulos
c6c14724ba arch: arm: cortex_m: fix stack overflow error detection
In rare cases when a thread may overflow its stack, the
core will not report a Stacking Error. This is the case
when a large stack array is created, making the PSP cross
beyond the stack guard; in this case a MemManage fault
won't cause a stacking error (but only a Data Access
Violation error). We fix the fault handling logic so
such errors are reported as stack overflows and not as
generic CPU exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-01-14 12:35:47 +01:00
Ioannis Glaropoulos
202c2fde54 arch: arm: cortex_m: do not read MMFAR if MMARVALID is not set
When the MMARVALID bit is not set, do not read the MMFAR
register to get the fault address in a MemManage fault.
This change prevents the fault handler from erroneously
assuming that MMFAR contains a valid address.
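
The check boils down to something like the following, using the CMSIS
register names (a sketch, not the exact fault handler code; it assumes
the CMSIS core header is already included):

#include <stdbool.h>
#include <stdint.h>

/* returns true and reports the faulting address only when MMFAR is valid */
static bool mmfault_address_get(uint32_t *addr)
{
    if ((SCB->CFSR & SCB_CFSR_MMARVALID_Msk) == 0U) {
        /* MMFAR contents are stale or undefined: do not use them */
        return false;
    }
    *addr = SCB->MMFAR;
    return true;
}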

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-01-14 12:35:47 +01:00
Guennadi Liakhovetski
ca0e5df219 xtensa: don't build and run the reset handler twice
Currently Zephyr links reset-vector.S twice in xtensa builds:
into the bootloader and the main image. It is run at the end
of the boot loader execution and immediately after that again
in the beginning of the main code. This patch adds a
configuration option to select whether to link the file to the
bootloader or to the application. The default is the
application, as needed e.g. for QEMU; SOF links it to the
bootloader, as in native builds.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-01-13 18:17:40 -05:00
Julien Massot
d3345dd54d arch: arm: Add Cortex-R7 support
Pass the correct -mcpu flags to the compiler when building for the
Cortex-R7.

Signed-off-by: Julien Massot <julien.massot@iot.bzh>
2021-01-13 15:04:43 +01:00
Carlo Caione
e710d36f77 aarch64: mmu: Enable CONFIG_MMU
Enable CONFIG_MMU for AArch64 and add the new arch_mem_map() required
function.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione
6a3401d6be aarch64: mmu: Fix variable types
Before hooking up the MMU driver code to the Zephyr MMU core code it's
better to match the expected variable types of the two parts.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione
0a0061d901 aarch64: mmu: Do not assume a single set of pagetables is used
The MMU code is currently assuming that Zephyr only uses one single set
of page tables shared by kernel and user threads. This may no longer
be true in the future, when multiple sets of page tables can be
present and swapped at run-time.

With this patch a new arm_mmu_ptables struct is introduced that is used
to host a buffer pointing to the memory region containing the page
tables and the helper variables used to manage the page tables. This new
struct is then used by the ARM64 MMU code instead of assuming that the
kernel page tables are the only ones present.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione
1a4a73da97 aarch64: mmu: Makes memory mapping functions more generic
The ARM64 MMU code used to create the page tables is strictly tied to
the custom arm_mmu_region struct. To be able to hook up this code to the
Zephyr MMU APIs we need to make it more generic.

This patch makes the mapping function more generic and creates a new
helper function add_arm_mmu_region() to map the regions defined by the
old arm_mmu_region structs using this new generic function.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Carlo Caione
2581009c3e aarch64: mmu: Move xlat tables to one single array
In the current code the base xlat table is a standalone array. This is
done because we know at compile time the size of this table so we can
allocate the correct size and save a bit of memory. All the other xlat
tables are statically allocated in a different array with full size.

With this patch we move all the page tables in one single array,
including the base table. This is probably going to waste a bit of space
but it makes it easier to:

- have all the page tables mapped in one single contiguous memory region
  instead of having to take care of two different arrays in two
  different locations
- duplicate the page tables more quickly if we need to
- use a pre-allocated space to host the page tables
- use a pre-computed set of page tables saved in a contiguous memory
  region

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-12 06:51:09 -05:00
Flavio Ceolin
6bf34a6258 arc: power: Remove dead code
Removing dead code to handle deep sleep. This option is never enabled
and it is broken.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-08 06:49:43 -05:00
Flavio Ceolin
47e0621bb7 power: Remove not used build option
There is no usage of BOOTLOADER_CONTEXT_RESTORE since quark support
was removed.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-08 06:49:43 -05:00
Flavio Ceolin
28cf88183a x86: power: Remove dead code
X86 currently has no support for deep sleep states, just removing this
dead code.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-08 06:49:43 -05:00
Carlo Caione
1d68c48786 aarch64: mmu: Fix typo in mask definition
Fix typo in mask definition (s/UPPER/LOWER)

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-06 08:18:27 -06:00
Daniel Leung
0d7bdbc876 xtensa: use highest available EPC/EPS regs in restore context
There may be Xtensa SoCs which don't have high enough interrupt
levels for EPC6/EPS6 to exist in _restore_context. So change
these to ones which should be available according to the ISA
config file.

Fixes #30126

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-05 10:31:45 -08:00
Carlo Caione
181088600e aarch64: mmu: Avoid creating a new table when not needed
In the current MMU code a new table is created when mapping a memory
region that overlaps with a block already mapped. The problem is
that the new table is also created when the new and old mappings have
the same attributes.

To avoid using a new table when not needed the attributes of the two
mappings are compared before creating the new table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-04 23:53:04 -08:00
Daniel Leung
0aa61793de x86: remove custom switch to main thread function
The original idea of using a custom switch to main thread
function is to make sure the buffer used to save floating point
registers is aligned correctly, or else an exception would be
raised when saving/restoring those registers. Since
the struct of the buffer is defined with an alignment hint
to the toolchain, the alignment will be enforced by the toolchain
as long as the k_thread struct variable is a dedicated,
declared variable. So there is no need for the custom
switch to main thread function anymore.

This also allows the stack usage calculation of
the interrupt stack to function properly as the end of
the interrupt stack is not being used for the dummy
thread anymore.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-04 16:59:59 -08:00
Eugeniy Paltsev
14afc02caa isr_tables: adopt _irq_vector_table for using on 64bit architectures
As of today the generic _irq_vector_table is used only on 32-bit
architectures; 64-bit architectures have their own implementation.
Make the vector size adjustable by using uintptr_t instead of uint32_t
for vectors.

The ARCv3 64 bit HS6x processors are going to be first users for
that.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2021-01-04 16:47:51 -08:00
Watson Zeng
b1adc462be arch: arc: archs using ATOMIC_OPERATIONS_C
ATOMIC_OPERATIONS_BUILTIN still has some problems in the MWDT toolchain,
so choose ATOMIC_OPERATIONS_C instead.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-01-04 12:54:05 -05:00
Wojciech Sipak
56c06e852b arch: arm: cortex_r: disable ECC on TCMs
This commit adds the possibility to disable ECC in the Tightly Coupled
Memory in Cortex-R.
The linker script places stacks in this memory and marks it as a
.noinit section. With ECC enabled, stack read accesses without a
previous write result in a Data Abort exception.

Signed-off-by: Wojciech Sipak <wsipak@antmicro.com>
2020-12-27 18:16:00 +01:00
Watson Zeng
2609101eda arc: stack guard: bug fix with multi push stack situation
Accessing the stack below guard_end is always a bug. Some
instructions (like enter_s {r13-r26, fp, blink}) push a collection
of registers onto the stack. In this situation, the fault_addr will
be less than guard_end, but sp will be greater than guard_end.

|------stack base-------| <--- high address
|                       |
|                       | <--- sp
|------stack top--------|
|------guard_end--------|
|                       | <--- fault_addr
|                       |
|------guard_start------| <--- low address

So we need to remove the SP check. The trade-off here is whether we
prefer 'false' classifications of MPU stack guard area accesses as a
stack error or as a general MPU error. The faults get caught anyway;
this is just about classification: there is no strong need for the extra
check to only report stack pointer accesses to the guard area as stack
errors, instead of all accesses.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-12-20 13:01:22 -05:00
Kumar Gala
63d0a109c0 arch/x86: Convert DEVICE_AND_API_INIT to DEVICE_DEFINE
Convert device to DEVICE_DEFINE instead of DEVICE_AND_API_INIT
so we can deprecate DEVICE_AND_API_INIT in the future.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-12-20 10:19:30 -06:00
Johan Hedberg
8b01efe8e6 arch: x86: Fix early_serial build error when using fixed MMIO address
Fix the following compilation error that happens when specifying a
fixed MMIO address for the UART through X86_SOC_EARLY_SERIAL_MMIO8_ADDR:

arch/x86/core/early_serial.c:30:26: error: #if with no expression
   30 | #if DEVICE_MMIO_IS_IN_RAM

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-12-17 11:55:50 -05:00
Peng Fan
cca070c80a arch: arm64: mmu: support using MT_NS attribute
Depending on CONFIG_ARMV8_A_NS, either MT_SECURE or MT_NS should be
used; to simplify the code change, use MT_DEFAULT_SECURE_STATE instead.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-17 08:08:00 -05:00
Peng Fan
0d0f168460 arm: aarch64: Kconfig: introduce ARMV8_A_NS
Add a new Kconfig entry, ARMV8_A_NS, for when Zephyr runs in the Normal world.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-17 08:08:00 -05:00
Andrew Boie
d2ad783a97 mmu: rename z_mem_map to z_phys_map
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.

The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.

Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-16 08:55:55 -05:00
Luke Starrett
430952f0b2 arch: arm64: GICv2/v3 handling causes abort on spurious interrupt
In _isr_wrapper, the interrupt ID read from the GIC is blindly used to
index into _sw_isr_table, which is only sized based on CONFIG_NUM_IRQ.

It is possible for both GICv2 and GICv3 to return 1023 for a handful
of scenarios, the simplest of which is a level sensitive interrupt
which has subsequently become de-asserted.  Borrowing from the Linux
GIC implementation, a read that returns an interrupt ID of 1023 is
simply ignored.
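
A sketch of the check; the register accessors are hypothetical
placeholders, but INTID 1023 is the architectural "no pending interrupt"
value (the GIC reserves 1020-1023 for special purposes):

#include <stdint.h>

/* hypothetical accessors for the GIC CPU interface */
uint32_t gic_read_iar(void);
void gic_write_eoir(uint32_t val);
/* dispatch into the software ISR table (provided elsewhere) */
void isr_dispatch(uint32_t intid);

void gic_handle_irq(void)
{
    uint32_t iar = gic_read_iar();
    uint32_t intid = iar & 0x3FFU; /* GICv2: INTID is in the low 10 bits */

    if (intid == 1023U) {
        /* spurious, e.g. a level-sensitive interrupt that de-asserted */
        return;
    }

    isr_dispatch(intid);
    gic_write_eoir(iar);
}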

Minor collateral changes to gic.h to group !_ASMLANGUAGE content
together to allow this header to be used in assembler files.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-12-16 08:46:03 -05:00
Andrew Boie
0a791b7a09 x86: mmu: clarify physical/virtual conversions
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.

Currently, all pages are identity mapped so VIRT_OFFSET = 0, but
this will now still work if they are not the same.

ASM language was also updated for 32-bit. Comments were left in
64-bit, as long mode semantics don't allow use of Z_X86_PHYS_ADDR
macro; this can be revisited later.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-15 14:16:51 -05:00
Alberto Escolar Piedras
6d3476117b posix: Add cpu_hold() function to better emulate code delay
In native_posix and nrf52_bsim add the cpu_hold() function,
which can be used to emulate the time it takes for code
to execute.
It is very similar to arch_busy_wait(), but while
arch_busy_wait() returns when the requested time has passed,
cpu_hold() ensures that the time passes in the caller's
context independently of how much time may pass in some
other context.

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2020-12-14 12:32:11 +01:00
Anas Nashif
3d33d767f1 twister: rename in code
Some code had reference to sanitycheck, replace with twister.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-11 14:13:02 -05:00
Kumar Gala
7defa02329 x86: Fix compiler warning in z_x86_dump_mmu_flags
Fix compiler warnings associated with 'level' and 'entry' variables
'may be used uninitialized in this function'

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-12-11 17:41:58 +02:00
Carlo Caione
0d1a6dc27f intc: gic: Use SYS_INIT instead of custom init function
The GIC interrupt controller driver is using a custom init function
called directly from the prep_c function. For consistency move that to
use SYS_INIT.
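
For reference, registering an init routine with SYS_INIT looks roughly
like this (the function name and priority are illustrative):

#include <device.h>
#include <init.h>

static int gic_init(const struct device *unused)
{
    (void)unused;
    /* ... program the distributor and CPU interface ... */
    return 0;
}

/* run early, before drivers that connect IRQs */
SYS_INIT(gic_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);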

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-12-11 10:17:27 -05:00
Andrew Boie
3605832682 x86: fix z_x86_thread_page_tables_get()
This was reporting the wrong page tables for supervisor
threads with KPTI enabled.

Analysis of existing use of this API revealed no problems
caused by this issue, but someone may trip over it eventually.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-10 18:22:58 -05:00
Andrew Boie
905e6d52a7 x86: improve page table printouts
We now show:

 - Data pages that are paged out in red
 - Pages that are mapped but non-present due to KPTI,
   respectively in cyan or blue if they are identity mapped
   or not.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-10 18:22:58 -05:00
Andrei Emeltchenko
34573803a8 arch: x86_64: Rename _exception_stack to z_x86_exception_stack
Rename stack name according to MISRA-C standard.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
a4dbb51e74 arch: x86_64: Rename _nmi_stack to z_x86_nmi_stack
Rename stack name according to MISRA-C standard.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
ddc5139a6e arch: x86_64: Trivial correction
Correct register name in comment.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
e7d3dd1362 arch: x86_64: Using right exception stack with KPTI
With kernel page table isolation (KPTI) we cannot use the right exception
stack, since after using the trampoline stack there was always a switch to
the 7th IST stack (__x86_tss64_t_ist7_OFFSET). Make this configurable as a
parameter in EXCEPT(nr, ist) and EXCEPT_CODE(nr, ist). For the NMI we
would use ist6 (_nmi_stack).

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
85db42883f arch/x86: Use NMI stack for NMIs
NMI can be triggered at any time, even when in the process of
switching stacks. Use special stack for it.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrei Emeltchenko
8db06aee69 arch/x86: Add NMI registration API
Add simple NMI registration API.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-12-10 17:06:17 +02:00
Andrew Boie
39dab07f49 x86: provide inline pentry_get()
Non-debug code may need this functionality, create an inline.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Andrew Boie
d362acb567 x86: mmu: add PTE_LEVEL
For code clarity, add a macro indicating the paging level
for leaf page tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Andrew Boie
d7dc0deae5 x86: mmu: fix ipi comments
Delete an incorrect one and note a limitation of the current
IPI implementation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Andrew Boie
e20846eaa7 x86: mmu: add range_map_unlocked()
range_map() now doesn't implicitly hold x86_mmu_lock, allowing
callers to use it if the lock is already held.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Andrew Boie
6eabe20aee x86: mmu: fix typo
Fix incorrect macro name in comment.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Andrew Boie
2a139ee95d x86: pte_atomic_update should not return flipped
KPTI gymnastics need to be abstracted away from callers to
page_map_set().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-09 16:57:01 -08:00
Anas Nashif
e3937453a6 power: rename _sys_suspend/_sys_resume
Be consistent in PM namespaces.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif
c10d4b377d power: move z_pm_save_idle_exit prototype to power.h
Maintain power prototypes in power.h instead of kernel and arch headers.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif
e0f3833bf7 power: remove SYS_ and sys_ prefixes
Remove SYS_ and sys_ from all PM related functions and defines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif
142c3060e7 power: move kconfigs from arch/ to power/
Move all Kconfigs where they belong and in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif
dd931f93a2 power: standarize PM Kconfigs and cleanup
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE

and use PM_ as the prefix for all PM related Kconfigs

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Tomasz Bursztyka
da70d20a9c arch/x86: Support PCIE MSI-X
Provide the necessary adjustments to get MSI-X working (with or without
Intel VT-D).

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Tomasz Bursztyka
bfbe9b6df5 arch/x86: Implement arch specifics for software MSI multi-vector
This requires Intel VT-D to work, since we don't allocate contiguous
vectors.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Tomasz Bursztyka
be4c893549 arch/x86: Expose function do get DRHDs from DMAR ACPI table
This is part of Intel VT-D: how to discover capabilities, base
addresses and so on in order to start taking advantage of it.

There is a lot to get from there, but currently we are only interested
in getting the remapping hardware base address, and more specifically
for interrupt remapping usage.

There might be more than one such hardware unit, so the exposed function
is made to retrieve all of them.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Tomasz Bursztyka
abf65b5e65 arch/x86: Generalize dynamic irq connection on given vector
This will be used by MSI multi-vector implementation to connect the irq
and the vector prior to allocation.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Tomasz Bursztyka
bd57e4cf12 arch/x86: Generalize the vector allocator
This will be used by MSI/MSI-X when multi-vector is requested.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-12-08 09:29:20 -05:00
Jingru Wang
5f2aa0409c toolchain: improved toolchain abstraction for compilers and linker
Update the code according to c55c64e.

Signed-off-by: Jingru Wang <jingru@synopsys.com>
2020-12-05 10:19:50 -05:00
Carlo Caione
7e36bd31fe arch: aarch64: Use SP_EL0 instead of SP_ELx
ARM64 is currently using SP_ELx as stack pointer for kernel and threads
because everything is running in EL1. If support for EL0 is required, it
is necessary to switch to SP_EL0 instead, as that is the only stack
pointer that can be accessed at all exception levels by threads.

While it is not required to keep using SP_EL0 also during the
exceptions, the current code implementation makes it easier to use the
same stack pointer as the one used by threads also during the
exceptions.

This patch moves the code from using SP_ELx to SP_EL0 and fills in the
missing entries in the vector table.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-12-04 08:13:42 -05:00
Martin Åberg
53a4acb2dc SPARC: add FPU support
This change adds full shared floating point support for the SPARC
architecture.

All SPARC floating point registers are scratch registers with respect
to function call boundaries. That means we only have to save floating
point registers when switching threads in ISR. The registers are
stored to the corresponding thread stack.

The FPU is disabled when calling an ISR. Any attempt to use the FPU in
an ISR will generate the fp_disabled trap, which causes a Zephyr fatal
error.

- This commit adds no new thread state.
- All FPU context save/restore is synchronous and lazy FPU context
  switch is not implemented.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-12-04 14:33:43 +02:00
Martin Åberg
356d37fb7f SPARC: optimized interrupt stack frame size
With this change we allocate stack space only for the registers we
actually store in the thread interrupt stack frame.

Furthermore, no function is called with the interrupt context save
frame %sp, so no full frame is needed here. ABI functions are called
later in the interrupt trap handler, but that is after the dedicated
interrupt stack has been installed.

This saves 96 bytes of stack space for each interrupted context.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-12-04 14:33:43 +02:00
Martin Åberg
3734465354 SPARC: optimized interrupt trap stores and loads
The input registers (i0..i7) are not modified by the interrupt trap
handler and are preserved by function calls. So we do not need to
store them in the interrupt stack frame.

This saves 48 bytes of stack space for each interrupted context,
and eliminates 4 double word stores and 4 double word loads.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-12-04 14:33:43 +02:00
Andrew Boie
8771fbdaac x86: fix page_validate for page-outs
A non-present entry might still be a valid access; the
page could just be swapped out.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie
4a61fd87cf x86: make PTE updates atomic
This is important for when we will need to atomically
un-map a page and get its dirty state before the un-mapping
completes.
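
The usual pattern is a compare-and-swap loop on the PTE; a sketch using
the GCC atomic builtins (illustrative, not the exact helper in the x86
code):

#include <stdint.h>

typedef uint64_t pentry_t;

/* atomically apply (*ptep & ~clear) | set and return the previous value */
static pentry_t pte_update(pentry_t *ptep, pentry_t set, pentry_t clear)
{
    pentry_t old = *ptep;
    pentry_t new_val;

    do {
        new_val = (old & ~clear) | set;
    } while (!__atomic_compare_exchange_n(ptep, &old, new_val, false,
                                          __ATOMIC_SEQ_CST,
                                          __ATOMIC_SEQ_CST));

    return old;
}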

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie
524890718b x86: fatal: un-ifdef two inlines
These are inline functions and don't need to be ifdefed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie
7a6cb633c0 x86: add MMU page fault codes to header
We will soon need these for demand paging.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie
5b5bdb5fbd x86: add inline function to fetch cr2
Better to encapsulate asm operations in inline functions than
embed directly in other C code.
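
The helper is essentially a one-line wrapper around the MOV-from-CR2
instruction; a sketch (the function name here is an assumption):

#include <stdint.h>

/* CR2 holds the linear address that caused the most recent page fault */
static inline uintptr_t cr2_get(void)
{
    uintptr_t cr2;

    __asm__ volatile("mov %%cr2, %0" : "=r"(cr2));
    return cr2;
}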

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Alexandre Bourdiol
4cf1d4380e arch: arm: aarch32:cortex_m: timing.c: cortex M7 may need DWT unlock
On Cortex-M7, we need to check the optional presence of the
Lock Access Register (LAR), which is indicated in the
Lock Status Register (LSR).
When present, a special access token must be written to unlock the DWT
registers.
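
The unlock sequence is roughly as follows; a sketch using the CMSIS DWT
register names (0xC5ACCE55 is the standard CoreSight lock access key,
and the LSR bit masks are spelled out here rather than using symbolic
names):

/* assumes the CMSIS core header for the Cortex-M7 is already included */
#define DWT_LAR_UNLOCK_KEY 0xC5ACCE55UL

static void dwt_unlock_if_needed(void)
{
    /* LSR bit 0: lock mechanism implemented; bit 1: currently locked */
    if ((DWT->LSR & 0x1UL) != 0UL && (DWT->LSR & 0x2UL) != 0UL) {
        DWT->LAR = DWT_LAR_UNLOCK_KEY;
    }
}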

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2020-12-02 10:58:08 +02:00
Peng Fan
346ecb2a5a arch: arm64: correct vector_table alignment
The 2K alignment assembler directives should be under
'SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)'
Otherwise the _vector_table is actually only 0x80-byte aligned.

Signed-off-by: Peng Fan <peng.fan@nxp.com>
2020-12-01 10:04:59 -06:00
Krzysztof Chruscinski
3ed8083dc1 kernel: Cleanup logger setup in kernel files
Most kernel files were declaring the os log module without providing a
log level. Because of that, the default log level was used instead of
CONFIG_KERNEL_LOG_LEVEL.
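
The fix boils down to passing the level explicitly wherever the module
is registered or declared, roughly:

#include <logging/log.h>

/* in the one file that owns the module */
LOG_MODULE_REGISTER(os, CONFIG_KERNEL_LOG_LEVEL);

/* in the other kernel files, declare it with the same level:
 * LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
 */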

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-11-27 09:56:34 -05:00
Watson Zeng
249aa62c27 arch: arc: fix for hs eret has no copy of pc in interrupt entry
According to the PRMs of both ARC EM & ARC HS families on entry
to Fast IRQ handler ARC hardware saves PC (Program Counter) value
of where processor was right before jumping to the IRQ handler into
2 registers: ILINK & ERET.

But it turned out that in the case of ARC HS (at least in a configuration
with Fast IRQs & 1 register bank) only ILINK was populated with the
previous PC, while in Zephyr we relied on what we read out of ERET.
That led to funny issues when the CPU returned from IRQ handling
to some unexpected location.

And now with that precious knowledge we're switching to return
address recovery from ILINK so that with both families of ARC
processors (EM & HS) we may get reliably good results.

The wrapper is a few cycles shorter/faster as well, as we may shave off
another extra instruction for transferring ERET value from its AUX reg
to a scratch core register to be later stored in the memory.

+----+---------------+---------------+--------------+
|    | FIRQ          | RIRQ          | RIRQ(Secure) |
+----+---------------+---------------+--------------+
| HS | ILINK=PC      | ILINK=PC      | NULL         |
+----+---------------+---------------+--------------+
| EM | ILINK=ERET=PC | ILINK=ERET=PC | ILINK=PC     |
+----+---------------+---------------+--------------+

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-11-26 14:19:28 +01:00
Eugeniy Paltsev
e6300bee2d ARC: handle the difference of GNU & MWDT assembly for CONFIG_SMP=y
Handle the difference of GNU & MWDT assembly for ARC-specific
code guarded by the CONFIG_SMP define. That fixes SMP platform builds
with the MWDT toolchain.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-11-26 13:51:33 +01:00
Carlo Caione
47ebde30b9 aarch64: error: Handle software-generated fatal exceptions
Introduce a new SVC call ID to trigger software-generated CPU
exceptions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-25 12:02:11 +02:00
Håkon Øye Amundsen
2ce570b03f arch: arm: clear SPLIM registers before z_platform_init
Allow z_platform_init to perform stack operations.

Signed-off-by: Håkon Øye Amundsen <haakon.amundsen@nordicsemi.no>
2020-11-24 20:53:49 +02:00
Maximilian Bachmann
3c8e98cb39 drivers/pcie: Change pcie_get_mbar() to return size and flags
Currently pcie_get_mbar only returns the physical address.
This changes the function to also return the size of the MBAR and
the flags (I/O BAR vs. MEM BAR).
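
A hypothetical sketch of the extended lookup; the struct layout, field
names and types below are assumptions made for illustration, not
necessarily the exact API:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct pcie_bar_info {
    uintptr_t phys_addr; /* physical base address of the BAR */
    size_t size;         /* decoded size of the BAR */
    bool io;             /* true for an I/O BAR, false for a memory BAR */
};

/* fills 'bar' for BAR 'index' of device 'bdf'; returns false if absent */
bool pcie_get_mbar(uint32_t bdf, unsigned int index,
                   struct pcie_bar_info *bar);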

Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
2020-11-20 09:36:22 +02:00
Andrew Boie
5a58ad508c arch: mem protect Kconfig cleanups
Adds a new CONFIG_MPU which is set if an MPU is enabled. This
is a menuconfig with some MPU-specific options moved
under it.

MEMORY_PROTECTION and SRAM_REGION_PERMISSIONS have been merged.
This configuration depends on an MMU or MPU. The protection
test is updated accordingly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-18 08:02:08 -05:00
Andrew Boie
00cdb597ff arm: de-couple MPU code from k_mem_partition
k_mem_partition is part of the CONFIG_USERSPACE abstraction,
but some older MPU code was depending on it even if user mode
isn't enabled. Use a new structure z_arm_mpu_partition instead,
which will insulate this code from any changes to the core
kernel definition of k_mem_partition.

The logic in z_arm_configure_dynamic_mpu_regions has been
adjusted to copy the necessary information out of the
memory domain instead of passing the addresses of the domain
structures directly to the lower-level MPU code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-18 08:02:08 -05:00
Martin Åberg
35264cc214 SPARC: add support for the tracing subsystem
This commit implements the architecture specific parts for the
Zephyr tracing subsystem on SPARC and LEON3. It does so by calling
sys_trace_isr_enter(), sys_trace_isr_exit() and sys_trace_idle().

The logic for the ISR tracing is:
1. switch to interrupt stack
2. *call sys_trace_isr_enter()* if CONFIG_TRACING_ISR
3. call the interrupt handler
4. *call sys_trace_isr_exit()* if CONFIG_TRACING_ISR
5. switch back to thread stack

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-18 10:31:26 +01:00
Carlo Caione
f095e2fd05 arch: arm64: mmu: Rework defines
Every time I try to decode all the defines in this driver what I get is
only a huge headache. This patch:

- adds a few sensible comments
- remove the redundant defines
- rename the defines to be more self-explanatory
- reorder the defines
- try to make sense of some mysterious derived values

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 19:04:25 -05:00
Carlo Caione
96f574c7a4 aarch64: Use macro-generated absolute symbols for the ESF
As done already for other structs, use the macro-generated offsets when
referencing register in the ESF.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione
daa94e5e59 aarch64: Remove redundant init_stack_frame
The init_stack_frame is the same as the ESF. No need to have two
separate structs. Consolidate everything into one single struct and make
register entries explicit.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione
a7d94b003e aarch64: Use absolute symbols for the callee saved registers
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Carlo Caione
666974015e aarch64: error: Introduce CONFIG_EXCEPTION_DEBUG
Introduce CONFIG_EXCEPTION_DEBUG to discard exception debug strings and
code when not needed.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione
2683a1ed97 aarch64: error: Enable recoverable errors
For some kinds of faults we want to be able to take corrective
actions and keep executing the code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione
a054e424e4 aarch64: error: Beautify error printing
Make the printing of errors a bit more descriptive and print the FAR_ELn
register only when strictly required.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Carlo Caione
6d05c57781 arch: aarch64: Branch SError vector table entry
Each vector table entry has 128 bytes to host the vector code. This is
not always enough and in general it's better to branch to the actual
exception handler elsewhere in memory.

Move the SError entry to a branched code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:52:45 -05:00
Martin Åberg
a5293567c0 SPARC: adjust network stack stack sizes
Makes the following tests pass on qemu_leon3:
- net.ipv6
- net.tcp2.simple

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-17 11:44:16 +02:00
Martin Åberg
feae3249b2 sparc: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch. Register g7 is
used to point to the thread data. Thread data is accessed with negative
offsets from g7.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-13 14:53:55 -08:00
Martin Åberg
07160fa153 arch: Add SPARC processor architecture
SPARC is an open and royalty free processor architecture.

This commit provides SPARC architecture support to Zephyr. It is
compatible with the SPARC V8 specification and the SPARC ABI and is
independent of processor implementation.

Functionality specific to SPARC processor implementations should
go in soc/sparc. One example is the LEON3 SOC which is part of this
patch set.

The architecture port is fully SPARC ABI compatible, including trap
handlers and interrupt context.

Number of implemented register windows can be configured.

Some SPARC V8 processors borrow the CASA (compare-and-swap) atomic
instructions from SPARC V9. An option has been defined in the
architecture port to forward the corresponding code-generation option
to the compiler.

Stack size related config options have been defined in sparc/Kconfig
to match the SPARC ABI.

Co-authored-by: Nikolaus Huber <nikolaus.huber.melk@gmail.com>
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-13 14:53:55 -08:00
Wentong Wu
bfc7785da0 arch: arm: push ssf to thread privileged stack to complete stack frame
Pushes the seventh argument, named ssf, to the thread's privileged
stack to follow the syscall prototype below.

uintptr_t z_mrsh_xx(uintptr_t arg0, uintptr_t arg1, uintptr_t arg2,
		    uintptr_t arg3, uintptr_t arg4, void *more, void *ssf)

Fixes: #29386.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-11-12 17:12:38 -05:00
Carlo Caione
d770880a71 kconfig: userspace: Select THREAD_STACK_INFO
stack_info is needed for userspace and TLS. Select it when enabling
userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-12 15:57:40 -05:00
Daniel Leung
270ce0b850 nios2: fix register saving during thread switch instrumentation
This changes the code to use the stack to store registers before calling
the thread switch instrumentation functions, instead of using the
thread's register saving struct.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Daniel Leung
11e6b43090 tracing: roll thread switch in/out into thread stats functions
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the ones for the thread stats gathering functions.
This avoids duplicating code to call another function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Daniel Leung
9be37553ee timing: do not repeatedly do init()/start()/stop()
We should not be initializing/starting/stopping timing functions
multiple times. So this changes how the timing functions are
structured to allow only one initialization, starting only when
stopped, and stopping only when started.
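
A sketch of the guarding state machine (names and structure are
illustrative; the real code keeps equivalent flags/counters):

#include <stdbool.h>

static bool timing_initialized;
static bool timing_running;

void timing_init_once(void)
{
    if (timing_initialized) {
        return;              /* initialize only once */
    }
    /* ... arch/SoC specific init ... */
    timing_initialized = true;
}

void timing_start_guarded(void)
{
    if (!timing_initialized || timing_running) {
        return;              /* only start when stopped */
    }
    /* ... start counters ... */
    timing_running = true;
}

void timing_stop_guarded(void)
{
    if (!timing_running) {
        return;              /* only stop when started */
    }
    /* ... stop counters ... */
    timing_running = false;
}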

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Carlo Caione
3afb493858 arch: arm64: mmu: Restore SRAM region
In a5f34d85c2 ("soc: arm: qemu_cortex_a53: Remove SRAM region") the
SRAM memory region was removed.

While this is correct when userspace is not enabled, when userspace is
enabled new regions are introduced outside the boundaries of
the mapped [__kernel_ram_start,__kernel_ram_end] region. This means that
we need to map the whole SRAM again to include all the needed regions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-11 08:21:53 -05:00
Daniel Leung
c7704d8c66 arc: enable thread local storage
This adds the necessary bits to support thread local storage
in ARC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 13:25:29 +01:00
Yuguo Zou
ba2413f544 arch: arc: change to CONFIG_INIT_ARCH_HW_AT_BOOT
Align the Kconfig option CONFIG_ARC_CUSTOM_INIT to
CONFIG_INIT_ARCH_HW_AT_BOOT. Remove the unused CONFIG_ARC_CUSTOM_INIT
from Kconfig.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-11-11 13:20:14 +01:00
Andrew Boie
ea6e4ad098 kernel: support non-identity RAM mapping
Some platforms may have multiple RAM regions which are
discontiguous in the physical memory map. We really want
these to be in a contiguous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.

We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.

Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE now indicate the bounds of this region
in virtual memory.

We are currently only memory-mapping physical device driver
MMIO regions so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes an architecture interface
will be defined for this.

Platforms which just have one RAM region may continue to
identity-map it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-09 20:19:13 -05:00
Yuguo Zou
3826eb302c arch: arc: add support of ARConnect inter-core debug unit
The Inter-core Debug Unit provides additional debug assist features in
multi-core scenarios. This commit allows ARConnect to conditionally
halt cores during debugging.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-11-09 15:52:15 -06:00
Alexandre Mergnat
4b97619b19 riscv: add support for canaries
Kobject test area size must be increased if canary feature
is enabled.

Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
2020-11-09 15:37:11 -05:00
Alexandre Mergnat
542a7fa25d arch: riscv: add memory protection support
The IRQ handler has had major changes to manage syscalls, rescheduling
and interrupts from user threads, and the stack guard.

Add userspace support:
- Use a global variable to know if the current execution is user or
  machine. The location of this variable is read only for all user
  thread and read/write for kernel thread.
- Memory shared is supported.
- Use dynamic allocation to optimize PMP slot usage. If the area size
  is a power of 2, only one PMP slot is used, else 2 are used.

Add stack guard support:
- Use the MPRV bit to force PMP rules for machine-mode execution.
- The IRQ stack has a locked stack guard to avoid rewriting the PMP
  configuration registers on each interrupt, which saves some cycles.
- The IRQ stack is used as a "temporary" stack at the beginning of the
  IRQ handler to save the current ESF. This avoids triggering a write
  fault on the thread stack while storing the ESF, which would otherwise
  call back into the IRQ handler endlessly.
- A stack guard is also setup for privileged stack of a user thread.

Thread:
- A PMP setup is specific to each thread. PMP setup are saved in each
  thread structure to improve reschedule performance.

Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
Reviewed-by: Nicolas Royer <nroyer@baylibre.com>
2020-11-09 15:37:11 -05:00
Alexandre Mergnat
18962e4ab8 arch: riscv: add pmp support
- Add helper functions to write/clear/print the PMP config registers.
- Add support for different PMP slot sizes depending on the core/board.

Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
2020-11-09 15:37:11 -05:00
Alexandre Mergnat
6340dd7a59 arch: riscv: add e31 core support
Introduce the E31 core family to link Zephyr features (userspace and
stack protection) to architecture capabilities (PMP).

Signed-off-by: Alexandre Mergnat <amergnat@baylibre.com>
2020-11-09 15:37:11 -05:00
Andrew Boie
d2a72273b7 x86: add support for common page tables
We provide an option for low-memory systems to use a single set
of page tables for all threads. This is only supported if
KPTI and SMP are disabled. This configuration saves a considerable
amount of RAM, especially if multiple memory domains are used,
at a cost of context switching overhead.

Some caching techniques are used to reduce the amount of context
switch updates; the page tables aren't updated if switching to
a supervisor thread, and the page table configuration of the last
user thread switched in is cached.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
cd789a7ac7 x86: add logs for tuning page pool size
This will do until we can set up a proper page pool using
all unused ram for paging structures, heaps, and anonymous
mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
8e5114b370 x86: update X86_MMU_PAGE_POOL_PAGES documentation
Help users understand how this should be tuned. Rather than
guessing wildly, set the default to 0. This needs to be tuned
on a per-board, per-application basis anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
a15be58019 x86: move page table reservation macros
We don't need this for stacks any more and only use this
for pre-calculating the boot page tables size. Move to C
code, this doesn't need to be in headers anywhere.

Names adjusted for conciseness.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
b8242bff64 x86: move page tables to the memory domain level
- z_x86_userspace_enter() for both 32-bit and 64-bit now
  call into C code to clear the stack buffer and set the
  US bits in the page tables for the memory range.

- Page tables are now associated with memory domains,
  instead of having separate page tables per thread.
  A spinlock protects write access to these page tables,
  and read/write access to the list of active page
  tables.

- arch_mem_domain_init() implemented, allocating and
  copying page tables from the boot page tables.

- struct arch_mem_domain defined for x86. It has
  a page table link and also a list node for iterating
  over them.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
745dd6f931 x86: use unused PTE bits when mapping memory
Page table management for x86 is being revised such that there
will not in many cases be a pristine, master set of page tables.
Instead, when mapping memory, use unused PTE bits to store the
original RW, US, and XD settings when the mapping was made.

This will allow memory domains to alter page tables while still
being able to restore the original mapping permissions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
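
A hedged sketch of the idea above: x86 leaves PTE bits 9..11 to
software, so the original RW/US/XD settings can be stashed there when
the mapping is made and restored later. The bit names below are
assumptions for illustration, not the actual Zephyr macros.

	#include <stdint.h>

	#define PTE_RW       (1ULL << 1)    /* writable */
	#define PTE_US       (1ULL << 2)    /* user accessible */
	#define PTE_XD       (1ULL << 63)   /* execute disable */

	/* Bits 9..11 are ignored by the MMU and free for OS accounting. */
	#define PTE_ORIG_RW  (1ULL << 9)
	#define PTE_ORIG_US  (1ULL << 10)
	#define PTE_ORIG_XD  (1ULL << 11)

	static uint64_t pte_stash_original(uint64_t pte)
	{
		if (pte & PTE_RW) { pte |= PTE_ORIG_RW; }
		if (pte & PTE_US) { pte |= PTE_ORIG_US; }
		if (pte & PTE_XD) { pte |= PTE_ORIG_XD; }
		return pte;
	}

	/* Memory domain code can later undo its changes from the stash. */
	static uint64_t pte_restore_original(uint64_t pte)
	{
		pte &= ~(PTE_RW | PTE_US | PTE_XD);
		if (pte & PTE_ORIG_RW) { pte |= PTE_RW; }
		if (pte & PTE_ORIG_US) { pte |= PTE_US; }
		if (pte & PTE_ORIG_XD) { pte |= PTE_XD; }
		return pte;
	}
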
Andrew Boie
bd76bdb8ff x86: define some unused PTE bits
These are ignored by the CPU and may be used for OS
accounting.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie
d31f62a955 x86: smp: add TLB shootdown logic
This will be needed when we support memory un-mapping, or
the same user mode page tables on multiple CPUs. Neither
are implemented yet.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Carlo Caione
7b7c328f7a aarch64: mmu: Enable support for unprivileged EL0
The current MMU code assumes that the kernel and threads are both
running in EL1, with no support for EL0. Extend the support to EL0 by
adding the missing attribute to mirror the access / execute permissions
to EL0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Carlo Caione
fd559f16a5 aarch64: mmu: Create new header file
The MMU code is polluted by a lot of preprocessor code. Move that code
to a proper header file.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Carlo Caione
b9a407b65c aarch64: mmu: Move MMU files in a sub-directory
We are probably going to do more work on the MMU side and more files
will be added. Create a new sub-directory to host all the MMU related
files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-04 13:58:19 -08:00
Yuguo Zou
d24c6e5aae arch: arc: use ifdef to replace if define in isr wrapper code
The ISR wrapper code has mixed usage of #ifdef and #if defined macros.
Unify them to the more usual #ifdef.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-11-02 11:02:47 -06:00
Yuguo Zou
dbd431d2bc arch: arc: fix a reg misuse in leaving tickless idle
There is a register misuse in the code leaving tickless idle, which
would destroy the exception/interrupt status. This commit fixes this
issue.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-11-02 11:02:47 -06:00
Ioannis Glaropoulos
47e87d8459 arch: arm: cortex_m: implement functionality for ARCH core regs init
Implement the functionality for configuring the
architecture core registers to their warm reset
values upon system initialization. We enable the
support of the feature in the Cortex-M architecture.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-11-02 15:02:24 +01:00
Ioannis Glaropoulos
89658dad19 arch: arm: aarch32: cortex_m: improve documentation of z_arm_reset
We enhance the documentation of z_arm_reset, stressing that
the function may either be loaded by the processor coming
out of reset, or by another image, e.g. a bootloader. We
also specify what is required at minimum when executing the
reset function.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-11-02 15:02:24 +01:00
Ioannis Glaropoulos
20a9848230 arch: introduce option to force internal architectural state init
We introduce an option that instructs Zephyr to perform
the initialization of internal architectural state (e.g.
ARCH-level HW registers and system control blocks) during
early boot to the reset values. The option is available
to the application developer but shall depend on whether
the architecture supports the functionality.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-11-02 15:02:24 +01:00
Carlo Caione
b3ff89bd51 arch: arm64: Remove _BIT suffix
This is redundant and not coherent with the rest of the file. Thus
remove the _BIT suffix from the bit field names.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
24c907d292 arch: arm64: Add missing vector table entries
The current vector table is missing some (not used) entries. Fill these
in for the sake of completeness.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
f0b2e3d652 arch: arm64: Use mov_imm when possible in the start code
Instead of relying on mov.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
673803dc48 arch: arm64: Rename z_arm64_svc to z_arm64_sync_exc
The SVC handler is not only used for the SVC call but in general for all
the synchronous exceptions. Reflect this in the handler name.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
e738631ddf arch: arm64: Fix indentation
Fix indentation for the ISR wrapper.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
78b5e5563d arch: arm64: Reword comments
Fix, reword and rework comments.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Carlo Caione
9e897ea2c3 arch: arm64: Remove unused macro parameters
Remove z_arm64_{enter,exit}_exc parameter leftovers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-02 12:04:35 +01:00
Andrew Boie
edc5e31d6b x86_64: fix RBX clobber in nested IRQ case
In the code path for nested interrupts, we are not saving
RBX, yet the assembly code is using it as a storage location
for the ISR.

Use RAX. It is backed up in both the nested and non-nested
cases, and the ASM code is not currently using it at that
point.

Fixes: #29594

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-28 10:29:32 -07:00
Daniel Leung
f8a909dad1 xtensa: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Note that this does not enable TLS for all Xtensa SoC.
This is because Xtensa SoCs are highly configurable
so that each SoC can be considered a whole architecture.
So TLS needs to be enabled on the SoC level, instead of
at the arch level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
8a79ce1428 riscv: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
388725870f arm: cortex_m: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Note that since Cortex-M does not have the thread ID or
process ID register needed to store TLS pointer at runtime
for toolchain to access thread data, a global variable is
used instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
778c996831 arm: cortex_r: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
df77e2af8b arm64: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
4b38392ded x86: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
53ac1ee6fa x86_64: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Daniel Leung
240beb42af kconfig: add configs for thread local storage
Add kconfigs to indicate whether an architecture has support
for thread local storage (TLS), and to enable TLS in kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Johan Hedberg
1c23a19c36 x86: pcie: Fix array index for bus_segs
This seems like a typo since all other places accessing bus_segs in
this context use i as the index.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-10-23 22:01:59 -04:00
Watson Zeng
3d30f1f60a arc: cpu_idle: remove sleep workround for nsim_hs_smp
In old versions of nSIM, when the cpu is sleeping, there is no response
to inter-processor interrupts even though they are pending and
interrupts are enabled (SNPS JIRA issue P10019563-41294). This has now
been fixed in the nSIM 2020.09 release, so we can safely remove the
workaround.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-10-22 06:17:08 -04:00
Andy Ross
a8d5437799 soc/xtensa: Misc. checkpatch fixups
Code style fixes.  Kept separate from the original changes to permit
easier rebasing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-21 06:38:53 -04:00
Andy Ross
0e83961b21 arch/xtensa: soc/xtensa/intel_adsp: Enable KERNEL_COHERENCE
Implement the kernel "coherence" API on top of the linker
cached/uncached mapping work.

Add Xtensa handling for the stack coherence API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-21 06:38:53 -04:00
Andy Ross
f6d32ab0a4 kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches.  Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.

Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory.  Where this is
available, place all writable data sections by default into the
coherent region.  An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.

Stack memory will be incoherent by default, as by definition it is
local to its current thread.  This requires special cache management
on context switch, so an arch API has been added for that.

Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent.  We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks.  In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-21 06:38:53 -04:00
Andy Ross
47940a63d7 arch/xtensa: Don't clear BSS on MP startup when !SMP
It's legal to have CONFIG_MP_NUM_CPUS > 1 and !CONFIG_SMP.  The
tests/kernel/mp test does this as a unit test of the multiprocessor
facilities. Test the right tunable when deciding whether to blow away
static data or not.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-21 06:38:53 -04:00
Andrew Boie
507ebd541a x86: remove NULL check in arch_user_mode_enter
These days all threads are always a member of a memory domain,
remove this NULL check as it won't ever be false.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Andrew Boie
f6c64e92ce x86: fix arch_user_mode_enter locking
This function iterates over the thread's memory domain
and updates page tables based on it. We need to be holding
z_mem_domain_lock while this happens.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Mikkel Jakobsen
8f2d69c4a3 arch: posix: add missing include for cpuhalt.c
Add posix_board_if.h which declares posix_exit().

This fixes implicit declaration of function errors when running
sanitycheck on samples for native_posix that calls sys_reboot().

Signed-off-by: Mikkel Jakobsen <mikkel.aunsbjerg@prevas.dk>
2020-10-20 08:54:59 +02:00
Maximilian Bachmann
498655ae8e arch/x86: fix compilation errors
Fixes the following compilation errors:
- sys_cache_line_size was undeclared at first use
- there was an assignment to an rvalue in arch_dcache_flush

Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
2020-10-14 13:27:23 -07:00
Andy Ross
55a85771b0 arch/x86: Make EFI copies bytewise
Originally the EFI boot code was written to assume that all sections
in the ELF file were 8-byte aligned and sized (because I thought this
was part of some platform spec somewhere).  This turned out to be
wrong in practice (at least for section sizes), so the requirement was
reduced to 4 bytes.  But now we have a section being generated
somewhere that turns out to violate even that.

There's no particular value in doing those copies in big chunks.
There's at best a mild performance benefit, but if we really cared
we'd be using a more complicated memcpy() implementation anyway.
Replace the loop in the C code with a bytewise copy, change the size
field in the generated header to store bytes, and remove the
assertions (which were the failures actually being seen in practice)
in the script that were there to detect this misalignment.

Fixes #29095

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-10-13 14:07:24 -07:00
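
For illustration only, a byte-at-a-time copy with no alignment or
size-granularity assumptions, which is what the commit above switches
the EFI loader to; the function name is hypothetical.

	#include <stddef.h>
	#include <stdint.h>

	/* Copy a section image byte by byte; no alignment requirements. */
	static void copy_section(uint8_t *dst, const uint8_t *src, size_t bytes)
	{
		for (size_t i = 0; i < bytes; i++) {
			dst[i] = src[i];
		}
	}
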
Carlo Caione
645082791b arch: aarch64: Catch early errors in EL3 and EL1
Setup the stack as early as possible to catch any possible errors in the
reset routine and handle also EL3 fatal errors.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-12 12:22:15 -04:00
Carlo Caione
758fb93b0b arch: arm64: Remove useless assembly code
The content of the SCR_EL3 register is overwritten by a later
instruction. Also no need to route SError, IRQs and FIQs to EL3.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-12 12:22:15 -04:00
Carlo Caione
06adb96c1a arch: arm64: Introduce {inc,dec}_nest_counter macros
The same code is used in several places / files. Make a macro out of it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-06 10:25:56 -04:00
Carlo Caione
2f3962534a arch: arm64: Remove new thread entry wrapper
Instead of having some special stack frame when first scheduling new
thread and a new thread entry wrapper to pull out the needed data, we
can reuse the context restore code by adapting the initial stack frame.

This reduces the lines of code and simplify the code at the expense of a
slightly bigger initial stack frame.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-06 10:25:56 -04:00
Torstein Grindvik
c02eb30bbf gen_isr_tables: Function ptr instead of (void *)
gen_isr_tables.py generates C-code which initializes a table with
values, and these values are structs with members cast to
(const void *) and (void *), respectively.

The actual struct definition has a member of type (const void *)
and another of type void (*)(const void *).

In order to avoid a large amount of reported issues in Coverity,
cast this to the exact type.

Signed-off-by: Torstein Grindvik <torstein.grindvik@nordicsemi.no>
2020-10-02 18:48:46 +02:00
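
A small sketch of the cast change described above; the struct layout
mirrors what the commit describes, but the names here are assumptions
rather than the actual generated code.

	struct isr_entry {
		const void *arg;
		void (*isr)(const void *);
	};

	static void my_isr(const void *arg)
	{
		(void)arg;
	}

	/* Previously the handler was cast to plain (void *); casting to the
	 * exact member type avoids the Coverity reports. */
	static const struct isr_entry entry = {
		(const void *)0,
		(void (*)(const void *))my_isr,
	};
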
Yuguo Zou
d04ff1af7c arch: arc: Restore MPU registers to its initial states between tests
EMSK boards can't be reset between tests due to their hardware
configuration. MPU v3 configs from a previous test could cause
exceptions in the following tests. This commit fixes the issue by
restoring the MPU registers to their initial states at the early init
stage.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-10-02 11:31:34 +02:00
Yuguo Zou
df4b7803a1 arch: arc: unify different versions of MPU registers
Previously, MPU register macros were only defined within their own
header files and could not be used by other parts of the program. This
commit unifies them.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2020-10-02 11:31:34 +02:00
Aastha Grover
83b9f69755 code-guideline: Fixing code violation 10.4 Rule
Both operands of an operator in the arithmetic conversions
performed shall have the same essential type category.

Changes are related to converting the integer constants to the
unsigned integer constants

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-10-01 17:13:29 -04:00
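
An illustrative example of the Rule 10.4 fix pattern described above
(the function is hypothetical): suffix integer constants with U so both
operands of the operator share the unsigned essential type.

	#include <stdint.h>

	static uint32_t set_flag(uint32_t flags)
	{
		/* 'flags | 1' would mix unsigned and signed operands;
		 * using 1U keeps both operands essentially unsigned. */
		return flags | 1U;
	}
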
Tomasz Bursztyka
2b6c574499 arch/x86: If ACPI is enabled, get the CPU APIC ID from it
The hardcoded APIC ID will be kept as default if the CPU is not found in
ACPI MADT.

Note that ACPI may expose more "CPUs" than there actually are
physically. Thus, make the logic aware of this possibility by checking
the enabled flag (non-enabled CPUs are ignored).

This fixes the up_squared board based on a Celeron CPU.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Tomasz Bursztyka
d98f7b1895 arch/x86: Optimize ACPI RSDP lookup
As well as normalizing its signature declaration through the header.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Tomasz Bursztyka
4ff1885f69 arch/x86: Move ACPI structures to header file
Let's have all specified ACPI structures in the central header.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Tomasz Bursztyka
9ce08544c5 arch/x86: Fix style issue and code logic in ACPI
Trivial changes.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Tomasz Bursztyka
c7787c623e arch/x86: Cleanup ACPI structure attributes names
No need to mix super-short names with other structures that use full
names. Let's follow a more relevant naming scheme where each and every
attribute name is self-documenting (such as s/id/apic_id etc...).

Also make CONFIG_ACPI usable through IS_ENABLED by enclosing exposed
functions with ifdef CONFIG_ACPI.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-10-01 11:16:40 -07:00
Carlo Caione
871bdd0712 arch: arm64: Deprecate booting from EL2
We are deprecating booting and running in EL2.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Carlo Caione
fb2bf23ec1 arch: arm64: Remove EL2/EL3 code
Zephyr is only supposed to be running at EL1 (+ EL0). Now that we drop
in EL1 from ELn at start we can remove all the EL2/EL3 unused code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Carlo Caione
7d40208ef7 arch: arm64: Remove CONFIG_SWITCH_TO_EL1
Remove the useless CONFIG_SWITCH_TO_EL1 since there should be no reason
to run Zephyr in EL3. So just drop to EL1 by default when booting from
EL3. Remove also non-reachable code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-01 10:42:47 -04:00
Luke Starrett
4800b03e56 arch: arm64: cosmetic changes to register dump
- Display full 64-bits register width in crash dumps
- Some values were prefixed 0x, some not.  Made consistent.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-10-01 07:29:27 -04:00
Luke Starrett
169e7c5e75 arch: arm64: Fix arm64 crash dump output
- x0/x1 register printing is reversed
- The error stack frame struct (z_arch_esf_t) had the SPSR and ELR in
  the wrong position, inconsistent with the order these regs are pushed
  to the stack in z_arm64_svc.  This caused all register printing to be
  skewed by two.
- Verified by writing known values (abcd0000 -> abcd000f) to x0 - x15
  and then forcing a data abort.

Signed-off-by: Luke Starrett <luke.starrett@gmail.com>
2020-10-01 07:29:27 -04:00
Andrew Boie
c3c7f6c6d3 x86: don't define _image_rom_* unless XIP
Meaningless if we are not a XIP system and are running
from RAM.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:14:07 -07:00
Andrew Boie
f5a7e1a108 kernel: handle thread self-aborts on idle thread
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.

Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.

Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, that was currently worked
around with a busy-wait.

Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.

Some logic in z_thread_single_abort() rearranged such that
when we release sched_spinlock, the thread object pointer
is never dereferenced by the kernel again; join waiters
or fn_abort() logic may free it immediately.

An assertion added to z_thread_single_abort() to ensure
it never gets called with thread == _current outside of an ISR.

Some logic has been added to ensure z_thread_single_abort()
tasks don't run more than once.

Fixes: #26486
Related to: #23063 #23062

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Flavio Ceolin
27fcdaf71e arch: arm: Fix undefined symbol reference
_isr_wrapper is not defined when building with
CONFIG_GEN_SW_ISR_TABLE = n.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-29 12:36:33 +02:00
Ioannis Glaropoulos
3b89cf173b arch: arm: cortex-m: enable IRQs before main() in single-thread mode
Enable interrupts before switching to main()
in cortex-m builds with single-thread mode
(CONFIG_MULTITHREADING=n).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-29 10:47:43 +02:00
Andrew Boie
054a4c0ddc x86-64: increase exception stack size
We are not RAM-constrained and there is an open issue where
exception stack overflows are not caught. Increase this size
so that options like CONFIG_NO_OPTIMIZATIONS work without
incident.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-28 13:38:41 -07:00
Andrew Boie
a696f8ba27 x86: rename CONFIG_EXCEPTION_STACKS_SIZE
This is an x86-specific definition.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-28 13:38:41 -07:00
Johan Hedberg
6f5d8bd2c4 x86: pcie: Fix calling pcie_mm_init()
Commit 5632ee26f3 introduced an issue where in order to use MMIO
configuration:

 - do_pcie_mmio_cfg is required to be true
 - Only set to true in pcie_mm_init()
 - Which is only called from pcie_mm_conf()
 - Which is only called from pcie_conf() if do_pcie_mmio_cfg is
   already true!

The end result is that MMIO configuration will never be used.

Fix the situation by moving the initialization check to pcie_conf().

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-09-24 16:08:08 +03:00
Øyvind Rønningstad
407ebf8132 cortex_m: secure_entry_functions.ld: Increase SAU alignment to 32
The spec requires SAU regions to be aligned on 32 bytes.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Øyvind Rønningstad
81e7608c03 arm: tz: secure_entry_functions.ld: Fix NSC_ALIGN for nRF devices
If the location counter ('.') is within the area where the veneers
should go, the current solution will give a linker error ("Cannot move
location counter backwards"). This patch places the veneers in the next
SPU region in this case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Øyvind Rønningstad
2b56b86190 arm: tz: secure_entry_functions.ld: Fix NSC_ALIGN redefinition
Allow CONFIG_ARM_NSC_REGION_BASE_ADDRESS to override the nRF-specific
logic for alignment.

Fixes issue https://github.com/zephyrproject-rtos/zephyr/issues/27544

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-23 13:15:38 +02:00
Andrew Boie
61d42cf42b x86-64: fix thread tracing
The current instrumentation point for CONFIG_TRACING added in
PR #28512 had two problems:

- If userspace and KPTI are enabled, the tracing point is simply
  never run if we are resuming a user thread as the
  z_x86_trampoline_to_user function is jumped to and calls
  'iret' from there

- Only %rdi is being saved. However, at that location, *all*
  caller-saved registers are in use as they contain the
  resumed thread's context

Simplest solution is to move this up near where we update page
tables. The #ifdefs are used to make sure we don't push/pop
%rdi more than once. At that point in the code only %rdi
is in use among the volatile registers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-22 20:47:48 -04:00
Wentong Wu
50252bbf92 arch: x86: mmu: use z_x86_kernel_ptables as array.
Use z_x86_kernel_ptables as an array to make Coverity happy.

Coverity-CID: 212957.
Fixes: 27832.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-09-22 16:00:03 -05:00
Ioannis Glaropoulos
14f248fe1b arch: arm: cortex_m: cleanup SW_VECTOR_RELAY_CLIENT dependencies
CPU Cortex-M implies Mainline Cortex-M; therefore, the dependency
on ARMV6_M_ARMV8_M_BASELINE is redundant and can be removed. The
change in this commit is a no-op.

We also add the ARMV6_M_ARMV8_M_BASELINE dependency on option
CPU_CORTEX_M0_HAS_VECTOR_TABLE_REMAP to make sure it cannot be
selected for non Cortex-M Baseline SoCs (at least, not without
a warning).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-21 11:19:22 +02:00
Anas Nashif
75d159bf0d tracing: x86_64: move switched_in to switch function
Tracing switched in threads in C code does not work, it needs to happen
in the arch_switch code. See also Xtensa and ARC.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-20 21:27:55 -04:00
Kumar Gala
283a057279 x86_64: Fix memory access size for locore EOI
Newer QEMU (5.1) hangs / timeouts on a number of tests on x86_64.  In
debugging the issue this is related to a fix in QEMU 5.1 that
validates memory region access.  QEMU has the APIC region only allowing
1 to 4 byte access.  64-bit access is treated as an error.

Change the APIC EOI access in locore.S back to just doing a 32-bit
access.

Fixes #28453

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-09-18 13:29:00 -05:00
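
A hedged C equivalent of the fix above: end-of-interrupt must be
signalled with a 32-bit store to the local APIC EOI register (offset
0xB0), since a 64-bit access to that MMIO region is rejected by newer
QEMU. The helper name is an assumption; the actual change is in the
locore.S assembly.

	#include <stdint.h>

	#define LOAPIC_EOI 0x0B0U   /* local APIC end-of-interrupt register */

	static inline void apic_eoi(uintptr_t apic_base)
	{
		/* 32-bit access only; writing 0 signals EOI. */
		*(volatile uint32_t *)(apic_base + LOAPIC_EOI) = 0U;
	}
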
Daniel Leung
43b2e291db x86_64: fix size to init stack at boot
The boot code of x86_64 initializes the stack (if enabled)
with a hard-coded size for the ISR stack. However,
the stack being used does not have to be the ISR stack,
and can be any defined stacks. So pass in the actual size
of the stack so the stack can be initialized properly.

Fixes #21843

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-17 21:05:45 -04:00
Crist Xu
ac3d9438ed drivers: usb: Fix usb fail when using the on-chip memory
Use SCB_CleanInvalidateDCache instead of SCB_DisableDCache &
SCB_EnableDCache when configuring the non-cache area, in case the
cache affects the configuration of the non-cache area.

Signed-off-by: Crist Xu <crist.xu@nxp.com>
2020-09-17 16:56:28 -05:00
Andrew Boie
8240d7d07b x86: memory map BIOS Data Area
Changes to paging code ensured that the NULL virtual page is
never mapped. Since RAM is identity mapped, on a PC-like
system accessing the BIOS Data Area in the first 4K requires
a memory mapping. We need to read this to probe the ACPI RSDP.

Additionally check that the BDA has something in it as well
and not a bunch of zeroes.

It is unclear whether this function is truly safe on UEFI
systems, but that is for another day.

Fixes: #27867

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-16 20:37:34 -04:00
Daniel Leung
5632ee26f3 x86: pcie: fallback to config via PIO
When probing for PCI-E device resources, it is possible that
configuration via MMIO is not available. This may be caused by the
BIOS or its settings. So when CONFIG_PCIE_MMIO_CFG=y, have
a fallback path to configure devices via PIO. The inability to
configure via MMIO has been observed on a couple of UP Squared
boards.

Fixes #27339

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-16 14:16:50 -05:00
Tomasz Bursztyka
ed98883795 device: Fixing new left over device instance made constant
Recent addition that went under the radar.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-15 09:18:43 -05:00
Andrew Boie
2bc21ea4de x86: print more detail on non-present pagefaults
The CPU sets the relevant bits indicating who tried to do what
if the page wasn't present; print them.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-11 09:03:01 -04:00
Andrew Boie
aebb9d8a45 aarch64: work around QEMU 'wfi' issue
Work around an issue where the emulator ignores host OS
signals when inside a `wfi` instruction.

This should be reverted once this has been addressed in the
AARCH64 build of QEMU in the SDK.

See https://github.com/zephyrproject-rtos/sdk-ng/issues/255

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-10 21:31:15 +02:00
Carlo Caione
d6f608219c arm64: tracing: Fix double tracing
When _arch_switch() API is used, the tracing of the thread swapped out
is done in the C kernel code (in do_swap() for cooperative scheduling
and in set_current() during preemption). In the assembly code we only
have to trace the thread when swapped in.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-09-09 15:36:43 -04:00
Ioannis Glaropoulos
394d2912a1 arch: arm: cortex-m: implement timing.c based on DWT
For Cortex-M platforms with DWT we implement
the timing API (timing.c).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-05 13:28:38 -05:00
Ioannis Glaropoulos
6f84d7d3fd arch: arm: cortex_m: conditionally select ARCH_HAS_TIMING_FUNCTIONS
Cortex-M SoCs implement (optionally) the Data Watchpoint and
Tracing Unit (DWT), which can be used for timing functions.
Select the corresponding ARCH capability if the SoC implements
the DWT.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-05 13:28:38 -05:00
Anas Nashif
6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only, feed timing information into a test and
was not used by anything else. The custom trace points unfortunately were
not accurate and this test was delivering information that conflicted
with other tests we have due to placement of such trace points in the
architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Anas Nashif
150c82c8f9 arch: nios2: add timing implementation
Add timing implementation for NIOS2 architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-05 13:28:38 -05:00
Anas Nashif
d896decb79 timing: add support for x86
Add initial support for X86 and get timestamps from tsc.

Authored-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-05 13:28:38 -05:00
Anas Nashif
5dec235196 arch: default timings for all architectures
Use default if architecture does not have a custom timing
implementation.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-09-05 13:28:38 -05:00
Eugeniy Paltsev
a28ec6201f isr_tables: don't whole-archive library
As of today we have a somewhat weird situation with the generated
sw_isr_table / irq_vector_table tables.

At the final linkage stage we pass two files whose contents include
sections with sw_isr_table / irq_vector_table. They are
 * libarch__common.a (with outdated tables from the first
   linkage stage)
 * isr_tables.c.obj (with the actual tables)

The sections where tables are located are marked with
".gnu.linkonce" prefix. That means:
<<<As a GNU extension, if the name begins with .gnu.linkonce,
we only link a single copy of the section.>>>

However the "libarch__common.a" is passed to linker with
"--whole-archive" option which means <<<include every object
file in the archive in the link, rather than searching the archive
for the required object files>>>

That combination confuses MWDT linker and breaks linkage with
MWDT toolchain.

As a simple fix we can move the sw_isr_table / irq_vector_table
sections to their own library and link this library with
"--no-whole-archive" option.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Wayne Ren
fb05bfce05 ARC: rename arch_switch implementation to z_arc_switch
"arch_switch" is declared as an inline function in kswap.h,
it should be a wrapper of arch level switch. The difference
of declaration and implementation of "arch_swich" causes
warning from MWDT compiler.

Use "arch_switch" with proper declararion (which is just
wraper for "z_arc_switch") to do conext switch for ARC.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Wayne Ren
ef224ce1cd ARC: make the assembly codes compatible
Make the assembly codes compatible with both GNU
and Metaware toolchain.

* replace ".balign" with ".align"
  ".align" assembler directive is supposed by all
  ARC toolchains and it is implemented in a same
  way across ARC toolchains.
* replace "mov_s __certain_reg" with "mov __certain_reg"
  Even though GCC encodes those mnemonics and even real
  HW executes them according to PRM these are restricted
  ones for mov_s and CCAC rightfully refuses to accept
  such mnemonics. So for compatibility and clarity sake
  we switch to 32-bit mov instruction which allows use
  of all those instructions.
* Add "%%" prefix while accessing registers from inline
  ASM as it is required by MWDT.
* Drop "@" prefix while accessing symbols (defined in C
  code) from ASM code as it is required by MWDT.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>

2020-09-05 10:22:56 -05:00
Wayne Ren
d67475ab6e ARC: handle the difference of assembly macro definition
GNU toolchain and MWDT (Metware) toolchain have different style
for accessing arguments in assembly macro. Implement the
preprocessor macro to handle the difference.

Make all ASM macros in swap_macros.h compatible for both ARC
toolchains.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-09-05 10:22:56 -05:00
Carlo Caione
df4aa230c8 arch: arm64: Use _arch_switch() API
Switch to the _arch_switch() API that is required for an SMP-aware
scheduler instead of using the old arch_swap mechanism.

SMP is not supported yet but this is a necessary step in that direction.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-09-05 12:06:38 +02:00
Torsten Rasmussen
c55c64e242 toolchain: improved toolchain abstraction for compilers and linker
First abstraction completed for the toolchains:
- gcc
- clang

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2020-09-04 20:36:59 +02:00
Øyvind Rønningstad
2be0086e87 cortex_m: tz_ns.h: Various fixes (late comments on PR)
Fix dox and restructure ASM.
No functional changes.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-04 19:05:58 +02:00
Pavel Král
06342e3474 arch: arm: mpu: Removal of include path pollution
Removes unnecessary and incorrect directories from include path.

Signed-off-by: Pavel Král <pavel.kral@omsquare.com>
2020-09-04 13:58:38 +02:00
Øyvind Rønningstad
c00f33dcb0 arch: arm: cortex_m: Add tz_ns.h
Provide a TZ_SAFE_ENTRY_FUNC() macro for wrapping non-secure entry
functions in calls to k_sched_lock()/k_sched_unlock()

Provide a __TZ_WRAP_FUNC() macro which helps in creating a function
that "wraps" another in a preface and postface function call.

	int foo(char *arg); // Implemented somewhere else.
	int __attribute__((naked)) foo_wrapped(char *arg)
	{
		WRAP_FUNC(bar, foo, baz);
	}

is equivalent to

	int foo(char *arg); // Implemented somewhere else.
	int foo_wrapped(char *arg)
	{
		bar();
		int res = foo(arg);
		baz();
		return res;
	}

This commit also adds tests for __TZ_WRAP_FUNC().

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-09-04 11:58:41 +02:00
Watson Zeng
1dddbecb35 tracing: swap: bug fix and enhancement for ARC
* Move switched_in into the arch context switch assembly code,
  which will correctly record the switched_in information.

* Add switched_in/switched_out for context switch in irq exit.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-09-03 21:54:15 +02:00
Andrew Boie
7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction point off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, of a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Flavio Ceolin
7f9cc2359e x86-32: Allow set DPL value for an exception
In order to make it possible to debug usermode threads, they need to be
able to issue breakpoint and debug exceptions. To do this it is necessary to
set DPL bits to, at least, the same CPL level.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-02 20:54:57 -04:00
Flavio Ceolin
5408f3102d debug: x86: Add gdbstub for X86
It implements gdb remote protocol to talk with a host gdb during the
debug session. The implementation is divided in three layers:

1 - The top layer that is responsible for the gdb remote protocol.
2 - An architecture specific layer responsible to write/read registers,
    set breakpoints, handle exceptions, ...
3 - A transport layer to be used to communicate with the host

The communication with GDB on the host is synchronous: the system
stops execution, waits for instructions, and resumes execution after
a "continue" or "step" command. The protocol has one exception, which is
when the host sends a packet to cause an interruption, usually triggered
by a Ctrl-C. This implementation ignores this instruction though.

This initial work supports only X86 using uart as backend.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-02 20:54:57 -04:00
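
A rough sketch of the top layer's packet handling under the GDB Remote
Serial Protocol ("$<payload>#<2-hex-digit checksum>", acknowledged with
'+' or '-'). The transport hooks are placeholders for whatever backend
(here assumed to be a UART-like character interface) sits at layer 3;
none of the names below are the actual Zephyr gdbstub API.

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	extern int transport_getchar(void);     /* assumed backend hook */
	extern void transport_putchar(char c);  /* assumed backend hook */

	static int read_packet(char *buf, size_t len)
	{
		uint8_t sum = 0U;
		size_t i = 0U;
		int c;

		while (transport_getchar() != '$') {
			/* wait for start of packet */
		}

		while ((c = transport_getchar()) != '#' && i < len - 1U) {
			buf[i++] = (char)c;
			sum += (uint8_t)c;
		}
		buf[i] = '\0';

		/* Two hex digits of checksum follow the '#'. */
		char chk[3];
		unsigned int expected;

		chk[0] = (char)transport_getchar();
		chk[1] = (char)transport_getchar();
		chk[2] = '\0';
		sscanf(chk, "%2x", &expected);

		transport_putchar((sum == expected) ? '+' : '-');
		return (sum == expected) ? 0 : -1;
	}
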
Andrew Boie
ffc1da08f9 kernel: add z_thread_single_abort to private hdr
We shouldn't be copy-pasting extern declarations like this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie
3425c32328 kernel: move stuff into z_thread_single_abort()
The same code was being copy-pasted in k_thread_abort()
implementations; just move it into z_thread_single_abort().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie
e34ac286b7 arm: don't lock irqs during thread abort
This isn't needed; match the vanilla implementation
in kernel/thread_abort.c and do this unlocked. This
should improve system latency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Andrew Boie
0a99011357 arm: thread_abort: clarify what's going on
A check was being done that was a more obscure way of
calling arch_is_in_isr(). Add a comment explaining
why we need to trigger PendSV.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Ioannis Glaropoulos
e08dfec77c arch: arm: cortex-m: add ARM-only API to set all IRQS to Non-Secure
We implement an ARM-only API for ARM Secure Firmware,
to set all NVIC IRQ lines to target the Non-Secure state.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-02 15:01:30 +02:00
Ioannis Glaropoulos
4ec7725110 arch: arm: cortex-m: Modify ARM-only API for IRQ target state mgmt
We modify the ARM Cortex-M-only API for managing the
security target state of the NVIC IRQs. We remove the
internal ASSERT checking, allowing the API to be called for
non-implemented NVIC IRQ lines. However, we still give the
user the option to check the success of the IRQ target
state setting operation by allowing the API function to
return the resulting target state.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-09-02 15:01:30 +02:00
Tomasz Bursztyka
4dcfb5531c isr: Normalize usage of device instance through ISR
The goal of this patch is to replace the 'void *' parameter with
'struct device *' if the ISR uses such a variable, or just with
'const void *', in all relevant ISRs.

This will avoid not-so-nice const qualifier tweaks when device instances
will be constant.

Note that only the ISR passed to IRQ_CONNECT are of interest here.

In order to do so, the script fix_isr.py below is necessary:

from pathlib import Path
import subprocess
import pickle
import mmap
import sys
import re
import os

cocci_template = """
@r_fix_isr_0
@
type ret_type;
identifier P;
identifier D;
@@
-ret_type <!fn!>(void *P)
+ret_type <!fn!>(const struct device *P)
{
 ...
(
 const struct device *D = (const struct device *)P;
|
 const struct device *D = P;
)
 ...
}

@r_fix_isr_1
@
type ret_type;
identifier P;
identifier D;
@@
-ret_type <!fn!>(void *P)
+ret_type <!fn!>(const struct device *P)
{
 ...
 const struct device *D;
 ...
(
 D = (const struct device *)P;
|
 D = P;
)
 ...
}

@r_fix_isr_2
@
type ret_type;
identifier A;
@@
-ret_type <!fn!>(void *A)
+ret_type <!fn!>(const void *A)
{
 ...
}

@r_fix_isr_3
@
const struct device *D;
@@
-<!fn!>((void *)D);
+<!fn!>(D);

@r_fix_isr_4
@
type ret_type;
identifier D;
identifier P;
@@
-ret_type <!fn!>(const struct device *P)
+ret_type <!fn!>(const struct device *D)
{
 ...
(
-const struct device *D = (const struct device *)P;
|
-const struct device *D = P;
)
 ...
}

@r_fix_isr_5
@
type ret_type;
identifier D;
identifier P;
@@
-ret_type <!fn!>(const struct device *P)
+ret_type <!fn!>(const struct device *D)
{
 ...
-const struct device *D;
...
(
-D = (const struct device *)P;
|
-D = P;
)
 ...
}
"""

def find_isr(fn):
    db = []
    data = None
    start = 0

    try:
        with open(fn, 'r+') as f:
            data = str(mmap.mmap(f.fileno(), 0).read())
    except Exception as e:
        return db

    while True:
        isr = ""
        irq = data.find('IRQ_CONNECT', start)
        while irq > -1:
            p = 1
            arg = 1
            p_o = data.find('(', irq)
            if p_o < 0:
                irq = -1
                break;

            pos = p_o + 1

            while p > 0:
                if data[pos] == ')':
                    p -= 1
                elif data[pos] == '(':
                    p += 1
                elif data[pos] == ',' and p == 1:
                    arg += 1

                if arg == 3:
                    isr += data[pos]

                pos += 1

            isr = isr.strip(',\\n\\t ')
            if isr not in db and len(isr) > 0:
                db.append(isr)

            start = pos
            break

        if irq < 0:
            break

    return db

def patch_isr(fn, isr_list):
    if len(isr_list) <= 0:
        return

    for isr in isr_list:
        tmplt = cocci_template.replace('<!fn!>', isr)
        with open('/tmp/isr_fix.cocci', 'w') as f:
            f.write(tmplt)

        cmd = ['spatch', '--sp-file', '/tmp/isr_fix.cocci', '--in-place', fn]

        subprocess.run(cmd)

def process_files(path):
    if path.is_file() and path.suffix in ['.h', '.c']:
        p = str(path.parent) + '/' + path.name
        isr_list = find_isr(p)
        patch_isr(p, isr_list)
    elif path.is_dir():
        for p in path.iterdir():
            process_files(p)

if len(sys.argv) < 2:
    print("You need to provide a dir/file path")
    sys.exit(1)

process_files(Path(sys.argv[1]))

And is run: ./fix_isr.py <zephyr root directory>

Finally, some files needed manual fixes.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
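
For illustration, the shape of the transformation that the coccinelle
rules above apply to a typical driver ISR (the driver name is made up):

	struct device;

	/* Before:
	 *   static void foo_isr(void *arg)
	 *   {
	 *           const struct device *dev = (const struct device *)arg;
	 *           ...
	 *   }
	 */

	/* After: the parameter is the device pointer itself, no cast needed. */
	static void foo_isr(const struct device *dev)
	{
		(void)dev;
	}
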
Tomasz Bursztyka
93cd336204 arch: Apply dynamic IRQ API change
Switching to constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
6df8b3995e irq: Change dynamic API to take a constant parameter
All ISRs are meant to take a const struct device pointer, but to
simplify the change let's just make the parameter constant, and that
should be fine.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
7def6eeaee arch: Apply IRQ offload API change
Switching to constant parameter.

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Tomasz Bursztyka
e18fcbba5a device: Const-ify all device driver instance pointers
Now that device_api attribute is unmodified at runtime, as well as all
the other attributes, it is possible to switch all device driver
instances to be constant.

A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
 disable optional_qualifier
@
@@
-struct device * const
+const struct device *

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Chris Coleman
99a268fa16 arch: arm: Collect full register state in Cortex-M Exception Stack Frame
To debug hard-to-reproduce faults/panics, it's helpful to get the full
register state at the time a fault occurred. This enables recovering
full backtraces and the state of local variables at the time of a
crash.

This PR introduces a new Kconfig option, CONFIG_EXTRA_EXCEPTION_INFO,
to facilitate this use case. The option enables the capturing of the
callee-saved register state (r4-r11 & exc_return) during a fault. The
info is forwarded to `k_sys_fatal_error_handler` in the z_arch_esf_t
parameter. From there, the data can be saved for post-mortem analysis.

To test the functionality a new unit test was added to
tests/arch/arm_interrupt which verifies the register contents passed
in the argument match the state leading up to a crash.

Signed-off-by: Chris Coleman <chris@memfault.com>
2020-08-31 10:13:27 +02:00
Ruud Derwig
80194192d4 arc: Fix for undefined shift behavior (CID 211523)
find_msb_set() can return 0, resulting in an undefined shift of 255 bits.
Fixes #27319

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
2020-08-27 07:53:49 -04:00
Andrew Boie
00f71b0d63 kernel: add CONFIG_ARCH_MEM_DOMAIN_SYNCHRONOUS_API
Saves us a few bytes of program text on arches that don't need
these implemented, currently all uniprocessor MPU-based systems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
ed938d18b1 arc: fix arch_ API implementations on current CPU
All of these should be no-ops for the following reasons:

1. User threads cannot configure memory domains, only supervisor
   threads.
2. The scope of memory domains is user thread memory access,
   supervisor threads can access the entire memory map.

Hence it's never required to reprogram the MPU on the current CPU
when a memory domain API is called.

This does not address issue #27785 if a user thread in the domain
is running on some other CPU.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
2222fa1426 arm: fix memory domain arch_ API implementations
All of these should be no-ops for the following reasons:

1. User threads cannot configure memory domains, only supervisor
   threads.
2. The scope of memory domains is user thread memory access,
   supervisor threads can access the entire memory map.

Hence it's never required to reprogram the MPU when a memory domain
API is called.

Fixes a problem where an assertion would fail if a supervisor thread
added a partition and then immediately removes it, and possibly
other problems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Andrew Boie
91f1bb5414 arm: clarify a memory domain assertion
Dump the partition information to make this assertion
less ambiguous.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Jingru Wang
dae250472f gcov: Add coverage support for arc qemu platform
* add toolchain abstraction for coverage
* add select HAS_COVERAGE_SUPPORT to kconfig
* port gcov linker code to CKake for arc

Signed-off-by: Jingru Wang <jingru@synopsys.com>
2020-08-26 12:32:39 +02:00
Andrew Boie
38e17b68e3 x86: paging code rewrite
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.

 - Paging code now uses an array of paging level characteristics and
   walks tables using for loops. This is opposed to having different
   functions for every paging level and lots of #ifdefs. The code is
   now more concise and adding new paging modes should be trivial.

 - We now support 32-bit, PAE, and IA-32e page tables.

 - The page tables created by gen_mmu.py are now installed at early
   boot. There are no longer separate "flat" page tables. These tables
   are mutable at any time.

 - The x86_mmu code now has a private header. Many definitions that did
   not need to be in public scope have been moved out of mmustructs.h
   and either placed in the C file or in the private header.

 - Improvements to dumping page table information, with the physical
   mapping and flags all shown

 - arch_mem_map() implemented

 - x86 userspace/memory domain code ported to use the new
   infrastructure.

 - add logic for physical -> virtual instruction pointer transition,
   including cleaning up identity mappings after this takes place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
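
A condensed sketch of the "array of paging level characteristics"
approach mentioned above, shown for IA-32e 4-level paging. The structure
fields and the identity-mapped table access are assumptions for
illustration, not the real x86_mmu internals.

	#include <stddef.h>
	#include <stdint.h>

	struct paging_level {
		uint64_t addr_mask;    /* physical address bits of an entry */
		size_t entries;        /* entries per table at this level */
		unsigned int shift;    /* VA bit where this level's index starts */
	};

	static const struct paging_level levels[] = {
		{ 0x000FFFFFFFFFF000ULL, 512, 39 },  /* PML4 */
		{ 0x000FFFFFFFFFF000ULL, 512, 30 },  /* PDPT */
		{ 0x000FFFFFFFFFF000ULL, 512, 21 },  /* PD   */
		{ 0x000FFFFFFFFFF000ULL, 512, 12 },  /* PT   */
	};

	#define NUM_LEVELS (sizeof(levels) / sizeof(levels[0]))

	/* Walk down to the terminal entry for a virtual address, assuming
	 * tables are identity-mapped so entry addresses can be dereferenced
	 * directly. */
	static uint64_t *page_walk(uint64_t *top_table, uintptr_t virt)
	{
		uint64_t *table = top_table;

		for (size_t lvl = 0; lvl < NUM_LEVELS - 1; lvl++) {
			size_t idx = (virt >> levels[lvl].shift) %
				     levels[lvl].entries;

			table = (uint64_t *)(uintptr_t)
				(table[idx] & levels[lvl].addr_mask);
		}

		return &table[(virt >> levels[NUM_LEVELS - 1].shift) %
			      levels[NUM_LEVELS - 1].entries];
	}
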
Andrew Boie
ddb63c404f x86_64: fix sendling locore EOI
The address was being truncated because we were using
32-bit registers. CONFIG_MMU is always enabled on 64-bit,
remove the #ifdef.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
df2fe7c1f7 x86: add build system hooks for page tables
We need to produce a binary set of page tables wired together
by physical address. Add build system logic to use the script
to produce them.

Some logic for running build scripts that produce artifacts moved
out of IA32 into common CMake code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
e9d15451b1 x86: add new page table generation script
This produces a set of page tables with system RAM
mapped for read/write/execute access by supervisor
mode, such that it may be installed in the CPU
in the earliest boot stages and mutable at runtime.

These tables optionally support a dual physical/virtual
mapping of RAM to help boot virtual memory systems.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
1524ef2f52 arch: Kconfig: add sub-menu for MMU options
De-clutters the main menu.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
d03c60b71b x86: force VM base to be identity mapped
The x86 ports are linked at their physical address and
the arch_mem_map() implementation currently requires
virtual = physical. This will be removed later.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
2c3523e421 mmu: add virtual memory Kconfigs
Specify the virtual address range for kernel mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
88ddce652c mmu: add CONFIG_SRAM_REGION_PERMISSIONS
If CONFIG_MMU is active, choose whether to separate text,
rodata, and ram into their own page-aligned regions so that
they have different MMU permissions applied.

If disabled, all RAM pages will have RWX permission to
supervisor mode, but some memory may be saved due to lack
of page alignment padding between these regions.

This used to always happen. This patch adds the Kconfig; the
linker script changes come in a subsequent patch.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Daniel Leung
181d07321f coredump: add support for ARM Cortex-M
This adds the necessary bits in arch code, and Python scripts
to enable coredump support for ARM Cortex-M.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Daniel Leung
8fbb14ef50 coredump: add support for x86 and x86_64
This adds the necessary bits to enable coredump for x86
and x86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Daniel Leung
49206a86ff debug/coredump: add a primitive coredump mechanism
This adds a very primitive coredump mechanism under subsys/debug
where during fatal error, register and memory content can be
dumped to coredump backend. One such backend utilizing log
module for output is included. Once the coredump log is converted
to a binary file, it can be used with the ELF output file as
inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via the target remote command of
GDB and will be serving register and memory content. This allows
using GDB to examine stack and memory where the fatal error
occurred.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif
0be0743144 tracing: x86: trace isr_exit
We were missing the exit from the ISR.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
e5e6ba240d tracing: posix_arch: trace swap
Moving trace points to the architecture code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
caa348034f tracing: riscv: fix tracing of context switch
We had switched_in and switched_out mixed up.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
b234660f4f tracing: cortex_a53: fix order of swap tracing
We had switched_in and switched_out mixed up.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
c1a2c7992b tracing: nios2: fix swap tracing
Fixed ordering of tracepoints for the nios2 architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
a45c403c56 tracing: arc: depend on CONFIG_TRACING_ISR for ISRs
Use CONFIG_TRACING_ISR to exclude tracing ISRs just like other
architectures.

Also, z_sys_trace_isr_exit was not defined (It was renamed some time ago
and this was forgotten...)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Anas Nashif
d1049dc258 tracing: swap: cleanup trace points and their location
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Andrew Boie
44ed730724 x86: static scope for exception handlers
These just need `__used` to avoid compiler warnings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-20 10:19:41 +02:00
Anas Nashif
bc40cbc9b4 arch: x86: guard some functions based on usage
Found when building with clang..

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-19 06:57:40 -04:00
Carlo Caione
310057d641 arch: arm64: Parametrize registers usage for z_arm64_{enter,exit}_exc
Make explicit what registers we are going to be touched / modified when
using z_arm64_enter_exc and z_arm64_exit_exc.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-08-18 15:17:39 +02:00
Carlo Caione
d187830929 arch: arm64: Rework registers allocation
Rationalize the registers usage trying to reuse the smallest set of
registers possible.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-08-18 15:17:39 +02:00
Andrew Boie
ed972b9582 arm: remove custom k_thread_abort() for Cortex-R
The default implementation is the same as this custom
one now, as the assertion that the context switch occurs
at the end of the ISR is true for all arches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-18 08:36:35 +02:00
Andrew Boie
be0d69f1e6 arch: posix: don't exit ISR on thread abort
If a thread is running, an ISR fires, and the ISR
itself calls k_thread_abort() on the thread, the ISR
was being unexpectedly terminated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-18 08:36:35 +02:00
Henrik Brix Andersen
e7f51fa918 arch: arm: aarch32: add support for Cortex-M1
Add support for the ARM Cortex-M1 CPU.

Signed-off-by: Henrik Brix Andersen <henrik@brixandersen.dk>
2020-08-14 13:35:39 -05:00
Andrew Boie
83aedb2377 x86: increment default page pool pages
With the current identity mapping scheme a new test requires
some more memory to be set aside here.

In production this parameter gets tuned per-board, and
the pending paging code overhaul in #27001 significantly
relaxes this as driver I/O mappings are no longer sparse.

Fixes a runtime failure in tests/kernel/device on
qemu_x86_64 that somehow slipped past CI.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-12 16:13:19 -05:00
Anas Nashif
ce59510127 arch: xip: cleanup XIP Kconfig
unify how XIP is configured across architectures. Use imply instead of
setting defaults per architecture and imply XIP on riscv arch and remove
XIP configuration from individual defconfig files to match other
architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-07 09:50:22 -04:00
Ioannis Glaropoulos
fa04bf615c arch: arm: cortex-m: hw stack protection under no multi-threading
This commit adds the support for HW Stack Protection when
building Zephyr without support for multi-threading. The
single MPU guard (if the feature is enabled) is set to
guard the Main stack area. The stack fail check is also
updated.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos
4338552175 arch: arm: cortex-m: introduce custom switch to main function
For the case of building Zephyr with no-multithreading
support (CONFIG_MULTITHREADING=n) we introduce a
custom (ARCH-specific) function to switch to main()
from cstart(). This is required, since the Cortex-M
initialization code is temporarily using the interrupt
stack and main() should be using the z_main_stack,
instead. The function performs the PSP switching,
the PSPLIM setting (for ARMv8-M), FPU initialization
and static memory region initialization, to mimic
what the normal (CONFIG_MULTITHREADING=y) case does.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos
e9a85eec28 arch: arm: cortex-m: extract common code block into a static function
We extract the common code for both multithreading and
non-multithreading cases into a common static function
which will get called in Cortex-M architecture initialization.
This commit does not introduce behavioral changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Ioannis Glaropoulos
1a5390f438 arch: arm: cortex-m: fix the accounted size of IRQ stack in reset.S
This patch is simply adding the guard area (if applicable) to
the calculations for the size of the interrupt stack in reset.S
for the ARM Cortex-M architecture. If it exists, the GUARD area is
always reserved in addition to CONFIG_ISR_STACK_SIZE, since the
interrupt stack is defined using K_KERNEL_STACK_DEFINE.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-08-07 13:06:04 +02:00
Torsten Rasmussen
bb3946666f cmake: fix include directories to work with out-of-tree arch
Include directories for ${ARCH} are not specified correctly.
In several places in Zephyr, the include directories are specified as:
${ZEPHYR_BASE}/arch/${ARCH}/include
the correct line is:
${ARCH_DIR}/${ARCH}/include
to correctly support out of tree archs.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2020-08-05 08:06:07 -04:00
Carles Cufi
244f826e3c cmake: remove _if_kconfig() functions
This set of functions seem to be there just because of historical
reasons, stemming from Kbuild. They are non-obvious and prone to errors,
so remove them in favor of the `_ifdef()` ones with an explicit
`CONFIG_` condition.

Script used:

git grep -l _if_kconfig | xargs sed -E -i
"s/_if_kconfig\(\s*(\w*)/_ifdef(CONFIG_\U\1\E \1/g"

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-08-01 12:35:20 +02:00
Andrew Boie
8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
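
For reference, a minimal sketch of how these new kernel-stack macros might be used for a supervisor-only thread; the stack size, priority and entry function below are illustrative assumptions, not taken from the commit.

    /* Sketch: declare a kernel-only stack and start a thread on it.
     * Sizes and priority are placeholder values.
     */
    #include <zephyr.h>

    #define WORKER_STACK_SIZE 1024
    #define WORKER_PRIO       5

    K_KERNEL_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);
    static struct k_thread worker_thread;

    static void worker_entry(void *p1, void *p2, void *p3)
    {
        /* supervisor-only work; this thread never drops to user mode */
    }

    void start_worker(void)
    {
        k_thread_create(&worker_thread, worker_stack,
                        K_KERNEL_STACK_SIZEOF(worker_stack),
                        worker_entry, NULL, NULL, NULL,
                        WORKER_PRIO, 0, K_NO_WAIT);
    }
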
Andrew Boie
e4cc84a537 kernel: update arch_switch_to_main_thread()
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.

Based on #24467 authored by Flavio Ceolin.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations,
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve-out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
24825c8667 arches: fix arch_new_thread param names
MISRA-C wants the parameter names in a function implementation
to match the names used by the header prototype.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
0c69561469 arch: remove duplicate docs for arch_new_thread
This interface is documented already in
kernel/include/kernel_arch_interface.h

Other architectural notes were left in place except where
they were incorrect (like the thread struct
being in the low stack addresses)

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie
62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
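
Taken together, the stack overhaul commits above shrink what an architecture port has to do when a thread is created. The sketch below shows roughly the resulting shape of arch_new_thread(); the init_frame layout and the way the initial stack pointer is recorded are illustrative assumptions, not any real port's code.

    /* Sketch only: the kernel now hands the arch layer a fully computed
     * and aligned stack_ptr, so the port just places an initial frame
     * there. Frame layout and SP bookkeeping below are hypothetical;
     * alignment handling is omitted.
     */
    #include <kernel.h>

    struct init_frame {              /* hypothetical layout */
        k_thread_entry_t entry;
        void *p1, *p2, *p3;
    };

    void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
                         char *stack_ptr, k_thread_entry_t entry,
                         void *p1, void *p2, void *p3)
    {
        struct init_frame *frame =
            (struct init_frame *)(stack_ptr - sizeof(*frame));

        ARG_UNUSED(stack);

        frame->entry = entry;
        frame->p1 = p1;
        frame->p2 = p2;
        frame->p3 = p3;

        /* A real port would record 'frame' as the thread's initial stack
         * pointer in its arch-specific thread struct; the field name
         * varies per architecture.
         */
        (void)thread;
    }
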
Ioannis Glaropoulos
b0c5e6335a arch: arm: cortex-m: move the relay table section after vector table
In CPUs with VTOR we are free to place the relay vector table
section anywhere inside ROM_START section (as long as we respect
alignment requirements). This PR moves the relay table towards
the end of ROM_START. This leaves sufficient area for placing
some SoC-specific sections inside ROM_START that need to start
at a fixed address.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-07-27 13:23:36 +02:00
David Leach
565a61dd79 arch/x86: zefi: Fix mismatch in printf with args
printf function didn't have enough specifiers for the
number of arguments in the command line (Coverity warning).

Fixes #26985
Fixes #26986

Signed-off-by: David Leach <david.leach@nxp.com>
2020-07-24 21:51:14 -04:00
Eugeniy Paltsev
9cb9906418 ARC: cleanup exit_tickless_idle
Rewrite 'exit_tickless_idle' macro to make code more readable.
No functional changes intended.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-24 12:05:12 +02:00
Eugeniy Paltsev
3f544ca5b2 ARC: replace NOP ASM inlines with builtin
NOP instruction is available via builtin for ARC so get rid of all
ASM inlines with NOP/NOP_S instructions.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-24 12:05:12 +02:00
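
Presumably the change boils down to a substitution of the following shape; the wrapper names here are made up for illustration, and __builtin_arc_nop() is the ARC compiler builtin referred to above.

    /* Before: NOP emitted via inline assembly */
    static inline void example_nop_asm(void)
    {
        __asm__ volatile("nop_s");
    }

    /* After: NOP emitted via the ARC compiler builtin */
    static inline void example_nop_builtin(void)
    {
        __builtin_arc_nop();
    }
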
Andrzej Głąbek
ec8cb07da2 arch: arm: Export vector table symbols with GDATA instead of GTEXT
_vector_table and __vector_relay_table symbols were exported with GTEXT
(i.e. as functions). That resulted in bit[0] being incorrectly set in
the addresses they represent (for functions this bit set to 1 specifies
execution in Thumb state).
This commit corrects this by switching to exporting these objects as
objects, i.e. with GDATA.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2020-07-24 12:04:28 +02:00
Daniel Leung
3f6ac9fdfd arm: add include guard for offset files
MISRA-C directive 4.10 requires that files being included must
prevent themselves from being included more than once. So add
include guards to the offset files, even though they are C
source files.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-24 10:01:12 +02:00
Daniel Leung
49199641b9 x86: add include guard for offset files
MISRA-C directive 4.10 requires that files being included must
prevent themselves from being included more than once. So add
include guards to the offset files, even though they are C
source files.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-07-24 10:01:12 +02:00
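
The guard added by the two commits above is the standard pattern; a sketch with a made-up guard macro name (the in-tree offset files use their own names):

    /* Offset files are C sources, but per MISRA-C directive 4.10 they
     * still get an include guard. The macro name below is illustrative.
     */
    #ifndef EXAMPLE_ARCH_OFFSETS_GUARD_H_
    #define EXAMPLE_ARCH_OFFSETS_GUARD_H_

    /* generated offset symbol definitions go here */

    #endif /* EXAMPLE_ARCH_OFFSETS_GUARD_H_ */
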
David Leach
5803ec1bf0 arch: arm: mpu: Use temporary MPU mapping while reprogramming NXP MPU
Race conditions exist when remapping the NXP MPU. When writing the
start, end, or attribute registers of a MPU descriptor, the hardware
will automatically clear the region's valid bit. If that region gets
accessed before the code is able to set the valid bit, the core will
fault.

Issue #20595 revealed this problem with the code in region_init()
when the compiler options are set to no optimizations. The code
generated by the compiler put local variables on the stack and then
read those stack based variables when writing the MPU descriptor
registers. If that region mapped the stack a memory fault would occur.
Higher compiler optimizations would store these local variables in
CPU registers which avoided the memory access when programming the
MPU descriptor.

Because the NXP MPU uses a logical OR operation of the MPU descriptors,
the fix uses the last descriptor in the MPU hardware to remap all of
dynamic memory for access, instead of the first of the dynamic memory
descriptors as was occurring before. This allows reprogramming of the
primary descriptor blocks without causing a memory fault. After all
the dynamic memory blocks are mapped, the unused blocks will have
their valid bits cleared, including this temporary one, if it wasn't
already changed during the mapping of the current set.

Fixes #20595

Signed-off-by: David Leach <david.leach@nxp.com>
2020-07-22 11:27:40 +02:00
Eugeniy Paltsev
3eee762e08 ARC: NSIM: switch to ns16550 UART model
Switch nSIM from custom ARC UART to ns16550 model. That will
allow us to use zephyr images built for nSIM on other platforms
like HAPS, QEMU, etc...

This patch does the following:
 * switch nSIM board to ns16550 UART usage
 * change nSIM simulator configuration to use ns16550 UART model
 * drop checks for CONFIG_UART_NSIM in ARC code
 * update nSIM documentation

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
2020-07-20 13:34:34 -04:00
Andrew Boie
98bcc51b09 x86: gen_gdt: improve docstring
Describe what info we're snarfing out of the prebuilt kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-19 08:50:52 -04:00
Johan Hedberg
38333afe0e arch: x86: zefi: Reduce data section alignment requirement from 8 to 4
It's not safe to assume that the data section is 8-byte aligned.
Assuming 4-byte alignment seems to work however, and results in
simpler code than arbitrary alignment support.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-07-18 08:44:31 -04:00
Anas Nashif
4382532fd3 x86: zefi: support arguments and make compatible with windows
Add argument parsing and use os.path.join where possible to support
building on windows.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-07-18 08:44:31 -04:00
Andrew Boie
4df734683e x86: 32-bit: enable thread stack info
The hardware stack overflow feature requires
CONFIG_THREAD_STACK_INFO enabled in order to distinguish
stack overflows from other causes when we get an exception.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-18 07:21:53 -04:00
Andrew Boie
aec607cc67 x86: remove memory mapping SOC code
This isn't needed any more, all of these directives were
for drivers which use device_map() now.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
c802cc8920 x86: pcie: use device_map() for MMIO config access
Replaces custom runtime calls to map memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
ee3c50ba6d x86: apic: use device MMIO APIs
A hack was required for the loapic code due to the address
range not being in DTS. A bug was filed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
a810c7b5a0 x86: early_serial: use device_map()
This driver code uses PCIe and doesn't use Zephyr's
device model, so we can't use the nice DEVICE_MMIO macros.
Set stuff up manually instead using device_map().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
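
For reference, a minimal sketch of the device_map() pattern the three commits above converge on; the physical address and size are placeholders, not the real early-serial or APIC values.

    /* Sketch: manually map an MMIO region when the DEVICE_MMIO macros
     * don't apply. Address and size below are placeholder values.
     */
    #include <sys/device_mmio.h>

    #define EXAMPLE_PHYS_ADDR 0xfe000000UL   /* placeholder */
    #define EXAMPLE_REG_SIZE  0x1000

    static mm_reg_t example_base;

    void example_init(void)
    {
        device_map(&example_base, EXAMPLE_PHYS_ADDR, EXAMPLE_REG_SIZE,
                   K_MEM_CACHE_NONE);

        /* registers are now reachable, e.g. sys_read32(example_base + 0x10) */
    }
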
Andrew Boie
e7376057ca x86: add arch_mem_map()
This currently only supports identity paging; there's just
enough here for device_map() calls to work.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
542dcae0c7 arch: add CONFIG_MMU
This config indicates that a memory management unit is present
and enabled, which will in turn allow arch APIs to allow
mapping memory to be used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Andrew Boie
ff294e02cd arch: add CONFIG_CPU_HAS_MMU
Indicate that the CPU has a memory management unit,
similar to CPU_HAS_MPU for MPUs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Aastha Grover
ffd8e8aefc arch: x86: core: Add cache flush function for x86
Adding just the cache flush function for x86. The name
arch_cache_flush complies with the API names in include/cache.h

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-07-15 15:53:26 -07:00
Aastha Grover
97ecad69f0 include: Implement API's for cache flush and cache invalidate
arch: arc: core: Add Cache Implementation function & prototype for arc

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-07-15 15:53:26 -07:00
Johan Hedberg
0329027bd8 arch: x86: zefi: Fix assuming segment size being a multiple of 8
The p_memsz field which indicates the size of a segment in memory
isn't always a multiple of 8. Remove the assert and add padding if
necessary. Without this change it's not possible to generate EFI
binaries out of all samples & tests in the tree.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-07-14 18:21:38 -04:00
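
The fix described above amounts to rounding the copy length up to an 8-byte multiple and zero-filling the tail. A small illustrative version of that arithmetic (the actual change lives in the zefi tooling, and these names are made up):

    /* Illustrative only: copy a segment whose in-memory size is not
     * necessarily a multiple of 8, padding the remainder with zeros.
     */
    #include <stdint.h>
    #include <string.h>

    static size_t copy_padded_to_8(uint8_t *dst, const uint8_t *src,
                                   size_t memsz)
    {
        size_t padded = (memsz + 7U) & ~(size_t)7U;

        memcpy(dst, src, memsz);
        memset(dst + memsz, 0, padded - memsz);
        return padded;
    }
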
Rafał Kuźnia
f2b0bfda8f arch: arm: aarch32: Always use VTOR when it is available
Zephyr applications will always use the VTOR register when it is
available on the CPU and the register will always be configured
to point to applications vector table during startup.

SW_VECTOR_RELAY_CLIENT is meant to be used only on baseline ARM cores.

SW_VECTOR_RELAY is intended to be used only by the bootloader.
The bootloader may configure the VTOR to point to the relay table
right before chain-loading the application.

Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Andrzej Puzdrowski
4152ccf124 arch/arm/aarch32: ensured SW IRQ relay modes exclusive
Select either SW_VECTOR_RELAY or SW_VECTOR_RELAY_CLIENT
at a time.

Removed #ifdef-ry in irq_relay.S as SW_VECTOR_RELAY was
refined so it became reserved for the bootloader and it
conditionally includes irq_relay.S for compilation.
See SHA #fde3116f1981cf152aadc2266c66f8687ea9f764

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Rafał Kuźnia
89bf746ebe arch/arm/aarch32: add IRQ relay mechanism to ARMv7/8-M
This patch allows the `SW_VECTOR_RELAY` and
`SW_VECTOR_RELAY_CLIENT` pair to be
enabled on the ARMv7-M and ARMv8-M architectures
and covers all additional interrupt vectors.

Signed-off-by: Rafał Kuźnia <rafal.kuznia@nordicsemi.no>
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2020-07-14 16:17:30 +02:00
Ioannis Glaropoulos
e80e655b01 arch: arm: cortex_m: align vector table based on VTOR requirements
Enforce VTOR table offset alignment requirements on Cortex-M
vector table start address.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-07-14 13:03:25 +02:00
Karsten Koenig
2e61137cc9 arch: riscv: thread: Init soc context on stack
The optional SOC_CONTEXT carries processor state registers that need to
be initialized properly to avoid uninitialized memory being read as
processor state.
In particular on the RV32M1 the extra soc context stores a state for
special loop instructions, and loading non zero values will have the
core assume it is in a loop.

Signed-off-by: Karsten Koenig <karsten.koenig.030@gmail.com>
2020-07-13 15:00:19 -05:00
Stephanos Ioannidis
3322489d22 config: Rename TEXT_SECTION_OFFSET to ROM_START_OFFSET
The `TEXT_SECTION_OFFSET` symbol is used to specify the offset between
the beginning of the ROM area and the address of the first ROM section.

This commit renames `TEXT_SECTION_OFFSET` to `ROM_START_OFFSET` because
the first ROM section is not always the `.text` section.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-07-09 14:02:38 -04:00
Andy Ross
0e20eafe7a arch/x86: Make sure PCI mmio is initialized before page tables
The page table initialization needs a populated PCI MMIO
configuration, and that is lazy-evaluated.  We aren't guaranteed that
a driver already hit that path, so be sure to call it explicitly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
2626d702c6 arch/x86: zefi must disable HPET before OS handoff
The firmware on existing devices uses HPET timer zero for its own
purposes, and leaves it alive with interrupts enabled.  The Zephyr
driver now knows how to recover from this state with fuller
initialization, but that's not enough to fix the inherent race:

The timer can fire BEFORE the driver initialization happens (and does,
with certain versions of the EFI shell), thus flagging an interrupt to
what Zephyr sees as a garbage vector.  The OS can't fix this on its
own, the EFI bootloader (which is running with interrupts enabled as
part of the EFI environment) has to do it.  Here we can know that our
setting got there in time and didn't result in a stale interrupt flag
in the APIC waiting to blow up when interrupts get enabled.

Note: this is really just a workaround.  It assumes the hardware has
an HPET with a standard address.  Ideally we'd be able to build zefi
using Zephyr kconfig and devicetree values and predicate the HPET
reset on the correct configuration.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
c7d76cbe58 arch/x86: Add a spurious interrupt handler to x86_64
Right now x86_64 doesn't install handlers for vectors that aren't
populated by Zephyr code.  Add a tiny spurious interrupt handler that
logs the error and triggers a fatal error, like other platforms do.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
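
A sketch of what such a catch-all handler typically looks like; the function name and signature are assumptions for illustration, while K_ERR_SPURIOUS_IRQ and k_fatal_halt() are the existing kernel facilities for this kind of fatal error.

    /* Sketch: log the spurious vector and halt with a fatal error,
     * similar in spirit to what other platforms do.
     */
    #include <kernel.h>
    #include <fatal.h>
    #include <sys/printk.h>

    void example_spurious_isr(const void *param)
    {
        ARG_UNUSED(param);

        printk("Spurious interrupt\n");
        k_fatal_halt(K_ERR_SPURIOUS_IRQ);
    }
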
Andy Ross
f7dd9856ba arch/x86/early_serial: General code cleanup
This patch is almost entirely aesthetics, designed to isolate the
variant configurations to a simple macro API (just IN/OUT), reduce
complexity derived from code pasted out of the larger ns16550 driver,
and keep the complexity out of the (very simple!) core code.  Useful
when hacking on the driver in contexts where it isn't working yet.

The sole behavioral change here is that I've removed the runtime
printk hook installation in favor of defining an
arch_printk_char_out() function which overrides the weak-linked
default (that is, we don't need to install a hook, we can be the
default hook at startup).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
d2eca354e8 arch/x86: early_serial cleanup
Various cleanups to the x86 early serial driver, mostly with the goal
of simplifying its deployment during board bringup (which is really
the only reason it exists in the first place):

+ Configure it =y by default.  While there are surely constrained
  environments that will want to disable it, this is a TINY driver,
  and it serves a very important role for niche tasks.  It should be
  built always to make sure it works everywhere.

+ Decouple from devicetree as much as possible.  This code HAS to work
  during board bringup, often with configurations cribbed from other
  machines, before proper configuration gets written.  Experimentally,
  devicetree errors tend to be easy to make, and without a working
  console impossible to diagnose.  Specify the device via integer
  constants in soc.h (in the case of IOPORT access, we already had
  such a symbol) so that the path from what the developer intends to
  what the code executes is as short and obvious as possible.
  Unfortunately I'm not allowed to remove devicetree entirely here,
  but at least a developer adding a new platform will be able to
  override it in an obvious way instead of banging blindly on the
  other side of a DTS compiler.

+ Don't try to probe the PCI device by ID to "verify".  While this
  sounds like a good idea, in practice it's just an extra thing to get
  wrong.  If we bail on our early console because someone (yes, that's
  me) got the bus/device/function right but typoed the VID/DID
  numbers, we're doing no one any favors.

+ Remove the word-sized-I/O feature.  This is a x86 driver for a PCI
  device.  No known PC hardware requires that UART register access be
  done in dword units (in fact doing so would be a violation of the
  PCI specification as I understand it).  It looks to have been cut
  and pasted from the ns16550 driver, remove.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
36b8db0129 arch/x86: Map 512G by default when booting x86_64
The default page table (the architecturally required one used for
entrance to long mode, before the OS page tables get assembled) was
mapping the first 4G of memory.

Extend this to 512G by fully populating the second level page table.
We have devices now (up_squared) which have real RAM mapped above 4G.
There's really no good reason not to do this, the page is present
always anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-08 12:34:09 +02:00
Andy Ross
4d5e67ed13 arch/x86: Unbreak SMP startup on x86_64
A last minute "cleanup" to the EFI startup path (on a system where I
had SMP disabled) moved the load of the x86_cpuboot[0] entry into RBP
into the main startup code, which is wrong because on auxiliary CPUs
that's already set up by the 16/32 bit entry code to point to the
OTHER entries.

Put it back where it belongs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-07 12:59:33 -04:00
Watson Zeng
85acfd2e27 arch: arc: irq: bugs fix for fast irq in one register bank config
* The stack pointer (SP) register points to the lowest used address of
   a downward-growing stack, so memory address [sp] is in use and we
   can't modify it.

* In the firq_no_switch case, we need to pop sp, which was pushed
   before the _isr_demux call in the firq_nest function.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-07-07 15:10:26 +02:00
Ioannis Glaropoulos
fde3116f19 arch: arm: cortex_m: Add config for SW_VECTOR_RELAY_CLIENT
Define vector relay tables for bootloader only.
If an image is not a bootloader image (such as an MCUboot image)
but it is a standard Zephyr firmware, chain-loadable by a
bootloader, then this image will not need to relay IRQs itself.
In this case SW_VECTOR_RELAY_CLIENT should be used to set the
vector table pointer in RAM so the parent image can forward the
interrupts to it.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Co-authored-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2020-07-03 13:34:50 -04:00
Andy Ross
928d31125f arch/x86: Add zefi, an EFI stub/packer/wrappper/loader
This is a first cut on a tool that will convert a built Zephyr ELF
file into an EFI applciation suitable for launching directly from the
firmware of a UEFI-capable device, without the need for an external
bootloader.

It works by including the Zephyr sections into the EFI binary as
blobs, then copying them into place on startup.

Currently, it is not integrated in the build.  Right now you have to
build an image for your target (up_squared has been tested) and then
pass the resulting zephyr.elf file as an argument to the
arch/x86/zefi/zefi.py script.  It will produce a "zephyr.efi" file in
the current directory.

This involved a little surgery in x86_64 to copy over some setup that
was previously being done in 32 bit mode to a new EFI entry point.
There is no support for 32 bit UEFI targets for toolchain reasons.

See the README for more details.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-07-02 09:10:01 -04:00
Andy Ross
7c6d8aa58e arch/x86: Add support for PCI MMIO configuration access
The traditional IO Port configuration mechanism was technically
deprecated about 15 years ago when PCI Express started shipping.
While frankly the MMIO support is significantly more complicated and
no more performant in practice, Zephyr should have support for current
standards.  And (particularly complicated) devices do exist in the
wild whose extended capability pointers spill beyond the 256 byte area
allowed by the legacy mechanism.  Zephyr will want drivers for those
some day.

Also, Windows and Linux use MMIO access, which means that's what
system vendors validate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-23 13:07:39 +02:00
Andy Ross
7fe8caebc0 arch/x86: Add z_acpi_find_table(), MCFG support
The existing minimal ACPI implementation was enough to find the MADT
table for dumping CPU info.  Enhance it with a slightly less minimal
implementation that can fetch any table, supports the ACPI 2.0 XSDT
directory (technically required on 64 bit systems so tables can live
>4G) and provides definitions for the MCFG table with the PCI
configuration pointers.

Note that there is no use case right now for high performance table
searching, so the "init" step has been removed and tables are probed
independently from scratch for each one requested (there are only
two).

Note also that the memory to which these tables point is not
understood by the Zephyr MMU configuration, so in long mode all ACPI
calls have to be done very early, before z_x86_paging_init() (or on a
build with the MMU initialization disabled).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-06-23 13:07:39 +02:00
Andrew Boie
a8585ac35c x86: fix early boot pagefault reason code
If we get a page fault in early boot context, before
the main thread is started, page faults were being
incorrectly reported as stack overflows.
z_x86_check_stack_bounds() needs to consider the
interrupt stack as the correct stack for this context.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-18 19:36:17 +02:00
Andrew Boie
87dd0492db x86: add CONFIG_X86_KERNEL_OFFSET
Previously, DTS specification of physical RAM bounds did not
correspond to the actual bounds of system RAM as the first
megabyte was being skipped.

There were reasons for this - the first 1MB on PC-like systems
is a no-man's-land of reserved memory regions, but we need DTS
to accurately capture physical memory bounds.

Instead, we introduce a config option which can apply an offset
to the beginning of physical memory, and apply this to the "RAM"
region defined in the linker scripts.

This also fixes a problem where an extra megabyte was being
added to the size of system RAM.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-18 19:35:52 +02:00
Scott Branden
d77fbcb86a arch: arm64: mmu: create macro for TCR_PS_BITS
Create macro for TCR_PS_BITS instead of programmatically looking up
a static value based on a CONFIG option.  Moving to macro
removes logically dead code reported by Coverity static analysis tool.

Signed-off-by: Scott Branden <scott.branden@broadcom.com>
2020-06-18 12:47:30 +02:00
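
The macro replaces a run-time lookup with a compile-time selection, roughly of the following shape; the Kconfig symbol names and bit encodings below are placeholders, not the actual arm64 definitions.

    /* Illustrative shape only: pick a register field value at
     * preprocessing time based on a Kconfig choice, so there is no
     * dead run-time lookup code for static analysis to flag.
     */
    #if defined(CONFIG_EXAMPLE_PA_BITS_36)      /* placeholder symbols */
    #define EXAMPLE_TCR_PS_BITS (1UL << 16)     /* placeholder encoding */
    #elif defined(CONFIG_EXAMPLE_PA_BITS_42)
    #define EXAMPLE_TCR_PS_BITS (3UL << 16)
    #else
    #define EXAMPLE_TCR_PS_BITS (0UL << 16)
    #endif
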
Andrew Boie
8920549464 qemu_x86: propagate exit reason code to the shell
This helps distinguish between fatal errors if logging isn't
enabled.

As detailed in comments, pass a reason code which controls
the QEMU process' return value.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-12 23:24:37 -04:00
Andrew Boie
bed6b6891d x86: report when thread re-use is detected
x86_64's __resume path 'poisons' the incoming thread's
saved RIP value with a special 0xB9 value, to catch
re-use of thread objects across CPUs in SMP. Add a check
and printout for this when handling fatal errors, and
treat as a kernel panic.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-10 18:36:06 -04:00
Andrew Boie
faff50d8e7 arc: show all registers on exception
The ESF contains register file contents including program
counter when the exception happened. If non-NULL and we
have ARC_EXCEPTION_DEBUG enabled, dump its contents to the
log stream.

Other arches do this already.

There is no need to read ERET, the ESF already contains the
interrupted PC value.

A future enhancement could create an option to additionally
push callee-saved register context into the ESF so it can
also be dumped out, but this patch does not address this.

A future enhancement could also convert the syscall
stack frame pointer passed to arch_syscall_oops() into
an ESF so that context of the failed system call can be
inferred.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-09 13:52:56 +02:00
Rubin Gerritsen
3cf7f9c974 arch: posix: Print warning on sys_reboot
This will simplify debugging.

Signed-off-by: Rubin Gerritsen <rubin.gerritsen@nordicsemi.no>
2020-06-09 08:19:50 +02:00
Kumar Gala
a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
	git grep -l 's\(8\|16\|32\|64\)_t' | \
		xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
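
The effect of the scripted rename on a typical declaration, before and after (the struct itself is an arbitrary example):

    #include <stdint.h>

    /* Before: struct foo { u32_t flags; s8_t level; u64_t timestamp; }; */
    struct foo {
        uint32_t flags;      /* was u32_t */
        int8_t   level;      /* was s8_t  */
        uint64_t timestamp;  /* was u64_t */
    };
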
Ioannis Glaropoulos
9d8111c88f arch: arm: cortex-m: fix placement of ARMv7-M-related MPU workaround
The workaround for ARMv7-M architecture (which proactively
decreases the available thread stack by the size of the MPU
guard) needs to be placed before we calculate the pointer of
the user-space local thread data, otherwise this pointer will
fall beyond the boundary of the thread stack area.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 19:48:27 +02:00
Ioannis Glaropoulos
6b54958a0e arch: arm: aarch32: cortex-m: fix logic for detecting guard violation
We fix (by inverting) the logic of the IS_MPU_GUARD_VIOLATION
macro, with respect to the value of the supplied 'fault_addr'.
We shall only be inspecting the fault_addr value if it is not
set to -EINVAL.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 10:10:22 +02:00
Ioannis Glaropoulos
7284aee7d7 arch: arm: aarch32: cortex_m: add note in mem_manage_fault()
It is possible that the MMFAR address is not written by the
Cortex-M core; this occurs when the stacking error is
not accompanied by a data access violation error (i.e.
when the stack overflows due to the exception entry frame
stacking): z_check_thread_stack_fail() shall be able to
handle the case of 'mmfar' holding the -EINVAL value.

Add this note in the mem_manage_fault() function to clarify
that it is valid for z_check_thread_stack_fail() to be
called with an invalid mmfar address value.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-05-27 10:10:22 +02:00
Wayne Ren
c44bd69c9a arch: arc: enable the workaround of sleep only for SMP case in nsim
Because of an issue in nsim, the sleep instruction does not work
correctly when SMP is enabled. A workaround was introduced in commit
d56a12d955; this workaround should be enabled only for the SMP case in
nsim.

For other cases, this workaround is not needed.

This commit fixes #24276

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-05-26 17:47:42 +02:00
Andrew Boie
20962612f6 x86: dump the right page tables
If KPTI is not enabled, the current value of CR3 is the correct
page tables when the exception happened in all cases.

If KPTI is enabled, and the excepting thread was in user mode,
then a page table switch happened and the current value of CR3
is not the page tables when the fault happened. Get it out of the
thread object instead.

Fixes two problems:
- Divergent exception loop if we crash when _current is a dummy
  thread or its page table pointer stored in the thread object is
  NULL or uninitialized
- Printing the wrong CR3 value on exceptions from user mode in
  the register dump

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-26 14:37:00 +02:00
Daniel Leung
2887fbcccf x86: mmu: fix type mismatch of memory address in assert
In one of the ASSERT() statement, the PHYS_RAM_ADDR (alias
of DT_REG_ADDR()) may be interpreted by the compiler as
long long int when it's large than 0x7FFFFFFF, but is
paired with %x, resulting in compiler warning. Fix this
by type casting it to uintptr_t and use %lx instead.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-21 22:30:14 +02:00
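
The pattern described above, sketched with a stand-in constant (the real statement uses PHYS_RAM_ADDR in the x86 MMU code):

    /* Sketch: cast the possibly-64-bit constant to uintptr_t and print it
     * with %lx so the format specifier matches the argument type.
     * EXAMPLE_PHYS_ADDR stands in for PHYS_RAM_ADDR here.
     */
    #include <sys/__assert.h>
    #include <stdint.h>

    #define EXAMPLE_PHYS_ADDR 0x100000000ULL   /* placeholder, > 0x7FFFFFFF */

    void check_addr(uintptr_t addr)
    {
        __ASSERT(addr >= (uintptr_t)EXAMPLE_PHYS_ADDR,
                 "addr 0x%lx is below RAM start 0x%lx",
                 addr, (uintptr_t)EXAMPLE_PHYS_ADDR);
    }
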
Ruslan Mstoi
aa051857e5 x86: gen_idt.py: typo fix
Fix "consule" as "consult"

Signed-off-by: Ruslan Mstoi <ruslan.mstoi@intel.com>
2020-05-21 14:44:33 +02:00
Daniel Leung
251cb61e20 x86_64: instrument code for timing information
On x86_64, the arch_timing_* variables are not set which
results in incorrect values being used in the timing_info
benchmarks. So instrument the code for those values.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-20 22:36:04 +02:00
Daniel Leung
37516a7818 x86: add ability for SoC to add MMU regions
The SoCs usually have devices that are accessed through MMIO.
This requires the corresponding regions to be marked readable
and writable in the MMU or else accesses will result in page
faults.

This adds a function which can be implemented in the SoC code to
specify those pages to be added to MMU.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-19 19:19:51 +02:00
Daniel Leung
81c089b690 x86: acpi: make code 64-bit compatible
The integers used for pointer calculation were u32_t.
Change them to uintptr_t to be compatible with 64-bit.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-19 19:19:51 +02:00
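
The same class of fix in miniature: pointer arithmetic routed through uintptr_t instead of a 32-bit integer (the helper name is illustrative):

    /* Illustrative: derive a pointer from a base address plus offset.
     * Doing this math in u32_t would truncate on 64-bit targets;
     * uintptr_t keeps the full width.
     */
    #include <stdint.h>

    static inline void *entry_at(const void *table, uint32_t offset)
    {
        return (void *)((uintptr_t)table + offset);
    }
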
Andrew Boie
d149909b03 x86: properly align initial dummy thread
x86-32 thread objects require special alignment since they
contain a buffer that is passed to fxsave/fxrstor instructions.
This fell over if the dummy thread is created in a stack frame.

Implement a custom swap to main for x86 which still uses a
dummy thread, but in an unused part of the interrupt stack
with proper alignment.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-13 21:23:52 +02:00
Wayne Ren
801c053f3e arch: arc: fix the bug of firq stack setup for slave cores
r0 is the slave core number; it needs to be saved before calling
z_arc_firq_stack_set and restored later.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-05-13 16:23:54 +02:00
Christopher Friedt
2c0eecaa5e posix arch: build on aarch64 / allow host-specific cmake includes
This change enables specific compiler and linker options to be used in
the case that an arch/posix/os.arch.cmake file exists.

Note: os and arch in the above case are evaluations of
CMAKE_HOST_SYSTEM_NAME and CMAKE_HOST_SYSTEM_PROCESSOR.

Otherwise, the existing "generic" compiler and linker flags in
arch/posix/CMakeLists.txt are used.

Additional flags and checks are provided in
arch/posix/Linux.aarch64.cmake.

Added scripts/user_wordsize.py to detect if userspace is 64-bit or
32-bit, which should be consistent with the value of CONFIG_64BIT
for Aarch64 on Linux.

Fixes #24842

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2020-05-09 12:17:24 +02:00
Zide Chen
d27f6cb5eb interrupt_controller: program local APIC LDR register for xAPIC
If IO APIC is in logical destination mode, local APICs compare their
logical APIC ID defined in LDR (Logical Destination Register) with
the destination code sent with the interrupt to determine whether or not
to accept the incoming interrupt.

This patch programs LDR in xAPIC mode to support IO APIC logical mode.

The local APIC ID from the local APIC ID register can't be used as the
'logical APIC ID' because LAPIC IDs may not be consecutive numbers,
which makes it impossible for LDR to encode 8 IDs within 8 bits.

This patch chooses 0 for the BSP, and for APs, cpu_number, which is the
index into x86_cpuboot[] and is ultimately assigned in z_smp_init().

Signed-off-by: Zide Chen <zide.chen@intel.com>
2020-05-08 22:32:39 -04:00
Andrew Boie
2873afb7fe aarch32: fix a build failure
Some wires were crossed when an older PR was merged that
had build conflicts with newer code. Update this header
to reflect where the 'nested' member is in the kernel CPU
struct.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 13:59:17 -05:00
Andrew Boie
a203d21962 kernel: remove legacy fields in _kernel
UP should just use _kernel.cpus[0].

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 17:42:49 +02:00
Stephanos Ioannidis
0b930a2195 kconfig: Rename x86 FPU sharing symbols
This commit renames the x86 Kconfig `CONFIG_{EAGER,LAZY}_FP_SHARING`
symbol to `CONFIG_{EAGER,LAZY}_FPU_SHARING`, in order to align with the
recent `CONFIG_FP_SHARING` to `CONFIG_FPU_SHARING` renaming.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-05-08 10:58:33 +02:00
Stephanos Ioannidis
aaf93205bb kconfig: Rename CONFIG_FP_SHARING to CONFIG_FPU_SHARING
This commit renames the Kconfig `FP_SHARING` symbol to `FPU_SHARING`,
since this symbol specifically refers to the hardware FPU sharing
support by means of FPU context preservation, and the "FP" prefix is
not fully descriptive of that; leaving room for ambiguity.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-05-08 10:58:33 +02:00
Abhishek Shah
2f85c01eaa arch: arm: aarch64: Add Cortex-A72 config
Add Cortex-A72 config in order to set "-mcpu" correctly.

Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
2020-05-08 10:46:23 +02:00
Daniel Leung
94b744cc0a x86: early_serial: extend to support MMIO UART
This expands the early_serial to support MMIO UART, in addition to
port I/O, by duplicating part of the hardware initialization from
the NS16550 UART driver. This allows enabling of early console on
hardware with MMIO-based UARTs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-05-07 10:11:35 +02:00
Wayne Ren
e43e137d8b arch: arc: remove MPU_STACK_GUARD for ARC_MPU_VER 2
ARC_MPU_VER 2 has strong requirements on MPU regions:
  * size must be >= 2048 bytes and a power of 2
  * start address must be aligned to the size

This may waste a significant amount of memory.

On the other hand, GEN_PRIV_STACK is used for ARC_MPU_VER 2,
and it conflicts with MPU_STACK_GUARD.

So considering these limitations, remove MPU_STACK_GUARD for
ARC_MPU_VER 2.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-05-06 12:51:05 -07:00
Wayne Ren
7633da7046 arch: arc: ARC MPUv3 doesn't use GEN_PRIV_STACK
Because ARC MPUv3 doesn't have a strong alignment requirement
as ARC MPUv2 does, GEN_PRIV_STACK is not used for it.

Without GEN_PRIV_STACK, all stack elements can be in one stack object.
See #24048.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-05-06 12:51:05 -07:00
Wayne Ren
99dd392825 arch: arc: use the way of GEN_PRIV_STACK for privilege stack
Drop the original C macro based allocation of the privileged stack, as
it may cause a waste of memory for ARC MPUv2.

Now use the GEN_PRIV_STACK approach to generate the privileged stack as
other arches do, e.g. ARM.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-05-06 12:51:05 -07:00
Andrew Boie
0091a700d3 x86_64: fix crash on nested interrupts
x86_64 supports 4 levels of interrupt nesting, with
the interrupt stack divided up into sub-stacks for
each nesting level.

Unfortunately, the initial interrupt stack pointer
on the first CPU was not taking into account reserved
space for guard areas, causing a stack overflow exception
when attempting to use the last interrupt nesting level,
as that page had been set up as a stack guard.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-01 11:44:05 -07:00
Andrew Boie
dac61f450d x86: fix trampoline stack clobber
We need to lock interrupts before setting the thread's
stack pointer to the trampoline stack. Otherwise, we
could unexpectedly take an interrupt on this stack
instead of the thread stack as intended.

The specific problem happens at the end of the interrupt,
when we switch back to the thread stack and call swap.
Doing this on a per-cpu trampoline stack instead of the
thread stack causes data corruption.

Fixes: #24869

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-01 11:43:57 -07:00
Stephanos Ioannidis
8b27d5c6b9 linker: Clean up section name definitions
This commit cleans up the section name definitions in the linker
sections header file (`include/linker/sections.h`) to have the uniform
format of `_(SECTION)_SECTION_NAME`.

In addition, the scope of the short section reference aliases (e.g.
`TEXT`, `DATA`, `BSS`) are now limited to the ASM code, as they are
currently used (and intended to be used) only by the ASM code to
specify the target section for functions and variables, and these short
names can cause name conflicts with the symbols used in the C code.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-30 13:42:36 -04:00
Kumar Gala
a45ea3806f x86: Rework x86 related code to use new DTS macros
Replace DT_PHYS_RAM_ADDR and DT_RAM_SIZE with DT_REG_ADDR/DT_REG_SIZE
for the DT_CHOSEN(zephyr_sram) node.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-30 08:37:18 -05:00
Sandeep Tripathy
d4f1f2a07e arch: arm64: add public header for asm macros
Move generic macros to exported assembly header file
'macro.inc'. Rename the existing 'macro.inc' to 'macro_priv.inc'.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-28 10:44:42 -07:00
Tobias Svehagen
ca872a44c1 lib: posix: Add support for eventfd
This implements a file descriptor used for event notification that
behaves like the eventfd in Linux.

The eventfd supports nonblocking operation by setting the EFD_NONBLOCK
flag and semaphore operation by settings the EFD_SEMAPHORE flag.

The major use case for this is when using poll() and the sockets that
you poll are dynamic. When a new socket needs to be added to the poll,
there must be some way to wake the thread and update the pollfds before
calling poll again. One way to solve it is to have a timeout set in the
poll call and only update the pollfds during a timeout but that is not
a very nice solution. By instead including an eventfd in the pollfds,
it is possible to wake the polling thread by simply writing to the
eventfd.

Signed-off-by: Tobias Svehagen <tobias.svehagen@gmail.com>
2020-04-28 09:57:41 +03:00
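
A minimal sketch of the wake-up pattern described above, written against the Linux-style eventfd/poll API that this implementation mirrors; header locations and the exact poll() entry point may differ slightly in Zephyr's POSIX layer, and the function names here are illustrative.

    /* Sketch: wake a poll()-ing thread so it can rebuild its pollfd set. */
    #include <sys/eventfd.h>
    #include <poll.h>

    static int wake_fd;

    void poller_init(void)
    {
        wake_fd = eventfd(0, EFD_NONBLOCK);
    }

    /* Called from another thread whenever the socket set changes. */
    void poller_wake(void)
    {
        eventfd_write(wake_fd, 1);
    }

    void poller_loop(struct pollfd *fds, int nfds)
    {
        /* fds[0] is reserved for the eventfd; the rest are sockets. */
        fds[0].fd = wake_fd;
        fds[0].events = POLLIN;

        while (poll(fds, nfds, -1) >= 0) {
            if (fds[0].revents & POLLIN) {
                eventfd_t value;

                eventfd_read(wake_fd, &value);  /* clear the event */
                break;  /* caller rebuilds the pollfd set and re-enters */
            }
            /* handle ready sockets here */
        }
    }
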
Stephanos Ioannidis
4f4e85c035 kconfig: Improve architecture floating point symbol descriptions
This commit reworks the symbol descriptions for `CONFIG_FPU` and
`CONFIG_FP_SHARING`, in order to provide more details and clarify any
ambiguity between the two symbols.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
Stephanos Ioannidis
0e6ede8929 kconfig: Rename CONFIG_FLOAT to CONFIG_FPU
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).

Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
Corey Wharton
6f6564752a riscv: Restore floating-point caller saved registers before integer
This fixes an issue where the t0 register is overwritten after it
has been restored.

Signed-off-by: Corey Wharton <coreyw7@fb.com>
2020-04-22 16:39:48 -07:00
Corey Wharton
c8f7cd5462 kconfig: Make the CPU_HAS_FPU_DOUBLE_PRECISION option global.
This option now applies to the RISC-V architecture and is no longer
an ARM-only configuration.

Signed-off-by: Corey Wharton <coreyw7@fb.com>
2020-04-22 16:39:48 -07:00
Corey Wharton
22c52846a5 riscv: Set mabi and march flags for floating point
Adds handling of the FLOAT_64BIT option when determining the ISA
flags as well as introduces a new Kconfig option to enable/disable
the hard-float calling convention.

Signed-off-by: Corey Wharton <coreyw7@fb.com>
2020-04-22 16:39:48 -07:00
Corey Wharton
58232d58e0 riscv: Add support for floating point
This change adds full shared floating point support for the RISCV
architecture with minimal impact on threads with floating point
support not enabled.

Signed-off-by: Corey Wharton <coreyw7@fb.com>
2020-04-22 16:39:48 -07:00
Andrew Boie
618426d6e7 kernel: add Z_STACK_PTR_ALIGN ARCH_STACK_PTR_ALIGN
This operation is formally defined as rounding down a potential
stack pointer value to meet CPU and ABI requirements.

This was previously defined ad-hoc as STACK_ROUND_DOWN().

A new architecture constant ARCH_STACK_PTR_ALIGN is added.
Z_STACK_PTR_ALIGN() is defined in terms of it. This used to
be inconsistently specified as STACK_ALIGN or STACK_PTR_ALIGN;
in the latter case, STACK_ALIGN meant something else, typically
a required alignment for the base of a stack buffer.

STACK_ROUND_UP() is only used in practice by RISC-V; delete it
elsewhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
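
The operation being named here is just a round-down to the architecture's stack pointer alignment; sketched below with Zephyr's ROUND_DOWN helper, using a made-up macro name and an assumed 16-byte alignment so it stays clearly illustrative.

    /* Sketch of the rounding described above: align a candidate stack
     * pointer value down to ARCH_STACK_PTR_ALIGN.
     */
    #include <sys/util.h>

    #ifndef ARCH_STACK_PTR_ALIGN
    #define ARCH_STACK_PTR_ALIGN 16   /* illustrative value */
    #endif

    #define EXAMPLE_STACK_PTR_ALIGN(ptr) \
        ROUND_DOWN((ptr), ARCH_STACK_PTR_ALIGN)
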
Andrew Boie
1f6f977f05 kernel: centralize new thread priority check
This was being done inconsistently in arch_new_thread(), so just
move it to the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie
c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Stephanos Ioannidis
ae427177c0 arch: arm: aarch32: Rework non-Cortex-M exception handling
This commit reworks the ARM AArch32 non-Cortex-M (i.e. Cortex-A and
Cortex-R) exception handling to establish the base exception handling
framework and support detailed exception information reporting.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis
c442203c08 arch: arm: aarch32: Fix incorrect z_arm_{int,exc}_exit usage
In the ARM Cortex-M architecture implementation, the concepts of
"exceptions" and "interrupts" are interchangeable; whereas, in the
Cortex-A/-R architecture implementation, they are considered separate
and therefore handled differently (i.e. `z_arm_exc_exit` cannot be used
to exit an "interrupt").

This commit fixes all `z_arm_exc_exit` usages in the interrupt handlers
to use `z_arm_int_exit`.

NOTE: In terms of the ARM AArch32 Cortex-A and Cortex-R architecture
      implementations, the "exceptions" refer to the "Undefined
      Instruction (UNDEF)" and "Prefetch/Data Abort (PABT/DABT)"
      exceptions, while "interrupts" refer to the "Interrupt (IRQ)",
      "Fast Interrupt (FIQ)" and "Software Interrupt/Supervisor Call
      (SWI/SVC)".

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis
b14d53435b arch: arm: aarch32: Split fault_s.S for Cortex-M and the rest
The exception/fault handling mechanisms for the ARM Cortex-M and the
rest (i.e. Cortex-A and Cortex-R) are significantly different and there
is no benefit in having the two implementations in the same file.

This commit relocates the Cortex-M fault handler to
`cortex_m/fault_s.S` and the Cortex-A/-R generic exception handler to
`cortex_a_r/exc.S` (note that the Cortex-A and Cortex-R architectures
do not provide direct fault vectors; instead, they provide the
exception vectors that can be used to handle faults).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Stephanos Ioannidis
37f44193f3 arch: arm: aarch32: Split exc_exit.S for Cortex-M and the rest
The amount of shared code in exc_exit.S between the ARM Cortex-M and
the rest (i.e. Cortex-A and Cortex-R) is minimal and there is little
benefit in having the two implementations in the same file.

This commit splits the interrupt/exception exit code for the
Cortex-A/-R and Cortex-M into separate files to improve readability as
well as maintainability.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-20 18:22:46 +02:00
Sandeep Tripathy
1dc095c949 arch: arm64: use callee saved reg to stash
Use a callee-saved register to preserve the value across the sequence.
Procedure calls are mandated to follow the ABI spec and preserve
x19 to x29.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Sandeep Tripathy
82724de6a5 arch: arm64: refactor for EL3 specific init
Zephyr being an OS is typically expected to run at EL1. Arm core
can reset to EL3 which typically requires a firmware to run at EL3
and drop control to lower EL. In that case EL3 init is done by the
firmware allowing the lower EL software to have necessary control.

If Zephyr is entered at EL3 and it is desired to run at EL1, which
is indicated by 'CONFIG_SWITCH_TO_EL1', then Zephyr is responsible
for doing required EL3 initializations to allow lower EL necessary
control.

The entry sequence is modified to have control flow under single
'switch_el'.

Provisions are added by providing weak functions to do platform-specific
init from EL3.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Sandeep Tripathy
c6f8771311 arch: arm64: macro for mov immediate
A single mov instruction cannot be used to move a non-zero
64-bit immediate value into a 64-bit register.
Implement a macro to generate mov/movk/movz sequences
depending on the immediate value width.

Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com>
2020-04-20 16:14:36 +02:00
Martí Bolívar
6d0896b7ec gen_isr_tables: error improvements
Random readability improvements:

- avoid a stack trace on error by using sys.exit()
- include "error:" in the error() output, for grep
- print conflicting addresses on multiple IRQ registration

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2020-04-17 18:28:47 +02:00
Kumar Gala
5648df39ac arch: arm: cortex_m: Rework DT_NUM_IRQ_PRIO_BITS
To remove the need to have DT_NUM_IRQ_PRIO_BITS defined in every
dts_fixup.h we can just handle the few variant cases in irq.h.  This
allows us to remove DT_NUM_IRQ_PRIO_BITS from all the dts_fixup.h files.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-17 15:17:43 +02:00
Kumar Gala
c5e5d531ca arch: arm: cortex_m: arm_mpu: Rework DT usage for DT_NUM_MPU_REGIONS
To remove the need to have DT_NUM_MPU_REGIONS defined in every
dts_fixup.h we can just handle the few variant cases in arm_mpu.c
directly.  This allows us to remove DT_NUM_MPU_REGIONS from all the
dts_fixup.h files.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-17 15:17:43 +02:00
Stephanos Ioannidis
2d6194170b arch: arm: aarch32: Fix read_timer_end_of_isr register preservation
The current implementation to preserve r0 and r3 registers around the
call to `read_timer_end_of_isr` function has the following problems:

1. STM and LDM mnemonics are used without proper suffixes, in attempt
   to implement PUSH and POP (i.e. STMFD and LDMFD). The suffix-less
   STM mnemonic is equivalent to STMEA (increment after), which clearly
   is not a PUSH operation, and this corrupts the interrupt stack,
   leading to crashes on the Cortex-R.

2. The current implementation unnecessarily preserves additional r1, r2
   and lr registers. There is no need to preserve r1 and r2 because the
   values contained in these registers are not used after the function
   call; as for the lr register, it is already pushed to the stack when
   the interrupt service routine enters.

This commit removes all the unnecessary register preservations and
fixes the incorrect STM and LDM usages.

Note that the PUSH and POP aliases are used in place of the STMFD and
LDMFD mnemonics because they are used throughout the rest of the code.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 15:49:27 +02:00
Stephanos Ioannidis
cdbfbe396f tests: benchmarks: Fix incorrect ARM arch variant check
Currently, the Cortex-M SysTick-based timing info implementation is
incorrectly specified for all 32-bit ARM architectures.

This commit fixes that by restricting the SysTick-based implementation
to the ARM Cortex-M architectures only; in addition, it removes the
ARM64 timing info implementation as it is identical to the default
generic implementation and was previously added only as a workaround
for the aforementioned problem.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 15:49:27 +02:00
Stephanos Ioannidis
819fe00071 tests: benchmarks: Fix Kconfig symbol checks
This commit fixes the incorrect (or unconventional, at least) Kconfig
boolean symbol checks.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 15:49:27 +02:00
Bobby Noelte
68cd1b7f9e arch: arm: aarch32: fix system clock driver selection for cortex m
The selection of the Cortex-M SysTick driver to be used as a system
clock driver is controlled by CONFIG_CORTEX_M_SYSTICK.

To replace it with another driver, CONFIG_CORTEX_M_SYSTICK must be set
to 'n'. Unfortunately this also controls the interrupt vector for
the SysTick interrupt. It is now routed to z_arm_exc_spurious.

Remove the dependency on CONFIG_CORTEX_M_SYSTICK and route to
z_clock_isr as it was before #24012.

Fixes #24347

Signed-off-by: Bobby Noelte <b0661n0e17e@gmail.com>
2020-04-15 12:16:10 +02:00
Stephanos Ioannidis
a1e838872c arch: arm: Remove extraneous root cmake files
The ARM architecture root directory contains `aarch32.cmake` and
`aarch64.cmake` files whose contents are better suited to go into other
more purpose-specific files.

This commit removes the aforementioned files and moves their contents
to other files following the convention used by other architectures.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Stephanos Ioannidis
eeddc7566d arch: arm: aarch32: Add missing arch flag for Cortex-R5
This commit adds the GCC `-march` flag for the ARM Cortex-R5 targets.

Note that `armv7-r+idiv` must be specified instead of `armv7-r`,
because GCC internally resolves `-mcpu=cortex-r5` to it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Stephanos Ioannidis
3cf1a9139e arch: arm: Clean up configurations
This is a minor clean-up for the ARM architecture configurations.

Note that the `CPU_CORTEX_A` symbol is moved from the AArch64 to the
ARM root Kconfig because it can be selected from both AArch32 and
AArch64.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-15 11:23:56 +02:00
Wayne Ren
2833d016aa arch: arc: fix the bug of IRQ_ACT.U bit sync up
This bug was introduced in commit 3f88ddd54999.

The cleanup of the IRQ_ACT.U bit before thread switch was not done.

The bug shows up in the case where an interrupt comes in user mode,
then a thread switch happens, and the target thread is to run in kernel
mode. Because the U bit is not synced up correctly, the stack operation
is wrong.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-04-14 12:52:41 -07:00
Anas Nashif
b90fafd6a0 kernel: remove unused offload workqueue option
Those are used only in tests, so remove them from kernel Kconfig and set
them in the tests that use them directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-04-12 18:42:27 -04:00
Kumar Gala
a5cd799523 arch: riscv: irq: fix build warning
In arch_irq_connect_dynamic the 'level' variable is only used on
platforms that define CONFIG_RISCV_HAS_PLIC.  For the other platforms
we'll get a warning about an unused variable.  Remove the need for
'level' and just call irq_get_level() where it's needed to address the
issue.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-10 12:38:06 -04:00
Ioannis Glaropoulos
95da5d479b arch: arm: minor fixes in the docs for ARM kernel_arch headers
Fix documentation in kernel_arch_data.h and kernel_arch_func.h
headers for ARM, to indicate that these are common headers for
all ARM architecture variants.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-09 13:13:42 -07:00
Jaron Kelleher
0fb4382164 arch: isr: Update z_isr_install for multi-level interrupts
z_isr_install is not suited to handle multi-level interrupt formats.
This update allows z_isr_install to accept irq numbers in zephyr format
and place them in the isr table appropriately.

Fixes issue #22145

Signed-off-by: Jaron Kelleher <jkelleher@fb.com>
2020-04-09 13:12:24 -07:00
Daniel Leung
7b31f93980 xtensa: enable XTENSA_HAL at SoC level
This moves enabling XTENSA_HAL to the SoC definitions.
As Xtensa SoCs are highly configurable, it is possible
that the generic Xtensa HAL provided in the tree is
not suitable. So enable XTENSA_HAL only if
the generic version can be used.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-04-08 13:10:35 -07:00
Ioannis Glaropoulos
25060b0f2e arch: arm: aarch32: rename z_arm_reserved to z_arm_exc_spurious
In the Cortex-M exception table we rename z_arm_reserved()
function to z_arm_exc_spurious(), as it is invoked when
existing (that is, non-reserved) but un-installed exceptions
are triggered accidentally by software or hardware. This
currently applies to SysTick and SecureFault exceptions.

Since fault.S is shared between Cortex-M and other AARCH32
architectures, we keep z_arm_reserved as a defined symbol
there. This commit does some additional, minor, "no-op"
cleanup in #ifdef's for Cortex-M and Cortex-R.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos
d3fa2eebb0 arch: arm: aarch32: cortex_m: add z_arm_reserved only if core has SE
If the Cortex-M core does not implement the Security Extension,
we should not be adding z_arm_reserved in the corresponding
vector table entry. That is because the entry is reserved by
the ARM architecture.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos
4364f2d455 arch: arm: aarch32: add z_arm_reserved only when we have SysTick
If the Cortex-M core does not implement the SysTick peripheral,
we should not be adding z_arm_reserved in the corresponding
vector table entry. If we do have SysTick implemented but we
are not using it as the system timer, we shall install the
reserved interrupt at the vector table entry.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Ioannis Glaropoulos
d725402daf arch: arm: aarch32: cortex_m: write 0x0 to reserved exception entries
Write 0x0 instead of z_arm_reserved to vector exception
entries that are always reserved for future use by the
ARM architecture. These vector table entries cannot be
fetched to be executed by the Cortex-M exception entry,
so having z_arm_reserved gives a false impression, since
it is a function that may be invoked in the code. This
modification is safe since these vector entries are also
not supposed to be read / written by the code.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-04-07 09:57:12 -05:00
Jaron Kelleher
bd4f721a59 RISCV compiler: Set mabi and march via Kconfig options
The mabi and march options of the compiler and linker commands
were previously hardcoded and depended only on the 64BIT config
option. This update allows these flags to be set by the config
options currently available, plus an additional option to
specify the compressed ISA.

Signed-off-by: Jaron Kelleher <jkelleher@fb.com>
2020-04-06 21:54:07 -04:00
Wayne Ren
819e7aec77 arch: arc: optimizations on irq lock/unlock in low level
When SMP is enabled, irq_lock/unlock will acquire and
release a global spin lock, but the code changed in this
commit only needs to lock the local CPU. No effect on
uniprocessor, but an optimization for the SMP case.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-04-06 11:17:38 -07:00
Wayne Ren
8f76233029 arch: arc: optimize the arc v2 interrupt unit driver
* add an interrupt lock in the low-level API to guarantee the
  correctness of operations.

* make some functions inline

* clean up and optimize the code comments

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-04-06 11:17:38 -07:00
Kumar Gala
cd88902bc4 arch: posix: Kconfig: select HAS_DTS as the arch level
Now that all posix boards have a dts we can move the selection of
HAS_DTS to the arch level like it is for all the other architectures.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-04 16:01:11 +02:00
Stephanos Ioannidis
b63a028fbc arch: arm: aarch32: Rework non-Cortex-M context preservation
The current context preservation implementation saves the spsr and
lr_irq registers, which contain the cpsr and pc register values of the
interrupted context, in the thread callee-saved block and this prevents
nesting of interrupts because these values are required to be part of
the exception stack frame to preserve the nested interrupt context.

This commit reworks the AArch32 non-Cortex-M context preservation
implementation to save the spsr and lr_irq registers in the exception
stack frame to allow preservation of the nested interrupt context as
well as the interrupted thread context.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-02 09:22:38 +02:00
Daniel Leung
7bb5015ced tests: benchmarks: use high-res counter for MEC1501 SoC
The timer counter for ticks on MEC1501 SoC is based on the RTOS
timer which runs at 32kHz. This is too slow for timing benchmarks
as most cases can be finished within one or two ticks. Since
the SoC has higher frequency timers running at 48MHz, add
the necessary bits to use these for timing benchmarks.

Fix #23414

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-03-31 19:52:21 -04:00
Carlo Caione
99a8155914 arm: AArch64: Add support for nested exception handlers
In the current implementation both SPSR and ELR registers are saved with
the callee-saved registers and restored by the context-switch routine.
To support nested IRQs we have to save those on the stack when entering
and exiting from an ISR.

Since the values are now carried on the stack, we can add them to
the ESF and the initial stack, and take care to restore them for new
threads using the new thread wrapper routine.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-31 19:24:48 +02:00
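An illustrative C sketch of the idea only (the field names and layout are approximations, not the in-tree AArch64 ESF type): SPSR and ELR live at the top of each exception stack frame, so every nesting level keeps its own copy.

/* Approximate sketch: the interrupted context's PSTATE and return address
 * are part of the per-exception stack frame rather than the thread's
 * callee-saved block, so nested IRQs preserve them naturally. */
struct esf_sketch {
	uint64_t spsr;	/* saved PSTATE of the interrupted context */
	uint64_t elr;	/* saved return address of the interrupted context */
	uint64_t x0;	/* caller-saved registers x0..x18 and x30 follow */
	/* ... */
};
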
Stephanos Ioannidis
b58cc459de interrupts: Do not assert on IRQ enable status for ISR install on GIC
The current `z_isr_install` implementation asserts that the IRQ to
which the ISR will be installed must be disabled.

This commit disables that assertion for the ARM GIC because the SGI-
type IRQs can never be disabled as per the specifications and this
causes the assertion to fail for them.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-31 19:24:48 +02:00
Stephanos Ioannidis
33928f18ae arch: arm: aarch32: Add header shims for cortex_a_r renaming
Out-of-tree code can still be using the old file locations. Introduce
header shims to include the headers from the new correct location and
print a warning message.

These shims should be removed after two releases.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Stephanos Ioannidis
a033683783 arch: arm: aarch32: Rename cortex_r to cortex_a_r
This commit renames the `cortex_r` directory under the AArch32 to
`cortex_a_r`, in preparation for the AArch32 Cortex-A support.

The rationale for this renaming is that the Cortex-A and Cortex-R share
the same base design and the difference between them, other than the
MPU vs. MMU, is minimal.

Since most of the architecture port code and configurations will be
shared between the Cortex-A and Cortex-R architectures, it is
advantageous to have them together in the same directory.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Stephanos Ioannidis
bafb623239 arch: arm: aarch32: Reorganise configurations
This commit re-organises AArch32 configurations for consistency.

1. Move Cortex-M-specific includes to `cortex_m/Kconfig`.

2. Relocate the "TrustZone" configurations to `cortex_m/tz/Kconfig`
  since these are really the TrustZone-M configurations and do not
  apply to the TrustZone-A.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-26 11:20:36 +01:00
Kumar Gala
55d4cd2aa8 arch: x86: Convert to new DT_INST macros
Convert older DT_INST_ macro use to the new include/devicetree.h
DT_INST macro APIs.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-03-26 03:29:23 -05:00
Daniel Leung
5539c3ed90 xtensa: add calling entry point for multi-processing
Under multi-processing, only the first CPU (#0) needs to go through
setting up the kernel structs and clearing out BSS (among other tasks).
There is no need for other CPUs to do those tasks. Since each
Xtensa core starts using the same boot vector, CPUs other than #0
need to skip all the startup tasks by not calling z_cstart().
So provide another entry point for those CPUs. Note that the Xtensa
arch is highly configurable, so the implementation of the entry
point is up to each individual SoC config.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-03-25 19:07:28 -04:00
Daniel Leung
e8d2c92abb arch/xtensa: smp: only zero BSS only when boot from CPU #0
Under SMP, the main BSS section only needs to be zeroed on CPU #0.
Other CPUs should not zero out BSS, or else it may cause CPU #0 to
crash on invalid data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-03-25 19:07:28 -04:00
Carlo Caione
67e4ccbc51 arch: aarch64: Add check on context switch
Check whether we actually need to schedule a new thread before calling
the context switch routine.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
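A C-level paraphrase of the added guard (the real check lives in the AArch64 exception-exit assembly; the function and helper names below are illustrative, not the exact in-tree symbols):

static void exception_exit_sketch(struct k_thread *old_thread)
{
	void *next = z_get_next_switch_handle(old_thread);

	if (next != old_thread) {
		/* Only pay for a full context switch when the scheduler
		 * actually picked a different thread. */
		z_arm64_context_switch(next, old_thread);
	}
}
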
Carlo Caione
b41e5e67d0 arch: aarch64: Rewrite comments and rename swap routines
Rewrite the comments for the swap routine removing the references to the
old aarch32 code and rename z_arm64_pendsv() ->
z_arm64_context_switch().

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
Carlo Caione
fbf9b2675d aarch64: swap: Remove redundant code
Delete redundant / useless code from z_arm64_pendsv().

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-23 12:13:07 +01:00
Carlo Caione
99e63a799d arch: aarch64: Rework exception entry/exit code
Rework the assembly code for the ISR wrapper and SVC to share the
entry/exit code that is currently scattered among several files /
places. No functional changes.

Also rename macro.h -> macro.inc to fool the CI.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-20 14:15:43 +01:00
Ioannis Glaropoulos
fec399e74a arch: arm: aarch32: correct documentation of arch_cpu_atomic_idle
z_CpuIdleInit has been renamed to z_arm_cpu_idle_init, so
we need to correct that function's name in the documentation
of arch_cpu_atomic_idle.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-20 11:53:14 +01:00
Stephanos Ioannidis
3f395f5698 arch: arm: aarch32: Add memory barriers to arch_cpu_idle
This commit adds the required memory barriers to the `arch_cpu_idle`
function in order to ensure proper idle operation in all cases.

1. Add ISB after setting BASEPRI to ensure that the new wake-up
  interrupt priority is visible to the WFI instruction.

2. Add DSB before WFI to ensure that all memory transactions are
  completed before going to sleep.

3. Add ISB after CPSIE to ensure that the pending wake-up interrupt
  is serviced immediately.

Co-authored-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
Stephanos Ioannidis
ba0bcaf41b arch: arm: aarch32: Fix arch_cpu_idle interrupt masking
The current AArch32 `arch_cpu_idle` implementation enables interrupt
before executing the WFI instruction, and this has the side effect of
allowing interruption and thereby calling wake-up notification
functions before the CPU enters sleep.

This commit fixes the problem described above by ensuring that
interrupt is disabled when the WFI instruction is executed and
re-enabled only after the processor wakes up.

For ARMv6-M, ARMv8-M Baseline and ARM-R, the PRIMASK (ARM-M)/
CPSR.I (ARM-R) is used to lock interrupts and therefore it is not
necessary to do anything before executing the WFI instruction.

For ARMv7-M and ARMv8-M Mainline, the BASEPRI is used to lock
interrupts and the PRIMASK is always cleared in non-interrupt context;
therefore, it is necessary to set the PRIMASK to mask interrupts,
before clearing the BASEPRI to configure wake-up interrupt priority to
the lowest.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
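A hedged, C-level paraphrase of the resulting ARMv7-M/ARMv8-M Mainline idle sequence described in the two cpu_idle commits above, written with CMSIS intrinsics (the in-tree implementation is assembly; this assumes the CMSIS core header for the target is included):

static void cpu_idle_sketch(void)
{
	__disable_irq();	/* set PRIMASK: nothing runs before WFI      */
	__set_BASEPRI(0);	/* allow any priority to act as a wake-up    */
	__ISB();		/* 1. make the new BASEPRI visible to WFI    */
	__DSB();		/* 2. complete outstanding memory accesses   */
	__WFI();		/* sleep; wakes on a pending interrupt       */
	__enable_irq();		/* clear PRIMASK: service the wake-up IRQ    */
	__ISB();		/* 3. take the pending interrupt immediately */
}
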
Stephanos Ioannidis
50e4f2a671 arch: arm: aarch32: Fix whitespaces in cpu_idle.S
This commit fixes whitespaces in cpu_idle.S.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-20 11:53:14 +01:00
Andrew Boie
28be793cb6 kernel: delete separate logic for priv stacks
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, which we do here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Øyvind Rønningstad
c3ee533b5e arch: arm: tz: secure_entry_functions: Add support for nRF53
The nRF53 has different region size than nRF91.
This patch is aware of Erratum 19 (wrong SPU region size).

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-03-17 11:41:19 +01:00
Andrew Boie
80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
Stephanos Ioannidis
cd90d49a86 arch: arm: Optimise Cortex-R exception return function.
z_arm_exc_exit (z_arm_int_exit) requires the current execution mode to
be specified as a parameter (through r0). This is not necessary because
this value can be directly read from CPSR.

This commit modifies the exception return function to retrieve the
current execution mode from CPSR and removes all provisions for passing
the execution mode parameter.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-14 11:49:22 +01:00
Timo Teräs
6fd168e9a1 driver: uart: ns16550: convert to DT_INST_*
Change the code to use the automatically generated DT_INST_*
defines and remove the now unneeded configs and fixups.

Signed-off-by: Timo Teräs <timo.teras@iki.fi>
2020-03-14 02:22:05 +02:00
Stephanos Ioannidis
e816ac7124 isr_tables: Support hardware interrupt vector table-only configuration.
The existing isr_tables implementation does not allow enabling only
the hardware interrupt vector table without the software ISR table.

This commit ensures that CONFIG_GEN_IRQ_VECTOR_TABLE can be used
without setting CONFIG_GEN_SW_ISR_TABLE.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-13 12:02:03 +01:00
Stephanos Ioannidis
91ceee782f arch: arm: aarch64: Refactor interrupt interface
The current AArch64 interrupt system relies on the multi-level
interrupt mechanism and the `irq_nextlevel` public interface to invoke
the Generic Interrupt Controller (GIC) driver functions.

Since the GIC driver has been refactored to provide a direct interface,
in order to resolve various implementation issues described in the GIC
driver refactoring commit, the architecture interrupt control functions
are updated to directly invoke the GIC driver functions.

This commit also adds support for the ARMv8 cores (e.g. Cortex-A53)
that allow interfacing to a custom external interrupt controller
(i.e. non-GIC) by mapping the architecture interrupt control functions
to the SoC layer interrupt control functions when
`ARM_CUSTOM_INTERRUPT_CONTROLLER` configuration is enabled.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-13 09:59:59 +01:00
Stephanos Ioannidis
2c5ca5505c arch: arm: aarch32: Refactor interrupt interface
The current AArch32 (Cortex-R and to-be-added Cortex-A) interrupt
system relies on the multi-level interrupt mechanism and the
`irq_nextlevel` public interface to invoke the Generic Interrupt
Controller (GIC) driver functions.

Since the GIC driver has been refactored to provide a direct interface,
in order to resolve various implementation issues described in the GIC
driver refactoring commit, the architecture interrupt control functions
are updated to directly invoke the GIC driver functions.

This commit also adds support for the Cortex-R cores (Cortex-R4 and R5)
that allow interfacing to a custom external interrupt controller
(i.e. non-GIC) by introducing the `ARM_CUSTOM_INTERRUPT_CONTROLLER`
configuration that maps the architecture interrupt control functions to
the SoC layer interrupt control functions.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-13 09:59:59 +01:00
Ioannis Glaropoulos
d9a6e1d0c0 arch: arm: aarch32: rename z_arm_int_lib_init() function
We rename the z_arm_int_lib_init() function to
z_arm_interrupt_init(), aligning to how other
ARCHes name their IRQ initialization function.
There is nothing about 'library' in this
functionality, so we remove the 'lib' in-fix.

The commit does not introduce any behavior changes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-12 20:11:44 +02:00
Wayne Ren
9544350832 arch: arc: fake exception should set not clear AE bit
To make a fake exception, we should set, not clear, the AE bit.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
8975c0a33e arch: arc: the stack checking should consider the case of SMP
The old code just works for a single core; we need to consider
the case of SMP.

In SMP, it's not easy to get the current thread of the current CPU in
assembly, so we'd better do it in C.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
ea0431d305 arch: arc: fix the trace of isr enter and exit
fix the trace of isr enter and exit

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
84cdfa271d arch : arc: clean up of assembly codes
* update comments to match the latest code
* add extra comments for some assembly and macros
* use macros to replace duplicated code
* remove unused code, labels, and symbols

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
3f88ddd549 arch: arc: overhaul the thread switch code in epilogue of irq and exception
overhaul the thread switch code in the epilogue of irq and
exception handling:
* add z_arch_get_next_switch_handle to call z_get_next_switch_handle,
  letting the scheduler decide which thread to switch to. This will also
  cover the case of SMP.

* put lots of common code in macros for thread switch to improve
  maintainability and readability.

* clean up some labels to make the code easier to understand

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
34b033d41d arch: arc: fix the bug of "wait for switch" sychronization
After applying commit 3235451880, arc's arch_switch also needs a
corresponding fix.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Wayne Ren
f700022a35 arch: arc: bug fixes for running just one core for a multicore target
For an SMP target, there is a case where just one core is running; then:
* during init, the master core will run, other cores will halt/sleep
* use the timer driver for a single core

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-12 13:02:17 -04:00
Carlo Caione
b4335a04ac arm: aarch64: Reintroduce _ASM_FILE_PROLOGUE
This is currently missing from the AArch64 assembly files.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-03-11 09:34:24 +01:00
Ioannis Glaropoulos
0773fd5963 arch: arm: aarch32: fix z_irq_spurious() implementation
We align the implementation of z_irq_spurious() handler
with the other Zephyr ARCHEs, i.e. we will be calling
directly the ARM-specific fatal error function with
K_ERR_SPURIOUS_IRQ as the error type. This is already
the case for aarch64.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Ioannis Glaropoulos
a31795c440 arch: arm: aarch32: fix documentation in z_irq_spurious definition
Correct documentation note in z_irq_spurious() definition,
stressing that the function is installed in _sw_isr_table
entries at boot time (which may or may not be used for
dynamic interrupts).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-11 10:26:36 +02:00
Stephanos Ioannidis
7c5db4b755 arch: arm: cortex_r: Enable Thumb2 instruction set support
The ARMv7-R architecture supports both Thumb-2 (T32) and ARM (A32)
instruction sets.

This commit selects the `ISA_THUMB2` symbol to indicate that the
ARMv7-R architecture supports the Thumb-2 instruction set, which can
be enabled by selecting the `COMPILER_ISA_THUMB2` symbol.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-10 17:51:32 +01:00
Stephanos Ioannidis
0bd86f3604 arch: arm: aarch32: Allow selecting compiler instruction set
This commit introduces the `COMPILER_ISA_THUMB2` symbol to allow
choosing either the ARM or Thumb instruction set for C code
compilation.

In addition, this commit introduces the `ASSEMBLER_ISA_THUMB2` helper
symbol to specify the default target instruction set for the assembler.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-03-10 17:51:32 +01:00
Andrew Boie
0a41f06159 nios2: add arch_irq_is_enabled()
This arch API was never implemented on this architecture.

Fixes: #9994

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 12:06:54 -04:00
Andrew Boie
9f0acd44a4 kernel: add APIs for atomic ops on pointers
The existing APIs are insufficient on 64-bit systems as atomic_t
is 32 bits wide.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-10 10:18:16 -04:00
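A short usage sketch of the pointer-wide atomics (the atomic_ptr_* names follow Zephyr's API; treat the exact header path and signatures as assumptions here):

#include <stdbool.h>
#include <sys/atomic.h>

static atomic_ptr_t owner;	/* statically NULL */

/* Claim a resource exactly once across threads/CPUs; valid on 64-bit
 * targets because the compare-and-set is pointer-wide, not 32-bit. */
static bool try_claim(void *me)
{
	return atomic_ptr_cas(&owner, NULL, me);
}
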
Wayne Ren
acf9675bfc arch: arc: implement MPU_GAP_FILLING for mpu version 3
When MPU_GAP_FILLING is configured, the default MPU entry
(kernel read + kernel write) will be used to fill the gaps
among MPU entries to avoid dynamic MPU region splitting.
This brings better performance in thread switch but fewer
constraints on privileged code.

When MPU_GAP_FILLING is not configured, SW-based dynamic MPU
region splitting is used to bypass the hardware limitation of no
MPU region overlap. This approach consumes more hardware
MPU entries and more time in thread switch.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-10 11:58:03 +02:00
Wayne Ren
85f591e866 arch: arc: enable MPU_REQUIRES_NON_OVERLAPPING_REGIONS for arc mpu ver 3
ARC MPU ver3 does not allow MPU region overlap, so we need to enable
MPU_REQUIRES_NON_OVERLAPPING_REGIONS.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-10 11:58:03 +02:00
Wayne Ren
5f0650b596 arch: arc: fix the bug of blt in syscall
blt is a signed comparison; if r6 is a negative number created by
malicious code, it will pass the check, bringing a security risk.

Use blo (unsigned comparison) to do the check.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-03-09 10:08:18 +02:00
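The same class of fix expressed in C terms (illustrative only; the actual change is a single ARC branch instruction, and K_SYSCALL_BAD routing is assumed from the generated syscall table):

static uint32_t validate_syscall_id_sketch(uint32_t call_id)
{
	/* A signed 'blt'-style check would let a crafted "negative" ID
	 * through; the unsigned comparison rejects anything out of range. */
	if (call_id >= (uint32_t)K_SYSCALL_LIMIT) {
		call_id = K_SYSCALL_BAD;	/* bad-syscall handler entry */
	}
	return call_id;
}
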
Flavio Ceolin
8ed4b62dc0 syscalls: arm: Fix possible overflow in is_in_region function
This function is widely used by functions that validate memory
buffers. Macros used to check permissions, like Z_SYSCALL_MEMORY_READ
and Z_SYSCALL_MEMORY_WRITE, use these functions to check the
pointers passed by user threads in a syscall.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-03-07 13:12:51 +02:00
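A hedged sketch of the kind of overflow-safe containment check this fix is about (simplified, not the exact in-tree code):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Is [addr, addr + size) fully inside [r_start, r_start + r_size)?
 * Avoid computing addr + size, which can wrap around. */
static bool buf_in_region_sketch(uintptr_t r_start, size_t r_size,
				 uintptr_t addr, size_t size)
{
	if (addr < r_start || size > r_size) {
		return false;
	}
	return (addr - r_start) <= (r_size - size);
}
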
Ioannis Glaropoulos
502b67ceba arch: arm: aarch32: userspace: fix syscall ID validation
We need an unsigned comparison when evaluating whether
the supplied syscall ID is lower than the syscall ID limit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-03-07 09:22:23 +02:00
Anders Montonen
219d9fc082 kconfig: Fix typo in ARM_MPU help
The ARMv7-M MPU requires power-of-two alignment, not the ARMv8-M MPU, as
noted a few lines later.

Signed-off-by: Anders Montonen <Anders.Montonen@iki.fi>
2020-03-04 10:18:27 +02:00
Luuk Bosma
dfb80526b4 arch: arm: aarch32: clear CONTROL.FPCA for every CPU that has a FPU
Upon reset, the CONTROL.FPCA bit is normally cleared. However,
it might be left set by firmware running before Zephyr boot,
for example when the Zephyr image is loaded by another image.
We must clear this bit to prevent errors in exception unstacking.
This caused a stack offset when booting from the built-in EFM32GG
bootloader.

Fixes #22977

Signed-off-by: Luuk Bosma <l.bosma@interay.com>
2020-02-27 19:26:04 +02:00
Luuk Bosma
cc21aba55f arch: arm: aarch32: clear CPACR for every CPU that has a FPU
Upon reset, the Co-Processor Access Control Register is normally
0x00000000. However, it might be left modified by firmware running
before Zephyr boot.
This restores the register to its reset value, even if CONFIG_FLOAT
is not set.
Clearing before setting supports switching between Full access
and Privileged access only.

Refactor enable_floating_point to support initializing the
floating-point registers for every CPU that has an FPU.

Signed-off-by: Luuk Bosma <l.bosma@interay.com>
2020-02-27 19:26:04 +02:00
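A hedged CMSIS-level sketch combining the two commits above (the in-tree code differs in structure; the CP10/CP11 field positions are architectural, and the CMSIS core header for the target Cortex-M is assumed to be included):

static void arm_fpu_boot_reset_sketch(void)
{
	/* Back to the architectural reset value even if a bootloader ran. */
	SCB->CPACR = 0;
#if defined(CONFIG_FLOAT)
	/* Grant full access to CP10/CP11 (bits 20-23) when the FPU is used. */
	SCB->CPACR |= (3UL << 20) | (3UL << 22);
#endif
	/* No active FP context left over from earlier firmware. */
	__set_CONTROL(__get_CONTROL() & ~CONTROL_FPCA_Msk);
	__DSB();
	__ISB();
}
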
Daniel Leung
bf50aae693 xtensa: save/restore scompare1 during context switch
Xtensa uses two instructions to perform an atomic compare-and-set:
first an instruction to set the comparison register, then the actual
compare-and-set instruction. There is a potential that
context switching is performed between these two instructions.
A restored context may have the wrong value in the comparison
register. So we need to save and restore the comparison
register during context switching.

Fixes #21800

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-02-27 12:42:26 +02:00
Abhishek Shah
f587c5f019 arch: arm64: mmu: Add zephyr execution regions
Add Zephyr execution regions (text, rodata, data, noinit, bss, etc.)
with proper attributes to the translation tables.
The linker script has been modified a little to align these sections to
the minimum translation granule (4 kB).

With this in place, code cannot be overwritten accidentally as it is
marked read-only. Similarly, execution is prohibited from the data/RW
sections as they are marked execute-never.

Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
2020-02-20 17:24:59 +02:00
Abhishek Shah
10a05a162f arch: arm64: Add MMU support
Add MMU support for ARMv8-A. We support a 4 kB translation granule.
Regions to be mapped with specific attributes are required to be
at least 4 kB aligned and can be provided through the platform file
(soc.c).

Signed-off-by: Abhishek Shah <abhishek.shah@broadcom.com>
2020-02-20 17:24:59 +02:00
Andrew Boie
9062a5ee91 revert: "program local APIC LDR register for..."
This reverts commit 87b65c5ac2.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-19 14:40:19 -08:00
Ioannis Glaropoulos
f8a5f0330d arch: arm: mpu: protect RNR when reading RBAR, RASR in ARMv7-M driver
We lock IRQs around writing to RNR and the immediate reading of RBAR
and RASR in the ARMv7-M MPU driver. We do this for the functions invoked
directly or indirectly by arch_buffer_validate(). This locking
guarantees that
- arch_buffer_validate() calls by ISRs may safely preempt each
  other
- arch_buffer_validate() calls by threads may safely preempt
  each other (i.e via context switch -out and -in again).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
db0ca2e31a arch: arm: userspace: lock swap to set PSP, PSPLIM in userspace enter
When entering user mode, and before the privileges are dropped,
the thread switches back to using its default (user) stack. For
stack limit checking not to lead to a stack overflow, the PSPLIM
and PSP register updates need to be done with PendSV IRQ locked.
This is because context-switch (done in PendSV IRQ) reprograms
the stack pointer limit register based on the current PSP
of the thread. This commit enforces PendSV locking and
unlocking while reprogramming PSP and PSPLIM when switching to
user stack at z_arm_userspace_enter().

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
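A C-level paraphrase of the locking pattern (the in-tree code is assembly and locks PendSV specifically; the coarser IRQ lock, the CMSIS intrinsics, and the stack_info field names used here are simplifications, and <kernel.h> plus the CMSIS core header are assumed):

static void switch_to_user_stack_sketch(struct k_thread *thread)
{
	unsigned int key = arch_irq_lock();	/* PendSV cannot run in here */

	/* Guard the user stack, then point PSP at its top, so a context
	 * switch never sees PSP below a stale PSPLIM. */
	__set_PSPLIM((uint32_t)thread->stack_info.start);
	__set_PSP((uint32_t)(thread->stack_info.start +
			     thread->stack_info.size));

	arch_irq_unlock(key);
}
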
Ioannis Glaropoulos
b09607dee5 arch: arm: aarch32: no PSLIM clearing in z_arm_userspace_enter()
Modifying the PSP via an MSR instruction is not subject to
stack limit checking so we can remove the relevant code
block at the beginning of z_arm_userspace_enter(), which clears
PSPLIM. We add a comment when setting the PSP to the privileged
stack to stress that clearing the PSPLIM is not required and it
is always a safe operation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
c4c595c56e arch: arm: userspace: lock swap to set PSP & PSPLIM in syscall return
When returning from a system call, the thread switches back
to using its default (user) stack. For stack limit checking
not to lead to a stack overflow, the updates of PSPLIM and
PSP registers need to be done with PendSV IRQ locked. This
is because context-switch (done in PendSV IRQ) reprograms
the stack pointer limit register based on the current PSP
of the thread. This commit enforces PendSV locking and
unlocking while reprogramming PSP and PSPLIM when returning
from a system call.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
6494658983 arch: arm: userspace: no PSPLIM clearing in z_arm_do_syscall() enter
In this commit we remove the PSPLIM clearing when entering
z_arm_do_syscall(), since we want PSPLIM to keep guarding
the user thread stack, until the thread has switched to its
privileged stack, for executing the system call.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
f00dfce891 arch: arm: userspace: set PSPLIM to guard default stack in SVCall
The thread will be in privileged mode after returning from SVCall. It
will use the default (user) stack before switching to the privileged
stack to execute the system call. We need to protect the user stack
against stack overflows until this stack transition. We update the
note in z_arm_do_syscall(), stating clearly that it executes with
stack protection when building with stack limit checking support
(ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
a5ecd71163 arch: arm: cortex-m: fix PSPLIM configuring in context-switch
When configuring the built-in stack guard, via setting the
PSPLIM register, during thread context-switch, we shall only
set PSPLIM to "guard" the thread's privileged stack area when
the thread is actually using it (PSP is on this stack).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
16ce4b6953 arch: arm: cortex-m: move PSPLIM clearing in the relevant function
We do not need to have the PSPLIM clearing directly inside
the PendSV handler and outside the function that configures
it, configure_builtin_stack_guard(), since the latter is also
invoked inside the PendSV handler. This commit moves the
PSPLIM clearing inside configure_builtin_stack_guard(). The
patch is not introducing any behavioral change on the
stack limit checking mechanism for Cortex-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
36e80673f9 arch: arm: aarch32: cortex-m: introduce offset for stack info start
We add the mechanism to generate offset #defines for
thread stack info start, to be used directly in ASM.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
4223b71b77 arch: aarch32: define macro for PendSV IRQ priority level
We introduce a macro to define the IRQ priority level for
PendSV, and use it in arch/arm/include/aarch32/exc.h
to set the PendSV IRQ level. The commit does not change
the behavior of the PendSV interrupt.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
f9d9b7642e arch: aarch32: document exception priority scheme for 32-bit ARM
This commit adds some documentation for the exception
priority scheme for 32-bit ARM architecture variants.
In addition we document that SVCall priority level for
ARMv6-M is implicitly set to highest (by leaving it as
default).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Ioannis Glaropoulos
446ac06458 arch: arm: core: aarch32: fix wrong indentation in thread.c
Some minor indentation fixes in thread.c.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-02-19 12:19:43 -08:00
Zide Chen
87b65c5ac2 interrupt_controller: program local APIC LDR register for xAPIC
If IO APIC is in logical destination mode, local APICs compare their
logical APIC ID defined in LDR (Logical Destination Register) with
the destination code sent with the interrupt to determine whether or not
to accept the incoming interrupt.

This patch programs LDR in xAPIC mode to support IO APIC logical mode.

The local APIC ID from the local APIC ID register can't be used as the
'logical APIC ID' because LAPIC IDs may not be consecutive numbers,
which makes it impossible for the LDR to encode 8 IDs within 8 bits.

This patch chooses 0 for the BSP and, for APs, cpu_number, which is the
index into x86_cpuboot[] and is ultimately assigned in z_smp_init[].

Signed-off-by: Zide Chen <zide.chen@intel.com>
2020-02-19 10:25:10 -08:00
Wayne Ren
0d7769f3b8 arch: arc: fix the bug that idle thread blocks other threads
* for the COOP_SCHED case, i.e. when PREEMPT_ENABLED is not enabled, the
  idle thread will block other threads, which is not correct.

* remove the check of PREEMPT_ENABLED in the epilogue of irq and
  exception handling. Let the scheduler (should_preempt()) decide
  whether the thread should be preempted.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-02-13 09:39:25 -08:00
Ulf Magnusson
378d6b137a kconfig: Replace non-defconfig single-symbol 'if's with 'depends on'
Same deal as in commit eddd98f811 ("kconfig: Replace some single-symbol
'if's with 'depends on'"), for the remaining cases outside defconfig
files. See that commit for an explanation.

Will do the defconfigs separately in case there are any complaints
there.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-02-12 10:32:34 -06:00
Wayne Ren
7249fbf5ad arch: arc: fix the bug of irq_offload
* remove irq lock/unlock which is not needed because of
  the protection of offload_sem in irq_offload
* simplify the assembly code related to irq_offload, removing
  the thread switch logic
* the old code may do a thread switch in the epilogue of
  irq_offload handling with interrupts locked; this is not correct and
  may cause irq_offload-related code to crash.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-02-12 14:30:38 +02:00
Stephanos Ioannidis
bea3ee0ed0 arch: arm: Fix incorrect Cortex-R interrupt state control logic.
This commit fixes incorrect Cortex-R interrupt lock, unlock and state
check function implementations.

The issues can be summarised as follows:

1. The current implementation of 'z_arch_irq_lock' returns the value
  of CPSR as the IRQ key and, since CPSR contains many other state
  bits, this caused 'z_arch_irq_unlocked' to return false even when
  IRQ is unlocked. This problem is fixed by isolating only the I-bit
  of CPSR and returning this value as the IRQ key, such that it
  returns a non-zero value when interrupt is disabled.

2. The current implementation of 'z_arch_irq_unlock' directly updates
  the value of CPSR control field with the IRQ key and this can cause
  other state bits in CPSR to be corrupted. This problem is fixed by
  conditionally enabling interrupt using CPSIE instruction when the
  value of IRQ key is a zero.

3. The current implementation of 'z_arch_is_in_isr' checks the value
  of CPSR MODE field and returns true if its value is IRQ or FIQ.
  While this does not normally cause an issue, the function can return
  false when IRQ offloading is used because the offload function
  executes in SVC mode. This problem is fixed by adding check for SVC
  mode.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-02-11 08:03:37 -08:00
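A hedged inline-assembly sketch of the corrected lock/unlock behaviour described above (the in-tree version differs in detail; the CPSR I bit is bit 7):

static inline unsigned int irq_lock_sketch(void)
{
	unsigned int cpsr;

	__asm__ volatile("mrs %0, cpsr" : "=r"(cpsr));
	__asm__ volatile("cpsid i" ::: "memory");

	/* Return only the I bit, so the key is zero exactly when interrupts
	 * were enabled before the lock. */
	return cpsr & (1U << 7);
}

static inline void irq_unlock_sketch(unsigned int key)
{
	if (key == 0U) {
		/* Re-enable with CPSIE instead of writing the key back into
		 * CPSR, leaving the other state bits untouched. */
		__asm__ volatile("cpsie i" ::: "memory");
	}
}
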
Andrew Boie
768a30c14f x86: organize 64-bit ESF
The callee-saved registers have been separated out and will not
be saved/restored if exception debugging is shut off.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 08:51:43 -05:00
Andy Ross
0e32f4dab0 arch/x86_64: Save RFLAGS during arch_switch()
The context switch implementation forgot to save the current flag
state of the old thread, so on resume the flags would be restored to
whatever value they had at the last interrupt preemption or thread
initialization.  In practice this guaranteed that the interrupt enable
bit would always be wrong, becuase obviously new threads and preempted
ones have interrupts enabled, while arch_switch() is always called
with them masked.  This opened up a race between exit from
arch_switch() and the final exit path in z_swap().

The other state bits weren't relevant -- the oddball ones aren't used
by Zephyr, and as arch_switch() on this architecture is a function
call the compiler would have spilled the (caller-save) comparison
result flags anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Andy Ross
eefd3daa81 kernel/smp: arch/x86_64: Address race with CPU migration
Use of the _current_cpu pointer cannot be done safely in a preemptible
context.  If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.

Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (most important being
a few uses of _current outside of locks, and the arch_is_in_isr()
implementation).

Note that the resulting _current expression now requires locking and
is going to be somewhat slower.  Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.

Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
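A hedged sketch of the guarded access pattern this implies (simplified; the actual kernel adds a validation assert and takes locks at the use sites, and _current_cpu/arch_irq_lock are kernel-internal names):

/* Sample the per-CPU "current thread" pointer only with interrupts locked,
 * so the executing thread cannot migrate between reading _current_cpu and
 * reading its ->current field. */
static struct k_thread *current_thread_sketch(void)
{
	unsigned int key = arch_irq_lock();
	struct k_thread *t = _current_cpu->current;

	arch_irq_unlock(key);
	return t;
}
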
Andrew Boie
efc5fe07a2 kernel: overhaul unused stack measurement
The existing stack_analyze APIs had some problems:

1. Not properly namespaced
2. Accepted the stack object as a parameter, yet the stack object
   does not contain the necessary information to get the associated
   buffer region; the thread object is needed for this
3. Caused a crash on certain platforms that do not allow inspection
   of unused stack space for the currently running thread
4. No user mode access
5. Separately passed in thread name

We deprecate these functions and add a new API
k_thread_stack_space_get() which addresses all of these issues.

A helper API log_stack_usage() also added which resembles
STACK_ANALYZE() in functionality.

Fixes: #17852

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 10:02:35 +02:00
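A short usage sketch of the replacement API (the signature is as I recall it from the Zephyr headers of that era; treat the exact prototype and header paths as assumptions):

#include <kernel.h>
#include <sys/printk.h>

void report_my_stack_headroom(void)
{
	size_t unused;

	/* Returns 0 on success and writes the unused byte count. */
	if (k_thread_stack_space_get(k_current_get(), &unused) == 0) {
		printk("unused stack: %zu bytes\n", unused);
	}
}
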
Daniel Leung
52993afd62 Revert "arch: xtensa: Use reset-vector.S in booloader code"
This reverts commit 9987c2e2f9
which spills SoC configs into architecture files and is not
exactly desirable. So revert it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-02-08 10:01:24 +02:00
Ulf Magnusson
de42aea18f kconfig/cmake: Check that one of the CONFIG_<arch> symbols is set
All SoCs must now 'select' one of the CONFIG_<arch> symbols. Add an
ARCH_IS_SET helper symbol that's selected by the arch symbols and
checked in CMake, printing a warning otherwise.

Might save people some time until they're used to the new scheme.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-02-08 00:50:08 -06:00
Ulf Magnusson
c5839f834b kconfig: Remove assignments to CONFIG_<arch> syms and hide them
All board defconfig files currently set the architecture in addition to
the board and the SoC, by setting e.g. CONFIG_ARM=y. This spams up
defconfig files.

CONFIG_<arch> symbols currently being set in configuration files also
means that they are configurable (can be changed in menuconfig and in
configuration files), even though changing the architecture won't work,
since other things get set from -DBOARD=<board>. Many boards also allow
changing the architecture symbols independently from the SoC symbols,
which doesn't make sense.

Get rid of all assignments to CONFIG_<arch> symbols and clean up the
relationships between symbols and the configuration interface, like
this:

1. Remove the choice with the CONFIG_<arch> symbols in arch/Kconfig and
   turn the CONFIG_<arch> symbols into invisible
   (promptless/nonconfigurable) symbols instead.

   Getting rid of the choice allows the symbols to be 'select'ed (choice
   symbols don't support 'select').

2. Select the right CONFIG_<arch> symbol from the SOC_SERIES_* symbols.
   This makes sense since you know the architecture if you know the SoC.

   Put the select on the SOC_* symbol instead for boards that don't have
   a SOC_SERIES_*.

3. Remove all assignments to CONFIG_<arch> symbols. The assignments
   would generate errors now, since the symbols are promptless.

The change was done by grepping for assignments to CONFIG_<arch>
symbols, finding the SOC_SERIES_* (or SOC_*) symbol being set in the
same defconfig file, and putting a 'select' on it instead.

See
https://github.com/ulfalizer/zephyr/commits/hide-arch-syms-unsquashed
for a split-up version of this commit, which will make it easier to see
how stuff was done. This needs to go in as one commit though.

This change is safer than it might seem re. outstanding PRs, because any
assignment to CONFIG_<arch> symbols generates an error now, making
outdated stuff easy to catch.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-02-08 00:50:08 -06:00
Ulf Magnusson
7b52475720 kconfig: Remove duplicated ARM_MPU dependency on CUSTOM_SECTION_ALIGN
CUSTOM_SECTION_ALIGN is already defined within an 'if ARM_MPU', so it
does not need a 'depends on ARM_MPU'.

Flagged by https://github.com/zephyrproject-rtos/ci-tools/pull/128.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-02-07 15:50:33 -06:00
Ulf Magnusson
1b604b946c kconfig: arm: Remove 'if CPU_CORTEX_R' within 'if CPU_CORTEX_R'
Flagged by https://github.com/zephyrproject-rtos/ci-tools/pull/128.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-02-07 15:48:29 -06:00
Anas Nashif
73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Wentong Wu
aa5d45d7cc tracing: add TRACING_ISR Kconfig
Add a TRACING_ISR Kconfig option to help high-latency backends work well.

Currently the ISR tracing hook function is put at the beginning and
end of the ISR wrapper. When an ISR is needed in the tracing path
(especially for the tracing backend), it can easily cause the tracing
buffer to be exhausted if the async tracing method is enabled. Also it
will increase system latency if all the ISRs are traced. So add
TRACING_ISR to enable/disable ISR tracing here. Later, a filter-out
mechanism based on IRQ number will be added.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-02-05 23:54:26 -05:00
Wayne Ren
d1fa27c42f arch: arc: fix the bug where buffer validate should be atomic
Reported by #22290 and similar to #22275: the access to MPU regs
in buffer validation should be atomic.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-02-05 08:49:10 -08:00
Wayne Ren
8450d26b34 arch: arc: fix the bug where firq comes out in user mode
The FIRQ case is similar to the regular IRQ case; it also needs to
consider the case of a register bank switch.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-02-05 08:49:10 -08:00
Wayne Ren
05a669c568 arch: arc: fix the bug where regular irq comes out in user mode
If USERSPACE is configured, we need to record the user/kernel mode
of the interrupted thread, because the switch of aux_sec_k_sp/aux_user_sp
depends on aux_irq_act's U bit.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2020-02-05 08:49:10 -08:00
Andrei Emeltchenko
9987c2e2f9 arch: xtensa: Use reset-vector.S in booloader code
Use the BOOTLOADER definition to separate bootloader code. This allows
using the same reset-vector.S file when building the bootloader and when
CONFIG_XTENSA_RESET_VECTOR is enabled.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2020-02-05 10:43:25 -05:00
Johan Hedberg
8183a7fd29 arch: xtensa: Add support for Intel Apollolake
Add the necessary architecture changes for Intel Apollolake.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-02-05 10:43:25 -05:00
Andy Ross
5b85d6da6a arch/x86_64: Poison instruction pointer of running threads
There was a bug where double-dispatch of a single thread on multiple
SMP CPUs was possible.  This can be mind-bending to diagnose, so when
CONFIG_ASSERT is enabled add an extra instruction to __resume (the
shared code path for both interrupt return and context switch) that
poisons the shared RIP of the now-running thread with a recognizable
invalid value.

Now attempts to run the thread again will crash instantly with a
discoverable cookie in their instruction pointer, and this will remain
true until it gets a new RIP at the next interrupt or switch.

This is under CONFIG_ASSERT because it meets the same design goals of
"a cheap test for impossible situations", not because it's part of the
assertion framework.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Carlo Caione
0958673ee1 arch: arm64: Enable shared IRQ line for UART
Enable the shared IRQ for the UART line and enable the remaining tasks
that depend on a separate declaration of the TX/RX/Err/... IRQs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
3cdead707d arch: arm64: Fix cmsis_rtos tests
The cmsis_rtos tests are failing because the stack size used by CMSIS is
too small. Customize the stack size for the aarch64 architecture and
re-enable the tests.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
3aef85458d arch: arm64: Dump registers content on fatal error
Extend the ESF structure and dump the most important registers in the
error exception handler.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
6531d1c649 arch: arm64: Enable config option to switch from EL3 to EL1
ARMv8-A SoCs enter EL3 after reset. Add a new config option
(CONFIG_SWITCH_TO_EL1) to switch from EL3 to EL1 at boot and default it
to 'y'.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
0d5e011be4 arch: arm64: Enable tracing
Enable the tracing hooks on the ARM64 platform and white-list the
tracing tests.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
528319bff7 arch: arm64: Support all the ELn
While QEMU's Cortex-A53 emulation by default only emulates a CPU in EL1,
other QEMU forks (for example the QEMU released by Xilinx) and real
hardware start in EL3.

To support all the ELn we introduce a macro to identify the Exception
Level at run time and take the correct actions.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
868264b8b4 tests: benchmarks: Add ARM64 case
To be able to pass the unit test we need to add a set of defines for the
ARM64 architecture. Fix this.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
87d8a035dd arch: arm64: Support aarch64-gcc compiler
To be able to successfully compile the kernel for the ARM64 architecture
we have to tweak the compiler-related files to be able to use the
AArch64 GCC compiler.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Carlo Caione
1be0c05311 arch: arm64: Introduce ARM64 (AArch64) architecture
Introduce the basic ARM64 architecture support.

A new CONFIG_ARM64 symbol is introduced for the new architecture and new
cmake / Kconfig files are added to switch between ARM and ARM64.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-02-01 08:08:43 -05:00
Ioannis Glaropoulos
8345434a24 arch: arm: mpu: no dynamic MPU re-program in swap if not required
Dynamic MPU regions are used in build configurations with User
mode or MPU-based stack-overflow guards. If these features are
disabled, we skip calling the ARM function for re-programming
the MPU peripheral during context-switch. We also skip doing
this when jumping to the main thread (although this brings limited
performance gain, as it is called once in the boot cycle).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-01-31 21:38:35 +01:00
Øyvind Rønningstad
350f184d98 arch: common: Delete isr_tables.ld which was a copy of intlist.ld
Refer directly to intlist.ld in zephyr_linker_sources()

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-01-30 14:19:14 -05:00
Sebastian Bøe
7d3fa1c2c8 gen_isr_tables: Improve error message
Users are reportedly not able to understand how to debug the following
error message from gen_isr_tables:

gen_isr_tables.py: multiple registrations at table_index 8 for irq 8

Debugging these kinds of issues is difficult, so we need to give
users as much information as possible.

To make it clearer that it could be an abuse of the 'IRQ_CONNECT' API
that is causing the issue we add this to the error message.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2020-01-29 14:21:00 -08:00
Ulf Magnusson
cf89ba33ea global: Fix up leading/trailing blank lines in files
To make the updated test in
https://github.com/zephyrproject-rtos/ci-tools/pull/121 clean, though it
only checks modified files.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-27 17:41:55 -06:00
Øyvind Rønningstad
05f0d85b6a extensions.cmake: Replace TEXT_START with ROM_START
In zephyr_linker_sources().
This is done since the point of the location is to place things at given
offsets. This can only be done consistently if the linker code is placed
into the _first_ section.

All uses of TEXT_START are replaced with ROM_START.

ROM_START is only supported in some arches, as some arches have several
custom sections before text. These don't currently have ROM_START or
TEXT_START available, but that could be added with a bit of refactoring
in their linker script.

No SORT_KEYs are changed.

This also fixes an error introduced when TEXT_START was added, where
TEXT_SECTION_OFFSET was applied to riscv's common linker.ld instead of
to openisa_rv32m1's specific linker.ld.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2020-01-23 03:22:59 -08:00
Ioannis Glaropoulos
5fcbe4e5d5 arch: arm: cortex-m: move openocd sections after _vector_end
openocd linker sections are not supposed to be part of the
vector table sections. Place the sections after we define
the _vector_end linker symbol.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-01-23 03:22:59 -08:00
Andy Ross
86430d8d46 kernel: arch: Clarify output switch handle requirements in arch_switch
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme.  As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.

Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Anas Nashif
22b95a2047 base: add error checking macros
Define three options for runtime error handling:
- assert on all errors (ASSERT_ON_ERRORS)
- no runtime checks (no asserts, no runtime error handling)
  (NO_RUNTIME_CHECKS)
- full runtime error handling (the default) (RUNTIME_ERROR_CHECKS)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-20 17:19:54 -05:00
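As a rough sketch of how a check macro might dispatch on these three options (the CHECKIF name and exact expansion are illustrative, patterned on the kernel's check helper):

    #include <stddef.h>
    #include <errno.h>
    #include <sys/__assert.h>

    #if defined(CONFIG_ASSERT_ON_ERRORS)
    #define CHECKIF(expr) __ASSERT_NO_MSG(!(expr)); if (0)
    #elif defined(CONFIG_NO_RUNTIME_CHECKS)
    #define CHECKIF(expr) if (0)
    #else /* CONFIG_RUNTIME_ERROR_CHECKS, the default */
    #define CHECKIF(expr) if (expr)
    #endif

    int set_len(size_t len)
    {
        CHECKIF(len == 0U) {
            return -EINVAL;  /* graceful handling under full runtime checks */
        }
        return 0;
    }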
Stephanos Ioannidis
c4e6bd2d7c arch: arm: cortex_m: Add CPU DWT feature symbol
This commit adds a Kconfig symbol for specifying whether the SoC
implements the CPU DWT feature.

The Data Watchpoint and Trace (DWT) is an optional debug unit for the
Cortex-M family cores (except ARMv6-M; i.e. M0 and M0+) that provides
watchpoints, data tracing and system profiling capabilities.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-01-20 14:05:47 +01:00
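For example, with the new symbol set (assumed here to be spelled CONFIG_CPU_CORTEX_M_HAS_DWT), code can use the CMSIS-Core DWT cycle counter roughly as follows:

    #include <stdint.h>
    #include <soc.h>   /* assumed to pull in the device's CMSIS header */

    #ifdef CONFIG_CPU_CORTEX_M_HAS_DWT
    static inline uint32_t dwt_cycle_count(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable trace unit */
        DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;             /* start cycle counter */
        return DWT->CYCCNT;
    }
    #endif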
Andrew Boie
e34f1cee06 x86: implement kernel page table isolation
Implement a set of per-cpu trampoline stacks which all
interrupts and exceptions will initially land on, and which also
serve as an intermediate stack for privilege changes, as we need
some stack space to swap page tables.

Set up the special trampoline page which contains all the
trampoline stacks, TSS, and GDT. This page needs to be
present in the user page tables or interrupts don't work.

CPU exceptions, with KPTI turned on, are treated as interrupts
and not traps so that we have IRQs locked on exception entry.

Add some additional macros for defining IDT entries.

Add special handling of locore text/rodata sections when
creating user mode page tables on x86-64.

Restore qemu_x86_64 to use KPTI, and remove restrictions on
enabling user mode on x86-64.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-17 16:17:39 -05:00
Ulf Magnusson
4e85006ba4 dts: Rename generated_dts_board*.{h,conf} to devicetree*.{h,conf}
generated_dts_board.h is pretty redundant and confusing as a name. Call
it devicetree.h instead.

dts.h would be another option, but DTS stands for "devicetree source"
and is the source code format, so it's a bit confusing too.

The replacement was done by grepping for 'generated_dts_board' and
'GENERATED_DTS_BOARD'.

Two build diagram and input-output SVG files were updated as well, along
with misc. documentation.

hal_ti, mcuboot, and ci-tools updates are included too, in the west.yml
update.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-17 17:57:59 +01:00
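In application code the change simply amounts to including the new header name:

    /* before: #include <generated_dts_board.h> */
    #include <devicetree.h>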
Andrew Boie
2690c9e550 x86: move some per-cpu initialization to C
No reason we need to stay in assembly domain once we have
GS and a stack set up.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
a594ca7c8f kernel: cleanup and formally define CPU start fn
The "key" parameter is legacy, remove it.

Add a typedef for the expected function pointer type.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
808cca0efb x86: disable usermode on 64-bit unless no meltdown
KPTI is still work-in-progress on x86_64. Don't allow
user mode to be enabled unless the SOC/board configuration
indicates that the CPU in use is invulnerable to meltdown
attacks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
4fcf28ef25 x86: mitigate swapgs Spectre V1 attacks
See CVE-2019-1125. We mitigate this by adding an 'lfence'
upon interrupt/exception entry after the decision has been
made whether it's necessary to invoke 'swapgs' or not.

Only applies to x86_64, 32-bit doesn't use swapgs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
3d80208025 x86: implement user mode on 64-bit
- In early boot, enable the syscall instruction and set up
  necessary MSRs
- Add a hook to update page tables on context switch
- Properly initialize thread based on whether it will
  start in user or supervisor mode
- Add landing function for system calls to execute the
  desired handler
- Implement arch_user_string_nlen()
- Implement logic for dropping a thread down to user mode
- Reserve per-CPU storage space for user and privilege
  elevation stack pointers, necessary for handling syscalls
  when no free registers are available
- Proper handling of gs register considerations when
  transitioning privilege levels

Kernel page table isolation (KPTI) is not yet implemented.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
07c278382a x86: remove retpoline code
This code:

1) Doesn't work
2) Hasn't ever been enabled by default
3) We mitigate Spectre V2 via Extended IBRS anyway

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
077b587447 x86: implement hw-based oops for both variants
We use a fixed value of 32 as the way interrupts/exceptions
are set up in x86_64's locore.S does not lend itself to
Kconfig configuration of the vector to use.

HW-based kernel oops is now permanently on, there's no reason
to make it optional that I can see.

Default vectors for IPI and irq offload adjusted to not
collide.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
708d5f7922 x86: don't use privilege stack areas as a guard
This is causing problems, as if we create a thread in
a system call we will *not* be using the kernel page
tables if CONFIG_KPTI=n.

Just don't fiddle with this page's permissions; we don't
need it as a guard area anyway since we have a stack
guard placed immediately before it, and this page
is unused if user mode isn't active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
06c4207602 x86: add CONFIG_X86_USERSPACE for Intel64
Hidden config to select dependencies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
edc14e50ad x86: up-level speculative attack mitigations
These are now part of the common Kconfig and we
build spec_ctrl.c for all.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
c71e66e2a5 x86: add system call functions for 64-bit
Nothing too fancy here, we try as much as possible to
use the same register layout as the C calling convention.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7f82b99ad4 x86: up-level some user mode functions
These are now common code, all are related to user mode
threads. The rat's nest of ifdefs in ia32's arch_new_thread
has been greatly simplified, there is now just one hook
if user mode is turned on.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7ea958e0dd x86: optimize locations of psp and thread ptables
z_x86_thread_page_tables_get() now works for both user
and supervisor threads, returning the kernel page tables
in the latter case. This API has been up-leveled to
a common header.

The per-thread privilege elevation stack initial stack
pointer, and the per-thread page table locations are no
longer computed from other values, and instead are stored
in thread->arch.

A problem where the wrong page tables were dumped out
on certain kinds of page faults has been fixed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
e45c6eeebc x86: expose APIs for dumping MMU entry flags
Add two new non-static APIs for dumping out the
page table entries for a specified memory address,
and move to the main MMU code. Has debugging uses
when trying to figure out why memory domains are not
set up correctly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7c293831c6 x86: add support for 64-bit thread ptables
Slightly different layout since the top-level PML4
is page-sized and must be page-aligned.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
fc589d7279 x86: implement 64-bit exception recovery logic
The esf has a different set of members on 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
ded0185eb8 x86: add GDT descriptors for user mode
These are arranged in the particular order required
by the syscall/sysret instructions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
692fda47fc x86: use MSRs for %gs
We don't need to set up GDT data descriptors for setting
%gs. Instead, we use the x86 MSRs to set GS_BASE and
KERNEL_GS_BASE.

We don't currently allow user mode to set %gs on its own,
but later on if we do, we have everything set up to issue
'swapgs' instructions on syscall or IRQ.

Unused entries in the GDT have been removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
3256e9e00b x86: use BIT() macros for cr0/cr4 bits
Easier to establish correspondence with the technical
manuals.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
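A small illustration of the intent, using Zephyr's BIT() helper; the control-register bit positions follow the Intel SDM, and the macro names here are illustrative:

    #include <sys/util.h>

    #define CR0_PG   BIT(31)  /* paging enable               */
    #define CR0_WP   BIT(16)  /* supervisor write protection */
    #define CR4_PAE  BIT(5)   /* physical address extension  */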
Andrew Boie
10d033ebf0 x86: enable recoverable exceptions on 64-bit
These were previously assumed to always be fatal.
We can't have the faulting thread's XMM registers
clobbered, so put the SIMD/FPU state onto the stack
as well. This is fairly large (512 bytes) and the
exception stack is already uncomfortably small, so
increase to 2K.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Stephanos Ioannidis
bc8524eb82 arch: arm: Rewrite Cortex-R reset vector function.
This commit addresses the following issues:

1. Add a new Kconfig configuration for specifying Dual-redundant Core
   Lock-step (DCLS) processor topology.

2. Register initialisation is only required when Dual-redundant Core
   Lock-step (DCLS) is implemented in hardware. This initialisation is
   required on DCLS only because the architectural registers are in an
   indeterminate state after reset and therefore the initial register
   state of the two parallel executing cores are not guaranteed to be
   identical, which can lead to DCCM detecting it as a hardware fault.
   A conditional compilation check for this hardware configuration
   using the newly added CONFIG_CPU_HAS_DCLS flag has been added.

3. The existing CPU register initialisation code did not take into
   account the banked registers for every execution mode. The new
   implementation ensures that all architectural registers of every
   mode are initialised.

4. Add VFP register initialisation for when floating-point support is
   enabled and the core is configured in DCLS topology. This
   initialisation sequence is required for the same reason given in
   the first issue.

5. Add provision for platform-specific initialisation on Cortex-R
   using PLATFORM_SPECIFIC_INIT config and z_platform_init function.

6. Remove seemingly pointless and inadequately defined STACK_MARGIN.
   Not only does it violate the 8-byte stack alignment rule, it does
   not provide any form of real stack protection.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-01-10 10:34:17 +01:00
Anas Nashif
26cecf74c4 arc: remove old macro used for event logger
Leftover macro from the kernel event logger which was replaced by
generic tracing infrastructure.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-09 11:21:19 -05:00
Daniel Leung
017a8eea4f xtensa: fix atomic_cas reporting value swapped even when not
The atomic_cas function was using incorrect register when determining
whether value was swapped. The swapping instruction s32c1i in
atomic_cas stores the value at memory location in register a4
regardless of whether swapping is done. In this case, the register a4
should be used to determine whether a swap is done. However, register
a3 (containing the oldValue as function argument) is used instead.
Since register a5 contains the old value at address loaded before
the swapping instruction, a3 and a5 contain the same value.
Since a3 == a5 is always true in this case, the function will always
return 1 even though values are not swapped. So fix it by using
the correct register.

Also, in case the value is not swapped, the function now jumps to where
it returns zero instead of loading from memory and comparing again.
Previously it simply looped until swapping was done, which did not
align with the API, where it should return 0 when swapping is not done
(regardless of whether the memory location contains the old value or not).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-01-08 19:57:05 -05:00
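The contract that the fix restores, shown as a usage sketch (the flag variable and the claim logic are illustrative):

    #include <sys/atomic.h>

    static atomic_t flag = ATOMIC_INIT(0);

    int try_claim(void)
    {
        if (atomic_cas(&flag, 0, 1)) {
            return 1;  /* we observed 0 and swapped in 1: claim succeeded */
        }
        /* Swap did not happen: atomic_cas() must return 0 here, never 1. */
        return 0;
    }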
Ioannis Glaropoulos
a07cb30d18 arch: arm: cortex-m: implement support for dynamic direct interrupts
This commits implements the support for dynamic direct
interrupts for the ARM Cortex-M architecture, and exposes
the support to the user as an ARM-only API.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-01-08 10:15:09 -08:00
Ioannis Glaropoulos
33048680b0 arch: arm: introduce support for dynamic direct interrupts
With this commit we add support for Dynamic Direct interrupts
for the ARM Cortex-M architecture. For that we introduce a new,
user-enabled, Kconfig symbol, DYNAMIC_DIRECT_INTERRUPTS.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2020-01-08 10:15:09 -08:00
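A usage sketch modelled on the pattern documented for this API; the IRQ number, priority, and handler name are hypothetical, and the exact connection-macro spelling and argument order should be checked against the ARM arch headers:

    #include <kernel.h>
    #include <irq.h>

    #define MY_DEV_IRQ  24   /* hypothetical IRQ line */
    #define MY_DEV_PRIO 2    /* hypothetical priority */

    ISR_DIRECT_DECLARE(my_direct_isr)
    {
        /* service the device here */
        return 1;            /* ask for a rescheduling check */
    }

    void register_my_isr(void)
    {
        ARM_IRQ_DIRECT_DYNAMIC_CONNECT(MY_DEV_IRQ, MY_DEV_PRIO, 0,
                                       no_reschedule);
        irq_connect_dynamic(MY_DEV_IRQ, MY_DEV_PRIO,
                            (void (*)(void *))my_direct_isr, NULL, 0);
        irq_enable(MY_DEV_IRQ);
    }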
Stephanos Ioannidis
916a8eb3a5 arch: arm: Fix Cortex-R power management interrupt handling
The system power management handling code in the '_isr_wrapper' enables
interrupts by executing the 'cpsie i' instruction, which causes a
system crash on the Cortex-R devices because the Cortex-R arch port
does not support nested interrupts at this time.

This commit restricts the interrupt state manipulations in the system
power management code to the Cortex-M arch, in order to prevent
interrupt nesting on other AArch32 family archs (only Cortex-R for
now).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-01-08 15:20:40 +01:00
Ulf Magnusson
def1f0e2d5 devicetree: Remove DT_SRAM_{BASE_ADDRESS,SIZE}, use CONFIG_* versions
The SRAM address and size are currently available as both
DT_SRAM_{BASE_ADDRESS,SIZE} and as CONFIG_SRAM_{BASE_ADDRESS,SIZE} (via
the Kconfig preprocessor).

Use the CONFIG_SRAM_* versions everywhere, and remove generation of the
DT_SRAM_* versions from gen_defines.py.

The Kconfig symbols currently depend on 'ARC || ARM || NIOS2 || X86'.
Not sure why, so I removed it.

It looks like no configuration files set CONFIG_SRAM_* at the moment, so
another option might be to use the DT_* symbols everywhere instead. Some
Kconfig.defconfig.series files add defaults to them though.

Also improve the help texts for CONFIG_SRAM_* to say that they normally
come from devicetree rather than configuration files.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-07 17:19:36 +01:00
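In C code the surviving spelling is then used directly (CONFIG_SRAM_SIZE is given in kilobytes, hence the KB() helper; the variable names are illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/util.h>

    static const uintptr_t sram_start = CONFIG_SRAM_BASE_ADDRESS;
    static const size_t    sram_bytes = KB(CONFIG_SRAM_SIZE);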
Olof Johansson
a6b3b616f5 riscv: use standard MSTATUS
This is no longer needed, since all in-tree platforms are only using
the standard mstatus formats. Remove it to avoid the complexity.

Signed-off-by: Olof Johansson <olof@lixom.net>
2020-01-06 13:27:45 -05:00
Ulf Magnusson
bd9962d8d9 kconfig: Remove '# hidden' comments on promptless symbols
Same deal as in commit 41713244b3 ("kconfig: Remove '# Hidden' comments
on promptless symbols"). I forgot to do a case-insensitive search.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-03 11:38:40 +01:00
Stephanos Ioannidis
cad30e817f arch: arm: Enable CMSIS-Core for Cortex-R by default
This commit enables the CMSIS-Core(R) processor interface driver for
the Cortex-R platforms by default.

The CMSIS-Core component provides a set of standard interface functions
to control the Cortex-R series processor cores and will be required by
the arch port as well as other CMSIS library components (e.g. CMSIS-DSP
and CMSIS-NN).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-01-02 16:59:04 -05:00
Ulf Magnusson
41713244b3 kconfig: Remove '# Hidden' comments on promptless symbols
How prompts work is better documented nowadays, and these comments might
not be that helpful if you don't know.

There are lots of promptless symbols that don't have a comment.

Also fix up some comments in arch/Kconfig that seem misplaced/redundant,
and clean up some whitespace (no blank line after a comment makes it
look like it only applies to the symbol directly after it to me).

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-12-21 10:30:33 -05:00
Ulf Magnusson
9c9eb3452b kconfig: Fix some formatting nits
Same deal as in commit bd6e04411e ("kconfig: Clean up header comments
and make them consistent") and commit 1f38ea77ba ("kconfig: Clean up
'config  FOO' (two spaces) definitions"), for some newly-introduced
stuff.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-12-21 10:30:02 -05:00
Ulf Magnusson
6ef29c0250 kconfig: Remove some redundant single-item menus and ifs
A single menu within an if like

    if FOO

    menu "blah"

    ...

    endmenu

    endif

can be replaced with

    menu "blah"
            depends on FOO

    ...

    endmenu

Fix up all existing instances.

Also remove redundant extra menus underneath 'menuconfig' symbols.
'menuconfig' already creates a menu.

Also remove the menu in arch/arm/core/aarch32/Kconfig around the
"Floating point ABI" choice. The choice depends on FLOAT, which depends
on CPU_HAS_FPU, so remove the 'depends on CPU_HAS_FPU' too.

Piggyback removing a redundant 'default n' for BME280.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-12-21 10:26:54 -05:00
Carlo Caione
d048faacf2 aarch32: Add header shims to support old file locations
Out-of-tree code can still be using the old file locations. Introduce
header shims to include the headers from the new correct location and
print a warning message.

Also add a new Kconfig symbol to suppress this warning.

The shim will go away after two releases, so make sure to adapt your
application for the new locations.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2019-12-20 11:40:59 -05:00
Carlo Caione
13e671e381 arch: arm: Fix header guards
Fix the header guards for the ARM header files to reflect the new code
location.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2019-12-20 11:40:59 -05:00
Carlo Caione
aec9a8c4be arch: arm: Move ARM code to AArch32 sub-directory
Before introducing the code for ARM64 (AArch64) we need to relocate the
current ARM code to a new AArch32 sub-directory. For now we can assume
that no code is shared between ARM and ARM64.

There are no functional changes. The code is moved to the new location
and the file paths are fixed to reflect this change.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2019-12-20 11:40:59 -05:00
Øyvind Rønningstad
0b2c8e201c arm, x86, riscv: linker.ld: Move TEXT_SECTION_OFFSET
to its own linker file snippet so snippets can be placed before it.
Using zephyr_linker_sources().

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2019-12-20 08:54:53 -05:00
Øyvind Rønningstad
1134f1b49d cortex_m: linker.ld: Port secure entry funcs to zephyr_linker_sources()
Place in its own linker snippet file.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2019-12-20 08:54:53 -05:00
Øyvind Rønningstad
3925132456 arc: linker.ld: Port vector table to zephyr_linker_sources()
Place it in its own linker file snippet.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2019-12-20 08:54:53 -05:00
Øyvind Rønningstad
321462b310 arm: linker.ld: Port the vector table to zephyr_linker_sources()
Also port vector table relay.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2019-12-20 08:54:53 -05:00
Daniel Leung
b61f448a3f xtensa: add support to build HAL as part of build process
This adds the necessary bits to build the Xtensa HAL as
a module, and removes the bits to use the HAL built with
the Zephyr SDK.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-12-18 20:24:18 -05:00
Alberto Escolar Piedras
66bdb76e7c POSIX arch: Fix C++ main() linkage issue
To be able to define main() in C++ code we need to have its
prototype defined somewhere visibly. Otherwise name mangling
will prevent the linker from finding it.
Zephyr assumes a void main(void) prototype and therefore
this will be the prototype after renaming:
void zephyr_app_main(void);

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-12-18 21:53:47 +01:00
Andrew Boie
1b45de8010 x86: fix stack traces on x86_64
Runtime stack traces (at least as currently implemented)
don't work on x86_64 normally as RBP is treated as a general-
purpose register. Depend on CONFIG_NO_OPTIMIZATIONS to enable
this on 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-18 08:15:54 -05:00
Andrew Boie
7e014f6b30 x86: up-level QEMU arch_system_halt()
qemu_x86_64 will exit the emulator on a fatal system error,
like qemu_x86 already does.

Improves CI times when tests fail since sanitycheck will not
need to wait for the timeout to expire.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-17 18:56:54 -08:00
Andrew Boie
2b67ca8ac9 x86: improve exception debugging
We now dump more information for less common cases,
and this is now centralized code for 32-bit/64-bit.
All of this code is now correctly wrapped around
CONFIG_EXCEPTION_DEBUG. Some cruft and unused defines
removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-17 11:39:22 -08:00
Andrew Boie
a824821b86 kernel: fix k_mem_partition data types
We need a size_t and not a u32_t for partition sizes,
for 64-bit compatibility.

Additionally, app_memdomain.h was also casting the base
address to a u32_t instead of a uintptr_t.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-12 14:48:42 -08:00
Ulf Magnusson
984bfae831 global: Remove leading/trailing blank lines in files
Remove leading/trailing blank lines in .c, .h, .py, .rst, .yml, and
.yaml files.

Will avoid failures with the new CI test in
https://github.com/zephyrproject-rtos/ci-tools/pull/112, though it only
checks changed files.

Move the 'target-notes' target in boards/xtensa/odroid_go/doc/index.rst
to get rid of the trailing blank line there. It was probably misplaced.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-12-11 19:17:27 +01:00
Kumar Gala
24ae1b1aa7 include: Fix use of <misc/FOO.h> -> <sys/FOO.h>
Fix #include <misc/FOO.h> as misc/FOO.h has been deprecated and
should be #include <sys/FOO.h>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-12-10 08:39:37 -05:00
Flavio Ceolin
4afa181831 x86: fatal: Explicitly include syscall header
This file uses definitions from ia32/syscall.h; include that header
explicitly.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-12-09 12:47:15 -05:00
Andrew Boie
9849c7d3e7 x86: don't use privilege stack areas as a guard
This is causing problems, as if we create a thread in
a system call we will *not* be using the kernel page
tables if CONFIG_KPTI=n, resulting in a crash when
the later call to copy_page_tables() tries to initialize
the PDPT (which is in the same page as the privilege
stack).

Just don't fiddle with this page's permissions; we don't
need it as a guard area anyway since we have a stack
guard placed immediately before it, and this page
is unused if user mode isn't active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-09 12:40:31 -05:00
Ulf Magnusson
87e917a925 kconfig: Remove redundant 'default n' and 'prompt' properties
Bool symbols implicitly default to 'n'.

A 'default n' can make sense e.g. in a Kconfig.defconfig file, if you
want to override a 'default y' on the base definition of the symbol. It
isn't used like that on any of these symbols though.

Also replace some

    config
    	prompt "foo"
    	bool/int

with the more common shorthand

    config
    	bool/int "foo"

See the 'Style recommendations and shorthands' section in
https://docs.zephyrproject.org/latest/guides/kconfig/index.html.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-12-09 16:14:50 +01:00
David B. Kinder
416a174ac2 doc: fix misspellings in docs
Fix misspellings in docs (and Kconfig and headers processed into docs)
missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2019-12-04 14:12:53 -06:00
Ioannis Glaropoulos
98201228ee arch: enable MPU Gap filling by default in build without user mode
When we build without support for user mode, we do not need
a large number of MPU regions, so we should not allow having
MPU_GAP_FILLING unset. This would allow PRIV code execute from
SRAM, which is an unnecessary compromise on ARMv8-M builds
without USERSPACE support. We update the Kconfig dependencies
and add a sentence for clarification.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-11-22 11:36:59 +01:00
Alberto Escolar Piedras
4fd7cd5824 posix arch: Use zephyr_link_libraries() to set -m32
For some reason, some users have been facing a bizarre issue
in which the -m32 option was not being passed to the linker
by cmake when building for the POSIX arch as a 32bit target,
even though the option was actually supported.

Instead of using zephyr_ld_options() which checks if an
option is supported and drops it otherwise, use
zephyr_link_libraries()

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-11-21 13:08:56 +01:00
Andrew Fernandes
d52ba5102a arch/arm: Fix formatting in arch/arm/core/fatal esf_dump
arch/arm/core/fatal: fix formatting in esf_dump

Signed-off-by: Andrew Fernandes <andrew@fernandes.org>
2019-11-18 13:51:47 +01:00
Wayne Ren
1e80f25cd1 arch: arc: clear ici interrupt during init
clear the ici interrupts during init

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-11-13 12:04:18 -08:00
Wayne Ren
d56a12d955 arch: arc: do not use sleep instruction for nsim smp
It's found that in nsim_hs_smp, sometimes the cpu
doesn't respond to the inter-core interrupt after executing the sleep
instruction.

It may be a bug in nsim, but more time is needed to
investigate the root cause of this issue.

This commit is a workaround; as nsim is just an
instruction simulator, there is no direct impact.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-11-13 12:04:18 -08:00
Wayne Ren
63d3828fa3 arch: arc: split codes for SMP and codes for multicore
As reported in #19599, this commit splits the code for Zephyr SMP
from the code for ARC multicore.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-11-13 12:04:18 -08:00
Wayne Ren
efc00b5612 arch: arc: necessary fixes after normal idle is used for SMP
* necessary fixes after commit 11bd67db, where the IPI interrupt is used
to notify other cores to do a thread switch if necessary

* for arc, it's then necessary to ignore swap_ok and check whether a thread
switch is needed on exit from irq handling.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-11-13 12:04:18 -08:00
Flavio Ceolin
03a93d521b arc: core: Fix possible overrun
dyn_reg_info has MPU_DYNAMIC_REGION_AREAS_NUM elements; change the
if check to be greater than or equal to this number so that element
MPU_DYNAMIC_REGION_AREAS_NUM is never accessed, which would be an
out-of-bounds write.

CID: 205648
Fixes #20487

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-11-12 23:18:05 +01:00
Andrew Boie
1c97851726 x86: enable MMU on 64-bit with SMP
The races are believed to be resolved with the patch to
irq_offload(). Allow the MMU to be turned on and enable
it for qemu_x86_64.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-08 15:16:43 -08:00
Stephanos Ioannidis
92625d710d arch: arm: Make PLATFORM_SPECIFIC_INIT available to all ARM variants.
Move PLATFORM_SPECIFIC_INIT declaration from Cortex-M Kconfig to the
ARM arch Kconfig in order to make it available for all ARM variants.

The rationale is that there is really no good reason why
platform-specific initialisation should be a Cortex-M-specific feature
and that the Cortex-R port is expected to utilise this in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-08 19:17:58 +01:00
Stephanos Ioannidis
9695763f5f arch: x86: Inline direct ISR functions.
This commit inlines the direct ISR functions that were previously
implemented in irq_manage.c, since PR #20119 resolved the circular
dependency between arch.h and kernel_structs.h described in issue
#3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-08 15:50:23 +01:00
Stephanos Ioannidis
5ba0c8e0c5 arch: arm: Inline arch_isr_direct_header.
This commit inlines arch_isr_direct_header function that was previously
placed in irq_manage.c for no good reason (possibly in relation to the
FIXME for #3056).

In addition, since PR #20119 resolved the header circular
dependency issue described in issue #3056, this commit removes the
references to it in the code.

The reason for not inlining _arch_isr_direct_pm as the #3056 FIXME
suggests is that there is little to gain from doing so and there still
exists circular dependency for the headers required by this function
(#20119 only addresses kernel_structs.h, which is required for _current
and _kernel, which, in turn, is required for handling interrupt nesting
in many architectures; in fact, Cortex-A and Cortex-R port will require
it as well).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-08 12:23:05 +01:00
Andy Ross
8892406c1d kernel/sys_clock.h: Deprecate and convert uses of old conversions
Mark the old time conversion APIs deprecated, leave compatibility
macros in place, and replace all usage with the new API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-11-08 11:08:58 +01:00
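A sketch of the migration, assuming the new conversion helpers from the kernel's time-units API; the legacy name shown in the comment is representative of the deprecated family:

    #include <zephyr/types.h>
    #include <sys_clock.h>

    static u64_t hundred_ms_in_us(void)
    {
        /* before (deprecated style): ticks = z_ms_to_ticks(100); */
        u32_t ticks = k_ms_to_ticks_ceil32(100);

        return k_ticks_to_us_floor64(ticks);
    }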
Andrew Boie
4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Andrew Boie
a6de79b4af arm: remove unused header
This had two functions in it, neither were implemented
anywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Andrew Boie
3f1cf02e05 arc: rename some arc-specific functions
These are not part of the generic kernel to
architecture interface, rename appropriately to
reflect they are ARC-specific.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Andrew Boie
beba1e0a84 kernel: restrict irq_offload() to test cases
This API was only created to facilitate testing of kernel
objects in IRQ context, never for actual applications.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 23:16:35 +01:00
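Typical test-only usage, for reference (the test body is illustrative):

    #include <ztest.h>
    #include <irq_offload.h>

    static void offload_fn(void *param)
    {
        ARG_UNUSED(param);
        zassert_true(k_is_in_isr(), "routine should run in IRQ context");
    }

    void test_object_from_isr(void)
    {
        irq_offload(offload_fn, NULL);
    }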
Ioannis Glaropoulos
a1f5db0e0b arch: arm: mpu: allow optional region gap filling in the ARMv8-M MPU
We allow the run-time, full partitioning of the SRAM space by the
ARMv8-M MPU driver to be an optional feature.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-11-07 09:41:54 -08:00
Ioannis Glaropoulos
f103c68699 arch: arm: mpu: move mpu_configure_regions(.) in arm_mpu.c
This commit moves the function mpu_configure_regions(.) from
arm_mpu_v7_internal.h to arm_mpu.c. The function is to be used
by the both ARMv7-M MPU driver, as well as the ARMv8-M MPU
driver (when it behaves like the ARMv7-M driver).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-11-07 09:41:54 -08:00
Ioannis Glaropoulos
6d789510a5 arch: arm: mpu: introduce option to skip background MPU region fill
We introduce MPU_GAP_FILLING Kconfig option that instructs
the MPU driver to enforce a full SRAM partitioning, when it
programs the dynamic MPU regions (user thread stack, PRIV stack
guard and application memory domains) at context-switch. We
allow this to be configurable, in order to increase the number
of MPU regions available for application memory domain programming.

This option is introduced in arch/Kconfig, as it is expected
to serve as a cross-ARCH symbol. The option can be set by the
user during build configuration.

By not enforcing full partition, we may leave part of kernel
SRAM area covered only by the default ARM memory map. This
is fine for User Mode, since the background ARM map does not
allow nPRIV access at all. The difference is that kernel code
will be able to attempt fetching instructions from kernel SRAM
area without this leading directly to a MemManage exception.

Since this does not compromise User Mode, we make the skipping
of full partitioning the default behavior for the ARMv8-M MPU
driver. The application developer may be able to override this.

In the wake of this change we update the macro definitions in
arm_core_mpu_dev.h that derive the maximum number of MPU regions
for application memory domains.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-11-07 09:41:54 -08:00
Andrew Boie
0b27cacabd x86: up-level page fault handling
Now in common code for 32-bit and 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 11:19:38 -05:00
Andrew Boie
7d1ae023f8 x86: enable stack overflow detection on 64-bit
CONFIG_HW_STACK_PROTECTION is now available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Andrew Boie
9fb89ccf32 x86: consolidate some fatal error code
Some code for unwinding stacks and z_x86_fatal_error()
now in a common C file, suitable for both modes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Andrew Boie
64f6e2ac6b x86: consolidate STACK_ROUND_* definition
There was no definition for 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Stephanos Ioannidis
7bcfdadf81 arch: Simplify private header include path configuration.
When compiling the components under the arch directory, the compiler
include paths for arch and kernel private headers need to be specified.

This was previously done by adding 'zephyr_library_include_directories'
to CMakeLists.txt file for every component under the arch directory,
and this resulted in a significant amount of duplicate code.

This commit uses the CMake 'include_directories' command in the root
CMakeLists.txt to simplify specification of the private header include
paths for all the arch components.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Stephanos Ioannidis
2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
  headers either knowingly and unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_struct.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in the
   public code. This is necessary because many user-mode public codes
   reference the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If dependency is not required, simply omit
    * If dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.

This commit supersedes #20047, addresses #19666, and fixes #3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Ulf Magnusson
1f38ea77ba kconfig: Clean up 'config FOO' (two spaces) definitions
Must've been copy-pasted around.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-11-04 17:31:27 -05:00
Ulf Magnusson
bd6e04411e kconfig: Clean up header comments and make them consistent
Use this short header style in all Kconfig files:

    # <description>

    # <copyright>
    # <license>

    ...

Also change all <description>s from

    # Kconfig[.extension] - Foo-related options

to just

    # Foo-related options

It's clear enough that it's about Kconfig.

The <description> cleanup was done with this command, along with some
manual cleanup (big letter at the start, etc.)

    git ls-files '*Kconfig*' | \
        xargs sed -i -E '1 s/#\s*Kconfig[\w.-]*\s*-\s*/# /'

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-11-04 17:31:27 -05:00
Ulf Magnusson
975de21858 kconfig: Global whitespace/consistency cleanup
Clean up space errors and use a consistent style throughout the Kconfig
files. This makes reading the Kconfig files more distraction-free, helps
with grepping, and encourages the same style getting copied around
everywhere (meaning another pass hopefully won't be needed).

Go for the most common style:

 - Indent properties with a single tab, including for choices.

   Properties on choices work exactly the same syntactically as
   properties on symbols, so not sure how the no-indentation thing
   happened.

 - Indent help texts with a tab followed by two spaces

 - Put a space between 'config' and the symbol name, not a tab. This
   also helps when grepping for definitions.

 - Do '# A comment' instead of '#A comment'

I tweaked Kconfiglib a bit to find most of the stuff.

Some help texts were reflowed to 79 columns with 'gq' in Vim as well,
though not all, because I was afraid I'd accidentally mess up
formatting.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-11-01 15:53:23 +01:00
Carlo Caione
1f6d4e2705 irq: cortex-r: Fix wrong irq enabling
In the cortex-r port we are currently using GIC as a fake cascade
controller hooked to a fake parent IRQ #0. And in gic_init() we use
IRQ_CONNECT() to connect this dummy IRQ.

Unfortunately this value is shifted and offset when calling
irq_set_priority_next_level(), which tries to set the IRQ priority to a
value of 0xffffffff.

This value is offset again in gic_irq_set_priority(), which ends up
setting the priority of PPI #31.

Fix this by avoiding setting any priority for IRQ #0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2019-10-31 16:07:45 +01:00
David B. Kinder
241044f178 doc: fix misspellings in Kconfig files
Fix misspellings in Kconfig files missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2019-10-30 10:24:30 +01:00
Ulf Magnusson
e00bd00517 arch: arc/arm: kconfig: Remove unused DATA_ENDIANNESS_LITTLE symbol
Existed already in commit 8ddf82cf70 ("First commit"). Has never been
used.

Found with a script.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-30 09:14:12 +01:00
Ulf Magnusson
4f8e626312 arch: cortex_m/r: kconfig: Remove unused LDREX_STREX_AVAILABLE symbol
Existed already in commit 8ddf82c ("First commit"). Has never been
used.

Found with a script.

Also remove some pointless menus that have no visible symbols in them.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-30 08:15:09 +01:00
Ioannis Glaropoulos
03734eeb1e arch: arm: protect r0 in z_arch_switch_to_main_thread() inline ASM
Adding r0 to the clobber list in the inline ASM block of
z_arch_switch_to_main_thread(). This instructs the compiler
to not use r0 to store ASM expression operands, e.g. in
the subsequent instruction, msr PSP, %1.

We also do a minor optimization with the clearing of R1
before jumping to main.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-29 06:11:52 +01:00
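The general pattern, as a simplified sketch (not the actual switch-to-main code): any register written by hand inside the ASM block is listed in the clobbers, so the compiler will not allocate it for the % operands:

    static inline void set_psp_and_clear_regs(char *stack_ptr)
    {
        __asm__ volatile (
            "msr  PSP, %0\n\t"   /* switch to the given process stack */
            "movs r0, #0\n\t"    /* r0/r1 written by hand below ...    */
            "movs r1, #0\n\t"
            :
            : "r" (stack_ptr)
            : "r0", "r1", "memory"  /* ... so they must be clobbered   */
        );
    }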
Ulf Magnusson
a02c333963 arm/riscv: kconfig: Remove type from NUM_IRQS in defconfig files
Add a common definition for NUM_IRQS in arch/arm/core/Kconfig and
arch/riscv/Kconfig. That way, the type doesn't have to be given for
NUM_IRQS in all the Kconfig.defconfig files.

Trying to get rid of unnecessary "full" symbol definitions in
Kconfig.defconfig files, to make the organization clearer. It can also
help with finding unused symbols.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-28 08:27:28 -05:00
Daniel Leung
b7eb04b300 x86: consolidate x86_64 architecture, SoC and boards
There are two sets of code supporting x86_64: x86_64 using the x32 ABI,
and x86 long mode, and this consolidates both into one x86_64
architecture and SoC supporting truly 64-bit mode.

() Removes the x86_64:x32 architecture and SoC, and replaces
   them with the existing x86 long mode arch and SoC.
() Replace qemu_x86_64 with qemu_x86_long as qemu_x86_64.
() Updates samples and tests to remove reference to
   qemu_x86_long.
() Renames CONFIG_X86_LONGMODE to CONFIG_X86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-10-25 17:57:55 -04:00
Ulf Magnusson
e073c4f54c arch: arc: kconfig: Define FP_FPU_DA outside Kconfig.defconfig files
Define FP_FPU_DA in arch/arc/Kconfig to make it always available. That
way, the Kconfig.defconfig definitions can skip the type, making them
incomplete if the base definition of the symbol disappears. That makes
the organization easier to understand and errors easier to spot.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-25 16:38:54 -05:00
Ulf Magnusson
9343e25baa arch: arc: kconfig: Define CPU_EM* syms outside Kconfig.defconfig files
Define CPU_EM4* and CPU_EM6 in arch/arc/Kconfig to make them always
available. That way, the Kconfig.defconfig definitions can skip the
type, making them incomplete if the base definition of the symbol
disappears. That makes the organization easier to understand and errors
easier to spot.

The help texts were taken from
https://gcc.gnu.org/onlinedocs/gcc/ARC-Options.html. Help texts for
invisible symbols can be checked in the menuconfig too if you go into
show-all mode, so they're better than adding a comment.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-25 16:38:54 -05:00
Alberto Escolar Piedras
f974cb0ae1 posix arch: Untangle headers
posix_soc_if.h is meant to be a private header between
the POSIX ARCH, SOC, and maybe boards,
it should not contain definitions meant to be used directly
by the kernel or app.

Some definitions were placed here due to a dependency moebius
loop.
Unravel that by removing all header dependencies in posix_soc_if.h,
move those definitions out to a more logical place,
and while we are here reduce the amount of users of
irq_offload.h in POSIX arch related code

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-10-25 11:23:49 -07:00
Andrew Boie
abf5157fdd REVERTME: x86: disable X86_MMU if MP_NUM_CPUS != 1
We are working through some SMP issues with this enabled,
prevent this for now.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
e1f2292895 x86: intel64: set page tables for every CPU
The page tables to use are now stored in the cpuboot struct.
For the first CPU, we set it to the flat page tables, and then
update later in z_x86_prep_c() once the runtime tables have
been generated.

For other CPUs, by the time we get to z_arch_start_cpu()
the runtime tables are ready to go, and so we just install
them directly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
f6e82ea1bd x86: generate runtime 64-bit page tables
- Bring in CONFIG_X86_MMU and some related defines to
  common X86 Kconfig
- Don't set ARCH_HAS_USERSPACE for intel64 yet when
  X86_MMU is enabled
- Uplevel x86_mmu.c to common code
- Add logic for handling PML4 table and generating PDPTs
- move z_x86_paging_init() to common kernel_arch_func.h
- Uplevel inclusion of mmustructs.h to common x86 arch.h,
  both need it for memory domain defines

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
4c0d044863 x86: mmustructs: use Z_STRUCT_SECTION_ITERABLE()
This does the right thing for arches with 8-byte words.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
46540f4bb1 intel64: don't set global flag in ptables
May cause issues when runtime page tables are installed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
cdd721db3b locore: organize data by type
Program text, rodata, and data need different MMU
permissions. Split out rodata and data from the program
text, updating the linker script appropriately.

Region size symbols added to the linker script, so these
can later be used with MMU_BOOT_REGION().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Ulf Magnusson
2b61031c8f kconfig: Remove symbol types from Kconfig.defconfig files
Same deal as in commit 7fdb525754 ("kconfig: Use 'default' instead of
'def_bool' in Kconfig.defconfig files"), but I hacked Kconfiglib to also
find cases where the type is given separately as e.g.

    config FOO
            int
            default 3

Motivation (from a note in
https://docs.zephyrproject.org/latest/guides/kconfig/index.html):

    For a symbol defined in multiple locations (e.g., in a
    Kconfig.defconfig file in Zephyr), it is best to only give the
    symbol type for the "base" definition of the symbol, and to use
    'default' (instead of 'def_<type>' value) for the remaining
    definitions. That way, if the base definition of the symbol is
    removed, the symbol ends up without a type, which generates a
    warning that points to the other definitions. That makes the extra
    definitions easier to discover and remove.

It's also nice if 'def_bool' and the like turn into a semi-reliable flag
that the symbol is only defined in Kconfig.defconfig files. That might
be a sign that things could be cleaned up.

Will do a separate pass later to remove some symbols only defined in
Kconfig.defconfig files.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-24 12:40:22 -05:00
Ioannis Glaropoulos
17630f637e arch: arm: internal API to check return execution mode
We add an ARM internal API which allows the kernel to
infer the execution mode we are going to return to after
the current exception.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Ioannis Glaropoulos
f030608701 arch: add Kconfig to signify ability to detect nested IRQ
We introduce a Kconfig option to signify whether
an Architecture has the capability of detecting
whether execution is, currently, in a nested
exception.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Ioannis Glaropoulos
4f11b6f8cf arch: arm: re-implement z_arch_is_in_isr
We re-implement the z_arch_is_in_isr function
so it aligns with the implementation for other
ARCHes, i.e. returning true whenever any IRQ
or system exception is active.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Ioannis Glaropoulos
4aa3f71337 arch: arm: major cleanup and refactoring for fault function
This commit refactors and cleans up __fault, so the function
- reduces to supplying MSP, PSP, and EXC_RETURN to the C
  function for fault handling
- simplifies itself by removing the conditional
  implementation based on ARM Secure firmware.

The reason for that is simple: it is much better to write the
fault handling in C instead of assembly, so we really do only
what is strictly required, in assembly.

Therefore, the commit refactors the z_arm_fault() function
as well, organizing better the different functional blocks,
that is:
- unlocking interrupts
- retrieving the ESF
- asserting for HW errors
- printing additional error logs

The refactoring unifies the way the ESF is retrieved for the
different Cortex-M variants and security execution states.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Ioannis Glaropoulos
26e4d43916 arch: arm: fatal: add documentation for z_do_kernel_oops()
Add some documentation for ARM-specific function
z_do_kernel_oops, stating clearly that it is only
invoked inside SVC context. We also comment on
the validity of the supplied ESF.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Ioannis Glaropoulos
da6c3d14ce arch: arm: swap: add useful inline comment for SVC return
We add a useful inline comment in the SVC handler (written in
assembly), which identifies one of the function return points
a bit more clearly.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-24 10:12:08 -07:00
Kumar Gala
22e7449b73 kconfig: Introduce typed dt kconfig functions
Replace:
  dt_chosen_reg_addr
  dt_chosen_reg_size
  dt_node_reg_addr
  dt_node_reg_size

with:
  dt_chosen_reg_addr_int
  dt_chosen_reg_size_int
  dt_chosen_reg_addr_hex
  dt_chosen_reg_size_hex
  dt_node_reg_addr_int
  dt_node_reg_size_int
  dt_node_reg_addr_hex
  dt_node_reg_size_hex

So that we get the proper formatted string for the type of symbol.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-10-24 08:51:06 -05:00
Alexey Brodkin
19d4caaaab arch: arc: Implement z_arch_system_halt()
"brk" is a break-point instruction which among other things
halts ARC core. As compared to pure halt (which is "flag 1" for ARC)
it is much more convenient as it might be executed from either
secure mode or normal mode (with SecureShield enabled), while "flag"
instruction will raise a privilege violation exception if SecureShield
is enabled and we're in "normal" mode.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-10-24 13:26:41 +02:00
Ulf Magnusson
79d82f6805 xtensa: kconfig: Remove unused XTENSA_OMIT_HIGH_INTERRUPTS symbol
Unused since commit 6fd6b7e50a ("xtensa: remove legacy arch
implementation").

Found with a script.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-22 14:31:46 -05:00
Ulf Magnusson
bb3cd11bf1 xtensa: kconfig: Remove unused SW_ISR_TABLE symbol
Unused since commit 6fd6b7e50a ("xtensa: remove legacy arch
implementation").

Found with a script.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-21 16:15:41 -07:00
Andrew Boie
55cb961878 x86: arm: rename some functions
z_arch_ is only for those APIs in the arch interface.
Other stuff needs to be renamed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Andrew Boie
979b17f243 kernel: activate arch interface headers
Duplicate definitions elsewhere have been removed.

A couple functions which are defined by the arch interface
to be non-inline, but were implemented inline by native_posix
and intel64, have been moved to non-inline.

Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.

Some massaging of native_posix headers to get everything
in the right scope.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Wayne Ren
601b9afc9e arch: arc: implement DIRECT IRQ support
* implement DIRECT IRQ support both for normal irq and fast irq.
* add a separate interrupt stack for fast irq and use
  CONFIG_ARC_FIRQ_STACK to control it. This brings the shortest
  interrupt latency for fast irq.
* note that scheduling in DIRECT IRQ is not supported.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-10-21 09:06:17 -07:00
Ulf Magnusson
eba1aa4bc5 arch: posix: kconfig: Remove unused ARCH_POSIX_STOP_ON_FATAL_ERROR sym
Unused after commit 71ce8ceb18 ("kernel: consolidate error handling
code").

Found with a script.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-21 16:20:40 +02:00
Andrew Boie
f82beb984d x86: remove unused thread arch member
The PDPT was moved to the stack area since it has alignment
requirements, but never removed from here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-19 20:40:26 -07:00
Andy Ross
8bc3b6f673 arch/x86/intel64: Fix assumption with dummy threads
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field.  This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.

Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).

Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-10-19 12:09:32 -07:00
Stephanos Ioannidis
bb14d8c5e5 ext: hal: cmsis: Update references to HAS_CMSIS to HAS_CMSIS_CORE.
This commit updates all references to HAS_CMSIS to use HAS_CMSIS_CORE
instead. With the changes introduced to allow multiple CMSIS variants
to be specified, the latter is semantically equivalent to the former.

For more details, see issue #19717.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-10-18 14:01:07 -05:00
Ioannis Glaropoulos
cfe1b1de1a arch: arm: userspace: adapt assembly code for Cortex-M Baseline
In this commit we implement the assembly functions in userspace.S
- z_arm_userspace_enter()
- z_arm_do_syscall()
- z_arch_user_string_nlen()
for ARMv6-M and ARMv8-M Baseline architecture. We "inline" the
implementation for Baseline, along with the Mainline (ARMv7-M)
implementation, i.e. we rework only what is required to build
for Baseline Cortex-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-18 08:46:03 -05:00
Ioannis Glaropoulos
2d6bb624d6 arch: arm: swap_helper: adapt assembly code for Cortex-M Baseline
In this commit we implement the assembly functions in
swap_helper.S, namely
  - z_arm_pendsv()
  - z_arm_svc()
for ARMv6-M and ARMv8-M Baseline architecture. We "inline" the
implementation for Baseline, along with the Mainline (ARMv7-M)
implementation, i.e. we rework only what is required to build
for Baseline Cortex-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-18 08:46:03 -05:00
Ioannis Glaropoulos
b689700326 arch: arm: no HW stack protection capabilities in Cortex-M Baseline
We do not support HW Stack protection capabilities in
Cortex-M Baseline CPUs (unless they have built-in stack
overflow detection capability). We adapt the Kconfig
option to reflect this.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-18 08:46:03 -05:00
Ioannis Glaropoulos
2a13f91597 arch: arm: update minimum MPU region alignment for ARMv6-M
ARMv6-M architecture has an MPU that requires minimum region
size of 256 bytes.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-18 08:46:03 -05:00
Ulf Magnusson
ac9fe11f2f Kconfig: Remove copy-pasted comments on some promptless symbols
Remove the

    # Omit prompt to signify a "hidden" option

comments that appear on some symbols. They seem to have been copy-pasted
at random, as there are lots of promptless symbols that don't have them
(that's confusing in itself, because it might give the idea that the
ones with comments are special in some way).

I suspect those comments wouldn't have helped me much if I didn't know
Kconfig either. There's a lot more Kconfig documentation now too, e.g.
https://docs.zephyrproject.org/latest/guides/kconfig/index.html.

Keep some comments that give more information than the symbol having no
prompt.

Also do some minor drive-by cleanup.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-10-17 13:05:24 -05:00
Andrew Boie
1aaa3d29ce riscv: properly pull in irq_offload logic
This is an optional feature and no logic for it should
be present unless CONFIG_IRQ_OFFLOAD is enabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:33:52 -07:00
Andrew Boie
9c76d28bee x86: intel64: fatal: minor formatting improvements
Line up everything nicely, add leading '0x' to hex
addresses, and remove redundant newlines. Add
whitespace between the register name and contents
so the contents can be easily selected from a terminal.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie
24958f30d9 x86: move z_x86_early_serial_init()
This works with long mode as well, uplevel to
common kernel_arch_func.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie
fdd3ba896a x86: intel64: set the WP bit for paging
Otherwise, supervisor mode can write to read-only areas,
failing tests that check this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie
31620b90e2 x86: refactor mmustructs.h
The struct definitions for pdpt, pd, and pt entries have been
removed:

 - Bitfield ordering in a struct is implementation dependent,
   it can be right-to-left or left-to-right
 - The two different structures for page directory entries were
   not being used consistently, or when the type of the PDE
   was unknown
 - Anonymous structs/unions are GCC extensions

Instead these are now u64_t, with bitwise operations used to
get/set fields.

A new set of inline functions for fetching various page table
structures has been implemented, replacing the older macros.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
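A minimal sketch of the get/set-by-mask style described in the commit above,
using standard C types (u64_t maps to uint64_t). The bit positions and the
address mask come from the x86 page-table entry format; the helper names are
illustrative, not the actual Zephyr inline functions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* x86 page-table entry bits (architecturally defined) */
    #define PTE_P         (1ULL << 0)             /* present */
    #define PTE_RW        (1ULL << 1)             /* writable */
    #define PTE_ADDR_MASK 0x000FFFFFFFFFF000ULL   /* physical address field */

    static inline bool pte_is_present(uint64_t pte)
    {
        return (pte & PTE_P) != 0;
    }

    static inline uint64_t pte_get_addr(uint64_t pte)
    {
        return pte & PTE_ADDR_MASK;
    }

    static inline uint64_t pte_set_addr(uint64_t pte, uint64_t phys)
    {
        return (pte & ~PTE_ADDR_MASK) | (phys & PTE_ADDR_MASK);
    }

    int main(void)
    {
        uint64_t pte = pte_set_addr(0, 0x200000) | PTE_P | PTE_RW;

        printf("present=%d addr=0x%llx\n", pte_is_present(pte),
               (unsigned long long)pte_get_addr(pte));
        return 0;
    }

Because the entries are plain integers, nothing depends on bitfield ordering,
which is the portability problem the commit calls out.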
Andrew Boie
ab4d647e6d x86: mmu: get rid of x86_page_entry_data_t typedef
This hasn't been necessary since we dropped support for 32-bit
non-PAE page tables. Replace it with u64_t and scrub any
unnecessary casts left behind.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
Andrew Boie
e3ab43580c x86: move mmustructs.h
This will be used for both 32-bit and 64-bit mode.
This header gets pulled in by x86's arch/cpu.h, so put
it in include/arch/x86/.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
Ioannis Glaropoulos
a6accb3e75 arch: arm: swap-helper: simplify several assembly expressions
Some assembly simplifications, to make the code common for the
ARMv6 and ARMv7 architectures.

We can use ldrb directly for reading the SVC encoding; this
removes the need for ANDing the result with 0xff right below.
We remove an immediate value of 0 from an str instruction, as
it's redundant.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-14 09:25:32 -07:00
Ioannis Glaropoulos
9ea2df1788 arch: arm: documentation improvements in ARM assembly sources
Add more documentation and inline explanatory comments in
assembly sources swap_helper.S and userspace.S and remove
redundant/wrong documentation when applicable.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-14 09:25:32 -07:00
Ioannis Glaropoulos
237033c61b arch: arm: userspace: z_arm_userspace_enter: reduce push/pop overhead
ARM user space requires ARM_MPU. We can, therefore,
remove the unnecessary #ifdef CONFIG_ARM_MPU blocks
in userspace.S. In addition, we do minor refactoring
in z_arm_userspace_enter(), and z_arm_pendsv(), and
z_arm_svc(), aiming at reducing the push/pop overhead
as much as possible.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-14 09:25:32 -07:00
Andrew Boie
8ffff144ea kernel: add architecture interface headers
include/sys/arch_inlines.h will contain all architecture APIs
that are used by public inline functions and macros,
with implementations deriving from include/arch/cpu.h.

kernel/include/arch_interface.h will contain everything
else, with implementations deriving from
arch/*/include/kernel_arch_func.h.

Instances of duplicate documentation for these APIs have been
removed; implementation details have been left in place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-11 13:30:46 -07:00
Andrew Boie
e340f8d22e x86: intel64: enable no-execute
Set the NXE bit in the EFER MSR so that the NX bit can
be set in page tables. Otherwise, the NX bit is treated
as reserved and leads to a fault if set.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-10 13:41:02 -07:00
Ioannis Glaropoulos
61c60be302 arch: arm: core: add Cortex-R in the files description headers
arch/arm/core is shared between Cortex-M and Cortex-R, so
enhance the file description headers accordingly.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-09 17:54:16 -05:00
Ioannis Glaropoulos
463d8f7e86 arch: arm: clean up inclusions in assembly files
A clean-up commit that removes unnecessary inclusions from
assembly files in arm/core and arm/core/cortex_m. It also
organizes the inclusions based on the following order and
set of rules:
- never include kernel_structs.h
- include toolchain.h and linker/sections.h in all ASM files
- include offsets-short.h, if ASM accesses offset constants
- include arch/cpu.h, if ASM accesses CMSIS constants
  (defined locally in include/arch/arm)
- include file-specific headers, if needed (e.g. vector-table.h)

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-10-09 17:54:16 -05:00
Wayne Ren
eb2422ed76 arch: arc: enlarge the exception handling stack
After recent changes in Zephyr's fault handling, e.g. using the log
subsystem to replace printk, exception handling requires more stack,
otherwise a stack overflow may happen and crash the system.

This commit adds a Kconfig option for the exception handling stack
size, with a larger default size.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-10-09 14:21:56 -04:00
Alberto Escolar Piedras
b237597ab9 arch: posix: isolate arch-soc/board IF from kernel-arch IF
The POSIX ARCH delegates some of the tasks which normally
are taken care of by the ARCH to the SOC or BOARD levels.

To avoid changes in the kernel-arch IF propagating into
the arch-soc and arch-board interfaces (which would break
off-tree posix boards), isolate them.

Also move arch inline functions into the arch.h header,
and out of the headers which specify the posix arch-soc
and arch-board interfaces.

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
d1aca7f11b x86_64: fix arch headers
arch/cpu.h and kernel_arch_func.h are expected to define different
functions, per the architecture interface.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
7f715bd113 intel64: add inline definition of z_arch_switch()
This just calls the assembly code. Needed to be
inline per the architecture API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
c968422a78 xtensa: fix z_arch_switch()
This was in the wrong header and declared as a macro instead
of an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
747fec226c arm: move z_arch_switch_to_main_thread to C code
There's no compelling reason why this should be inline, unlike all
other arches; it's a large function, called exactly once.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
54506b5a9b arches: fix z_arch_is_in_isr() definition
For some reason these were implemented as macros when they
should be inline functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Andrew Boie
f0f700c121 posix: expose z_arch_irq_lock/unlock as inlines
The specification for these arch APIs is to have them inline,
and the bodies were just oneliners calling another function
anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Stephanos Ioannidis
fe85c2e2e0 arch: arm: Add Cortex-R exception handling documentation.
Add in-line documentation describing the process of register
preservation and exception handling on Cortex-R.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-10-08 16:03:32 -05:00
Stephanos Ioannidis
8c4de7e4b0 arch: arm: Remove unnecessary register preservation in Cortex-R port.
The interrupt exit and swap service routines for Cortex-R
unnecessarily preserve r0 and lr registers when making function calls
using bl instruction.

In case of _IntExit in exc_exit.S, the r0 register containing the
caller mode is preserved at the top, and the lr register can safely be
assumed to have been saved into the system mode stack by the interrupt
service routine.

In case of __svc in swap_helper.S, since the function saves lr to the
system mode stack at the top and exits through _IntExit, it is not
necessary to preserve lr register when executing bl instructions.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-10-08 16:03:32 -05:00
Charles E. Youse
f7cfb4303b arch/x86: do not assume MP means SMP
It's possible to have multiple processors configured without using the
SMP scheduler, so don't make definitions dependent on CONFIG_SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
e6a31a9e89 arch/x86: (Intel64) initialize TSS interrupt stack from cpuboot[]
In non-SMP MP situations, the interrupt stacks might not exist, so
do not assume they do. Instead, initialize the TSS IST1 from the
cpuboot[] vector (meaning, on APs, the stack from z_arch_start_cpu).
Eliminates redundancy at the same time.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
643661cb44 arch/x86: declare z_x86_prep_c() in kernel_arch_func.h
And remove the ad hoc prototype in cpu.c for Intel64.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
5abab591c2 arch/x86: (Intel64) make z_arch_start_cpu() synchronous
Don't leave z_arch_start_cpu() until the target CPU has been started.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
3b145c0d4b arch/x86: (Intel64) do not lock interrupts around irq_offload()
This is the Wrong Thing(tm) with SMP enabled. Previously this
worked because interrupts would be re-enabled in the interrupt
entry sequence, but this is no longer the case.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
66510db98c arch/x86: (Intel64) add scheduler IPI support
Add z_arch_sched_ipi() and such to enable scheduler IPIs when SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
74e3717af6 arch/x86: (Intel64) fix conditional assembly in locore.S
The conditional assembly was ignoring the rest of the expression, though
the effect was harmless (unreachable code was included in some builds).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
f361798cdf arch/x86: limit number of IRQ vectors to 224
Trivial change to the Kconfig: the first 32 vectors are reserved,
so it's not possible to have 256 IRQ vectors. Change max to 224.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
3eb1a8b59a arch/x86: (Intel64) implement SMP support
Add duplicate per-CPU data structures (x86_cpuboot, tss, stacks, etc.)
for up to 4 total CPUs, add code in locore and z_arch_start_cpu().

The test board, qemu_x86_long, now defaults to 2 CPUs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
2808908816 arch/x86: alter signature of z_x86_prep_c() function
Take a dummy first argument, so that the BSP entry point (z_x86_prep_c)
has the same signature as the AP entry point (smp_init_top).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
f9eaee35b8 arch/x86: (Intel64) use per-CPU parameter struct for CPU startup
A new 'struct x86_cpuboot' is created as well as an instance called
'x86_cpuboot[]' which contains per-CPU boot data (initial stack,
entry function/arg, selectors, etc.). The locore now consults this
table to set up per-CPU registers, etc. during early boot.

Also, rename tss.c to cpu.c as its scope is growing.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
edf5761c83 arch/x86: (Intel64) rename kernel segment constants
There's no need to qualify the 64-bit CS/DS selectors, and the GS and
TR selectors are renamed CPU0_GS and CPU0_TR as they are CPU-specific.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
90bf0da332 arch/x86: (Intel64) optimize and re-order startup assembly sequence
In some places the code was being overly pedantic; e.g., there is no
need to load our own 32-bit descriptors because the loader's are fine
for our purposes. We can defer loading our own segments until 64-bit.

The sequence is re-ordered to facilitate code sharing between the BSP
and APs when SMP is enabled (all BSP-specific operations occur before
the per-CPU initialization).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
17e135bc41 arch/x86: (Intel64) clear BSS before entering long mode
This is really just to facilitate sharing CPU bootstrap code between
the BSP and the APs, moving the clear operation out of the way.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
a981f51fe6 arch/x86: drivers/loapic_intr.c: move local APIC initialization
In the general case, the local APIC can't be treated as a normal device
with a single boot-time initialization - on SMP systems, each CPU must
initialize its own. Hence the initialization proper is separated from
the device-driver initialization, and said initialization is called
from the early startup-assembly code when appropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
418e5c1b38 arch/x86: factor out common assembly startup code
The 32-bit and 64-bit assembly startup sequences share quite a
bunch of common code, so it's factored out into one file to avoid
repeating ourselves (and potentially falling out of sync).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
25a7cc1136 arch/x86: (Intel64) add missing linker symbols
The linker script was missing symbols that defined the boundaries
of kernel memory segments (_image_rom_end, etc.). These are added
so that core/memmap.c can properly account for those segments.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
8279af76c8 arch/x86: elevate prep_c.c to common code
Elevate the previously 32-bit-only z_x86_prep_c() function to common
code, so both 32-bit and 64-bit arches now enter the kernel this way.
Minor changes to prep_c.c to make it build with the SMP scheduler on.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
8d97750eef arch/x86: (Intel64) add z_arch_curr_cpu() to enable CONFIG_SMP=y
And set qemu_x86_long board to build with CONFIG_SMP=y by default.
Apparently two benchmark tests - latency_measure and sys_kernel -
do not work with the SMP scheduler, so those tests are disabled.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
cc9be2e982 arch/x86: (Intel64) start up on _interrupt_stack, not _exception_stack
Simply for consistency with other platforms.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Jan Van Winkel
23a866b828 cmake: toolchain abstraction for undefined behaviour sanitizer
Added toolchain abstraction for undefined behaviour sanitizer

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-10-07 15:00:20 +02:00
Andrew Boie
f0ddbd7eee x86: abstract toplevel page table pointer
This patch is a preparatory step in enabling the MMU in
long mode; no steps are taken to implement long mode support.

We introduce struct x86_page_tables, which represents the
top-level data structure for page tables:

- For 32-bit, this will contain a four-entry page directory
  pointer table (PDPT)
- For 64-bit, this will (eventually) contain a page map level 4
  table (PML4)

In either case, this pointer value is what gets programmed into
CR3 to activate a set of page tables. There are extra bits in
CR3 to set for long mode; we'll get around to that later.

This abstraction will allow us to use the same APIs that work
with page tables in either mode, rather than hard-coding that
the top level data structure is a PDPT.

z_x86_mmu_validate() has been re-written to make it easier to
add another level of paging for long mode, to support 2MB
PDPT entries, and correctly validate regions which span PDPTE
entries.

Some MMU-related APIs moved out of 32-bit x86's arch.h into
mmustructs.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-04 15:53:49 -07:00
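A hedged sketch of the kind of top-level abstraction the commit above
introduces: one type whose contents differ by mode, with its address being
what ultimately goes into CR3. The field names, sizes, and the mode switch
below are illustrative assumptions, not the real struct x86_page_tables.

    #include <stdint.h>

    #define DEMO_LONGMODE 0   /* flip to 1 to sketch the 64-bit layout */

    struct x86_page_tables_demo {
    #if DEMO_LONGMODE
        uint64_t pml4[512];   /* 64-bit: page map level 4 */
    #else
        uint64_t pdpt[4];     /* 32-bit PAE: four-entry PDPT */
    #endif
    } __attribute__((aligned(4096)));

    /* Callers work with the abstract type; only the lowest-level code
     * cares which table lives inside.  The CR3 value is essentially the
     * physical address of this object (mode-specific flag bits aside).
     */
    static inline uintptr_t demo_cr3_value(struct x86_page_tables_demo *ptables)
    {
        return (uintptr_t)ptables;
    }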
Jan Van Winkel
89e97d13ec cmake: toolchain abstraction for address sanitizer
Added toolchain abstraction for address sanitizer

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-10-04 13:39:52 -04:00
Andrew Boie
8c98a97581 arm: arch code naming cleanup
This patch re-namespaces global variables and functions
that are used only within the arch/arm/ code to be
prefixed with z_arm_.

Some instances of CamelCase have been corrected.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-04 10:46:23 +02:00
Andrew Boie
13433fcc4b x86: fix EXCEPTION_DEBUG dependency
This requires logging, not printk.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 16:15:55 -05:00
Wayne Ren
56650aff7e arch: arc: fix the bug in prologue of sys call handling
* the old code may not save the caller-saved regs correctly,
  e.g. r7-r12, because the sys call entry is called in the form
  of a static inline function and compiler optimizations may not
  save all the caller-saved regs.

* the new code uses the irq stack frame as the sys call frame and
  guarantees all the caller-saved regs are pushed and popped correctly.

* the side effect of the new code is more stack operations and a little
  overhead.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-10-01 09:22:30 -04:00
Andrew Boie
89d4c6928e kernel: add arch abstraction for irq_offload()
This makes it clearer that this is an API that is expected
to be implemented at the architecture level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 11:11:42 +02:00
Andrew Boie
bd6d8dc070 arc: rename k_cpu_sleep_mode
This is only used internally by the ARC arch code
and has been renamed to z_arc_cpu_sleep_mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
2c1fb971e0 kernel: rename __swap
This is part of the core kernel -> architecture API and
has been renamed to z_arch_swap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
fe031611fd kernel: rename main/idle thread/stacks
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.

The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory and isn't accessible
to user threads anyway, just expose the actual thread
objects.

The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines
in init.c are removed; just use the Kconfigs they derive
from.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
fa22ecffec x86: remove redundant idle timestamp setting
This metric shows when the system first enters an idle
state, which has already been recorded in the arch-
independent implementation of the idle thread.

Only x86 was doing this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
f6fb634b89 kernel: rename kernel_arch_init()
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
07525a3d54 kernel: add arch interface for idle functions
k_cpu_idle() and k_cpu_atomic_idle() were being directly
implemented by arch code.

Rename these implementations to z_arch_cpu_idle() and
z_arch_cpu_atomic_idle(), and call them from new inline
function definitions in kernel.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
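The wrapper shape the commit above describes can be sketched as follows:
the public idle calls become inline functions that forward to the arch
implementations. This is a simplified sketch, not the verbatim Zephyr code.

    /* Arch-provided implementations (defined per architecture, often in asm). */
    void z_arch_cpu_idle(void);
    void z_arch_cpu_atomic_idle(unsigned int key);

    /* Public kernel API forwards inline to the arch layer. */
    static inline void k_cpu_idle(void)
    {
        z_arch_cpu_idle();
    }

    static inline void k_cpu_atomic_idle(unsigned int key)
    {
        z_arch_cpu_atomic_idle(key);
    }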
Andrew Boie
845aa6d114 kernel: renamespace arch_nop()
This is part of the core kernel -> architecture interface
and has been renamed to z_arch_nop().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
9e1dda8804 timing_info: rename globals
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Anas Nashif
40ca638a00 arch: posix: fix function name in comment
posix_core_main_thread_start does not exist.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Charles E. Youse
7637571871 arch/x86: rename CONFIG_X86_ACPI and related to CONFIG_ACPI
ACPI is predominantly x86, and only currently implemented on x86,
but it is employed on other architectures, so rename accordingly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
200056df2f arch/x86: rename CONFIG_X86_MULTIBOOT and related to CONFIG_MULTIBOOT
Simple naming change, since MULTIBOOT is clear enough by itself and
"namespacing" it to X86 is unnecessary and/or inappropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
2cf52476ea arch/x86: add support for non-trivial memory maps
x86 has more complex memory maps than most Zephyr targets. A mechanism
is introduced here to manage such a map, and some methods are provided
to populate it (e.g., Multiboot).

The x86_info tool is extended to display memory map data.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
81eeff83b0 arch/x86: multiboot: migrate multiboot initialization to early C
Originally, the multiboot info struct was copied in the early assembly
language code. This code is moved to a C function in multiboot.c for
two reasons:

1. It's about to get more complicated, as we want the ability to use
   a multiboot-provided memory map if available, and
2. this will facilitate its sharing between 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
1ffab8a5f2 arch/x86: rudimentary ACPI support
Implement a simple ACPI parser with enough functionality to
enumerate CPU cores and determine their local APIC IDs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Wayne Ren
c5c3fdd67b arch: arc: fix the bug in _firq_enter
* In ARC, pop reg ==> sp=sp-4; *sp= b; The original code has a bug:
  the save of ilink (st ilink [sp]) will corrupt the interrupted stack's
  content. This commit fixes this bug and makes the code easier to
  understand

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-09-27 11:36:17 -04:00
Mrinal Sen
1246cb8cef debug: tracing: Remove unneeded abstraction
Various C and Assembly modules
make function calls to z_sys_trace_*. These merely call
corresponding functions sys_trace_*. This commit
is to simplify these by making direct function calls
to the sys_trace_* functions from these modules.
Subsequently, the z_sys_trace_* functions are removed.

Signed-off-by: Mrinal Sen <msen@oticon.com>
2019-09-26 06:26:22 -04:00
Ioannis Glaropoulos
4be1f45d1e arch: arm: minor clean-up in irq_init.c and timing_info_bench.c
- Remove redundant inclusions in irq_init.c
- Remove comment about the thread_abort function,
  which does not belong in this file (probably
  left over from code refactoring)
- Include arm cmsis.h only under #ifdef CONFIG_ARM

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-09-24 16:59:42 +02:00
Charles E. Youse
66a2ed2360 arch/x86: (Intel64) move RAX to volatile register set
This used to be part of the "restore always" set of registers because
__swap was expected to return a value.  No longer required, so RAX is
moved to the volatile registers and we save a few cycles occasionally.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
074ce889fb arch/x86: (Intel64) migrate from __swap to z_arch_switch()
The latter primitive is required for SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
32fc239aa2 arch/x86: (Intel64) use TSS for per-CPU variables
A space is allocated in the TSS for per-CPU variables. At present,
this is only a 'struct _cpu *' to find the _kernel CPU struct. The
locore routines are rewritten to find _current and _nested via this
pointer rather than referencing the _kernel global directly.

This is obviously in preparation for SMP support.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
1d8c80bc05 arch/x86: (Intel64) move STACK_SENTINEL check
This function call was erroneously inserted between the instruction
that set the Z flag and the instruction that tested the Z flag. The
call is moved up a few instructions where it can't junk CPU state.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
e4d5ab363c arch/x86: (Intel64) define TSS in C, not assembly
Declare the 64-bit TSS as a struct, and define the instance in C.
Add a data segment selector that overlaps the TSS and keep that
loaded in GS so we can access the TSS via a segment-override prefix.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
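For reference, the 64-bit TSS field layout is fixed by the Intel SDM, so
"declare the TSS as a struct" amounts to something like the sketch below.
Whether Zephyr's struct appends extra per-CPU data after the architectural
fields (as the per-CPU-variables commit above suggests) is not shown here;
the type name is illustrative.

    #include <stdint.h>

    /* 64-bit Task State Segment, architectural layout per the Intel SDM. */
    struct x86_tss64_demo {
        uint32_t reserved0;
        uint64_t rsp0;      /* stack pointers for privilege levels 0-2 */
        uint64_t rsp1;
        uint64_t rsp2;
        uint64_t reserved1;
        uint64_t ist1;      /* interrupt stack table entries 1-7 */
        uint64_t ist2;
        uint64_t ist3;
        uint64_t ist4;
        uint64_t ist5;
        uint64_t ist6;
        uint64_t ist7;
        uint64_t reserved2;
        uint16_t reserved3;
        uint16_t iomapb;    /* I/O map base address */
    } __attribute__((packed, aligned(8)));

    /* The architectural portion must be exactly 104 bytes. */
    _Static_assert(sizeof(struct x86_tss64_demo) == 104, "TSS64 layout");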
Charles E. Youse
a1afde043c arch/x86: share declaration of _interrupt_stack
This is moved from arch/x86/include/ia32/kernel_arch_func.h to the
common header arch/x86/include/kernel_arch_func.h so it can be shared.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
a8de9577c9 arch/x86: restructure ISR stacks (conceptually)
This is largely a conceptual change rather than an actual change.
Instead of using an array of interrupt stacks (one for each IRQ
nesting level), we use one interrupt stack and subdivide it. The
effect is the same, but this is more in line with the Zephyr model
of one ISR stack per CPU (as reflected in init.c).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
bd094ddac2 arch/x86: inline x2APIC EOI in 64-bit code
Like its 32-bit sibling, the 64-bit code should EOI inline rather than
invoking a function; the call overhead defeats the performance advantages
of x2APIC.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
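Inlining the EOI is cheap because in x2APIC mode end-of-interrupt is just a
WRMSR of zero to the architectural x2APIC EOI register (MSR 0x80B). A hedged
sketch of that single-instruction sequence (the function name is illustrative;
Zephyr's actual macro and placement may differ):

    #include <stdint.h>

    #define X2APIC_MSR_EOI 0x80B  /* x2APIC EOI register (architectural MSR) */

    /* Signal end-of-interrupt with one WRMSR at the ISR tail, avoiding a
     * function call: ECX selects the MSR, EDX:EAX holds the (zero) value.
     */
    static inline void x2apic_eoi_inline(void)
    {
        __asm__ volatile("wrmsr"
                 :
                 : "c"(X2APIC_MSR_EOI), "a"(0), "d"(0)
                 : "memory");
    }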
Charles E. Youse
3036faf88a tests/benchmarks: fix BOOT_TIME_MEASUREMENT
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.

Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(which is the only other platform which supports this feature).

Cleanups along the way:

As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.

In addition, this feature has been enabled in long mode.

Fixes: #19144

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-21 16:43:26 -07:00
Charles E. Youse
a95c94cfe2 arch/x86/ia32: move IA32 thread state to _thread_arch
There are not enough bits in k_thread.thread_state with SMP enabled,
and the field is (should be) private to the scheduler, anyway. So
move state bits to the _thread_arch where they belong.

While we're at it, refactor some offset data w/r/t _thread_arch
because it can be shared between 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-20 14:31:18 -04:00
Charles E. Youse
a224998355 arch/x86/intel64: do not use thread_state for arch data
k_thread.thread_state (or rather, _thread_base.thread_state) should be
private to the kernel/scheduler, so flags previously stored there are
moved to _thread_arch where they belong.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-20 14:31:18 -04:00
Huang Qi
2c277077fe arch: riscv: Use infinite loop instead of simple wfi to halt slave core
If it's a multicore system, loop on wfi forever to halt the slave cores

Signed-off-by: Huang Qi <757509347@qq.com>
2019-09-20 10:42:28 -04:00
Huang Qi
19da4ee379 arch: riscv: Add simple workaround to boot multicore system
Just boot master core, halt others

Signed-off-by: Huang Qi <757509347@qq.com>
2019-09-20 10:42:28 -04:00
Wentong Wu
da31c81737 linker: add custom align size to reduce alignment memory wasting
When CONFIG_CUSTOM_SECTION_ALIGN is enabled, less alignment memory
is needed for the image rom region. But this requires carefully
configuring the MPU region and sub-regions (ARMv7-M) to cover this feature.

Fixes: #17337.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-09-19 21:38:31 -04:00
Ioannis Glaropoulos
a78e5a267f arch: arm: cmse: re-introduce workaround for typeof
The GNU ARM Embedded "8-2019-q3-update" toolchain
erroneously uses "typeof" instead of "__typeof__".
To work around this we define typeof to be able to
support it.

This reverts commit 01a71eae3d.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-09-17 16:31:42 +02:00
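The workaround described above boils down to a one-line alias so headers
that spell the GNU extension as "typeof" still compile; the guard shown here
is a simplified sketch, not the exact Zephyr header content.

    /* The 8-2019-q3 GNU ARM Embedded CMSE headers use "typeof"; map it to
     * the always-available "__typeof__" spelling so they also build in
     * strict ISO modes.
     */
    #ifndef typeof
    #define typeof __typeof__
    #endif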
Watson Zeng
9451bde98b arc: arc_connect: bug fix in arc_connect
bug fix in arc_connect

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2019-09-17 20:40:38 +08:00
Jan Van Winkel
cb56813374 native: removed redundant compiler args MMD & MP
Removed redundant compiler command line arguments MMD and MP as MD is
already specified

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-09-17 11:27:19 +02:00
Jan Van Winkel
f3eec6cba3 cmake: toolchain abstraction for coverage
Added toolchain abstraction for coverage for both gcc and clang.

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-09-17 11:25:29 +02:00
Charles E. Youse
dc0314af7f arch/x86: honor CONFIG_INIT_STACKS in 64-bit mode
Initialize the IRQ stacks with 0xAA bytes when the option is enabled.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
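The fill itself is a plain pattern write; a minimal sketch (stack symbol and
size are illustrative) of what honoring CONFIG_INIT_STACKS means for the IRQ
stacks:

    #include <stdint.h>
    #include <string.h>

    #define DEMO_IRQ_STACK_SIZE 4096

    static uint8_t demo_irq_stack[DEMO_IRQ_STACK_SIZE];

    /* With CONFIG_INIT_STACKS, pre-fill the stack with 0xAA so a later
     * high-water-mark scan can tell how much of it was ever written.
     */
    static void demo_init_irq_stack(void)
    {
        memset(demo_irq_stack, 0xAA, sizeof(demo_irq_stack));
    }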
Charles E. Youse
d506489999 arch/x86: optimize nested IRQ entry/exit
We don't need to save the ABI caller-save registers here, because
we don't preempt threads from nested IRQ contexts.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
468cd4d98f arch/x86: add support for CONFIG_STACK_SENTINEL
Apparently I missed the arch-dependent bit of this. Fixed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
a5eea17dda arch/x86: add SSE floating-point to Intel64 subarch
This is a naive implementation which does "eager" context switching
for floating-point context, which, of course, introduces performance
concerns. Other approaches have security concerns, SMP implications,
and impact the x86 arch and Zephyr project as a whole. Discussion is
needed, so punting with the straightforward solution for now.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
91896dde5e arch/x86: select CONFIG_64BIT when CONFIG_X86_LONGMODE
We need a 64-bit clean kernel when in long mode.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
2bb59fc84e arch/x86: add nested interrupt support to Intel64
Add support for multiple IRQ stacks and interrupt nesting.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
cdb9ac3895 arch/x86: Add exception reporting code for Intel64
Fleshed out z_arch_esf_t and added code to build this frame when
exceptions occur. Created a separate small stack for exceptions and
shifted the initialization code to use this instead of the IRQ stack.

Moved IRQ stack(s) to irq.c.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
a10f2601cc arch/x86: add IRQ offloading to Intel64 subarch
The IRQ_OFFLOAD_VECTOR config option is also moved to the arch level,
as it is shared between both 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
0e0199387a arch/x86: set default stack sizes
Using the arch Kconfig here, instead of kernel/Kconfig. Intel64 with
the SysV ABI requires some pretty big stacks. These 4K-8K defaults
are arguably a bit small, but the Zephyr defaults are REALLY too small.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
4ddaa59a89 arch/x86: initial Intel64 support
First "complete" version of Intel64 support for x86. Compilation of
apps for supported boards (read: up_squared) with CONFIG_X86_LONGMODE=y
is now working. Booting, device drivers, interrupts, scheduling, etc.
appear to be functioning properly. Beware that this is ALPHA quality,
not ready for production use, but the port has advanced far enough that
it's time to start working through the test suite and samples, fleshing
out any missing features, and squashing bugs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
58bbbddbef arch/x86: fix multiboot.c pointer cast
Widen the integer to pointer size before conversion, to make
explicit the intent (and silence the compiler warning). Also
fix a minor bug involving a duplicate (and thus dead) store.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
34307a54f0 arch/x86: initial Intel64 bootstrap framework
This patch adds basic build infrastructure, definitions, a linker
script, etc. to use the Zephyr and 0.10.1 SDK to build a 64-bit
ELF binary suitable for use with GRUB to minimally bootstrap an
Apollo Lake (e.g., UpSquared) board. The resulting binary can hardly
be called a Zephyr kernel as it is lacking most of the glue logic,
but it is a starting point to flesh those out in the x86 tree.

The "kernel" builds with a few harmless warnings, both with GCC from
the Zephyr SDK and with ICC (which is currently being worked on in
a separate branch). These warnings are either related to pointer size
differences (since this is an LP64 build) or to dummy functions
that will be replaced with working versions shortly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
c58b28ab0a arch/x86: add placeholders for Intel64 headers
Use different headers for kernel_arch_{func,thread}.h when in
CONFIG_X86_LONGMODE, and add placeholders for Intel64 versions.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
5e10d590c6 arch/x86: refactor kernel_arch_data.h
Some definitions may be shared between subarchitectures, so refactor
accordingly. The definitions are also modified to separate bits. A
placeholder is created for the Intel64 definitions.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
c18f028366 arch/x86: refactor offsets.c
The IA32 and Intel64 subarchitectures will generate different offset
symbols, so they are refactored. No functional change.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
d25ef6ed44 arch/x86/pcie: use Z_IRQ_TO_INTERRUPT_VECTOR() macro
The _irq_to_interrupt_vector[] array shouldn't be accessed directly,
as there is a macro for this.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
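The point of the change above is to keep call sites behind one accessor; a
sketch of that shape (the real Zephyr macro may differ in details such as
type or offset handling):

    #include <stdint.h>

    /* Table mapping IRQ lines to IDT vectors (contents filled in elsewhere). */
    extern uint8_t _irq_to_interrupt_vector[];

    /* Call sites such as the PCIe code use the macro instead of indexing
     * the table directly, so the mapping can change in one place.
     */
    #define Z_IRQ_TO_INTERRUPT_VECTOR(irq) \
        ((unsigned int)_irq_to_interrupt_vector[irq])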
Charles E. Youse
6cf904cc0d arch/x86: rename X64 references to Intel64
Intel's X86-64 implementation is officially "Intel64", so follow suit.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Kumar Gala
8ce0cf0126 kconfig: Convert device tree chosen properties to new kconfigfunctions
Convert how we get the various chosen properties like "zephyr,console"
to use the new kconfig functions like dt_chosen_to_label.

Because of how kconfig parses things we define a set of variables of the
form DT_CHOSEN_Z_<PROP>, since commas are parsed as field separators in
macros.

This conversion allows us to remove code in gen_defines.py for the
following chosen properties:

zephyr,console
zephyr,shell-uart
zephyr,bt-uart
zephyr,uart-pipe
zephyr,bt-mon-uart
zephyr,uart-mcumgr
zephyr,bt-c2h-uart

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-09-13 11:42:34 -05:00
Andrew Boie
a470ba1999 kernel: remove z_fatal_print()
Use LOG_ERR instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-12 05:17:39 -04:00
Andrew Boie
6fd6b7e50a xtensa: remove legacy arch implementation
We re-wrote the xtensa arch code, but never got around
to purging the old implementation.

Removed those boards which hadn't been moved to the new
arch code. These were all xt-sim simulator targets and not
real hardware.

Fixes: #18138

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-12 01:26:34 -04:00
Charles E. Youse
ee525c2597 arch/x86: inline x2APIC EOI
From the Jailhouse days, this has been a function call. That's silly.
We now inline the EOI in the ISR when in x2APIC mode. Also clean up
z_irq_controller_eoi(), so it now uses the inline macros.

Also, we now enable x2APIC on up_squared by default.

Fixes: #17133

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-12 09:53:45 +08:00
Ulf Magnusson
72e71d6000 x86: gen_idt.py: Use enumerate() to fix pylint warning
Use enumerate() to fix this pylint warning:

    C0200: Consider using enumerate instead of iterating with range and
    len (consider-using-enumerate)

enumerate() is handy when the loop body needs both the element and its
index. It returns (index, element) tuples.

Also use a tuple unpacking to extract 'handler' from the elements in
'vector'.

Piggyback a slightly simpler way to build a list of num_chars 0s.

Getting rid of warnings for a CI check.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-10 15:53:49 +02:00
Ulf Magnusson
4e90dd67a9 x86: gen_idt.py: Fix broken error() call in update_irq_vec_map()
Accidentally passed two arguments instead of one. Fixes this pylint
error:

    arch/x86/gen_idt.py:132:8: E1121: Too many positional arguments for
    function call (too-many-function-args)

Fixing pylint warning for a CI check.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-10 15:51:52 +02:00
Ulf Magnusson
247d40cf55 gen_isr_tables: Fix pylint warning by using isinstance()
Fix this warning, as a preparation for a CI check:

    arch/common/gen_isr_tables.py:167:11: C0123: Using type() instead of
    isinstance() for a typecheck. (unidiomatic-typecheck)

isinstance() has the advantage that it also handles inheritance, though
it doesn't really matter here. It's more common at least.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-08 22:24:45 -04:00
Ulf Magnusson
ad8ac7469b x86: gen_idt.py: Simplify test with 'not in'
Getting slightly subjective, but fixes this pylint warning:

    arch/x86/gen_idt.py:281:11: R1714: Consider merging these
    comparisons with "in" to 'handler not in (spur_code, spur_nocode)'
    (consider-using-in)

Getting rid of pylint warnings for a CI check. I could disable any
controversial ones (it's already a list of warnings to enable anyway).

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-08 22:21:46 -04:00
Wayne Ren
a75b0014fb arc: replace 32-bit instructions with possible 16-bit instructions
Replace 32-bit instructions with 16-bit instructions where possible to
get better code density.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-09-08 12:36:02 +02:00
Ulf Magnusson
50b9b1249b scripts: Simplify code with sys.exit(<string>)
Promote a handy and often-overlooked sys.exit() feature: Passing it a
string (or any other non-int object) prints it to stderr and exits with
status 1.

See the documentation at
https://docs.python.org/3/library/sys.html#sys.exit.

This indirectly prints some errors to stderr that previously went to
stdout.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-08 12:34:16 +02:00
Daniel Leung
ea438d0a9b xtensa: asm2: add code for double exception vector
This adds a simple infinite loop when a double exception is raised.
Without this, if a double exception occurs, arbitrary code would be
executed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-09-07 10:21:16 -04:00
Daniel Leung
984002de6d xtensa: rename z_arch_irq_is_enabled for multi-level interrupts
This follows the z_arch_irq_en-/dis-able() so that the SoC
definitions are responsible for functions related to multi-level
interrupts.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-09-07 10:20:51 -04:00
Charles E. Youse
6767563f94 arch/x86: remove support for IAMCU ABI
This ABI is no longer required by any targets and is deprecated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-07 10:07:42 -04:00
Charles E. Youse
37929b3428 arch/x86_64: do not modify CR8 in interrupt path
Currently, the interrupt service code manually raises the CPU task
priority to the priority level of the vector being serviced to defer
any lower-priority interrupts. This is unnecessary; the local APIC
is aware that an interrupt is in-service and accounts for its priority
when deciding whether to issue an overriding interrupt to the CPU.

Signed-off-by: Charles E. Youse <charles@gnuless.org>
2019-09-07 10:06:13 -04:00
Ulf Magnusson
a3793098cb xtensa: xtensa_intgen.py: Change 'not lvl in ...' to 'lvl not in ...'
Use the 'not in' operator. Fixes this pylint warning:

    arch/xtensa/core/xtensa_intgen.py:77:7: C0113: Consider changing
    "not lvl in ints_by_lvl" to "lvl not in ints_by_lvl" (unneeded-not)

Fixing pylint warnings for a CI check.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-07 07:55:01 -04:00
Ulf Magnusson
0d39a10fbb scripts: Fix random typo'd whitespace
Reported by pylint's 'bad-whitespace' warning.

Not gonna enable this warning in the CI check, because it flags stuff
like deliberately aligning assignments and gets too cultish. Just a
cleanup pass.

For whatever reason, the common convention in Python is to skip spaces
around '=' when passing keyword arguments and giving default arguments:

    f(x=3, y=4)
    def f(x, y=8):
        ...

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-09-07 07:54:17 -04:00
Alexey Brodkin
9a3ee641b6 arc: interrupts: Explain return from interrupt to cooperative thread
The code in question is very non-trivial so without good explanation
it takes a lot of time to realize what's done there and why
it still works in the end.

Here I'm trying to save a couple of man-days for the next developers
who are going to touch that piece of code.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-08-30 20:10:14 +02:00
Ioannis Glaropoulos
eddf058e1e arch: arm: be able to infer Z_ARCH_EXCEPT() for baseline SOCs
This commit makes it possible to infer Z_ARCH_EXCEPT()
calls in SVCs that escalate to HardFault due to being
invoked from a priority level equal to or higher than the
interrupt priority level of the SVC Handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-08-29 11:29:50 +02:00
Alexey Brodkin
d52b8df96b arch: arc: threads: Comment clean-up
commit 780324b8ed ("cleanup: rename fiber/task -> thread")
seems to have been done by a script, and in that particular case it
turned a meaningful sentence into nonsense. Alas, threads might be in
all four states.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Anas Nashif <anas.nashif@intel.com>
2019-08-28 11:53:32 +02:00
Alexey Brodkin
4449ad52da arch: arc: _rirq_exit: Comment clean-up
We manage IRQs in quite a different way now since
commit f8d061faf7 ("arch: arc: add nested interrupt support"),
so that comment not only makes no sense but may also fool a reader,
as disabling of interrupts happens at the very beginning of
_rirq_exit(), not here.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-08-28 11:53:32 +02:00
Wayne Ren
3590c80aaf arch: arc: for fast irq ERET has no copy of ilink
We should not rely on ERET having a copy of ilink in fast
irq handling. This will cause a crash for HS cores.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
44c917e6b2 arch: arc: fix the bug that interrupt stack is not switched
In the old code, if nested interrupts come in after _isr_wrapper
and before _check_nest_int_by_irq_act, then multiple bits in irq_act
will be set, and as a result the irq stack will not be switched
correctly.

As a fix, a nested interrupt counter is still used to do the
interrupt stack switch, as before.

The difference is that in the past exc_nest_count was used, but here
_kernel.nested/_kernel.cpus[cpu_id].nested is used.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
146c7e8c7e arch: arc: secure world only check secure interrupt
* in arc secureshield interrupts can be configured
  as secure or normal
* in the sw design, high interrupt priorities are allocated to the
  secure world, low priorities are allocated to the normal world.
* secure interrupt > secure thread > normal interrupt > normal
  thread

So, here secure world/firmware only checks secure interrupt
priorities

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
bb0a189d42 arch: arc: use _curr_cpu to replace _curr_irq_stack
Use _curr_cpu to record the _cpu_t of each cpu;
the irq_stack is also covered.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
5dbd4ce738 arch: arc: not allowed to switch to thread preempted by exception
It's not allowed to switch to a thread preempted by an exception, as
its context is not saved.

So if a thread switch is required in exception handling, e.g. when
killing a thread, the old thread cannot be switched back to.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
5a73bf3966 arch: arc: fix and optimize the handling of SECT_STAT.IRM
For an ARC processor equipped with SecureShield, the SEC_STAT.IRM
bit should be recorded; it determines to which mode the irq should
return.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Wayne Ren
8cbcdd71ec arch: arc: secure stat should also be reset correctly
secure status should also be set correctly

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-28 08:22:14 +02:00
Jan Van Winkel
8cd7cd48cf native: Set recommended stack size to 40 for 64 bit
Set the recommended thread stack size to 40 bytes in case a build is
made for a 64-bit native posix board

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-08-24 19:55:09 +02:00
Andy Ross
77719b81e9 arch/xtensa: Clean up fatal error handling
Update the xtensa backend to work better with the new fatal error
architecture.  Move the stack frame dump (xtensa uses a variable-size
frame because we don't spill unused register windows, so it doesn't
strictly have an ESF struct) into z_xtensa_fatal_error().  Unify the
older exception logging with the newer one (they'd been sort of glomed
together in the recent rework), mostly using the asm2 code but with
the exception cause stringification and the PS register field
extraction from the older one.

Note that one shortcoming is that the way the dispatch code works, we
don't have access to the spilled frame from within the spurious error
handler, so this can't log the interrupted CPU state.  This isn't
fixable easily without adding overhead to every interrupt entry, so it
needs to stay the way it is for now.  Longer term we could extract the
caller frame from the window state and figure it out with some
elaborate assembly, I guess.

Fixes #18140

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:57:40 -04:00
Andy Ross
915739e724 arch/xtensa: Add z_arch_irq_is_enabled()
This function got dropped, and is needed for dynamic interrupt support

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:53:51 -04:00
Andy Ross
74d26094f8 arch/common: Provide a weak, generic z_arch_irq_connect_dynamic()
It was discovered that the xtensa version of
z_arch_irq_connect_dynamic() was being removed along with the old
xtensa architecture support, because it was never included in the asm2
builds.

But there's no xtensa-specific code in it at all.  Architectures that
use the existing sw_isr_table mechanism and don't (or can't, in the
case of xtensa which has fixed interrupt priority) interpret the other
parameters might as well have access to a working generic
implementation.

Fixes #18272

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:53:51 -04:00
Jan Van Winkel
d157527ec1 native: Added dummy member to struct _thread_arch
Added dummy member to struct _thread_arch to suppress clang compiler
warning.

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-08-22 09:06:00 +02:00
Peter Bigot
4a470114fa arc: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

Remove extern "C" support from files that don't declare objects or
functions.

In arch/arc/arch.h the extern "C" in the including context is left
active during an include to avoid more complex restructuring.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-20 00:49:15 +02:00
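A skeleton header following the convention this series of commits applies
(all names here are illustrative): includes first, then the extern "C" block
wrapping only declarations, inside the non-assembly branch.

    /* demo_arch.h -- sketch of the include / extern "C" ordering. */
    #ifndef DEMO_ARCH_H_
    #define DEMO_ARCH_H_

    /* 1. #include directives come first, outside any extern "C" block, so
     *    included headers keep their own default language linkage.
     */
    #include <stdint.h>

    #ifndef _ASMLANGUAGE

    /* 2. extern "C" wraps only the declarations, in the non-ASM branch. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    void demo_arch_function(uint32_t arg);

    #ifdef __cplusplus
    }
    #endif

    #endif /* _ASMLANGUAGE */

    #endif /* DEMO_ARCH_H_ */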
Peter Bigot
20bb672266 arch/nios2: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

Remove extern "C" support from files that don't declare objects or
functions.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-20 00:49:15 +02:00
Peter Bigot
817f527641 arch/xtensa: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-20 00:49:15 +02:00
Peter Bigot
ce3f07954a arch/arm: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

In arch.h the extern "C" in the including context is left active during
include of target-specific mpu headers to avoid more complex
restructuring.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-20 00:49:15 +02:00
Peter Bigot
324203f79b arch/x86: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-20 00:49:15 +02:00
Ioannis Glaropoulos
b8cd6fe626 arch: arm: fault: fix check on recoverable fault flag
'recoverable' is a value passed by reference and we
should be dereferencing the pointer, to check if the
fault has been classified as recoverable.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-08-19 09:46:24 +02:00
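
A minimal sketch of the corrected check; the handler shape and names are
simplified illustrations, not the actual fault code:

    #include <stdbool.h>

    static void handle_fault(bool *recoverable)
    {
        /* dereference the pointer: checking 'recoverable' itself would
         * only test that the pointer is non-NULL, not the classification
         */
        if (*recoverable) {
            return; /* fault was classified as recoverable */
        }

        /* ... otherwise escalate to the fatal error path ... */
    }
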
Peter Bigot
43fc6a7eff arch/riscv: rearrange for standard use of extern "C"
Consistently place C++ use of extern "C" after all include directives,
within the negative branch of _ASMLANGUAGE if used.

Remove extern "C" support from files that don't declare objects or
functions.

Background from issue #17997:

Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-08-18 16:20:10 +02:00
Karsten Koenig
f0d4bdfe3f include: arch: riscv: rename global macro
SR and LR were used as global names for load and store RISC-V assembler
operations, colliding with other uses such as SR for STATUS REGISTER in
some peripherals. Renamed them to a longer more specific name to avoid
the collision.

Signed-off-by: Karsten Koenig <karsten.koenig.030@gmail.com>
2019-08-17 11:48:02 +02:00
Andrew Boie
dafd3485bf xtensa: mask interrupts earlier
When coming out of an exception, we need to mask interrupts
to avoid races when decrementing the nested count. Move
the instruction that does this earlier.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-14 10:11:05 -07:00
Alberto Escolar Piedras
fce0316687 POSIX arch: Fix issues related to extern "C"
Related to #17997, for the POSIX arch:
* Remove some unnecessary extern "C" and ifdef blocks
* Move an include out of one of these blocks
* Add a missing extern "C" block

Background:
Declarations that use C linkage should be placed within extern "C"
so the language linkage is correct when the header is included by
a C++ compiler.

Similarly #include directives should be outside the extern "C" to
ensure the language-specific default linkage is applied to any
declarations provided by the included header.

See: https://en.cppreference.com/w/cpp/language/language_linkage

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-08-12 15:10:15 +02:00
Wayne Ren
36a56e7a8f arch: arc: fix a bug when CONFIG_SMP is enabled
This bug was previously left unfixed; when CONFIG_SMP is enabled,
it causes a build error.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-11 21:18:38 +02:00
Wayne Ren
a97053d5eb arch: arc: no need of default n for arc_connect
As suggested in the comments of PR #17747, there is no need for
'default n' for arc_connect.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-10 20:11:29 +02:00
Wayne Ren
cca39204c2 arch: arc: add initial support of ARC TEE
* it's based on ARC SecureShield
* add basic secure service in arch/arc/core/secureshield
* necessary changes at the arch level
   * thread switch
   * irq/exception handling
   * initialization
* add secure time support

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-10 17:45:22 +02:00
Bradley Bolen
8080a84887 arch: arm: Add Cortex-R5 support
Pass the correct -mcpu flags to the compiler when building for the
Cortex-R5.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-09 22:50:50 +02:00
Bradley Bolen
e439cfdf38 arch: arm: Add Cortex-R4 support
Pass the correct -march and -mcpu flags to the compiler when building
for the Cortex-R4.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-09 22:50:50 +02:00
Bradley Bolen
c30a71df95 arch: arm: Add Cortex-R support
This adds initial Cortex-R support for interrupts and context switching.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-09 22:50:50 +02:00
Wayne Ren
db8ddaa410 arch: arc: fix on the reason of software-triggered fatal exceptions
According to the high-level design, in user mode software-triggered
system fatal exceptions only allow oops and stack check failure.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-09 20:11:58 +02:00
Wayne Ren
61d570bb38 arch: arc: add extra handling about exception raised in interrupt
An exception, unlike irq offload, may be raised during interrupt
handling, e.g.
  * z_check_stack_sentinel
  * wrong code

We need to add specific handling of this case to the exception handling.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-09 20:11:58 +02:00
Wayne Ren
0757583892 arch: arc: the calculation of the exception stack is wrong
After applying the new "_get_curr_cpu_irq_stack" in _exc_entry,
the calculation of the exception stack is wrong; this causes a stack
overflow and corrupts the exception handling.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-09 20:10:29 +02:00
Ioannis Glaropoulos
a3ee56f9a1 arch: arm: BusFault, NMI, and HardFault in Secure state when in test
This commit enables the option to route the BusFault,
HardFault, and NMI exceptions in Secure state, when
building for Cortex-M CPUs with ARM_SECURE_FIRMWARE=y.
This allows the various tests to utilize BusFault,
HardFault and NMI exceptions during testing.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-08-09 16:14:16 +02:00
Nicolas Pitre
c351492bc7 riscv: toolchain arguments for a 64-bit build
For now we enforce the medany code model for 64-bit builds as we get
reloc issues otherwise. The instruction set and ABI are also set to
soft-float usage.

The ilp32 ABI is explicitly specified on 32-bit builds to make sure
a wrong default is not used if the same toolchain is used for both
32- and 64-bit builds. The architecture options are the same as the
SDK's riscv32 toolchain defaults in that case.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-08-09 09:11:45 -05:00
Bradley Bolen
06a79cc82e arch: arm: cpu_idle: Remove unused functions
Since commit c535300539 ("drivers/timer: New ARM SysTick driver"),
_NanoIdleValGet and _NanoIdleValClear have been unused.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-08 09:10:09 +02:00
Andrew Boie
ec5eb8f7d4 x86: add build assert that RAM bounds <= 4GB
DTS will fail this first, but there's no cost to adding
a second level of defense.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Andrew Boie
02629b69b5 x86: add prep_c function
Assembly language start code will enter here, which sets up
early kernel initialization and then calls z_cstart() when
finished.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Andrew Boie
c3b3aafaec x86: generate page tables at runtime
Removes very complex boot-time generation of page tables
with a much simpler runtime generation of them at bootup.

For those x86 boards that enable the MMU in the defconfig,
set the number of page pool pages appropriately.

The MMU_RUNTIME_* flags have been removed. They were an
artifact of the old page table generation and did not
correspond to any hardware state.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Wayne Ren
e11be42558 arch: arc: add initial support of SMP
* modify the reset flow for SMP
* add smp related initialization
* implement ipi related functions
* implement thread switch in isr/exception

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-07 12:21:00 +02:00
Wayne Ren
83dfe5eac4 arch: arc: add macros to get current cpu id
add macros for assembly and C to get current cpu id

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-07 12:21:00 +02:00
Wayne Ren
1f4232ad7e arch: arc: add basic arc connect driver support
* arc connect is a component to connect multiple arc cores
* it's necessary for arc smp support
* the following features are implemented
  * inter-core interrupt unit
  * global free-running counter
  * inter-core debug unit
  * interrupt distribute unit

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-07 12:21:00 +02:00
Nicolas Pitre
39ada71688 riscv: isr.S: fix a missing lw to LR conversion
This loads a pointer and therefore has to use LR to be 64-bit
compatible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-08-05 21:51:09 +02:00
Andrew Boie
0add92523c x86: use a struct to specify stack layout
Makes the code that defines stacks, and code referencing
areas within the stack object, much clearer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
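
A hedged sketch of the kind of layout struct the commit describes; the
member names and sizes below are illustrative, not the actual definition:

    #define PAGE_SIZE     4096
    #define PT_AREA_SIZE  (4 * PAGE_SIZE)

    /* areas carved out of the stack object, referenced by name rather
     * than by hand-computed offsets
     */
    struct thread_stack_header {
        char page_tables[PT_AREA_SIZE];   /* per-thread paging data    */
        char guard_page[PAGE_SIZE];       /* stack overflow guard      */
        char privilege_stack[PAGE_SIZE];  /* kernel stack for syscalls */
    };
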
Andrew Boie
8014e075f4 x86: use per-thread page tables
Previously, context switching on x86 with memory protection
enabled involved walking the page tables, de-configuring all
the partitions in the outgoing thread's memory domain, and
then configuring all the partitions in the incoming thread's
domain, on a global set of page tables.

We now have a much faster design. Each thread has reserved in
its stack object a number of pages to store page directories
and page tables pertaining to the system RAM area. Each
thread also has a toplevel PDPT which is configured to use
the per-thread tables for system RAM, and the global tables
for the rest of the address space.

The result of this is on context switch, at most we just have
to update the CR3 register to the incoming thread's PDPT.

The x86_mmu_api test was making too many assumptions and has
been adjusted to work with the new design.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
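
As a hedged illustration, with per-thread tables the address-space part of
a context switch reduces to a single register write; the wrapper name here
is hypothetical:

    #include <stdint.h>

    /* hypothetical wrapper around the 'mov ..., %cr3' assembly sequence */
    extern void x86_cr3_set(uintptr_t phys_addr);

    /* point CR3 at the incoming thread's toplevel PDPT; the global
     * tables for the rest of the address space are shared
     */
    static inline void switch_page_tables(uintptr_t incoming_pdpt)
    {
        x86_cr3_set(incoming_pdpt);
    }
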
Andrew Boie
8915e41b7b userspace: adjust arch memory domain interface
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.

This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.

This is needed for: #13441 #13074 #15135

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
0c3e05ae7c x86: report CR3 on fatal exception
Lets us know which set of page tables was in use when
the error occurred.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
fcd2c14500 x86: add functions to get/set page tables
Wrapper to assembly code working with CR3 register.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
ea201b206f x86: add debug functions for dumping page tables
These turned out to be quite useful when debugging MMU
issues, so commit them to the tree. The output format is
virtually the same as gen_mmu_x86.py's verbose output.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
26dccaabcb x86: reserve room for per-thread page tables
Currently page tables have to be re-computed in
an expensive operation on context switch. Here we
reserve some room in the page tables such that
we can have per-thread page table data, which will
be much simpler to update on context switch at
the expense of memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
76310f6896 x86: make guard pages ro instead of non-present
Has the same effect of catching stack overflows, but
makes debugging with GDB simpler since we won't get
errors when inspecting such regions. Making these
areas non-present was more than we needed, read-only
is sufficient.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Wayne Ren
f7fd1ff67c arch: arc: fix the offset generation of accl_regs
* the offset generation of accl_regs should
  rely on CONFIG_ARC_HAS_ACCL_REGS not CONFIG_FP_SHARING

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-05 13:19:13 +02:00
Nicolas Pitre
0440a815a9 riscv: make core code 64-bit compatible
There are two aspects to this: CPU registers are twice as big, and the
load and store instructions must use the 'd' suffix instead of the 'w'
one. To abstract register differences, we simply use a ulong_t instead
of u32_t given that RISC-V is either ILP32 or LP64. And the relevant
lw/sw instructions are replaced by LR/SR (load/store register) that get
defined as either lw/sw or ld/sd. Finally a few constants to deal with
register offsets are also provided.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-08-02 13:54:48 -07:00
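
A hedged sketch of the abstraction described above; LR, SR and ulong_t
come from the message, while the width constant is illustrative:

    /* register-width abstraction for 32/64-bit RISC-V builds */
    #ifdef CONFIG_64BIT
    #define LR          ld    /* load a register-sized value  */
    #define SR          sd    /* store a register-sized value */
    #define RV_REGSIZE  8     /* illustrative offset constant */
    #else
    #define LR          lw
    #define SR          sw
    #define RV_REGSIZE  4
    #endif

    /* C code uses ulong_t, which is 32 bits under ILP32 and 64 bits
     * under LP64, so a CPU register fits either way
     */
    typedef unsigned long ulong_t;
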
Nicolas Pitre
1f4b5ddd0f riscv32: rename to riscv
With the upcoming riscv64 support, it is best to use "riscv" as the
subdirectory name and common symbols as riscv32 and riscv64 support
code is almost identical. Then later decide whether 32-bit or 64-bit
compilation is wanted.

Redirects for the web documentation are also included.

Then zephyrbot complained about this:

"
New files added that are not covered in CODEOWNERS:

dts/riscv/microsemi-miv.dtsi
dts/riscv/riscv32-fe310.dtsi

Please add one or more entries in the CODEOWNERS file to cover
those files
"

So I assigned them to those who created them. Feel free to readjust
as necessary.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-08-02 13:54:48 -07:00
Wayne Ren
48b4ad4b33 arch: arc: remove custom atomic operations
* arc gcc toolchain has builtin atomic operations,
  use them to make things simpler

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-02 13:54:22 -07:00
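
A hedged sketch of what builtin-based atomics look like; the signatures
are simplified and are not the exact kernel header:

    #include <stdbool.h>

    typedef int atomic_t;
    typedef int atomic_val_t;

    static inline atomic_val_t atomic_add(atomic_t *target, atomic_val_t value)
    {
        return __atomic_fetch_add(target, value, __ATOMIC_SEQ_CST);
    }

    static inline bool atomic_cas(atomic_t *target, atomic_val_t old_value,
                                  atomic_val_t new_value)
    {
        return __atomic_compare_exchange_n(target, &old_value, new_value,
                                           false, __ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST);
    }
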
Bradley Bolen
4cee0eecdc arch: arm: Move header files to common location
These files will be used for Cortex-R support as well.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Bradley Bolen
e788290522 arch: arm: Move prep_c.c to common location
This file provides functionality that will be common to Cortex-R.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Bradley Bolen
505aebf5c9 arch: arm: Move nmi code for Cortex-R support
This code can start out being common between Cortex-R and Cortex-M.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Bradley Bolen
808b953ee3 arch: arm: Move fault.c to cortex_m directory
This fault handling code is specific to Cortex-M so move it to prepare
for Cortex-R support.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Bradley Bolen
eb9f23fdb1 arch: arm: Move thread_abort.c to cortex_m specific directory
The ARM specific _impl_k_thread_abort function only applies to Cortex-M
so move it to the cortex_m specific directory.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Bradley Bolen
3015e7b780 arch: arm: Move irq_init.c to cortex_m specific directory
The NVIC is Cortex-M specific.  Move it in order to add Cortex-R support.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2019-08-02 23:37:03 +03:00
Andrew Boie
bd709c7322 x86: support very early printk() if desired
Adapted from similar code in the x86_64 port.
Useful when debugging boot problems on actual x86
hardware if a JTAG isn't handy or feasible.

Turn this on for qemu_x86.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-02 00:29:21 -07:00
Wayne Ren
14db558939 arch: arc: typo fixes and comments clean up
typo fixes and comments clean up

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-01 18:09:35 -07:00
Wayne Ren
908f9ec8f5 arch: arc: add handling for accl regs, r25, r30
* when the FPU is configured or mpy_option > 6, the
accl regs (r58, r59) are configured; they are used
by the FPU and MAC and are caller-saved scratch regs,
so they need to be saved before jumping to interrupt
handlers

* r25 and r30 are also caller-saved scratch regs.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-01 18:09:35 -07:00
Wayne Ren
a7845b10f0 arch: arc: implement z_arch_float_enable
For ARC, floating point support cannot be enabled
automatically, so k_float_enable is required.

z_arch_float_enable is the arch-level hook behind k_float_enable.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-01 18:09:35 -07:00
Wayne Ren
8b04c7de13 arch: arc: optimize the float support
* enable float support
* implement z_arch_float_disable
* add arc support in fp_sharing test

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-08-01 18:09:35 -07:00
Wayne Ren
f2fd40e90d ARC: Add support for ARC HS family of CPU cores
The ARC HS is a family of high performance CPUs from Synopsys
capable of running a wide range of applications, from heavy DSP
calculation to a full-scale OS.

Still, as with other ARC cores, ARC HS might be tailored to
a particular application.

As opposed to EM cores, ARC HS cores always support unaligned
data access, and by default GCC generates such a data layout,
so we always have to enable unaligned data access at runtime;
otherwise, on an attempt to access such data we'd see an
"Unaligned memory exception".

Note we had to explicitly mention CONFIG_CPU_ARCEM=y in
all current defconfigs as CPU_ARC{EM|HS} are now parts of a
choice, so we cannot simply select either option in a board's Kconfig.

And while at it, change the "-mmpy-option" of ARC EM to "wlh1",
which is the same as the previously used "6" but matches the
Programmer's Reference Manual (PRM) and is more human-friendly.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-07-31 09:25:15 -07:00
Alexey Brodkin
5947014685 arc: Add support for unaligned access
ARCv2 cores may access data not aligned to the data size boundary,
i.e. read an entire 32-bit word from address 0x1.

This feature is configurable for ARC EM cores, excluding those with
SecureShield 2+2 mode. When it is available in hardware, the feature
must also be enabled at run time by setting the STATUS32.AD bit.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-07-31 09:25:15 -07:00
Alexey Brodkin
a312c7a756 arc: Preserve STATUS32 flags while resetting AE flag
The KFLAG instruction might affect multiple flags in the STATUS32
register, so when we need just the AE bit to be reset, we must first
read the current state of STATUS32, then change our bit, and set
STATUS32 again.

Otherwise critical flags, including stack checking, unaligned access,
etc., would be dropped for good.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-07-31 09:25:15 -07:00
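
A hedged sketch of the read-modify-write sequence described above;
read_status32()/write_status32() are hypothetical stand-ins for the
LR/KFLAG instruction pair, and the bit value is shown for illustration:

    #include <stdint.h>

    extern uint32_t read_status32(void);
    extern void write_status32(uint32_t value);

    #define STATUS32_AE  (1u << 5)

    static inline void reset_ae_flag(void)
    {
        uint32_t status = read_status32(); /* read the whole register    */

        status &= ~STATUS32_AE;            /* clear only the AE flag     */
        write_status32(status);            /* write everything else back */
    }
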
Wayne Ren
46b7fd1630 ARC: Fix selection of custom atomic ops
Up until now only the ARC EM family has been supported in Zephyr,
which doesn't support atomic operations other than
compare-and-exchange, so the custom atomic ops with
load-locked (LLOCK) / store-conditional (SCOND) were never used.
That's how we never realised CONFIG_ATOMIC_OPERATIONS_CUSTOM points
to the wrong file, "atomic.c", while the real implementation is in
"atomic.S".

Fix that now.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
2019-07-31 09:25:15 -07:00
Charles E. Youse
8e437adc42 thread.c: remove vestigial CONFIG_INIT_STACKS cruft
It looks like, at some point in the past, initializing thread stacks
was the responsibility of the arch layer. After that was centralized,
we forgot to remove the related conditional header inclusion. Fixed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-31 09:15:45 +03:00
Wayne Ren
733c11b11b arch: arc: use IRQ_ACT to check nest interrupt
* use IRQ_ACT to check nest interrupt
* implement an asm macro for nest interrupt check
* no need to use exc_nest_count, remove it

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-07-30 10:16:38 -07:00
Wayne Ren
0318e3a758 arch: arc: remove saved_r0/saved_sp used in firq handling
* do not use dedicated variables (saved_r0/saved_sp) to free r0
  / exchange sp; use the stack to do that instead.

* this makes the code scalable, e.g. for SMP there is no need to
  define such variables for each core

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-07-30 10:16:38 -07:00
Wayne Ren
abac940c43 arch: arc: remove arc_exc_saved_sp used in exc handling
* as ilink has a copy in ERET, it can be reused as a gp
* use ilink to do the job of arc_exc_saved_sp, saving 4 bytes
  and some cycles because there is no load/store to memory
* this makes the code scalable, e.g. for SMP there is no need to
  define such variables for each core

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-07-30 10:16:38 -07:00
Anas Nashif
cb412df725 x86: remove code for interrupt forwarding bug
This only applied to quark_se, so removing it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-07-29 21:30:25 -07:00
Charles E. Youse
f7a0dce636 arch/x86: remove support for CONFIG_REALMODE
We no longer support any platforms that bootstrap from real mode.

Fixes: #17166

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-29 21:29:38 -07:00
Ioannis Glaropoulos
e78b61b187 arch: arm: only allow OOPS and STACK_CHK_FAIL from nPRIV mode
User mode is only allowed to induce oopses and stack check
failures via software-triggered system fatal exceptions. This
commit forces a kernel oops if any other fatal exception reason
is enforced.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-29 11:08:49 -07:00
Andrew Boie
96571a8c40 kernel: rename NANO_ESF
This is now called z_arch_esf_t, conforming to our naming
convention.

This needs to remain a typedef due to how our offset generation
header mechanism works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
c9a4bd47a7 arm: dump registers on fatal exceptions
We had a function that did this, but it was dead code.
Move to fatal.c and call from z_arm_fatal_error().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
fe8d75acbf arm: fix exception reason code for bad syscall
ARM was reporting as a CPU exception and not a kernel
oops.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
297ca06934 arc: use z_fatal_error() for spurious IRQs
We shouldn't special-case this to just spin forever.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
8a9e8e0cd7 kernel: support log system for fatal errors
We introduce a new z_fatal_print() API and replace all
occurrences of exception handling code to use it.
This routes messages to the logging subsystem if enabled.
Otherwise, messages are sent to printk().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
5623637a48 kernel: abolish _default_esf
NANO_ESF parameters may now be NULL, indicating that no
exception frame is available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
71ce8ceb18 kernel: consolidate error handling code
* z_NanoFatalErrorHandler() is now moved to common kernel code
  and renamed z_fatal_error(). Arches dump arch-specific info
  before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
  and renamed k_sys_fatal_error_handler(). It is now much simpler;
  the default policy is simply to lock interrupts and halt the system.
  If an implementation of this function returns, then the currently
  running thread is aborted.
* New arch-specific APIs introduced:
  - z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
  namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
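
A hedged sketch of the default policy outlined above, using the names from
the commit message and assuming the usual kernel headers; details are
simplified:

    /* default, overridable policy: flush logs, then halt the system */
    __weak void k_sys_fatal_error_handler(unsigned int reason,
                                          const z_arch_esf_t *esf)
    {
        ARG_UNUSED(esf);

        LOG_PANIC();
        z_arch_system_halt(reason);   /* lock interrupts, power off or halt */
        CODE_UNREACHABLE;
        /* if an implementation of this handler returns instead, the
         * currently running thread is aborted by the caller
         */
    }
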
Andrew Boie
81245a0193 arm: don't use exc reason codes for internal state
We are standardizing to a arch-independent set of exception
reason codes, don't overload it with internal state of
the ARM fault handling code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Yasushi SHOJI
51bc0a065c linker: Make alignment size for sw_isr_table configurable
Each sw_isr_table entry has two fields, an argument and an ISR
function.  The comment on struct _isr_table_entry in
include/sw_isr_table.h says that "This allows a table entry to be
loaded [...] with one ldmia instruction, on ARM [...]".  Some
architectures, e.g. SPARC, also have a double-word load instruction,
"ldd", but the instruction must have its address aligned to a double
word, or 8 bytes.

This commit makes the table alignment configurable.  It allows each
architecture to specify it, if needed.  The default value is 0 for no
alignment.

Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
2019-07-24 10:09:02 -07:00
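
A hedged sketch of the entry layout and the alignment knob discussed
above; the Kconfig symbol name is illustrative:

    /* the two-word table entry discussed above */
    struct _isr_table_entry {
        void *arg;            /* parameter passed to the ISR */
        void (*isr)(void *);  /* the ISR itself              */
    };

    /* the linker script can then align the table start, e.g.:
     *     . = ALIGN(CONFIG_ARCH_SW_ISR_TABLE_ALIGN);
     * so that one double-word load (ldmia, ldd, ...) fetches an entry
     */
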
Piotr Zięcik
f2d84f08ff arch: xtensa: Get CPU clock frequency from DTS
The SoC initialization code used system clock frequency
as a CPU clock frequency. This commit corrects that by
obtaining the needed value from DTS.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-07-24 15:10:02 +02:00
Alberto Escolar Piedras
fece6907b4 arch: POSIX: Fix race with unused threads
Fix a race which seems to have been presenting itself
very sporadically on loaded systems.
The race seems to have caused tests/kernel/sched/schedule_api
to fail at random on native_posix.

The case is a bit convoluted:
When the kernel calls z_new_thread(), the POSIX arch saves
the new thread entry call in that new Zephyr thread stack
together with a bit of extra info for the POSIX arch.
It then spawns a new pthread (posix_thread_starter()) which
will eventually (after the Zephyr kernel has swapped to it)
call that entry function.
(Note that in principle a thread spawned by pthreads may
be arbitrarily delayed)
The POSIX arch does not try to synchronize to that new
pthread (because why should it) until the first time the
Zephyr kernel tries to swap to that thread.
But the kernel may never try to swap to it.
Therefore that thread's posix_thread_starter() may never
have gotten to run before the thread was aborted and its
Zephyr stack reused for something else by the Zephyr app.

As posix_thread_starter() was relying on looking into that
thread stack, it may now be looking into another thread stack
or anything else.

So, this commit fixes it by having posix_thread_starter()
get the input it always needs not from the Zephyr stack,
but from its own pthread_create() parameter pointing to a
structure kept by the POSIX arch.

Note that if the thread was aborted before reaching that point,
posix_thread_starter() will NOT call the Zephyr thread entry
function, but will just clean up.

With this change all "asynchronous" parts of the POSIX arch
should rely only on the POSIX arch's own structures.

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-07-19 11:37:34 +02:00
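
A hedged sketch of the fix described above; the status structure and its
fields are illustrative, and only pthread_create() and
posix_thread_starter() are taken from the message:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* status block kept by the POSIX arch itself, so the started pthread
     * never has to look into the Zephyr thread stack
     */
    struct posix_thread_status {
        void (*entry)(void *, void *, void *);
        void *arg1, *arg2, *arg3;
        bool aborted;
    };

    static void *posix_thread_starter(void *arg)
    {
        struct posix_thread_status *status = arg;

        /* ... block here until the Zephyr kernel swaps to this thread ... */

        if (!status->aborted) {
            status->entry(status->arg1, status->arg2, status->arg3);
        }
        return NULL; /* aborted before running: just clean up */
    }

    /* in z_new_thread(), roughly:
     *     pthread_create(&tid, NULL, posix_thread_starter, status);
     */
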
Ioannis Glaropoulos
cbc4d41c32 arch: arm: cleanup workaround for QEMU Cortex-M3
Qemu is already updated past 2.9 release, so this
workaround for QEMU_CORTEX_M3 is now obsolete and
can be removed.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-17 09:14:44 -07:00
Andrew Boie
caa47e6c97 x86: allow user mode to induce kernel oops
Before, attempting to induce a kernel oops would instead
lead to a general protection fault as the interrupt vector
was at DPL=0.

Now we allow it by setting DPL=3. We restrict the allowable
reason codes to either stack overflows or kernel oops; we
don't want user mode to be able to create a kernel panic,
or fake some other kind of exception.

Fixes an issue where the stack canary test case was triggering
a GPF instead of a stack check exception on x86.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-16 18:09:49 -07:00
Nicolas Pitre
1f783d9256 arch/posix: 64-bit build flags
We need to pass -m64 instead of -m32 when CONFIG_64BIT is set.
This is pretty x86 centric. Many platforms don't have the ability
to select between 32-bit and 64-bit builds, and either of those flags
should be dropped in that case, with restrictions on the available
configuration done elsewhere. But for the time being this allows for
testing both.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-07-16 10:41:11 -07:00
Alberto Escolar Piedras
81503a2087 arch: POSIX: Do not assume 32bit pointers
Correct the storage type of the thread status pointer,
without assuming 32-bit pointer and integer sizes.

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2019-07-12 13:35:47 +02:00
Ioannis Glaropoulos
85f7aeeced arch: x86: make z_arch_float_disable return -ENOSYS if not supported
For the x86 architecture the z_arch_float_disable() is only
implemented when building with CONFIG_LAZY_FP_SHARING, so we
make z_arch_float_disable() return -ENOSYS when we build with
FLOAT and FP_SHARING but on an x86 platform where
LAZY_FP_SHARING is not supported.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-10 13:44:02 -07:00
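
A hedged sketch of the fallback described above, assuming the usual kernel
headers; the lazy-FPU branch is elided:

    #include <errno.h>

    int z_arch_float_disable(struct k_thread *thread)
    {
    #if defined(CONFIG_LAZY_FP_SHARING)
        /* ... the actual lazy-FPU disable path ... */
        return 0;
    #else
        ARG_UNUSED(thread);
        return -ENOSYS;   /* not implementable in this configuration */
    #endif
    }
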
Anas Nashif
fe03e39cdd arch: arc: build cache.c conditionally
Instead of using an ifdef in the C file, exclude it from the build
completely using cmake.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-07-04 10:04:27 -04:00
Charles E. Youse
e96c178e93 arch/x86: refactor offsets_short_arch.h
The current version is 32-bit specific, so move it to ia32/
and add a layer of indirection via an arch-level header file.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
820ea28f87 arch/x86: move kernel_arch_func.h to ia32/
Refactoring 32- and 64-bit subarchitectures, so this file is moved
to ia32/ and a new "redirector" header file is introduced.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
f40fe36ca6 arch/x86: refactor kernel_arch_thread.h
This data is subarchitecture-specific, so move it to ia32/
and add a layer of indirection at the architecture level.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
aa6d5b43f2 arch/x86: refactor kernel_arch_data.h
Some of this is 32-bit specific, some applies to all subarchitectures.
A preliminary attempt is made to refactor and place 32-bit-specific
portions in ia32/kernel_arch_data.h.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
0fb9d3450b arch/x86: move exception.h to ia32/exception.h
This file is currently 32-bit specific. Move it and references to it.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
3ff2746857 arch/x86: eliminate cache_private.h
This file merely declares external functions referenced only
by ia32/cache.c, so the declarations are inlined instead.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
589b86f534 arch/x86: remove swapstk.h and references to it
This file was used to generate offsets for host tools that are no
longer in use, so it's removed and the offsets are no longer generated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
b4316fef48 arch/x86: eliminate arch/x86/include/asm_inline.h
Over time, this has been reduced to a few functions dealing solely
with floating-point support, referenced only from core/ia32/float.c.
Thus they are moved into that file and the header is eliminated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
7c2d7d7b69 arch/x86: move arch/x86/include/mmustructs.h to ia32/mmustructs.h
For now, only the 32-bit subarchitecture supports memory protection.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Wayne Ren
4f2e873454 arch: arc: fix the bug caused by hardware sp switch in interrupt
* if a thread switch happens in an interrupt, the target sp must be in
the thread's kernel stack, so there is no need to do a hardware sp switch

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-07-02 19:42:14 -07:00
Charles E. Youse
0325a3d972 arch/x86: eliminate include/arch/x86/irq_controller.h
The MVIC is no longer supported, and only the APIC-based interrupt
subsystem remains. Thus this layer of indirection is unnecessary.

This also corrects an oversight left over from the Jailhouse x2APIC
implementation affecting EOI delivery for direct ISRs only.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
930e6af999 arch/x86: move include/arch/x86/segmentation.h to ia32/segmentation.h
This header is currently IA32-specific, so move it into the subarch
directory and update references to it.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
dff016b53c arch/x86: move include/arch/x86/arch.h to ia32/arch.h
Making room for the Intel64 subarch in this tree. This header is
32-bit specific and so it's relocated, and references rewritten
to find it in its new location.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
6f3009ecf0 arch/x86: move include/arch/x86/asm.h to include/arch/x86/ia32/asm.h
This file is 32-bit specific, so it is moved into the ia32/ directory
and references to it are updated accordingly.

Also, SP_ARG* definitions are no longer used, so they are removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
c7bc7a8c86 arch/x86: clean up model-specific register definitions in msr.h
Eliminate definitions for MSRs that we don't use. Centralize the
definitions for the MSRs that we do use, including their fields.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
8a8e6a1e52 arch/x86: merge asm_inline_gcc.h with asm_inline.h
This pattern exists in both the include/arch/x86 and arch/x86/include
trees. This indirection is historic and unnecessary, as all supported
toolchains for x86 support gas/gcc-style inline assembly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Ioannis Glaropoulos
da735b9c73 arch: arm: userspace: don't use the default stack in z_arm_do_syscall
z_arm_do_syscall executes in privileged mode. This implies
that we must not use the thread's default
unprivileged stack (i.e. push to or pop from it), to avoid any
possible stack corruption.

Note that since we execute in PRIV mode and no MPU guard or
PSPLIM register is guarding the end of the default stack, we
won't be able to detect any stack overflows.

This commit implements the above change by forcing
z_arm_do_syscall() to FIRST switch to privileged
stack and then do all the preparations to execute
the system call.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-02 19:18:48 -04:00
Ioannis Glaropoulos
f3a1270f85 arch: arm: userspace: correct inline comment for bad syscalls
We need to correct the inline comment in swap_helper.S,
which is suggesting that system call attempts with
invalid syscall IDs (i.e. above the limit) do not force
the CPU to elevate privileges. This is in fact not true,
since the execution flow moves into valid syscall ID
handling.

In other words, all we do for system calls with invalid
ID numbers is to treat them as valid syscalls with the
K_SYSCALL_BAD ID value.

We fix the inline documentation to reflect the actual
execution flow.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-02 19:18:48 -04:00
Ioannis Glaropoulos
5d423b8078 userspace: minor typo fixes in various places
System call arguments are indexed from 1 to 6, so arg0
is corrected to arg1 in two places. In addition, the
ARM function for system calls is now called z_arm_do_syscall,
so we update the inline comment in __svc handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-02 19:18:48 -04:00
Andrew Boie
1ee017050b arc: use different load instruction
If the offset within the thread struct to the
ARC arch-specific 'relinquish_cause' member is too
large, ld_s instructions referencing it will not
compile. This happens easily if CONFIG_THREAD_NAME
reserves a name buffer within the thread struct, since
all the arch-specific members come last.

Use the regular 'ld' instruction instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-01 16:29:45 -07:00
Ioannis Glaropoulos
9b03cd8bdc arch: arm: allow user to fall-back to MPU-based guards in ARMv8-M
ARMv8-M architecture supports the built-in stack overflow
detection mechanisms via the SPLIM registers. However, the
user might still wish to use the traditional MPU-based stack
overflow detection mechanism (for testing or other reasons).
We now allow the user to enable HW stack protection, but
manually turn off BUILTIN_STACK_GUARD option. This will force
the MPU_STACK_GUARD option to be selected.

It is still not allowed for the user to not select any stack
guard mechanisms, if HW_STACK_PROTECTION is selected.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-01 12:54:20 -07:00
Andrew Boie
a3a89ed9d5 x86: only use lfence if x86 bcb config enabled
Work around a testcase problem, where we want to check some
logic for the bounds check bypass mitigation in the common
kernel code. By changing the ifdef to the x86-specific option
for these lfence instructions, we avoid IAMCU build errors
but still test the common code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-30 09:22:09 -04:00
Ioannis Glaropoulos
e82004e211 arch: arm: mpu: minor fix to the start of the guard
Fix the start of the guard to take into account the
configurable size of the guard.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-28 12:25:08 -07:00
Wayne Ren
890b9ebab7 arch: arc: implement z_arch_switch to replace swap
* use the new-style z_arch_switch, i.e. CONFIG_USE_SWITCH,
to replace the old swap mechanism.

* it's also required for SMP support

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-06-28 09:56:03 -04:00
Anas Nashif
5b0aa794b2 cleanup: include/: move misc/reboot.h to power/reboot.h
move misc/reboot.h to power/reboot.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
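
A hedged sketch of what such a shim header looks like, using misc/reboot.h
as the example; the guard macro and warning text are illustrative:

    #ifndef ZEPHYR_INCLUDE_MISC_REBOOT_H_
    #define ZEPHYR_INCLUDE_MISC_REBOOT_H_

    #ifndef CONFIG_COMPAT_INCLUDES
    #warning "This header has moved, include <power/reboot.h> instead."
    #endif

    #include <power/reboot.h>

    #endif /* ZEPHYR_INCLUDE_MISC_REBOOT_H_ */
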
Anas Nashif
a2fd7d70ec cleanup: include/: move misc/util.h to sys/util.h
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
9ab2a56751 cleanup: include/: move misc/printk.h to sys/printk.h
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
ee9dd1a54a cleanup: include/: move misc/dlist.h to sys/dlist.h
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
5eb90ec169 cleanup: include/: move misc/__assert.h to sys/__assert.h
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
e1e05a2eac cleanup: include/: move atomic.h to sys/atomic.h
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Ioannis Glaropoulos
171272cf31 arch: arm: update thread options flag and CONTROL atomically
Under FP shared registers mode (CONFIG_FP_SHARING=y),
a thread's user_options flag is checked during swap and
during stack fail check. Therefore, in k_float_disable()
we want to ensure that a thread won't be swapped-out with
K_FP_REGS flag cleared but still FP-active (CONTROL.FPCA
being non-zero). To ensure this, we temporarily disable
interrupts.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-27 18:07:03 -07:00
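
A hedged sketch of the irq-locked window described above; the helper name
is hypothetical, and the current-thread check and error handling are
omitted (assumes the usual kernel and CMSIS headers):

    static int fp_disable_current(struct k_thread *thread)
    {
        unsigned int key = irq_lock();

        /* clear the option flag and deactivate the FP context together,
         * so the thread cannot be swapped out in between
         */
        thread->base.user_options &= ~K_FP_REGS;
        __set_CONTROL(__get_CONTROL() & ~CONTROL_FPCA_Msk);

        irq_unlock(key);
        return 0;
    }
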
Ioannis Glaropoulos
f70093afb8 arch: arm: rework stack fail checking for FP capable threads
This commit reworks the ARM stack fail checking, under FP
Sharing registers mode, to account for the right width of
the MPU stack guard.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-27 18:07:03 -07:00
Ioannis Glaropoulos
360ad9e277 arch: arm: mpu: program a wide MPU stack guard for FP capable threads
For threads that appear to be FP-capable (i.e. with K_FP_REGS
option flag set), we configure a wide MPU stack guard, if we
build with stack protection enabled (CONFIG_MPU_STACK_GUARD=y).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-27 18:07:03 -07:00
Ioannis Glaropoulos
6a9b3f5ddd arch: arm: allocate a wide priv stack guard for FP-capable threads
When an FP capable thread (i.e. with K_FP_REGS option)
transitions into user mode, we want to allocate a wider
MPU stack guard region, to be able to successfully detect
overflows of the privilege stack during system calls. For
that we also need to re-adjust the .priv_stack_start pointer,
which denotes the start of the writable area of the privilege
stack buffer.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-27 18:07:03 -07:00
Ioannis Glaropoulos
1ef7a858a0 arch: arm: allocate a wide stack guard for FP-capable threads
When an FP capable thread is created (i.e. with K_FP_REGS
option) we want to allocate a wider MPU stack guard region,
to be able to successfully detect stack overflows. For that
we also need to re-adjust the values that will be passed to
the thread's stack_info .start and .size parameters.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-27 18:07:03 -07:00
Daniel Leung
1c5fa6a128 cmake: use sdk-ng built toolchain for x86_64
This adds the necessary bits to utilize the x86_64 toolchain
built by sdk-ng for x86_64 when toolchain variant is either
zephyr or xtools. This allows decoupling the builds from
the host toolchain.

Newlib is also available with this toolchain so remove
the Kconfig restriction on CONFIG_NEWLIB_LIBC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-06-27 16:08:32 -04:00
Daniel Leung
06a3735754 x86_64: minimally preparing for enabling newlib
The libc hooks for Newlib require CONFIG_SRAM_SIZE and
the symbol "_end" at the end of memory. This is in preparation
for enabling Newlib for x86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-06-27 16:08:32 -04:00
Anas Nashif
94cb13fffe arc: logging: fix logging expression
Fix log expressions to use %lx instead of %x for uintptr_t variables.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-26 09:51:14 -04:00
Nicolas Pitre
f32330b22c stdint.h: streamline type definitions
Compilers (at least gcc and clang) already provide definitions to
create standard types and their range. For example, __INT16_TYPE__ is
normally defined as a short to be used with the int16_t typedef, and
__INT16_MAX__ is defined as 32767. So it makes sense to rely on them
rather than hardcoding our own, especially for the fast types where
the compiler itself knows what basic type is best.

Using compiler provided definitions makes even more sense when dealing
with 64-bit targets where some types such as intptr_t and size_t must
have a different size and range. Those definitions are then adjusted
by the compiler directly.

However there are two cases for which we should override those
definitions:

* The __INT32_TYPE__ definition on 32-bit targets varies between an int
  and a long int depending on the architecture and configuration.
  Notably, all compilers shipped with the Zephyr SDK, except for the
  i586-zephyr-elfiamcu variant, define __INT32_TYPE__ to a long int.
  Whereas, all Linux configurations for gcc, both 32-bit and 64-bit,
  always define __INT32_TYPE__ as an int. Having variability here is
  not welcome as pointers to a long int and to an int are not deemed
  compatible by the compiler, and printing an int32_t defined with a
  long using %d makes the compiler complain, even if they're the
  same size on 32-bit targets. Given that an int is always 32 bits
  on all targets we might care about, and given that Zephyr hardcoded
  int32_t to an int before, then we just redefine __INT32_TYPE__ and
  derivatives to an int to keep the peace in the code.

* The confusion also exists with __INTPTR_TYPE__. Looking again at the
  Zephyr SDK, it is defined as an int, even when __INT32_TYPE__ is
  initially a long int. One notable exception is i586-zephyr-elf where
  __INTPTR_TYPE__ is a long int even when using -m32. On 64-bit targets
  this is always a long int. So let's redefine __INTPTR_TYPE__ to always
  be a long int on Zephyr which simplifies the code, works for both
  32-bit and 64-bit targets, and mimics what the Linux kernel does.
  Only a few print format strings needed adjustment.

In those two cases, there is a safeguard to ensure the type we're
enforcing has the right size and fail the build otherwise.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-25 23:29:22 -04:00
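
A hedged sketch of the override-and-verify pattern described above; this
is illustrative and not the actual header contents:

    /* override the compiler-provided macros where they are inconsistent,
     * then verify the enforced types really have the expected size
     */
    #undef  __INT32_TYPE__
    #define __INT32_TYPE__   int

    #undef  __INTPTR_TYPE__
    #define __INTPTR_TYPE__  long int

    typedef __INT32_TYPE__   int32_t;
    typedef __INTPTR_TYPE__  intptr_t;

    _Static_assert(sizeof(int32_t) == 4, "int32_t must be 32 bits wide");
    _Static_assert(sizeof(intptr_t) == sizeof(void *),
                   "intptr_t must be able to hold a pointer");
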
Charles E. Youse
a506aa3dfb arch/x86: remove CONFIG_X86_FIXED_IRQ_MAPPING support
This was only enabled by the MVIC, which in turn was only used
by the Quark D2000, which has been removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-25 08:06:43 -04:00
Charles E. Youse
3dc7c7a6ea drivers/interrupt_controller/mvic.c: remove MVIC interrupt controller
The Quark D2000 is the only x86 with an MVIC, and since support for
it has been dropped, the interrupt controller is orphaned. Removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-25 08:06:43 -04:00
Ioannis Glaropoulos
639eb76729 arch: arm: make priv stack guard programming similar to normal guard
This commit aligns the programming of the privileged stack MPU
guard with that of the default stack guard (i.e of supervisor
threads). In particular:
- the guard is programmed BELOW the address indicated in
  arch.priv_stack_start; it is, therefore, similar to the
  default guard that is programmed BELOW stack_info.start.
  An ASSERT is added to confirm that the guard is programmed
  inside the thread privilege stack area.
- the stack fail check is updated accordingly
- arch.priv_stack_start is adjusted in arch_userspace_enter(),
  to make sure we account for a (possible) guard requirement,
  that is, if building with CONFIG_MPU_STACK_GUARD=y.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-24 10:16:57 -07:00
Ioannis Glaropoulos
e0db39447b arch: arm: re-organize thread stack macro defines in arch.h
This commit re-organizes the macro definitions in arch.h for
the ARM architecture. In particular, the commit:
- defines the minimum alignment requirement for thread stacks,
  that is, excluding alignment requirement for (possible)
  MPU stack guards.
- defines convenience macros for the MPU stack guard align and
  size for threads using the FP services under Shared registers
  mode (CONFIG_FP_SHARING=y). For that, a hidden Kconfig option
  is defined in arch/arm/core/cortex_m/mpu/Kconfig.
- enforces stack alignment with a wide MPU stack guard (128
  bytes) under CONFIG_FP_SHARING=y for the ARMv7-M architecture,
  whose MPU requires the region start address to be aligned to the
  region size, which must be a power of two.

The commit does not change the amount of stack that is reserved
with K_THREAD_STACK_DEFINE; it only determines the stack buffer
alignment as explained above.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-24 10:16:57 -07:00
Charles E. Youse
ef736f77c2 arch/x86: relocate and rename SYS_X86_RST_* constants
These constants do not need global exposure, as they're only
referenced in the reboot API implementation. Also their names
are trimmed to fit into the X86-arch-specific namespace.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00
Charles E. Youse
4bdbd879ef arch/x86: remove old PRINTK() debugging macro
This appears to date all the way back to the initial import
and is used in exactly one place if DEBUG is on. Removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00
Charles E. Youse
2835c22985 arch/x86: used fixed initial EFLAGS on thread creation
Previously the existing EFLAGS was used as a base which was
then manipulated accordingly. This is unnecessary as the bits
preserved contain no useful state related to the new thread.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00