Commit graph

5846 commits

Author SHA1 Message Date
Daniel Leung
2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so they will reside in
the boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel code and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
45d1fc9cc2 x86: gen_mmu: add support for boot and pinned regions
Both boot and pinned regions need to be mapped and permissions
set correctly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
af49ec0277 linker: remove TEXT_START macro
There is exactly one function defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning
of the text section. Since nothing else uses it, remove
TEXT_START to simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Nicolas Pitre
5f6e257b0b arm64: provide an optimized arch_page_phys_get()
The AT instruction gives the corresponding physical address directly.
Much faster than the default implementation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-08 17:06:58 -04:00
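A minimal sketch of the AT-based lookup idea in the commit above (hypothetical helper name, not the actual Zephyr implementation; the PAR_EL1 field layout shown is the architectural one for a successful stage-1 translation):

```c
#include <stdint.h>

/* Sketch only: "at s1e1r" asks the MMU for a stage-1 EL1 read
 * translation of the given virtual address; the result is reported
 * in PAR_EL1. Bit 0 (F) set means the translation faulted, otherwise
 * bits [47:12] hold the physical page address.
 */
static int page_phys_get_sketch(void *virt, uintptr_t *phys)
{
	uint64_t par;

	__asm__ volatile("at s1e1r, %0" : : "r" (virt) : "memory");
	__asm__ volatile("isb");
	__asm__ volatile("mrs %0, PAR_EL1" : "=r" (par));

	if (par & 1ULL) {
		return -1;	/* translation aborted */
	}

	*phys = (uintptr_t)(par & 0x0000fffffffff000ULL) |
		((uintptr_t)virt & 0xfff);
	return 0;
}
```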
Carlo Caione
f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione
e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is
not always the case: many SoCs rely on external cache controllers
implemented as a peripheral external to the core (for example the PL310
cache controller and the L2Cxxx family). In some cases you also want a
single driver to control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
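One way to picture the reworked dispatch described above, as an illustration only (the Kconfig symbol and all function names below are made up for the example; this is not the actual Zephyr API):

```c
#include <stddef.h>

/* Hypothetical names: with an external controller configured, the
 * generic cache call is routed to a driver rather than to the on-core
 * (arch) implementation.
 */
void external_cache_flush_range(void *addr, size_t size);	/* driver-provided */
void arch_data_cache_flush_range(void *addr, size_t size);	/* on-core */

void data_cache_flush_range(void *addr, size_t size)
{
#if defined(CONFIG_CACHE_EXTERNAL_CONTROLLER)	/* assumed option name */
	external_cache_flush_range(addr, size);
#else
	arch_data_cache_flush_range(addr, size);
#endif
}
```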
Evgeniy Paltsev
93bf5f58e7 ARC: add TLS support for ARCv3
For ARCv3 the TLS pointer register is fixed to r30, so we don't
need to configure it at compile time.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9a3d925860 ARC: boost default stacks in case of 64BIT
Increase the default stack sizes required for the ARCv3 64-bit CI
to pass. The CMSIS stack sizes are for the programs in
samples/portability.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8048b14135 ARC: allow to build code for processors without ZOL
ARCv3 64-bit processors don't have the Zero Delay Loop
(also called Zero Overhead Loop, ZOL) mechanism. Add a Kconfig
option to remove the ZOL register save/restore so the code
can be built for both ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
3f12ca57b8 ARC: make vector table bit agnostic
ARCv2 (32-bit) and ARCv3 (64-bit) share the same vector table
structure but with different vector entry sizes (32 and 64 bit),
so we can easily make the vector table bitness-agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
0d859796be ARC: make variables with regs and addresses bit agnostic
Make the variables where we store CPU register values and
memory addresses bitness-agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
ab17a59ba5 ARC: mark accesses which are 32 bit regardless of platform bitness
Mark the places where we intentionally use st instead of STR in
code common to ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9d309d300a ARC: workaround bloated structure access in ASM with _st_huge_offset
When accessing a bloated structure member we can exceed the u9
operand limit of the store instruction, so use the
_st32_huge_offset macro instead for 32-bit accesses.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
c2b61dfe72 ARC: rewrite ASM code with asm-compat macros
Rewrite the ARC assembler code with asm-compat macros, so the same
code can be used for both ARCv2 (GNU and MWDT assemblers) and
ARCv3 (GNU assembler).

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8cb122ea5d ARC: reuse headers for both ARCv2 and ARCv3 if possible
Reuse ARCv2 headers [where possible] for ARCv3.
In this commit we simply allow them to be used for ARCv3; we'll
move them to the proper folder and rename them [where required]
in an upcoming cleanup patch.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
6afe7c5fd2 ARC: prepare for building for ARCv3 HS6x
Do basic preparations for building code for ARCv3 HS6x
* add ISA_ARCV3 and CPU_HS6X config options
* add off_t type support for __ARC64__
* use elf64-littlearc format for linking
* use arc64 mcpu for CPU_HS6X

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Daniel Leung
18aad13d76 x86: mmu: implement arch_page_phys_get()
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
786cf641dc x86: mmu: implement arch_mem_unmap()
This implements arch_mem_unmap() as counterpart to
arch_mem_map().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
c481fd412e x86: mmu: don't decrement z_free_page_count in reserving code
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
783b20712e arch: implement brute force find_lsb_set()
On RISC-V 64-bit, GCC complains about undefined reference
to 'ffs' via __builtin_ffs(). So implement a brute force
way to do it. Once the toolchain has __builtin_ffs(),
this can be reverted.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
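A minimal sketch of such a brute-force implementation (illustrative only, following the usual ffs() convention of a 1-based result and 0 for a zero operand):

```c
#include <stdint.h>

static inline unsigned int find_lsb_set_sketch(uint32_t op)
{
	if (op == 0U) {
		return 0;	/* no bit set */
	}

	for (unsigned int bit = 0U; bit < 32U; bit++) {
		if (op & (1U << bit)) {
			return bit + 1U;	/* 1-based position */
		}
	}

	return 0;	/* not reached */
}
```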
Bradley Bolen
95a7e71661 arch: arm: aarch32: Move mpu code up a level
Move the mpu code to the common aarch32 directory in preparation for
Cortex-R mpu support

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-05-06 19:39:09 +02:00
Daniel Leung
37672958ac x86: mmu: relax KERNEL_VM_OFFSET == SRAM_OFFSET
There was a restriction that KERNEL_VM_OFFSET must be equal to
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in the
real world due to various hardware designs, especially those
where SRAM is not aligned to the PDP or PD. So rework those bits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-05 19:42:25 -04:00
Gerard Marull-Paretas
280ca7a632 arch: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams
ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the BOOT_TIME_MEASUREMENT config and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
used by the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Øyvind Rønningstad
a2cfb8431d arch: arm: Add code for swapping threads between secure and non-secure
This adds code to swap_helper.S which does special handling of LR when
the interrupt came from the Secure state. The LR value is stored to
memory and put back into LR when swapping back to the relevant thread.

Also, add special handling of FP state when switching from Secure to
Non-Secure: since we don't know whether the original non-secure thread
(which called a secure service) was using FP registers, we always
store them, just in case.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Ioannis Glaropoulos
ad808354d2 arch: arm: Add config for non-blocking secure calls
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-05-05 13:00:31 +02:00
Nicolas Pitre
76494f8589 arm64: optimize offsets in z_arm64_context_switch
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-04 22:41:32 -04:00
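The underlying idea, as a hedged sketch (the struct layout and constant name here are illustrative, not copied from the tree): a small C file is compiled only so that offsetof() values can be extracted and fed to the assembler as immediate displacements from the struct k_thread pointer.

```c
#include <stddef.h>

struct callee_saved_sketch {
	unsigned long x19, x20;
	/* ... remaining callee-saved registers ... */
	unsigned long sp;
};

struct k_thread_sketch {
	unsigned int flags;
	/* ... other members ... */
	struct callee_saved_sketch callee_saved;
};

/* With a constant like this available at build time, the context
 * switch code can use it directly, e.g.
 *     ldp x19, x20, [x0, #THREAD_OFFSET_TO_CALLEE_SAVED]
 * instead of computing the member address at run time.
 */
enum {
	THREAD_OFFSET_TO_CALLEE_SAVED =
		offsetof(struct k_thread_sketch, callee_saved),
};
```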
Mahesh Mahadevan
d6b50233ac arch: arm: Setup Static MPU regions earlier in boot flow
Set up the static MPU regions before the PRE_KERNEL_1 and
PRE_KERNEL_2 functions are invoked. This will set up
the MPU for SRAM regions in case code relocated to SRAM
is invoked from any of these functions.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan
1b36c6c00e arch: arm: Create a MPU entry for relocated code
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should
be allowed to execute from SRAM

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Mahesh Mahadevan
64e973fdcd Kconfig: Add a new config CODE_DATA_RELOCATION_SRAM
1. This helps identify whether the relocation targets SRAM,
which is used when setting up the MPU entry for the SRAM
region where the code is relocated.
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific
folder.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2021-05-04 15:46:52 +02:00
Nicolas Pitre
35c9ed6a4b arm64: don't create a section for z_arm64_exit_exc_fpu_done
Both z_arm64_exit_exc and z_arm64_exit_exc_fpu_done must be within
the same section as execution falls through here.

If z_arm64_exit_exc_fpu_done creates a section of its own then the
linker is free to place the two apart, and we absolutely don't want that.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 19:56:26 -04:00
Guennadi Liakhovetski
29abc8adc0 xtensa: fix booting secondary cores on the dummy thread
When secondary cores are booted, they use the dummy thread and
the IRQ stack until they switch over to a real thread. Therefore
dummy threads shouldn't be skipped when cohering the outgoing
thread's stack; only threads with a zero stack size should be skipped.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Nicolas Pitre
f1f63dda17 arm64: FPU context switching support
This adds FPU sharing support with a lazy context switching algorithm.

Every thread is allowed to use FPU/SIMD registers. In fact, the compiler
may insert FPU register accesses in any context to optimize even non-FP
code, unless the -mgeneral-regs-only compiler flag is used, but Zephyr
currently doesn't support such a build.

It is therefore possible to do FP accesses in ISRs as well with this
patch, although IRQs are then disabled to prevent nested IRQs in such
cases.

Because the thread object grows in size, some tests have to be adjusted.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
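Conceptually, lazy switching looks something like the sketch below (hypothetical types and helpers; the real arm64 code differs in detail): the context switch merely disables FPU access, and registers are only saved/restored when a thread actually traps on its first FPU use.

```c
#include <stddef.h>

struct fpu_ctx_sketch {
	unsigned long long q[64];	/* space for 32 x 128-bit V regs */
};

struct thread_sketch {
	struct fpu_ctx_sketch fpu;
	/* ... */
};

/* Hypothetical low-level helpers (e.g. toggling CPACR_EL1.FPEN). */
void fpu_access_disable(void);
void fpu_access_enable(void);
void fpu_save(struct fpu_ctx_sketch *ctx);
void fpu_restore(struct fpu_ctx_sketch *ctx);

static struct thread_sketch *fpu_owner;	/* per-CPU in a real system */

void on_context_switch_sketch(void)
{
	/* Don't touch FPU registers here; just arm the access trap. */
	fpu_access_disable();
}

void on_fpu_trap_sketch(struct thread_sketch *current)
{
	fpu_access_enable();

	if (fpu_owner == current) {
		return;			/* registers already belong to us */
	}

	if (fpu_owner != NULL) {
		fpu_save(&fpu_owner->fpu);	/* spill previous owner */
	}
	fpu_restore(&current->fpu);		/* load our saved context */
	fpu_owner = current;
}
```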
Nicolas Pitre
a82fff04ff arm64: implement exception depth count
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.

The IN_EL0 flag is now updated unconditionally even if userspace is
not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
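A hedged illustration of what such an accessor can look like (the bit position of the depth field is assumed here, not taken from the source):

```c
#include <stdint.h>

static inline unsigned int arch_exception_depth_sketch(void)
{
	uint64_t val;

	/* tpidrro_el0 is read-only from EL0, so the kernel can stash
	 * per-CPU state such as an exception nesting counter in it.
	 */
	__asm__ volatile("mrs %0, tpidrro_el0" : "=r" (val));

	return (unsigned int)((val >> 8) & 0xff);	/* assumed field: bits [15:8] */
}
```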
Nicolas Pitre
949ef7c660 Kconfig: clean up FPU and FPU_SHARING entries
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU selected by those archs as a dependency
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise for not affecting existing configs, let's
move the default selection local to those configs that care, again to
avoid a growing list of conditionals here. Also adjust the help text,
which applies to more than just Cortex-M.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
Jiafei Pan
4a87c08606 arm64: cache: fix arch_dcache_all()
Add a data barrier before and after the dcache flush or clean,
and restore the selection to data cache level 0 after all ops.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
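A sketch of the barrier placement being described (the walker function is hypothetical; only the ordering is the point):

```c
/* Hypothetical by-set/way walker covering all cache levels. */
void dcache_all_by_set_way(void);

static inline void dcache_all_sketch(void)
{
	__asm__ volatile("dsb sy" ::: "memory");	/* order prior accesses */

	dcache_all_by_set_way();

	/* Select data cache level 0 again in CSSELR_EL1 after the walk. */
	__asm__ volatile("msr csselr_el1, xzr");

	__asm__ volatile("dsb sy" ::: "memory");	/* complete maintenance */
	__asm__ volatile("isb");
}
```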
Jiafei Pan
12b9b5aacc arm64: cache: refine arch_dcache_range()
Moved all assembly code to C code. Fixed arch_dcache_line_size_get()
to get the dcache line size using "4 << dminline", without considering
CWG, following the sample code in the Cortex-A (ARMv8-A) programmer's
guide.

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2021-05-03 11:55:52 +02:00
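The computation mentioned above, as a small sketch: CTR_EL0.DminLine (bits [19:16]) encodes log2 of the smallest data cache line in words, so the size in bytes is 4 << DminLine.

```c
#include <stddef.h>
#include <stdint.h>

static inline size_t dcache_line_size_sketch(void)
{
	uint64_t ctr;

	__asm__ volatile("mrs %0, ctr_el0" : "=r" (ctr));

	/* DminLine is log2(words per line); 4 bytes per word. */
	return (size_t)4 << ((ctr >> 16) & 0xf);
}
```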
Daniel Leung
54283efcce x86: mmu: allow page table extra mappings to have cache disabled
This adds the bits to the gen_mmu.py script so that extra mappings
can be added with caching disabled. This is useful for mapping
MMIO regions where caching is not desired.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 21:17:24 -04:00
Daniel Leung
43f0726985 arm: aarch32: timing: fix potential divide by zero if DWT
There is a possibility that the DWT frequency calculation
divides by zero. Fix the issue by repeatedly sampling the
delta clock cycles and delta DWT cycles until both are
non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
Daniel Leung
d6cbdace78 x86: timing: fix potential divide by zero
There is a possibility that the TSC frequency calculation
divides by zero. Fix the issue by repeatedly sampling the
delta clock cycles and delta TSC cycles until both are
non-zero.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 16:49:17 -04:00
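The retry pattern used by both of these timing fixes, sketched with hypothetical helper names:

```c
#include <stdint.h>

/* Hypothetical counter/clock accessors for the sketch. */
uint64_t sys_clock_cycles(void);	/* system clock cycle counter */
uint64_t hw_cycles(void);		/* DWT or TSC cycle counter */
void short_busy_wait(void);

static uint64_t hw_freq_sketch(uint64_t sys_clock_freq)
{
	uint64_t dclk, dhw;

	do {
		uint64_t c0 = sys_clock_cycles();
		uint64_t h0 = hw_cycles();

		short_busy_wait();

		dclk = sys_clock_cycles() - c0;
		dhw = hw_cycles() - h0;
	} while (dclk == 0U || dhw == 0U);	/* re-sample until both are non-zero */

	return (dhw * sys_clock_freq) / dclk;	/* divisor can no longer be zero */
}
```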
Jennifer Williams
3e28a570c2 arch: x86: core: pcie: rephrase use of ain't
Rephrasing away from ain't, which is informal, uncommon, and can
be viewed as substandard or 'slang'.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-29 07:15:50 -04:00
Gerard Marull-Paretas
f163bdb280 power: move reboot functionality to os lib
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-28 20:34:00 -04:00
Jennifer Williams
734c65ad23 arch: arm: core: aarch32: cortex_m: fault: fix if...else ifs
bus_fault() and hard_fault() were missing a final else statement
in their if...else if constructs. This commit adds a non-empty
else {} to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams
a5c27d69b5 arch: arm: core: aarch32: cortex_m: debug: remove if...else if construct
z_arm_debug_monitor_event_error_check() was missing a final
else statement in its if...else if construct and so violated
guideline 15.7. This commit removes the else if, keeping symmetry
in the limited early-exit conditions rather than adding an empty
final else {}, to comply.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Gerard Marull-Paretas
6c7c9e2b99 arch: x86: remove usage of device_pm_control_nop
If device PM is not implemented just use NULL.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Hou Zhiqiang
50d263d138 arm64: Do not try to bring up the cores disabled in DT node
The macro DT_FOREACH_CHILD iterates over all child nodes, ignoring the
status property. This patch changes the code to use
DT_FOREACH_CHILD_STATUS_OKAY, which only iterates over the enabled
child nodes, to avoid trying to bring up disabled cores.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:55 -04:00
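A hedged sketch of the difference (the node path and the start_core() helper are assumed for the example; the DT_FOREACH_CHILD* macros themselves are real devicetree.h APIs):

```c
#include <devicetree.h>

void start_core(unsigned long mpid);	/* hypothetical bring-up helper */

#define CPUS_NODE	DT_PATH(cpus)	/* assumed: the /cpus node */

/* Expanded once per child node id passed by the FOREACH macro. */
#define START_CORE(node_id)	start_core(DT_REG_ADDR(node_id));

void bring_up_cores_sketch(void)
{
	/* DT_FOREACH_CHILD(CPUS_NODE, START_CORE) would also expand for
	 * children whose status is "disabled"; the *_STATUS_OKAY variant
	 * skips them.
	 */
	DT_FOREACH_CHILD_STATUS_OKAY(CPUS_NODE, START_CORE)
}
```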
Hou Zhiqiang
9681034875 arm64: Fix MPID load instruction for secondary cores
Change the code to load the MPID for secondary cores using the
offset macro BOOT_PARAM_MPID_OFFSET.

Currently the code loads the MPID for secondary cores from offset 0x0
of struct arm64_cpu_boot_params. This works only because the macro
BOOT_PARAM_MPID_OFFSET currently has the value 0x0; if the location
of the "mpid" member changes, it can result in SMP boot failures,
and the build assert won't emit any warning.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2021-04-27 13:32:18 -04:00
Daniel Leung
1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate the hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf-generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example the pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty about the size of these blocks, more space
than strictly needed is reserved, which could result in wasted
space; however, this retains the use of the hash table for
faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Morten Priess
a0dd44c5e0 arch: select HAS_DTS for SPARC
With PR#34449, architectures that use DTS must select the HAS_DTS
configuration.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2021-04-26 13:42:10 +02:00