This marks code and data within x86/ia32 so that they reside in
the boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel code and data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is exactly one function being defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning of
the text section. Since no one else is using it, remove
TEXT_START to simplify things.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The AT instruction gives the corresponding physical address
directly. This is much faster than the default implementation.
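A minimal sketch of the idea, assuming an AArch64 stage-1 EL1 read
translation (locking and the full PAR_EL1 field handling are
simplified here):

    int arch_page_phys_get(void *virt, uintptr_t *phys)
    {
            uint64_t par;

            /* ask the MMU to translate the address for us */
            __asm__ volatile ("at S1E1R, %0" : : "r" (virt));
            __asm__ volatile ("isb");
            __asm__ volatile ("mrs %0, PAR_EL1" : "=r" (par));

            if (par & 1) {
                    return -EFAULT; /* bit 0 set: translation aborted */
            }
            if (phys != NULL) {
                    /* PA is in bits 47:12; keep the page offset */
                    *phys = (par & 0x0000fffffffff000ULL) |
                            ((uintptr_t)virt & 0xfff);
            }
            return 0;
    }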
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level.
This is not always the case: many SoCs rely on external cache
controllers implemented as peripherals outside the core (for
example the PL310 cache controller and the L2Cxxx family). In
some cases you also want a single driver to control a whole set
of cache controllers.

Rework the cache code to introduce support for external cache
controllers.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For ARCv3 the register is fixed to r30, so we don't need to
configure it at compile time.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Increase the stack sizes required for the ARCv3 64-bit CI to pass.
The CMSIS stacks are for the programs in samples/portability.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
ARCv3 64-bit processors don't have the Zero Delay Loop
(also called Zero Overhead Loop, ZOL) mechanism. Add a Kconfig
option to drop the ZOL register save/restore so the code
can be built for both ARCv2 and ARCv3.
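As an illustrative sketch (CONFIG_ARC_HAS_ZOL and the field names
mirror Zephyr's ARC port but should be treated as assumptions),
the saved-register layout becomes conditional:

    #include <stdint.h>

    struct _irq_stack_frame_sketch {
            /* ... general-purpose registers ... */
    #ifdef CONFIG_ARC_HAS_ZOL
            /* ZOL registers: present on ARCv2, absent on ARCv3 */
            uintptr_t lp_start;
            uintptr_t lp_end;
            uint32_t lp_count;
    #endif
            /* ... status registers ... */
    };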
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
ARCv2 32-bit and ARCv3 64-bit share the same vector table
structure but with different vector entry sizes (32 and 64 bit),
so we can easily make the vector table bit-width agnostic.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Make the variables in which we store CPU register values and
memory addresses bit-width agnostic.
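For example, in diff form (the variable name is illustrative;
z_arc_v2_aux_reg_read() is Zephyr's ARC aux-register helper):

    -       uint32_t fault_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
    +       uintptr_t fault_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);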
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Mark the places where we intentionally use st instead of STR in
code common to ARCv2 and ARCv3.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
When accessing a bloated structure member we can exceed the u9
offset operand of the store instruction, so use the
_st32_huge_offset macro instead for 32-bit accesses.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Rewrite the ARC assembler code with asm-compat macros, so the
same code can be used for both ARCv2 (GNU and MWDT assemblers)
and ARCv3 (GNU assembler).
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Reuse the ARCv2 headers for ARCv3 where possible.
In this commit we simply allow them to be used for ARCv3; we'll
move them to the proper folder and rename them where required
in the upcoming cleanup patch.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Do basic preparations for building code for ARCv3 HS6x:
* add ISA_ARCV3 and CPU_HS6X config options
* add off_t type support for __ARC64__
* use the elf64-littlearc format for linking
* use the arc64 mcpu for CPU_HS6X
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.
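Usage sketch (virt_ptr stands for any mapped virtual address):

    uintptr_t phys;

    if (arch_page_phys_get(virt_ptr, &phys) == 0) {
            /* phys now holds the physical address backing virt_ptr */
    }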
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and then reflects the
actual number of page frames added to the free page frame list.
Manipulating z_free_page_count before this point messes up the
accounting, so remove the code that decrements z_free_page_count
in arch_reserved_pages_update() under x86.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
On RISC-V 64-bit, GCC complains about an undefined reference
to 'ffs' via __builtin_ffs(). So implement it in a brute-force
way. Once the toolchain provides __builtin_ffs(), this can be
reverted.
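A sketch of the brute-force approach; ffs() returns the 1-based
index of the least significant set bit, or 0 if no bit is set:

    int ffs(int value)
    {
            unsigned int v = (unsigned int)value;

            for (unsigned int i = 0; i < 32; i++) {
                    if (v & (1U << i)) {
                            return i + 1;
                    }
            }
            return 0;
    }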
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There was a restriction that KERNEL_VM_OFFSET must equal
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) could be reused. This is not very practical in
the real world due to various hardware designs, especially those
where SRAM is not aligned to the PDP or PD. So rework those bits.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove the config BOOT_TIME_MEASUREMENT and the corresponding
#ifdef'd code throughout (kernel/init.c, idle.c, core/common.S,
reset.S, ...) which held the extern hooks for z_timestamp_main
and z_timestamp_idle used by the removed boot_time test suite.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
This adds code to swap_helper.S which does special handling of LR
when the interrupt came from the secure state. The LR value is
stored to memory, and put back into LR when swapping back to the
relevant thread.

Also, add special handling of the FP state when switching from
secure to non-secure: since we don't know whether the original
non-secure thread (which called a secure service) was using FP
registers, we always store them, just in case.
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Introduce a Kconfig option to allow Secure function calls to be
pre-empted.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
We can use build-time offsets from a struct k_thread pointer directly
to struct _callee_saved members. No need to compute that at run time.
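A sketch of the mechanism, assuming Zephyr's gen_offset.h helpers
(the symbol and field names are illustrative):

    #include <stddef.h>
    #include <gen_offset.h>
    #include <kernel_structs.h>

    /* Emit an absolute symbol at build time; the context switch
     * assembly can then use it as a literal offset from the
     * struct k_thread pointer. */
    GEN_ABSOLUTE_SYM(_thread_offset_to_callee_saved_sp,
                     offsetof(struct k_thread, callee_saved.sp));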
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Set up the static MPU regions before the PRE_KERNEL_1 and
PRE_KERNEL_2 functions are invoked. This will set up the MPU
for SRAM regions in case code relocated to SRAM is invoked
from any of these functions.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
Code relocated using CONFIG_CODE_DATA_RELOCATION_SRAM should
be allowed to execute from SRAM.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
1. This will help us identify whether the relocation is to
SRAM, which is used when setting up the MPU entry
for the SRAM region where code is relocated.
2. Move the CODE_DATA_RELOCATION configs to the ARM-specific
folder.
Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
Both z_arm64_exit_exc and z_arm64_exit_exc_fpu_done must be within
the same section, as execution falls through here.
If z_arm64_exit_exc_fpu_done creates a section of its own then the
linker is free to move the two apart, and we absolutely don't want
that.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
When secondary cores are booted, they use the dummy thread and
the IRQ stack until they switch over to a real thread. Therefore
dummy threads shouldn't be skipped when cohering the outgoing
thread's stack; only threads with a zero stack size should be
skipped.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
This adds FPU sharing support with a lazy context switching
algorithm.

Every thread is allowed to use FPU/SIMD registers. In fact, the
compiler may insert FPU register accesses in any context to
optimize even non-FP code, unless the -mgeneral-regs-only compiler
flag is used, but Zephyr currently doesn't support such a build.

It is therefore possible with this patch to do FP accesses in ISRs
as well, although IRQs are then disabled to prevent nested IRQs in
such cases.

Because the thread object grows in size, some tests have to be
adjusted.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add the exception depth count to tpidrro_el0 and make it available
through the arch_exception_depth() accessor.

The IN_EL0 flag is now updated unconditionally, even if userspace
is not configured. Doing otherwise made the code rather hairy and
I doubt the overhead is measurable.
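A sketch of the accessor (the mask and shift names are
hypothetical; the actual bit layout within tpidrro_el0 is defined
by the port):

    static inline unsigned int arch_exception_depth(void)
    {
            uint64_t tpidrro;

            __asm__ volatile ("mrs %0, tpidrro_el0" : "=r" (tpidrro));

            return (unsigned int)((tpidrro & TPIDRROEL0_EXC_DEPTH_MASK)
                                  >> TPIDRROEL0_EXC_DEPTH_SHIFT);
    }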
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
CONFIG_FPU: The architecture dependency list is redundant.
Having CPU_HAS_FPU selected by those archs as a dependency
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the
safe side here, but as a compromise for not affecting existing
configs, let's move the default selection local to those configs
that care, again to avoid a growing list of conditionals here.
Adjust the help text, which applies to more than just Cortex-M.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add data barriers before and after the dcache flush or clean,
and restore to data cache level 0 after all ops.
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
Moved all assembly code to C code. Fixed arch_dcache_line_size_get()
to get the dcache line size by using "4 << dminline", without
considering CWG, per the sample code in the Cortex-A series
programmer's guide for ARMv8-A.
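The line-size derivation in C form (a sketch; CTR_EL0.DminLine,
bits 19:16, encodes log2 of the line size in 4-byte words):

    size_t arch_dcache_line_size_get(void)
    {
            uint64_t ctr_el0;
            uint32_t dminline;

            __asm__ volatile ("mrs %0, ctr_el0" : "=r" (ctr_el0));
            dminline = (ctr_el0 >> 16) & 0xf;

            /* 4-byte words, so line size in bytes is 4 << DminLine */
            return 4 << dminline;
    }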
Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
This adds the bits to the gen_mmu.py script so that extra mappings
can be added with caching disabled. This is useful for mapping
MMIO regions where caching is not desired.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is a possibility that the DWT frequency calculation is
divided by zero. So this fixes the issue by repeatedly trying to
get the delta clock cycles and delta DWT cycles until both are
nonzero.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is a possibility that the TSC frequency calculation is
divided by zero. So this fixes the issue by repeatedly trying to
get the delta clock cycles and delta TSC cycles until both are
nonzero.
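A sketch of the retry pattern (read_tsc() and the busy-wait are
placeholders, not the actual helpers used):

    uint64_t dcyc, dtsc;

    do {
            uint32_t cyc_start = k_cycle_get_32();
            uint64_t tsc_start = read_tsc();        /* placeholder */

            k_busy_wait(10);                        /* placeholder delay */

            dcyc = k_cycle_get_32() - cyc_start;
            dtsc = read_tsc() - tsc_start;
    } while ((dcyc == 0) || (dtsc == 0));

    /* the frequency ratio dtsc / dcyc is now safe to compute */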
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Rephrase away from 'ain't', which is informal, uncommon, and can
be viewed as substandard or slang.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
bus_fault() and hard_fault() were missing the final else statement
in their if ... else if constructs. This commit adds a non-empty
else {} to comply with coding guideline 15.7.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
z_arm_debug_monitor_event_error_check() was missing the final
else statement in its if ... else if construct, violating
guideline 15.7. This commit removes the else if for symmetry in
the limited early-exit conditions, rather than adding an empty
final else {}, to comply.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
The macro DT_FOREACH_CHILD iterates over all child nodes, ignoring
the status property. This patch changes the code to use
DT_FOREACH_CHILD_STATUS_OKAY, which iterates only over the enabled
child nodes, to avoid trying to bring up disabled cores.
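Usage sketch (soc_start_core() is a hypothetical helper, not the
actual function):

    /* expands once per enabled child of /cpus; children with
     * status = "disabled" are skipped */
    #define CPU_BOOT(node_id) soc_start_core(DT_REG_ADDR(node_id));

    DT_FOREACH_CHILD_STATUS_OKAY(DT_PATH(cpus), CPU_BOOT)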
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Change the loading of the MPID for secondary cores to add the
offset macro BOOT_PARAM_MPID_OFFSET.

Currently the code loads the MPID for secondary cores from offset
0x0 of the struct arm64_cpu_boot_params. This works only because
the macro BOOT_PARAM_MPID_OFFSET currently has the value 0x0; if
the location of the "mpid" member changes, SMP booting can fail
and the build assert won't throw any warning.
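A sketch of the kind of guard that would catch such a mismatch
(placement of the assert is illustrative):

    BUILD_ASSERT(offsetof(struct arm64_cpu_boot_params, mpid) ==
                 BOOT_PARAM_MPID_OFFSET,
                 "BOOT_PARAM_MPID_OFFSET out of sync with arm64_cpu_boot_params");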
Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
Due to the use of gperf to generate the hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf-generated data needs
to be placed at the end of memory to avoid pushing symbols
around. This prevents moving these generated blocks to earlier
sections, for example the pinned data section needed for demand
paging. So create placeholders for use in intermediate linking
to reserve space for these generated blocks. Due to uncertainty
about the size of these blocks, more space is reserved than may
be needed, which could result in wasted space. Still, this
retains the use of the hash table for faster lookup.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>