Commit graph

4576 commits

Shih-Wei Teng
d109805cb2 RISC-V: Round up pre-populated stack frame to arch stack alignment
The stack frame size used for context switch is rounded up to 16-byte
alignment. Therefore, we need to round down the pointer to the top of
the pre-populated stack frame so that the preserved stack frame size is
also rounded up to 16-byte alignment.

Fixes #29535

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-06-11 16:13:01 +02:00
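
A minimal sketch of the rounding described above, in C; the macro and
function names are illustrative, not the actual Zephyr code:

    #include <stddef.h>
    #include <stdint.h>

    #define STACK_ALIGN 16U  /* RISC-V psABI stack alignment */

    #define ROUND_UP(x, a)   (((x) + ((a) - 1)) & ~(uintptr_t)((a) - 1))
    #define ROUND_DOWN(x, a) ((x) & ~(uintptr_t)((a) - 1))

    /* Stacks grow down: round the top pointer down, then reserve a
     * frame whose size is rounded up, so the whole pre-populated
     * frame stays 16-byte aligned. */
    uintptr_t prepopulate_frame(uintptr_t stack_top, size_t frame_size)
    {
        uintptr_t top = ROUND_DOWN(stack_top, STACK_ALIGN);

        return top - ROUND_UP(frame_size, STACK_ALIGN);
    }
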
Daniel Leung
253314aabe x86: reduce VM size to 1GB if ACPI is enabled
Since physical memory is no longer wholly identity mapped,
there is no need to set the VM size larger than the
physical memory size. The VM size was 2GB (the max physical
memory size of x86 boards) + 1GB (for memory mappings).
So simply shrink the size to 1GB, as the kernel is small
and we still have a large chunk of space for memory
mapping.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Daniel Leung
39ba281686 x86: acpi: no need to map all physical memory
With ACPI doing dynamic memory mapping and unmapping
to access ACPI tables, there is no need to identity
map all the physical memory anymore. So remove
the "select" statement in ACPI kconfig.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
Daniel Leung
454522430f x86: acpi: use memory mapping/unmapping to access ACPI tables
Instead of accessing ACPI tables through physical addresses, do
memory mapping/unmapping so they can be accessed via virtual
addresses. This allows us to avoid identity mapping all
physical memory, and thus removes the need for a page table
large enough to map everything.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
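
A sketch of the access pattern, assuming the z_phys_map()/z_phys_unmap()
API from sys/mem_manage.h of this era; the helper names and the flag
choice are illustrative:

    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mem_manage.h>

    /* Map an ACPI table at its physical address, hand back a virtual
     * address to use, and drop the mapping again when done. */
    void *acpi_table_map(uintptr_t table_phys, size_t len)
    {
        uint8_t *virt;

        z_phys_map(&virt, table_phys, len, K_MEM_PERM_RW);
        return virt;
    }

    void acpi_table_unmap(void *table_virt, size_t len)
    {
        z_phys_unmap((uint8_t *)table_virt, len);
    }
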
Daniel Leung
a3e817700f x86: acpi: limit search on where EBDA can be
This limits the search for the Extended BIOS Data Area (EBDA) to
the range 0x80000-0x100000, as this is usually where it resides.
If 0000:040e holds an address outside this range, it is probably
invalid and should not be dereferenced, to avoid a segfault.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
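
The check sketched in C: the BDA word at physical 0000:040e holds the
EBDA segment, which becomes a physical address when shifted left by 4
(identity mapping of low memory is assumed here for simplicity, and the
function name is made up):

    #include <stdint.h>
    #include <stddef.h>

    #define EBDA_SEG_PTR ((volatile uint16_t *)0x040eUL)

    static void *ebda_get(void)
    {
        uintptr_t ebda = (uintptr_t)(*EBDA_SEG_PTR) << 4;

        /* The EBDA normally lives between 0x80000 and 0x100000;
         * anything else is likely garbage and unsafe to dereference. */
        if (ebda < 0x80000UL || ebda >= 0x100000UL) {
            return NULL;
        }
        return (void *)ebda;
    }
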
Jeremy Bettis
2de4a902de cmake: Support coverage flags on all archs
Most arches' CMakeLists.txt files contain rules to add compiler and
linker flags for coverage if CONFIG_COVERAGE is enabled, but four of
them were missing this.

Instead, set the coverage flags in arch/common/CMakeLists.txt which
affects all archs.

Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
2021-06-10 18:01:36 -04:00
Maksim Masalski
e96df40004 arch: x86: cast to the same size composite expression
The essential type of the RHS operand (64 bit) is wider than the
essential type of the composite expression in the LHS operand (32 bit):
the LHS entry_val is 32 bit, while the RHS (phys+offset) is 64 bit.
Cast the RHS composite expression to the (pentry_t) type.

Found as a coding guideline violation (MISRA R10.7) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-10 17:17:23 -04:00
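
The shape of the fix; pentry_t and the variable names follow the commit
text, while the surrounding function is made up for illustration:

    #include <stdint.h>

    typedef uint32_t pentry_t;  /* 32-bit page table entry in this mode */

    void entry_set(pentry_t *entry_val, uintptr_t phys, uint64_t offset)
    {
        /* before: *entry_val = phys + offset;  64-bit RHS, 32-bit LHS */
        *entry_val = (pentry_t)(phys + offset); /* explicit cast, R10.7 */
    }
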
Jaxson Han
0c03a0572b arch: arm64: mpu: Fix mpu init assertion fail
During MPU init, we check MSA_frac bits[55:52] and MSA bits[51:48] of
the ID_AA64MMFR0_EL1 register. Currently we only allow 1F to pass the
check, but according to the Armv8-R AArch64 manual [1], both 1F and 2F
indicate that the processor supports an MPU. This commit fixes that.

[1]: https://developer.arm.com/documentation/ddi0600/latest/

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-06-09 23:40:03 -05:00
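
The relaxed check in sketch form, reading MSA_frac (bits [55:52]) and
MSA (bits [51:48]) together as one byte; per the referenced manual both
0x1F and 0x2F mean an MPU is present:

    #include <stdint.h>
    #include <stdbool.h>

    static bool mpu_present(void)
    {
        uint64_t mmfr0;

        __asm__ volatile ("mrs %0, ID_AA64MMFR0_EL1" : "=r" (mmfr0));

        /* combined field: MSA_frac in [7:4], MSA in [3:0] */
        uint8_t msa = (uint8_t)((mmfr0 >> 48) & 0xff);

        return msa == 0x1f || msa == 0x2f;
    }
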
Jaxson Han
cd536ae8ff arch: arm64: Refine the assertion in arch_start_cpu
When SMP is enabled, the primary core calls arch_start_cpu to start
secondary cpus. There is an assertion checking the core mpid to make
sure it is called by the primary core.

But the check is bogus: after the first secondary core is brought up,
arm64_cpu_boot_params.mpid is changed, which makes the assertion fail.

The current solution restores arm64_cpu_boot_params.mpid, but
asserting on arch_curr_cpu()->id == 0 is better.

_current_cpu->id would always fail the assertion inside that macro
(__ASSERT_NO_MSG(!z_smp_cpu_mobile())), so arch_curr_cpu()->id is
used instead.

Signed-off-by: Jaxson Han <jaxson.han@arm.com>
2021-06-09 05:42:00 -05:00
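
The revised assertion in sketch form; the real signature and body of
arch_start_cpu() are elided:

    #include <kernel.h>
    #include <sys/__assert.h>

    void arch_start_cpu(int cpu_num /* , stack, fn, ... */)
    {
        /* Only the primary core may bring up secondaries. Unlike
         * arm64_cpu_boot_params.mpid, this check stays valid after
         * the first secondary core has booted. */
        __ASSERT(arch_curr_cpu()->id == 0,
                 "arch_start_cpu() called from a secondary core");

        /* ... fill in boot params and kick cpu_num ... */
    }
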
Jim Shu
1b4dad433f arch: riscv: enable FPU of threads in unshared FP mode
In unshared FP mode, only one thread can use the FPU, but the kernel
doesn't know which one, so the riscv arch enables the FPU for every
thread.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2021-06-08 11:47:02 -05:00
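
What "enable FPU of each thread" amounts to, sketched against the
mstatus.FS field from the RISC-V privileged spec; the helper itself is
illustrative:

    #define MSTATUS_FS_SHIFT 13
    #define MSTATUS_FS_MASK  (3UL << MSTATUS_FS_SHIFT)
    #define MSTATUS_FS_INIT  (1UL << MSTATUS_FS_SHIFT) /* "Initial" */

    /* Applied to every new thread's saved mstatus: the kernel cannot
     * tell which single thread will touch the FPU, so each thread
     * gets it enabled. */
    unsigned long thread_mstatus_init(unsigned long mstatus)
    {
        return (mstatus & ~MSTATUS_FS_MASK) | MSTATUS_FS_INIT;
    }
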
Øyvind Rønningstad
382bbacb0a tfm: Put saving of FPU context into its own file so it can be reused
This also improves readability.

The new API can be used any time all FP registers must be manually
saved and restored for an operation.

Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
2021-06-07 15:23:22 +02:00
Bradley Bolen
131af7648f arch: arm: cortex_r: Use assembler macros for exceptions
Most of the code for the three exception functions is identical so use
macros to make things easier to read.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-06-04 16:18:01 -05:00
Bradley Bolen
90e76bd891 arch: arm: cortex_r: Use macro for svc call
Use the context switch macro for z_arm_cortex_r_svc to be more clear
about the svc call being executed.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2021-06-04 16:18:01 -05:00
Andy Ross
9cb8dcbf84 arch/x86_64: Use modern CR0 assembly
The 16 bit bootstrap code for SMP CPUs was using the 286-era "lmsw"
instruction (load machine status word) to set the protected bit in CR0
(which is the modern evolution of the same register), presumably
because this is 16 bit code and we can't move a dword into CR0.

But that's wrong, because the full instruction set *is* available in
real mode on a 386, you just have to use an operand-size prefix to get
to it, which the assembler emits for you automatically when you use
the .code16 directive.

Write this conventionally and use modern (e.g. 1986-era) instructions.
It also has the advantage of not confusing much more modern
hypervisors like ACRN by issuing instructions they (and I!) never knew
existed.

Fixes #35076

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Andy Ross
5e9c583c24 arch/x86_64: Terrible, awful hackery to bootstrap entry
Because of a historical misunderstanding, by default the ACRN
hypervisor wants to load Zephyr at address 0x1000 and enter the binary
at that same address.  This entry point corresponds to the __start
symbol of the build they were given, which is a 1-cpu non-SMP
configuration.  Unfortunately, when we build with
CONFIG_MP_NUM_CPUS=1, the code in locore.S #if's out the 16 bit entry
point for the auxiliary CPUs at the start of the section.  So in the
build ACRN received, the start address happened to be 0x7000, the same
address we need to launch the AP processors from.

That's right: under ACRN, the SAME ADDRESS used to enter the OS in 32
bit mode needs to be used later to boot CPUs running in 16 bit real
mode!

The solution, such as it is, is to put a 32 bit jump at the entry
address which hops to the 32 bit OS entry code, and then scribble NOP
instructions over that jump once we get there so that the next time we
reach that address (in real mode) we fall through to the correct
entry.

This patch should be considered a temporary workaround.  While it
works on all x86 hardware, it's not really needed.  A much better
solution would be to eliminate the locore linker region entirely
(which causes other headaches) and enter the Zephyr binary in a 32 bit
address somewhere in the contiguous high memory area.  All that locore
is needed for is the 16 bit bootstrap code for SMP processors, which
is ~6 instructions and can be copied in from the kernel at runtime.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Daniel Leung
dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
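
The promoted hook names, roughly as a backing store outside the kernel
would declare them; the signatures are paraphrased from the demand
paging API of this period, so check the kernel headers for the exact
prototypes:

    #include <stdint.h>
    #include <stdbool.h>

    struct z_page_frame;

    void k_mem_paging_backing_store_init(void);
    int  k_mem_paging_backing_store_location_get(struct z_page_frame *pf,
                                                 uintptr_t *location,
                                                 bool page_fault);
    void k_mem_paging_backing_store_location_free(uintptr_t location);
    void k_mem_paging_backing_store_page_out(uintptr_t location);
    void k_mem_paging_backing_store_page_in(uintptr_t location);
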
Daniel Leung
31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need to be
implemented by the eviction algorithm and the application outside
the kernel. Promote them from z_* so they can be included in
documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Martin Åberg
aa0a90d09c SPARC: add the Flush windows software trap
This commit implements the SPARC V8 ABI "Flush windows" software trap.
It enables support for C++ exceptions and longjmp().

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-05-28 06:32:36 -05:00
Henrik Brix Andersen
2b0a481291 arch: arm: cortex-m: add support for clearing NXP MPU regions at boot
Clear NXP MPU regions at boot if CONFIG_INIT_ARCH_HW_AT_BOOT is
enabled.

Fixes: #34045

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2021-05-26 18:14:03 -05:00
Ioannis Glaropoulos
b3b36f69a6 arm: cortex-m: shrink hidden option for null-pointer detection
Shrink the name of the hidden cortex-m option for the
null-pointer dereference detection feature.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 12:30:05 -05:00
Ioannis Glaropoulos
d105a2b76c arm: shrink names for null-pointer exception detection Kconfigs
Reduce the length of the Kconfig defines related to
null-pointer dereference detection in Cortex-M.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 12:30:05 -05:00
Ioannis Glaropoulos
4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single-thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Watson Zeng
8414e86b42 arch: arc: _reset and _start section fix
SECTION_FUNC allows only one function to reside in a sub-section,
while SECTION_SUBSEC_FUNC allows multiple functions to reside in a
sub-section. We should therefore use SECTION_SUBSEC_FUNC for _reset
and _start.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-26 04:43:06 -05:00
Huifeng Zhang
cb32268fad arch: arm64: Fix assertion failure when MP_NUM_CPUS >= 3
"arm64_cpu_boot_params.mpid" should be reset to "master_core_mpid"
after a secondary CPU core comes up.

"arm64_cpu_boot_params.mpid" is used to check that the next CPU core
to come up has the expected mpid. Because it was not restored to the
primary core's mpid after the expected core came up, the primary
core's attempt to bring up a third core crashed on the assertion.

Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
2021-05-26 04:42:49 -05:00
Watson Zeng
5516b02d53 arch: archs: using ATOMIC_OPERATIONS_BUILTIN
The ATOMIC_OPERATIONS_BUILTIN issue (internal Jira number:
P10019563-43273) has been fixed in the new MWDT 2021.03 release,
so we can use the builtin atomics.
This commit reverts PR #28528.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2021-05-25 12:55:48 -05:00
Johan Hedberg
8341a136d6 x86: multiboot: Fix NULL pointer dereferences
From the point where the info pointer value is checked, all code in
the z_multiboot_init() function depends on it being non-NULL.
Therefore, simply return from the function if it is NULL.

Fixes #33084

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2021-05-25 13:37:19 -04:00
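
The guard in sketch form; the struct layout and the rest of the
function are elided:

    struct multiboot_info;

    void z_multiboot_init(struct multiboot_info *info)
    {
        if (info == NULL) {
            return; /* everything below dereferences info */
        }

        /* ... parse the multiboot info structure ... */
    }
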
Nicolas Pitre
b8d24ffb45 arm64: mitigate FPU-in-exception usage side effects
Every va_start() currently triggers a FPU access trap if FPU is not
already used. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.

This is made even worse with exception code (syscalls, IRQ handlers,
etc.) where the exception code has to be resumed with interrupts
disabled upon FPU access as there is no provision for preserving an
interrupted exception mode's FPU context.

Fix those issues by simply simulating the sequence of STR instructions
that va_start() generates, without actually granting FPU access. We
limit ourselves to exception context to keep changes to a minimum for
now.

This also allows for reverting the ARM64 exception in the nested IRQ
test as it now works properly even if FPU_SHARING is enabled.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-21 04:52:44 -05:00
Aurelien Jarno
be49df628f arch: arm: cortex_m: z_arm_mpu_init: fix D-Cache invalidation
In case CONFIG_NOCACHE_MEMORY=y, the D-Cache needs to be cleaned and
invalidated before enabling the MPU, to make sure no data from a
__nocache__ region is present in the D-Cache.

If the D-Cache is disabled, SCB_CleanInvalidateDCache() shall not be
used, as the cache might contain random data for random addresses,
and cleaning it might just create a bus fault.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
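
The resulting logic inside z_arm_mpu_init(), sketched with the CMSIS
core calls named above (the CMSIS device header is assumed to be in
scope):

    void z_arm_mpu_init(void)
    {
    #if defined(CONFIG_NOCACHE_MEMORY)
        /* Flush __nocache__ lines only if the D-Cache is actually
         * enabled; cleaning a disabled cache could push random data
         * to random addresses and cause a bus fault. */
        if (SCB->CCR & SCB_CCR_DC_Msk) {
            SCB_CleanInvalidateDCache();
        }
    #endif

        /* ... program and enable the MPU regions ... */
    }
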
Aurelien Jarno
1a583e44ba arch: arm: cortex_m: fix D-Cache reset with CONFIG_INIT_ARCH_HW_AT_BOOT
On reset we know neither the status of the D-Cache nor its content.

If it is disabled, do not try to clean it, as it might contain random
data for random addresses, and this might just create a bus fault.
Invalidating it is enough.

If it is enabled, its content is not random.
SCB_DisableDCache() will clean it, invalidate it and disable it.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2021-05-18 11:39:26 -05:00
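
Both cases side by side, sketched with the CMSIS calls
(SCB_DisableDCache() cleans, invalidates and disables;
SCB_InvalidateDCache() only invalidates; the wrapper function name is
made up):

    void dcache_reset_state(void)
    {
        if (SCB->CCR & SCB_CCR_DC_Msk) {
            /* enabled: content is valid; clean, invalidate, disable */
            SCB_DisableDCache();
        } else {
            /* disabled: content may be random; never clean it */
            SCB_InvalidateDCache();
        }
    }
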
Andy Ross
41e885947e arch/x86: Correct multiboot interpretation when building for EFI
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet).  Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.

The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way.  And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.

Fixes #33545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-15 15:30:02 -04:00
Daniel Leung
2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so they will reside
in the boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
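
Illustrative use of the section tags this series relies on; treat the
exact attribute names as assumptions to be checked against
include/linker/section_tags.h:

    /* Boot-only code: only needed until the kernel is up. */
    __boot_func static void early_setup(void)
    {
        /* ... one-time hardware init ... */
    }

    /* Pinned code/data: must never be paged out (e.g. fault paths). */
    __pinned_func void page_fault_fastpath(void);
    __pinned_data static int page_fault_count;
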
Daniel Leung
512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel code and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
45d1fc9cc2 x86: gen_mmu: add support for boot and pinned regions
Both boot and pinned regions need to be mapped and permissions
set correctly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
af49ec0277 linker: remove TEXT_START macro
There is exactly one function defined with the TEXT_START macro,
so that the x86-32 __start can appear at the beginning of the
text section. Since no one else is using it, remove TEXT_START
to simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Nicolas Pitre
5f6e257b0b arm64: provide an optimized arch_page_phys_get()
The AT instruction gives the corresponding physical address directly.
This is much faster than the default implementation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-08 17:06:58 -04:00
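
A sketch of the fast path, assuming EL1 and a 48-bit physical address;
see the PAR_EL1 description in the Arm ARM for the exact field layout,
and expect the real implementation to differ in detail:

    #include <stdint.h>
    #include <errno.h>

    int arch_page_phys_get(void *virt, uintptr_t *phys)
    {
        uint64_t par;

        /* Ask the MMU to translate the address for us... */
        __asm__ volatile ("at S1E1R, %0\n\tisb" : : "r" (virt) : "memory");
        /* ...and read back the result. */
        __asm__ volatile ("mrs %0, PAR_EL1" : "=r" (par));

        if (par & 1ULL) {  /* F bit set: translation failed */
            return -EFAULT;
        }

        *phys = (par & 0x0000fffffffff000ULL) |
                ((uintptr_t)virt & 0xfffULL);
        return 0;
    }
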
Carlo Caione
f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione
e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is
not always the case: many SoCs rely on external cache controllers, i.e.
peripherals external to the core (for example the PL310 cache controller
and the L2Cxxx family). In some cases you also want a single driver to
control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Evgeniy Paltsev
93bf5f58e7 ARC: add TLS support for ARCv3
For ARCv3 the TLS pointer register is fixed to r30, so we don't
need to configure it at compile time.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9a3d925860 ARC: boost default stacks in case of 64BIT
Increase the stacks required for the ARCv3 64-bit CI to pass.
The CMSIS stacks are for programs in samples/portability.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8048b14135 ARC: allow building code for processors without ZOL
ARCv3 64 bit processors don't have the Zero Delay Loop
(also called Zero Overhead Loop, ZOL) mechanism. Add a kconfig
option to remove the ZOL register save/restore so the code
can be built for both ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
3f12ca57b8 ARC: make vector table bit agnostic
ARCv2 32 bit and ARCv3 64 bit share the same vector table
structure but with different vector entry sizes (32 and 64 bit),
so we can easily make the vector table bit agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
0d859796be ARC: make variables with regs and addresses bit agnostic
Make the variables where we store CPU register values and
memory addresses bit agnostic.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
ab17a59ba5 ARC: mark accesses which are 32 bit regardless of platform bitness
Mark the places where we intentionally use st instead of STR in
code common to ARCv2 and ARCv3.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
9d309d300a ARC: workaround bloated structure access in ASM with _st_huge_offset
When accessing a member of a bloated structure, we can exceed the
u9 operand of the store instruction, so we use the _st32_huge_offset
macro instead for 32 bit accesses.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
c2b61dfe72 ARC: rewrite ASM code with asm-compat macros
Rewrite the ARC assembler code with asm-compat macros, so the same
code can be used for both ARCv2 (GNU and MWDT assemblers) and
ARCv3 (GNU assembler).

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
8cb122ea5d ARC: reuse headers for both ARCv2 and ARCv3 if possible
Reuse ARCv2 headers [where it is possible] for ARCv3.
In this commit we simply allow them to be used for ARCv3; we'll
move them to the proper folder and rename them [where required]
in the upcoming cleanup patch.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Evgeniy Paltsev
6afe7c5fd2 ARC: prepare for building for ARCv3 HS6x
Do basic preparations for building code for ARCv3 HS6x
* add ISA_ARCV3 and CPU_HS6X config options
* add off_t type support for __ARC64__
* use elf64-littlearc format for linking
* use arc64 mcpu for CPU_HS6X

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-05-07 14:55:49 -05:00
Daniel Leung
18aad13d76 x86: mmu: implement arch_page_phys_get()
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
786cf641dc x86: mmu: implement arch_mem_unmap()
This implements arch_mem_unmap() as counterpart to
arch_mem_map().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
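
The arch-level pair as declared in the arch interface of this period
(paraphrased; see the arch interface header for the exact prototypes):

    #include <stdint.h>
    #include <stddef.h>

    /* Map size bytes of physical memory at phys to virtual address
     * virt, with the given access/caching flags. */
    void arch_mem_map(void *virt, uintptr_t phys, size_t size,
                      uint32_t flags);

    /* Tear such a mapping down again. */
    void arch_mem_unmap(void *addr, size_t size);
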
Daniel Leung
c481fd412e x86: mmu: don't decrement z_free_page_count in reserving code
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00