Commit graph

4409 commits

Daniel Leung
ee3d345c09 x86: mmu: reserve more space for page table if linking in virt
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for the page table
to accommodate this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
9ce77abf23 x86: ia32: jump to virtual address before calling z_x86_prep_c
We have been assuming that physical memory is identity-mapped
to the virtual address space. However, with the ability to set
CONFIG_KERNEL_VM_BASE separately from CONFIG_SRAM_BASE_ADDRESS,
that assumption is no longer valid. This changes the boot code
on 32-bit x86 so that, once the page table is loaded, we can
proceed with executing in the virtual address space: do a long
jump to the virtual address just before calling z_x86_prep_c.
From this point on, code execution is in the virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
991300e1ba x86: gen_mmu: also map SRAM if linking in virtual address space
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and can access structures necessary for boot (e.g. GDT and
IDT). So identity-map the kernel in SRAM.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
d40e8ede8e x86: gen_gdt: add address translation if needed
When the kernel is mapped into a virtual address space
that differs from the physical address space,
the dynamic GDT generation uses the virtual addresses.
However, the GDT is required at boot before the
page table is loaded, when the virtual addresses are
not yet valid. So make sure GDT generation uses
physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
7a51aab397 x86: gen_mmu: add address translation if needed
There is an assumption made in the page table generation code
that the kernel occupies the same physical and virtual
addresses. However, we may want to map the kernel into
a virtual address space which differs from the kernel's physical
address space. For example, with demand paging enabled on
kernel code and data, we can accommodate a kernel that is
larger than the physical memory, and may want to utilize
a bigger virtual address space. So add address translation
in the gen_mmu.py script for this.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
a1afe9be5e x86: ia32: do virtual address translation at boot
This adds virtual address translation to a few variables
used in crt0.S. This is needed because they are linked at
virtual addresses, but before the page table is loaded they
are not available at those addresses and must be referenced
via physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung
bbe4b39f8d x86: mmu: cast to uintptr_t for page table using Z_X86_PHYS_ADDR
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler complains about an array
subscript out of bounds if linking in virtual address space.
So cast it to uintptr_t first before translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
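
What the fix boils down to, as a hedged sketch (the KERNEL_VM_BASE/SRAM_BASE_ADDRESS values and the simplified Z_X86_PHYS_ADDR() definition below are stand-ins, not the real Zephyr definitions):

```c
#include <stdint.h>

/* Assumed linear virt-to-phys translation, for illustration only; the real
 * Z_X86_PHYS_ADDR() comes from Zephyr's x86 MMU headers and uses the
 * Kconfig-derived bases. */
#define KERNEL_VM_BASE    0x80000000UL
#define SRAM_BASE_ADDRESS 0x00100000UL
#define Z_X86_PHYS_ADDR(virt) \
	((uintptr_t)(virt) - KERNEL_VM_BASE + SRAM_BASE_ADDRESS)

extern char z_shared_kernel_page_start[];

uintptr_t shared_page_phys(void)
{
	/* Cast to uintptr_t first; handing the symbol's address straight to
	 * the macro makes the compiler flag an out-of-bounds subscript. */
	return Z_X86_PHYS_ADDR((uintptr_t)&z_shared_kernel_page_start);
}
```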
Nicolas Pitre
0c45b548e2 aarch64: rationalize exception entry/exit code
Each vector slot has room for 32 instructions. The exception context
saving needs 15 instructions already. Rather than duplicating those
instructions in each out-of-line exception routine, let's store
them directly in the vector table. That vector space is otherwise
wasted anyway. Move the z_arm64_enter_exc macro into vector_table.S
as this is the only place where it should be used.

To further reduce code size, let's make z_arm64_exit_exc into a
function of its own to avoid code duplication again. It is put in
vector_table.S as that is the most logical location, next to its
z_arm64_enter_exc counterpart.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-03-03 16:26:40 +03:00
Shubham Kulkarni
26efef4f94 arch: xtensa: Fix backtrace from ISR
a0 is used as a scratch register. Restore the value of a0 (the return
address) from the stack frame before spilling registers onto the stack.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-03-03 13:02:57 +01:00
Ioannis Glaropoulos
f1a27a8189 arm: cortex_m: assert if DebugMonitor exc is enabled in debug mode
Assert if the null pointer dereferencing detection (via DWT) is
enabled while the processor is in debug mode, because the debug
monitor exception cannot be triggered in debug mode (i.e. the
behavior is unpredictable). Add a note in the Kconfig definition
of the null-pointer detection implementation via DWT, stressing
that the solution requires the core to be in normal mode.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
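
A sketch of the check this implies, assuming the architectural Cortex-M DHCSR register (0xE000EDF0, C_DEBUGEN in bit 0); the actual assert in the tree may be expressed differently:

```c
#include <stdbool.h>
#include <stdint.h>

/* Debug Halting Control and Status Register (architectural address). */
#define DHCSR            (*(volatile uint32_t *)0xE000EDF0UL)
#define DHCSR_C_DEBUGEN  (1UL << 0)

/* The DebugMonitor exception cannot fire while halting debug is enabled
 * (core in debug mode), so the DWT-based detection must not rely on it. */
static bool debug_monitor_usable(void)
{
	return (DHCSR & DHCSR_C_DEBUGEN) == 0;
}
```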
Ioannis Glaropoulos
77c76a3b79 arm: cortex_m: build time assert for null-pointer exception page size
We introduce build time asserts for
CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE
to catch that the user-supplied value is, as required
by the Kconfig symbol specification, a power of 2.
For the MPU-based implementation of null-pointer detection
we can use an existing macro for the build time assert,
since the region for catching null-pointer exceptions
is a regular MPU region, with different restrictions
depending on the MPU architecture. For the DWT-based
implementation, we introduce a custom build-time assert.

We also add a run-time ASSERT for the MPU-based
implementation on ARMv8-M platforms, which requires
that the null pointer exception detection page is
already mapped by the MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
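
The power-of-two property can be checked at build time with the usual bit trick; a minimal sketch (the literal page-size value stands in for the Kconfig symbol):

```c
/* Stand-in for CONFIG_CORTEX_M_DEBUG_NULL_POINTER_EXCEPTION_PAGE_SIZE. */
#define NULL_PTR_PAGE_SIZE 0x400

/* Non-zero and only one bit set <=> power of two. */
#define IS_POWER_OF_TWO(x) (((x) != 0) && (((x) & ((x) - 1)) == 0))

_Static_assert(IS_POWER_OF_TWO(NULL_PTR_PAGE_SIZE),
	       "null-pointer detection page size must be a power of 2");
```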
Ioannis Glaropoulos
1db78aae73 arm: cortex_m: ensure DebugMonitor is targeting Secure domain
By design, the DebugMonitor exception is only employed
for null-pointer dereferencing detection, and enabling
that feature is not supported in Non-Secure builds. So
when enabling the DebugMonitor exception, assert that
it is not targeting the Non-Secure domain.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
1b22f6b8c8 arm: cortex_m: enable null-pointer exception detection in the tests
Enable the null-pointer dereferencing detection by default
throughout the test suite. Explicitly disable it for the
gen_isr_table test, which needs to perform vector table reads.
Disable null-pointer exception detection on the qemu_cortex_m3
board, as the DWT is not emulated by QEMU on this platform.
Additionally, disable null-pointer exception detection on
mps2_an521 (QEMU target), as the DWT is not present and the
MPU-based solution won't work, since the target does not have
the area 0x0 - 0x400 mapped but QEMU still permits read access.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
d86d2c6f65 arm: cortex_m: implement null pointer exception detection with MPU
Implementation of the null pointer exception detection feature
using the MPU on Cortex-M. Null-pointer detection is implemented
by programming the MPU to guard a limited area starting at
address 0x0. On non-ARMv8-M we program an MPU region with
a no-access policy. On ARMv8-M we program a region with any
permissions, assuming the region will overlap with the fixed
FLASH0 region. We add a compile-time message to warn the
user if the MPU-based null-pointer exception solution cannot
be used (ARMv8-M only).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
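
Conceptually the MPU-based variant amounts to one guard region over the start of the address space; in this sketch the region descriptor and the no-access attribute are placeholders, not the per-architecture Zephyr MPU API:

```c
#include <stdint.h>

#define NULL_PTR_GUARD_SIZE 0x400   /* illustrative page size */
#define MPU_ATTR_NO_ACCESS  0u      /* placeholder attribute encoding */

/* Placeholder region descriptor; real Cortex-M MPU region structures and
 * attribute encodings differ between ARMv7-M and ARMv8-M. */
struct mpu_region {
	uint32_t base;
	uint32_t size;
	uint32_t attr;
};

/* Any access to [0x0, NULL_PTR_GUARD_SIZE) then faults, which is how a
 * (near-)NULL dereference gets caught. */
static const struct mpu_region null_ptr_guard = {
	.base = 0x0,
	.size = NULL_PTR_GUARD_SIZE,
	.attr = MPU_ATTR_NO_ACCESS,
};
```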
Ioannis Glaropoulos
66ef96fded arm: cortex_m: add vector table padding for null pointer detection
Padding is inserted after the (first-stage) vector table,
so that the Zephyr image does not attempt to use the
area we reserve to detect null pointer dereferencing
(0x0 - <size>). If the end of the vector table section is
higher than the upper end of the reserved area, no padding
will be added. Note also that the padding will be added
only once, to the first-stage vector table, even if the current
snippet is included multiple times (this covers a corner case
where we want to use this feature together with SW Vector Relaying
on MCUs without VTOR but with an MPU present).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
0bac92db96 arm: cortex-m: null pointer detection additions for ARMv8-M
Additions to the null-pointer exception detection mechanism
for ARMv8-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
3054c1351a arm: cortex_m: null-pointer exception detection via DWT
Implement the functionality to detect null pointer dereference
exceptions via the DWT unit in the ARMv7-M Mainline MCUs.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
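
A sketch of the DWT programming involved, using the architectural ARMv7-M register addresses; the FUNCTION encoding 0x7 ("watchpoint on read or write") and the surrounding logic are simplified relative to the real debug.c:

```c
#include <stdint.h>

#define DEMCR          (*(volatile uint32_t *)0xE000EDFCUL)
#define DEMCR_TRCENA   (1UL << 24)   /* enable DWT/ITM */
#define DEMCR_MON_EN   (1UL << 16)   /* enable the DebugMonitor exception */

#define DWT_COMP0      (*(volatile uint32_t *)0xE0001020UL)
#define DWT_MASK0      (*(volatile uint32_t *)0xE0001024UL)
#define DWT_FUNCTION0  (*(volatile uint32_t *)0xE0001028UL)

/* Watch reads and writes in [0x0, 1 << mask_bits); a match raises a
 * DebugMonitor exception that the fault handler turns into an error. */
static void null_ptr_dwt_setup(uint32_t mask_bits)
{
	DEMCR |= DEMCR_TRCENA | DEMCR_MON_EN;

	DWT_COMP0     = 0x0;        /* compare against address 0 */
	DWT_MASK0     = mask_bits;  /* ignore the low mask_bits address bits */
	DWT_FUNCTION0 = 0x7;        /* watchpoint on read or write access */
}
```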
Ioannis Glaropoulos
f97ccd940c arm: cortex-m: build debug.c for null-pointer detection feature
When we enable the null pointer exception feature (using the DWT)
we include debug.c in the build. debug.c contains the functions
to configure and enable null pointer detection using the Data
Watchpoint and Trace (DWT) unit.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
c42a8d9d24 arm: cortex_m: fault: hook up debug monitor exception handler
Extend the debug monitor exception handler to
- return recoverable faults when the debug monitor
  is enabled but we do not get an expected DWT event,
- call a debug monitor routine to check for null pointer
  exceptions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
712a7951db arm: cortex_m: move static inline DWT functions in internal header
Move the DWT utility functions, present in timing.c,
into an internal Cortex-M header.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Ioannis Glaropoulos
b3cd5065eb arm: cortex_m: Kconfig symbols for null pointer detection feature
Introduce the required Kconfig symbol framework for the
Cortex-M-specific null pointer dereferencing detection
feature. There are two implementations (based on DWT and
MPU) so we introduce the corresponding choice symbols,
including a choice symbol to signify that the feature
is to be disabled.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-03-03 10:38:29 +01:00
Carlo Caione
eb72b2d72a aarch64: smccc: Retrieve up to 8 64-bit values
The most common secure monitor firmware in the ARM world is TF-A. The
current release allows up to 8 64-bit values to be returned from an
SMC64 call from AArch64 state.

Extend the number of possible return values from 4 to 8.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
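
In practice the change widens the result structure the helpers fill in; a sketch assuming Zephyr's arm_smccc_res naming:

```c
/* Result registers x0..x7 of an SMC64/HVC64 call; TF-A may return up to
 * eight values, so the structure grows from four fields to eight. */
struct arm_smccc_res {
	unsigned long a0;
	unsigned long a1;
	unsigned long a2;
	unsigned long a3;
	unsigned long a4;
	unsigned long a5;
	unsigned long a6;
	unsigned long a7;
};
```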
Carlo Caione
bc7cb75a82 aarch64: smccc: Use offset macros
Instead of relying on hardcoded offsets in the assembly code, introduce
offset macros to make the code clearer.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione
998856bacb aarch64: smccc: Update specs link
The link points to an outdated version. Update it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
Carlo Caione
90859c6bf3 aarch64: smccc: Decouple PSCI from SMCCC
The current code assumes that the SMC/HVC helpers can only be used
by the PSCI driver. This is wrong because a mechanism to call into the
secure monitor should be available regardless of whether PSCI is used.

For example, several SoCs rely on SMC calls to read/write e-fuses,
retrieve the chip ID, control power domains, etc.

This patch introduces a new CONFIG_HAS_ARM_SMCCC symbol to enable the
SMC/HVC helper support and exports that to drivers that require it.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-01 09:26:56 -05:00
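
An illustrative (non-PSCI) use of the decoupled helpers; the SiP function ID and the exact helper prototype shown are assumptions for the sketch:

```c
struct arm_smccc_res {
	unsigned long a0, a1, a2, a3, a4, a5, a6, a7;
};

/* Assumed prototype of the helper exported under CONFIG_HAS_ARM_SMCCC. */
void arm_smccc_smc(unsigned long a0, unsigned long a1, unsigned long a2,
		   unsigned long a3, unsigned long a4, unsigned long a5,
		   unsigned long a6, unsigned long a7,
		   struct arm_smccc_res *res);

#define SIP_SVC_READ_CHIP_ID 0x8200ff00UL   /* hypothetical function ID */

static unsigned long soc_read_chip_id(void)
{
	struct arm_smccc_res res;

	/* Any driver, not just PSCI, can now trap into the secure monitor. */
	arm_smccc_smc(SIP_SVC_READ_CHIP_ID, 0, 0, 0, 0, 0, 0, 0, &res);

	return res.a0;
}
```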
Nicolas Pitre
443e3f519e arm64: mmu: initialize early
This is fundamental enough that it had better be initialized as soon
as possible. Many other things that get initialized soon afterwards
assume the MMU is already operational.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
9461600c86 aarch64: mmu: rationalize debugging output
Make it into a generic call that can be used in various places.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
b40a2fdb8b aarch64: mmu: fix common MMU mapping
The location of __kernel_ram_start is too far and the _app_smem .bss
areas are not covered. Use _image_ram_start instead.

The location of __kernel_ram_end is also way too far. We should stop
at _image_ram_end, where the expected unmapped area starts.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
fb3de16f0c aarch64: mmu: use a range (start..end) for common MMU mapping
It is easier to cover multiple segments this way, especially since
not all boundary symbols from the linker script come with a size
derivative.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
cb49e4b789 aarch64: mmu: invert the MT_OVERWRITE flag
The MT_OVERWRITE case is much more common. Redefine that flag as
MT_NO_OVERWRITE instead for those fewer cases where it is needed.

One such case is platform-provided mappings. Apply them after the
common kernel mappings and use MT_NO_OVERWRITE on them.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
56c77118d3 aarch64: mmu: factor out the phys argument out of set_mapping()
Minor cleanup.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
f53bd24a4d aarch64: mmu: move get_region_desc() closer to usage points
Simple code tidiness.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
b696090bb7 aarch64: mmu: make page table pool global
There is no real reason for keeping page tables in separate pools.
Make the pool global, which allows for more efficient memory usage
and simplifies the code.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
Nicolas Pitre
459bfed9ea aarch64: mmu: dynamic mapping support
Introduce a remove_map() to ... remove a mapping.

Add a use count to the page table pool so pages can be dynamically
allocated, deallocated and reused.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-28 16:49:12 -05:00
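
A simplified sketch of the use-count idea: pages in a single global pool are handed out while unused and become reusable once their count drops back to zero. Pool size and table geometry here are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define XLAT_TABLE_ENTRIES 512   /* 4 KiB tables with 8-byte descriptors */
#define NUM_XLAT_TABLES    16    /* illustrative pool size */

static uint64_t xlat_tables[NUM_XLAT_TABLES][XLAT_TABLE_ENTRIES];
static uint16_t table_usage[NUM_XLAT_TABLES];

static uint64_t *table_alloc(void)
{
	for (size_t i = 0; i < NUM_XLAT_TABLES; i++) {
		if (table_usage[i] == 0) {
			table_usage[i] = 1;   /* now referenced by a mapping */
			return xlat_tables[i];
		}
	}
	return NULL;                          /* pool exhausted */
}

static void table_unref(uint64_t *table)
{
	size_t i = (size_t)(table - &xlat_tables[0][0]) / XLAT_TABLE_ENTRIES;

	/* remove_map() drops references; at zero the page can be reused. */
	if (table_usage[i] > 0) {
		table_usage[i]--;
	}
}
```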
Nicolas Pitre
861f6ce2c8 aarch64: a few trivial assembly optimizations
Removed some instructions where possible.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-25 10:35:37 -05:00
Andy Ross
6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
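
A minimal usage sketch of the public API affected (stack size, priority and the worker body are arbitrary):

```c
#include <zephyr.h>

#define WORKER_STACK_SIZE 1024
#define WORKER_PRIORITY   5

K_THREAD_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);
static struct k_thread worker;

static void worker_fn(void *p1, void *p2, void *p3)
{
	while (1) {
		k_sleep(K_MSEC(100));
	}
}

void run_and_stop_worker(void)
{
	k_thread_create(&worker, worker_stack,
			K_THREAD_STACK_SIZEOF(worker_stack),
			worker_fn, NULL, NULL, NULL,
			WORKER_PRIORITY, 0, K_NO_WAIT);

	/* Aborting from another thread and joining it afterwards both go
	 * through the same single-lock path in the new implementation. */
	k_thread_abort(&worker);
	(void)k_thread_join(&worker, K_FOREVER);
}
```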
Shih-Wei Teng
8912f549ce arch: riscv: Update the description of CONFIG_PMP_STACK_GUARD_MIN_SIZE
Update the units to bytes instead of words in its description, to
avoid confusion.

Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
2021-02-24 10:37:03 -05:00
Yuguo Zou
a8b6936c7d arch: arc: fix mpu version number
The ARC MPU version used a wrong number, 3, which could cause conflicts
in the future. This commit fixes the version number to 4.

Signed-off-by: Yuguo Zou <yuguo.zou@synopsys.com>
2021-02-24 08:57:35 -05:00
Ioannis Glaropoulos
8289b8c877 arch: arm: cortex_m: fix ASSERT expression in MemManage handler
We need to form the ASSERT expression inside the MemManage
fault handler for the case where we are building without USERSPACE
and STACK GUARD support in the same way it is formed for
the case with USERSPACE or MPU STACK GUARD support, that
is, we only assert if we came across a stacking error.

Data access violations can still occur even without user
mode or guards, e.g. when trying to write to read-only
memory (such as the code region).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-23 11:29:49 +01:00
Andrei Emeltchenko
377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro to the
correct kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
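
The macro in question is the usual for-loop spinlock wrapper; a sketch of its common form (not necessarily the verbatim kernel definition) plus a usage example:

```c
#include <spinlock.h>

/* Run the following statement/block with lck held; the for-loop shape gives
 * the macro block scope and releases the lock on exit. */
#define LOCKED(lck)                                                     \
	for (k_spinlock_key_t __i = { 0 }, __key = k_spin_lock(lck);    \
	     __i.key == 0;                                              \
	     k_spin_unlock(lck, __key), __i.key = 1)

static struct k_spinlock sched_lock;
static int ready_count;

void bump_ready_count(void)
{
	LOCKED(&sched_lock) {
		ready_count++;
	}
}
```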
Daniel Leung
2816c17a09 x86: allow linking in virtual address space
This adds the pieces to allow the kernel to be linked
in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
d340afd456 x86: use CONFIG_SRAM_OFFSET instead of CONFIG_X86_KERNEL_OFFSET
This changes x86 to use CONFIG_SRAM_OFFSET instead of the
arch-specific CONFIG_X86_KERNEL_OFFSET. This allows the common
MMU macros Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
function properly if we ever need to map the kernel into
a virtual address space that does not have the same starting
physical address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
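
The boot-time translation these macros perform is a fixed linear offset between the link (virtual) base and the load (physical) base; a sketch of the arithmetic with stand-in values for the Kconfig symbols:

```c
#include <stdint.h>

/* Stand-ins for the Kconfig symbols named above. */
#define SRAM_BASE_ADDRESS 0x00100000UL   /* physical base of SRAM */
#define SRAM_OFFSET       0x0UL          /* kernel offset within SRAM */
#define KERNEL_VM_BASE    0x80000000UL   /* virtual base of the kernel map */
#define KERNEL_VM_OFFSET  0x0UL

#define BOOT_PHYS_BASE (SRAM_BASE_ADDRESS + SRAM_OFFSET)
#define BOOT_VIRT_BASE (KERNEL_VM_BASE + KERNEL_VM_OFFSET)

/* Same constant offset applied in both directions. */
#define BOOT_VIRT_TO_PHYS(virt) ((uintptr_t)(virt) - BOOT_VIRT_BASE + BOOT_PHYS_BASE)
#define BOOT_PHYS_TO_VIRT(phys) ((uintptr_t)(phys) - BOOT_PHYS_BASE + BOOT_VIRT_BASE)
```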
Daniel Leung
ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig, CONFIG_SRAM_OFFSET, to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions instead. This simplifies the code
a bit and ensures we won't miss any future additions to these two
functions under x86 (as x86_64 was actually not clearing the gcov
bss area).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
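
The two helpers reduce to a memset of .bss and a memcpy of .data from its load address; a hedged sketch using conventional linker symbols (the exact symbol names in Zephyr's linker scripts differ):

```c
#include <string.h>

/* Conventional linker-script symbols; exact names vary per linker script. */
extern char __bss_start[], __bss_end[];
extern char __data_start[], __data_end[], __data_load_start[];

void bss_zero(void)
{
	(void)memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));
}

void data_copy(void)
{
	(void)memcpy(__data_start, __data_load_start,
		     (size_t)(__data_end - __data_start));
}
```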
Daniel Leung
dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves calling z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c() as
these are needed before calling z_loapic_enable().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Daniel Leung
78837c769a soc: x86: add Lakemont SoC
This adds a very basic SoC configuration for the Intel Lakemont SoC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung
9a189da03b x86: add kconfig CONFIG_X86_MEMMAP
This adds a new kconfig to enable the use of a memory map.
The map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y or can be manually defined
via x86_memmap[].

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Daniel Leung
c027494dba x86: add kconfig CONFIG_X86_PC_COMPATIBLE
This is a hidden option to indicate we are building for
PC-compatible devices (which have BIOS, ACPI, etc., as is
standard on such devices).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-19 18:51:04 -05:00
Carlo Caione
3f055058dc aarch64: Remove QEMU 'wfi' issue workaround
The problem is not reproducible when CONFIG_QEMU_ICOUNT=n. We can now
revert commit aebb9d8a45.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-19 16:26:38 +03:00
Nicolas Pitre
7a91cf0176 Revert "lib/os/heap: introduce option to force big heap mode"
This reverts commit b6b6d39bb6.

With both commit 4690b8d5ec ("libc/minimal: fix malloc() allocated
memory alignment") and commit c822e0abbd ("libc/minimal: fix
realloc() allocated memory alignment") in place, there is no longer
a need for enforcing the big heap mode on every allocation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-19 07:32:22 -05:00