A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really just to prevent QEMU from crashing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.
Fix the calculations on X86 where this is the case.
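As a minimal sketch of the offset arithmetic involved (the macro names
and addresses below are hypothetical, not the actual Zephyr symbols):

    #include <stdint.h>

    #define VM_BASE    0xC0000000UL   /* base of the virtual address space */
    #define SRAM_BASE  0x00100000UL   /* physical SRAM base */

    /* If the two bases shared the same alignment, the low bits would
     * cancel out; when they do not, the full signed difference between
     * them must be applied.
     */
    #define VM_OFFSET  ((intptr_t)(VM_BASE - SRAM_BASE))

    static inline uintptr_t phys_to_virt(uintptr_t phys)
    {
            return (uintptr_t)((intptr_t)phys + VM_OFFSET);
    }

    static inline uintptr_t virt_to_phys(uintptr_t virt)
    {
            return (uintptr_t)((intptr_t)virt - VM_OFFSET);
    }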
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This change uses the stack frame to print a backtrace once an
exception occurs. Printing the backtrace helps to identify the cause
of the exception.
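As an illustration of the idea only (a generic frame-pointer walk, not
the Xtensa-specific implementation; the frame layout is an assumption
and varies by ABI):

    #include <stdint.h>
    #include <stdio.h>

    struct frame {
            struct frame *prev;   /* caller's saved frame pointer */
            void *ret_addr;       /* saved return address */
    };

    static void print_backtrace(struct frame *fp, int max_depth)
    {
            for (int depth = 0; fp != NULL && depth < max_depth; depth++) {
                    printf("  #%d: %p\n", depth, fp->ret_addr);
                    fp = fp->prev;
            }
    }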
Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
When zefi.py was changed to pass the compiler and objcopy, the flag
specifying the EFI target for objcopy was dropped. This is because
the current SDK (0.12.1) doesn't support that target type for
objcopy. However, the target is necessary for the images to be
created correctly and boot.
Switch back to using the host objcopy as a stop-gap fix until the SDK
can support the EFI target.
Fixes: #31517
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This changes the timing functions to use the TSC to gather timing
information instead of the timer used for scheduling, as the TSC
provides higher resolution.
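For reference, a minimal sketch of reading the TSC on x86 (Zephyr
wraps this behind its timing_* API; the helper below is illustrative):

    #include <stdint.h>

    static inline uint64_t read_tsc(void)
    {
            uint32_t lo, hi;

            /* RDTSC returns the 64-bit time-stamp counter in EDX:EAX */
            __asm__ volatile ("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
    }

    /* Measure a code section in TSC ticks. */
    static uint64_t measure_ticks(void (*fn)(void))
    {
            uint64_t start = read_tsc();

            fn();
            return read_tsc() - start;
    }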
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This removes the z_ prefix from those (functions, enums, etc.) that
are used outside the coredump subsys. This aligns better with the
naming convention.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
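A rough sketch of that control flow (the function names below are
hypothetical stand-ins, not the exact arch or kernel symbols):

    #include <stdbool.h>

    /* Core kernel demand paging handler (hypothetical name). */
    extern bool kernel_handle_page_fault(void *fault_addr);
    /* Arch fatal error path (hypothetical name). */
    extern void report_fatal_page_fault(void *fault_addr);

    void arch_page_fault_entry(void *fault_addr)
    {
            /* Let the core kernel try to page the data in; only when
             * it cannot resolve the address is the fault fatal.
             */
            if (!kernel_handle_page_fault(fault_addr)) {
                    report_fatal_page_fault(fault_addr);
            }
    }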
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.
Instantiation of new memory domains may still require allocations
unless a common page table is used.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.
We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.
The default address space size is now 8MB, but this can be
tuned by the application.
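A sketch of what the resulting build-time pre-allocation looks like
(the sizing arithmetic and names below are illustrative, not the
actual x86 implementation):

    #include <stdint.h>

    #define PAGE_SIZE               4096U
    #define VM_SIZE                 (8U * 1024U * 1024U)  /* 8MB, tunable */
    #define ENTRIES_PER_TABLE       512U                  /* x86 long mode */
    #define LEAF_TABLES             (VM_SIZE / (PAGE_SIZE * ENTRIES_PER_TABLE))
    #define ADDITIONAL_MEM_DOMAINS  1U  /* stand-in for the new Kconfig */

    /* One set of leaf tables for the kernel plus one cloned set per
     * additional memory domain, reserved at build time so mapping
     * memory never has to allocate.
     */
    static uint64_t page_table_pool[(1U + ADDITIONAL_MEM_DOMAINS) * LEAF_TABLES]
                                   [ENTRIES_PER_TABLE]
            __attribute__((aligned(PAGE_SIZE)));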
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds the correct compiler and linker flags to
support software floating point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).
Note that software floating point needs to be turned
on for Newlib. This is because Newlib uses floating
point numbers in its various printf() functions, which
results in floating point instructions being emitted
by the toolchain. These instructions are placed very
early in the functions, so they are executed even when
the format string contains no floating point
conversions. Without CONFIG_FPU enabling hardware
floating point support, any call to a printf()-like
function results in an exception complaining that the
FPU is not available. Forcing CONFIG_FPU=y with Newlib
is an option, but because the OS doesn't know which
threads will call these printf() functions, Zephyr has
to assume all threads use the FPU, incurring a
performance penalty as every context switch now needs
to save the FPU registers. A compromise is to use soft
float instead: Newlib built with soft float contains
no floating point instructions and can still support
its printf()-like functions.
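To illustrate the failure mode, a call with no floating point
conversions can still trap when a hard-float Newlib is combined with
CONFIG_FPU=n (a minimal example, not taken from the tree):

    #include <stdio.h>

    void log_counter(int count)
    {
            /* No %f here, yet a hard-float Newlib vfprintf() may still
             * execute FPU instructions on entry; with CONFIG_FPU=n that
             * raises an "FPU not available" exception. A soft-float
             * Newlib build avoids the problem.
             */
            printf("count=%d\n", count);
    }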
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Currently, zefi.py uses the host GCC and objcopy by default. Fix the
script to use CMAKE_C_COMPILER and CMAKE_OBJCOPY instead.
Fixes: #27047
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().
This is quite restrictive because for some architectures we also want to
control the i-cache, and in general we want finer control over what can be
flushed, invalidated or cleaned. To address these needs this patch
expands the set of operations that can be performed on data and
instruction caches, adding hooks for the operations on the whole cache,
a specific level or a specific address range.
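The expanded hook set has roughly the following shape (names and
signatures are approximations, not necessarily the exact Zephyr API):

    #include <stddef.h>

    /* Whole-cache operations. */
    void arch_dcache_flush_all(void);
    void arch_dcache_invd_all(void);
    void arch_icache_invd_all(void);

    /* Operations on a specific cache level (level numbering is an
     * assumption here).
     */
    void arch_dcache_level_flush(int level);

    /* Operations restricted to an address range. */
    int arch_dcache_flush_range(void *addr, size_t size);
    int arch_dcache_invd_range(void *addr, size_t size);
    int arch_icache_invd_range(void *addr, size_t size);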
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The new APIs are not only dealing with cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The Kconfig options to configure the cache flushing framework
currently live in the arch-specific Kconfigs of the ARC and X86
(32-bit) architectures, even though they define the same things.
Move the common symbols to one place accessible by all the
architectures and create a menu for them.
Leave the default values in the arch-specific locations.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Firmware implementing the PSCI functions described in ARM document
number ARM DEN 0022A ("Power State Coordination Interface System
Software on ARM processors") can be used by Zephyr to initiate various
CPU-centric power operations.
It is needed for virtualization, it is used to coordinate OSes and
hypervisors and it provides the functions used for SMP bring-up such as
CPU_ON and CPU_OFF.
A new PSCI driver is introduced to set up a proper subsystem used to
communicate with the PSCI firmware, implementing the basic operations:
get_version, cpu_on, cpu_off and affinity_info.
The current implementation only supports PSCI 0.2 and PSCI 1.0.
The PSCI conduit (SMC or HVC) is set up by reading the corresponding
property in the DTS node.
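A minimal sketch of the SMC conduit and the CPU_ON call (the function
IDs are from ARM DEN 0022; the helper names are illustrative, and the
real driver also supports the HVC conduit as selected from the DTS):

    #include <stdint.h>

    /* PSCI 0.2 function IDs (SMC64 variant for CPU_ON). */
    #define PSCI_FN_PSCI_VERSION  0x84000000UL
    #define PSCI_FN64_CPU_ON      0xC4000003UL

    /* SMC Calling Convention: function ID and arguments in x0-x3,
     * result returned in x0. Other caller-saved registers that the
     * firmware may clobber are omitted here for brevity.
     */
    static unsigned long psci_smc(unsigned long fid, unsigned long a1,
                                  unsigned long a2, unsigned long a3)
    {
            register unsigned long x0 __asm__("x0") = fid;
            register unsigned long x1 __asm__("x1") = a1;
            register unsigned long x2 __asm__("x2") = a2;
            register unsigned long x3 __asm__("x3") = a3;

            __asm__ volatile ("smc #0"
                              : "+r" (x0)
                              : "r" (x1), "r" (x2), "r" (x3)
                              : "memory");
            return x0;
    }

    /* Power on a secondary CPU: target MPIDR, entry point, context 0. */
    static int psci_cpu_on(unsigned long mpidr, uintptr_t entry_point)
    {
            return (int)psci_smc(PSCI_FN64_CPU_ON, mpidr,
                                 (unsigned long)entry_point, 0UL);
    }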
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Increased stacks as required for the RISC-V 64-bit CI to pass. Most
of these were caught by the kernel stack sentinel.
The CMSIS stacks are for programs in samples/portability.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
The CONFIG_FLOAT_HARD config previously enabled the C (compressed)
ISA extensions (CONFIG_COMPRESSED_ISA). This commit removes that
dependency.
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Until now, any attempt to call printk prior to early serial init has
caused a page fault due to the device not being mapped yet. Add a
static variable to track the pre-init status, and instead of page
faulting just suppress the characters and log a warning right after
init to give an indication that output characters have been lost.
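The suppression logic amounts to something like the sketch below
(variable and function names are illustrative rather than the exact
driver code):

    #include <stdbool.h>
    #include <stdint.h>

    extern void printk(const char *fmt, ...);

    static bool early_serial_ready;
    static uint32_t lost_chars;

    static int console_out(int c)
    {
            if (!early_serial_ready) {
                    /* UART not mapped yet: drop the character instead
                     * of touching unmapped MMIO and page faulting.
                     */
                    lost_chars++;
                    return c;
            }

            /* ... write c to the now-mapped UART ... */
            return c;
    }

    void early_serial_init(void)
    {
            /* ... map and configure the UART ... */
            early_serial_ready = true;

            if (lost_chars != 0U) {
                    printk("WARNING: %u characters lost before serial init\n",
                           lost_chars);
            }
    }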
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Using newlibc with AArch64 is causing an alignment fault in
z_bss_zero() when the code is run on real hardware (on QEMU the problem
is not reproducible).
The main cause is that the memset() function exported by newlibc is
using 'DC ZVA' to zero out memory.
While this is often a nice optimization, this is causing the issue on
AArch64 because memset() is being used before the MMU is enabled, and
when the MMU is disabled all data accesses will be treated as
Device_nGnRnE.
This is a problem because quoting from the ARM reference manual: "If the
memory region being zeroed is any type of Device memory, then DC ZVA
generates an Alignment fault which is prioritized in the same way as
other alignment faults that are determined by the memory type".
newlibc tries to be a bit smart about this, reading the DCZID_EL0
register before deciding whether to use 'DC ZVA' or not. While this is a
good idea for code running in EL0, currently the Zephyr kernel is
running in EL1. This means that the value of the DCZID_EL0 register is
actually retrieved from the HCR_EL2.TDZ bit, that is always 0 because
EL2 is not currently supported / enabled. So the 'DC ZVA' instruction is
unconditionally used in the newlibc memset() implementation.
The "standard" solution for this case is usually to use a different
memset routine to be specifically used for two cases: (1) against IO
memory or (2) against normal memory but with MMU disabled (which means
all memory is considered device memory for data accesses).
To fix this issue in Zephyr we avoid calling memset() when clearing the
bss, and instead we use a simple loop to zero the memory region.
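A sketch of that replacement loop (symbol names are illustrative, and
8-byte alignment of the section bounds is assumed):

    #include <stdint.h>

    /* Linker-provided section boundaries. */
    extern char __bss_start[];
    extern char __bss_end[];

    /* Zero .bss without memset(): plain stores are safe with the MMU
     * off, whereas newlibc's memset() may use 'DC ZVA', which faults
     * on Device memory.
     */
    void early_bss_zero(void)
    {
            uint64_t *p = (uint64_t *)__bss_start;

            while (p < (uint64_t *)__bss_end) {
                    *p++ = 0ULL;
            }
    }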
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In rare cases when a thread may overflow its stack, the
core will not report a Stacking Error. This is the case
when a large stack array is created, making the PSP cross
beyond the stack guard; in this case a MemManage fault
won't cause a stacking error (but only a Data Access
Violation error). We fix the fault handling logic so
such errors are reported as stack overflows and not as
generic CPU exceptions.
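Reduced to a sketch, the classification idea looks like this (the
struct and names are illustrative, not the actual Cortex-M fault
handler code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct thread_stack_info {
            uintptr_t start;  /* lowest valid stack address, above guard */
            size_t size;
    };

    /* Treat a data access violation as a stack overflow when either
     * the faulting address or the PSP itself has crossed below the
     * guard, even though the core reported no Stacking Error.
     */
    static bool is_stack_overflow(const struct thread_stack_info *stack,
                                  uintptr_t fault_addr, uintptr_t psp)
    {
            return (fault_addr < stack->start) || (psp < stack->start);
    }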
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
When the MMARVALID bit is not set, do not read the MMFAR
register to get the fault address in a MemManage fault.
This change prevents the fault handler from erroneously
assuming that MMFAR contains a valid address.
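In ARMv7-M terms the check looks roughly like the following (register
addresses are from the ARMv7-M System Control Block; the reporting
path is elided):

    #include <stdbool.h>
    #include <stdint.h>

    /* ARMv7-M System Control Block registers. */
    #define SCB_CFSR   (*(volatile uint32_t *)0xE000ED28UL)
    #define SCB_MMFAR  (*(volatile uint32_t *)0xE000ED34UL)

    /* MMARVALID is bit 7 of the MemManage status byte (low byte of CFSR). */
    #define CFSR_MMARVALID  (1UL << 7)

    static void mem_manage_fault_report(void)
    {
            uint32_t mmfar = 0U;
            bool mmfar_valid = (SCB_CFSR & CFSR_MMARVALID) != 0U;

            if (mmfar_valid) {
                    /* Only read MMFAR when the core flags it as holding
                     * the faulting address; otherwise its contents are
                     * stale.
                     */
                    mmfar = SCB_MMFAR;
            }

            /* ... report the fault, including mmfar only when valid ... */
            (void)mmfar;
    }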
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Currently Zephyr links reset-vector.S twice in xtensa builds:
into the bootloader and the main image. It is run at the end
of the boot loader execution and immediately after that again
at the beginning of the main code. This patch adds a configuration
option to select whether to link the file into the bootloader or into
the application. The default is to link it into the application, as
needed e.g. for QEMU; SOF links it into the bootloader, as in native
builds.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Before hooking up the MMU driver code to the Zephyr MMU core code it's
better to match the expected variable types of the two parts.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The MMU code currently assumes that Zephyr only uses one single set
of page tables shared by kernel and user threads. This may no longer
be true in the future, when multiple sets of page tables can be
present and swapped at run-time.
With this patch a new arm_mmu_ptables struct is introduced that is used
to host a buffer pointing to the memory region containing the page
tables and the helper variables used to manage the page tables. This new
struct is then used by the ARM64 MMU code instead of assuming that the
kernel page tables are the only ones present.
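The shape of that struct is roughly as follows (field names are
approximations of the real definition):

    #include <stdint.h>

    struct arm_mmu_ptables {
            /* Buffer holding the translation (xlat) tables for this set. */
            uint64_t *xlat_tables;
            /* Top-level table a TTBR register would point at. */
            uint64_t *base_xlat_table;
            /* Next free table slot in the buffer, used while populating. */
            unsigned int next_table;
    };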
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
The ARM64 MMU code used to create the page tables is strictly tied to
the custom arm_mmu_region struct. To be able to hook up this code to the
Zephyr MMU APIs we need to make it more generic.
This patch makes the mapping function more generic and creates a new
helper function add_arm_mmu_region() to map the regions defined by the
old arm_mmu_region structs using this new generic function.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
In the current code the base xlat table is a standalone array. This is
done because we know at compile time the size of this table so we can
allocate the correct size and save a bit of memory. All the other xlat
tables are statically allocated in a different array with full size.
With this patch we move all the page tables into one single array,
including the base table. This is probably going to waste a bit of
space, but it makes it easier to:
- have all the page tables mapped in one single contiguous memory region
instead of having to take care of two different arrays in two
different locations
- duplicate the page tables more quickly if we need to
- use a pre-allocated space to host the page tables
- use a pre-computed set of page tables saved in a contiguous memory
region
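The consolidated layout can be pictured as a single aligned array
whose first slot is the base table (the constants below are
illustrative, not the actual configuration values):

    #include <stdint.h>

    #define XLAT_TABLE_ENTRIES  512U  /* entries per table, 4KB granule */
    #define MAX_XLAT_TABLES     8U    /* total tables, base table included */

    /* One contiguous region holds every table, so the whole set can be
     * copied, pre-allocated or pre-computed as a single block.
     */
    static uint64_t xlat_tables[MAX_XLAT_TABLES][XLAT_TABLE_ENTRIES]
            __attribute__((aligned(XLAT_TABLE_ENTRIES * sizeof(uint64_t))));

    #define BASE_XLAT_TABLE  (xlat_tables[0])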
Signed-off-by: Carlo Caione <ccaione@baylibre.com>