Until #31333 is resolved, the periodic timer in the eviction
algorithm interacts with this test in such a way that the system
deadlocks.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.
The backing store now always reserves a free storage location
for actual page faults.
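For illustration, a minimal sketch of the reservation scheme,
assuming a simple slot-count based allocator inside the backing
store driver (the names below are illustrative, not the actual
backing store API):

    #include <errno.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative sketch only, not the real backing store code. */
    #define NUM_SLOTS 16

    static bool slot_used[NUM_SLOTS];
    static size_t free_slots = NUM_SLOTS;

    /* Hand out a storage slot. Requests that are not servicing a
     * page fault (e.g. eviction on behalf of k_mem_map() or
     * z_mem_page_out()) must leave one slot in reserve so an actual
     * page fault can always be handled.
     */
    static int location_get(bool page_fault, uintptr_t *location)
    {
        if (free_slots == 0U || (!page_fault && free_slots == 1U)) {
            return -ENOMEM;
        }

        for (size_t i = 0; i < NUM_SLOTS; i++) {
            if (!slot_used[i]) {
                slot_used[i] = true;
                free_slots--;
                *location = (uintptr_t)i;
                return 0;
            }
        }

        return -ENOMEM;
    }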
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
More to be added, but for now show that we can map more
anonymous memory than we physically have, and that reading/
writing to it works as expected.
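A rough sketch of the kind of check involved, assuming the
k_mem_map() API described elsewhere in this series plus the ztest
macros (the test name and sizing are illustrative):

    #include <zephyr.h>
    #include <sys/mem_manage.h>
    #include <ztest.h>

    void test_anon_mapping_overcommit(void)
    {
        /* Illustrative sketch: deliberately request more anonymous
         * memory than there are free page frames; demand paging
         * backs the excess with the backing store.
         */
        size_t size = (size_t)CONFIG_SRAM_SIZE * 1024U * 2U;
        uint8_t *buf = k_mem_map(size, K_MEM_PERM_RW);

        zassert_not_null(buf, "mapping failed");

        /* Touch every page: writes fault pages in and must survive
         * later page-outs and page-ins.
         */
        for (size_t i = 0; i < size; i += CONFIG_MMU_PAGE_SIZE) {
            buf[i] = (uint8_t)((i / CONFIG_MMU_PAGE_SIZE) + 1U);
        }
        for (size_t i = 0; i < size; i += CONFIG_MMU_PAGE_SIZE) {
            zassert_equal(buf[i],
                          (uint8_t)((i / CONFIG_MMU_PAGE_SIZE) + 1U),
                          "data lost");
        }
    }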
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Show we can measure free memory properly and map a page of
anonymous memory, which has been zeroed and is writable.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.
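As an illustration of the intended usage (z_mem_pin(),
z_mem_unpin() and z_mem_page_in() are assumed here as counterparts
to the z_mem_page_out() API; the exact names may differ):

    #include <zephyr.h>

    static uint8_t dma_buf[4096] __aligned(4096);

    void dma_transfer_prepare(void)
    {
        /* Fault the buffer in and pin it so no page fault can
         * occur while a device is accessing it.
         */
        z_mem_page_in(dma_buf, sizeof(dma_buf));   /* assumed API */
        z_mem_pin(dma_buf, sizeof(dma_buf));       /* assumed API */
    }

    void dma_transfer_finish(void)
    {
        z_mem_unpin(dma_buf, sizeof(dma_buf));     /* assumed API */

        /* Optionally release the frames to the backing store early. */
        z_mem_page_out(dma_buf, sizeof(dma_buf));
    }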
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Backing stores and eviction algorithms will be included here.
Exactly one of each must be chosen, with a default option that
leaves the implementation to the application.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Architecture layer hooks for demand paging. See
the doxygen documentation for these APIs for more details.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Page tables created at build time may not include the
gperf data at the very end of RAM. Work around this by ensuring
the region is mapped properly at runtime.
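The workaround is roughly of this shape, assuming the region can
be located via linker symbols (the symbol names below are
hypothetical):

    #include <zephyr.h>
    #include <sys/mem_manage.h>

    extern char gperf_region_start[];   /* hypothetical symbols */
    extern char gperf_region_end[];

    void map_gperf_region(void)
    {
        size_t size = (size_t)(gperf_region_end - gperf_region_start);

        /* Map the region the build-time page tables missed so that
         * kernel object lookups keep working with the MMU enabled.
         */
        arch_mem_map(gperf_region_start,
                     (uintptr_t)gperf_region_start,
                     ROUND_UP(size, CONFIG_MMU_PAGE_SIZE),
                     K_MEM_PERM_RW | K_MEM_CACHE_WB);
    }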
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.
Instantiation of new memory domains may still require allocations
unless a common page table is used.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.
We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.
The default address space size is now 8MB, but this can be
tuned by the application.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Just tell the kernel that RAM starts 1MB in, period.
This better simulates a low-memory microcontroller, as we're
no longer managing a very large number of page frames we'll
never use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Previously, newlib claimed all free physical memory in the
system.
Now, the kernel manages this, allowing for memory to be
used via k_mem_map() calls.
Establish an upper bound on how much newlib will try to
claim at system startup, instead of taking all of it, so
that other parts of the system can also map anonymous
memory.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now draw heap memory from an anonymous memory mapping
instead of a hard-coded region past the kernel image,
which is no longer mapped by default.
Some readability cleanups were made to a particularly
horrible set of nested ifdefs. A few types were adjusted.
sbrk()'s count argument is an intptr_t, not an int.
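A simplified sketch of the new scheme (the real hook lives in the
newlib libc hooks and sizes the arena from Kconfig; the variable
names below are illustrative):

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    static char *heap_base;   /* set from k_mem_map() during init */
    static size_t heap_size;  /* size of the anonymous mapping */
    static size_t heap_used;

    void *_sbrk(intptr_t count)
    {
        void *ret;

        if ((count > 0 && heap_used + (size_t)count > heap_size) ||
            (count < 0 && heap_used < (size_t)-count)) {
            errno = ENOMEM;
            return (void *)-1;
        }

        ret = heap_base + heap_used;
        heap_used += count;
        return ret;
    }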
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Not all RAM may be mapped. Check the mapping for the main kernel
image and the locore if it exists.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A more comprehensive solution would use E820 enumeration, but we
are unlikely to ever care that much, as we intend to use demand
paging on microcontrollers and not PC-like hardware. This is
really just to prevent QEMU from crashing.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().
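A minimal usage sketch (the flag name is the one used by the rest
of this series):

    #include <sys/mem_manage.h>

    void grow_data_space(void)
    {
        /* Request 64 KiB of zeroed, writable anonymous memory;
         * returns NULL if virtual address space or page frames are
         * exhausted.
         */
        void *arena = k_mem_map(64 * 1024, K_MEM_PERM_RW);

        if (arena == NULL) {
            /* handle allocation failure */
        }
    }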
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add linker symbols corresponding to the start and end of the
mapped Zephyr image. This is not used by the ARM arch yet, but
is required to compile the core kernel MMU code.
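The declarations the core kernel MMU code expects look roughly
like this (the z_mapped_start/z_mapped_end symbol names are
assumed here):

    /* Bounds of the kernel image as mapped into virtual memory;
     * assumed symbol names, defined by the linker script.
     */
    extern char z_mapped_start[];
    extern char z_mapped_end[];

    static inline size_t mapped_image_size(void)
    {
        return (size_t)(z_mapped_end - z_mapped_start);
    }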
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We will use this to map the kernel instead of all RAM.
The end of the kernel is always page-aligned, regardless
of CONFIG_SRAM_REGION_PERMISSIONS, as it must be mapped.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These are needed on MMU systems and define where the kernel
image resides in virtual memory at boot so that it may be
memory-mapped.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.
Fix the calculations on X86 where this is the case.
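The calculation in question boils down to a fixed virtual-to-physical
offset; a sketch, assuming Kconfig symbols along these lines
(KERNEL_VM_OFFSET is an assumed name for the alignment adjustment):

    /* Offset between the kernel's virtual address space and
     * physical SRAM; KERNEL_VM_OFFSET is an assumed symbol.
     */
    #define VM_OFFSET ((CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) \
                       - CONFIG_SRAM_BASE_ADDRESS)

    /* Translate between virtual and physical addresses within the
     * mapped kernel image.
     */
    #define PHYS_ADDR(virt) ((uintptr_t)(virt) - VM_OFFSET)
    #define VIRT_ADDR(phys) ((uintptr_t)(phys) + VM_OFFSET)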
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
To get the daily build to hopefully run to completion without
timeouts, let's increase the number of builders.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Add a conf file to make sure the kernel will use the simple
linked-list ready queue as its scheduling algorithm. This increases
the module's test coverage and ensures that the z_priq_dumb_add()
and z_priq_dumb_remove() functions are called.
Signed-off-by: Ying ming <mingx.ying@intel.com>
This change moves .rodata for the panic handler and fatal.c into
DRAM, and moves the panic handler and its dependent functions into
IRAM.
Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
This change uses the stack frame to print a backtrace once an
exception occurs. Printing a backtrace helps identify the cause of
the exception.
Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>