Commit graph

87 commits

Author SHA1 Message Date
Andrew Boie c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
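A minimal sketch of the reservation policy this commit describes. All names below are hypothetical illustrations, not the actual Zephyr backing-store API:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    static unsigned int free_slots;        /* hypothetical free-slot count */
    extern uintptr_t slot_alloc(void);     /* hypothetical slot allocator */

    /* Hand out a storage slot, but never let a non-fault request
     * (eviction via k_mem_map(), z_mem_page_out(), etc.) consume the
     * last one; that slot stays reserved for a real page fault. */
    int location_get(uintptr_t *location, bool page_fault)
    {
        unsigned int reserve = page_fault ? 0 : 1;

        if (free_slots <= reserve) {
            return -ENOMEM;
        }
        *location = slot_alloc();
        free_slots--;
        return 0;
    }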
Andrew Boie 60d306642e kernel: add z_num_pagefaults_get()
A simple counter of the number of page faults successfully handled
by the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
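A possible usage sketch; run_workload() is a hypothetical placeholder and the exact return type of the counter is an assumption:

    #include <sys/printk.h>

    extern void run_workload(void);   /* hypothetical workload */

    void report_paging(void)
    {
        /* Sample the kernel's fault counter around a workload;
         * assumes the z_num_pagefaults_get() declaration is in scope. */
        unsigned long before = z_num_pagefaults_get();

        run_workload();
        printk("page faults handled: %lu\n",
               z_num_pagefaults_get() - before);
    }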
Andrew Boie 611b626b39 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie a5cb878144 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
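A hedged usage sketch of the runtime APIs this commit names; the signatures are assumptions inferred from the function names appearing elsewhere in this log:

    /* Assumed signatures (not verified against the tree):
     *   void z_mem_pin(void *addr, size_t size);
     *   void z_mem_unpin(void *addr, size_t size);
     *   int  z_mem_page_out(void *addr, size_t size);
     */
    static uint8_t dma_buf[4096];

    void dma_example(void)
    {
        /* Keep the buffer resident while hardware may touch it. */
        z_mem_pin(dma_buf, sizeof(dma_buf));

        /* ... perform DMA into dma_buf ... */

        z_mem_unpin(dma_buf, sizeof(dma_buf));

        /* Optionally evict it to the backing store right away. */
        (void)z_mem_page_out(dma_buf, sizeof(dma_buf));
    }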
Andrew Boie ecb25fec51 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
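The visible consequence for the arch interface, sketched under the assumption that the function now returns void (with paging structures pre-allocated, there is nothing left to fail):

    /* Assumed shape of the interface after this change: no error
     * return, since all paging structures were allocated up front. */
    void arch_mem_map(void *virt, uintptr_t phys, size_t size,
                      uint32_t flags);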
Andrew Boie 5db615bb38 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
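A usage sketch, assuming the function takes no arguments and returns a byte count:

    #include <sys/printk.h>

    void show_free(void)
    {
        /* Assumed signature: size_t k_mem_free_get(void); */
        size_t avail = k_mem_free_get();

        printk("anonymous memory remaining: %zu bytes\n", avail);
    }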
Andrew Boie 8ccec8eba6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
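A hedged sketch of the mmap()-like usage; the flags name K_MEM_PERM_RW is an assumption based on the k_ naming convention in this log:

    #include <string.h>

    void map_scratch(void)
    {
        /* Request one page of anonymous read/write memory. */
        void *page = k_mem_map(4096, K_MEM_PERM_RW);

        if (page == NULL) {
            /* Out of virtual space or physical page frames. */
            return;
        }
        memset(page, 0xAA, 4096);
    }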
Andrew Boie e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
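A sketch of what one entry in the page frame ontology might look like; all field and flag names here are hypothetical:

    #include <stdint.h>
    #include <sys/slist.h>

    /* One entry per physical page frame, initialized at boot and
     * updated on every mapping operation (hypothetical layout). */
    struct page_frame {
        sys_snode_t node;   /* linkage for free/eviction lists */
        void *addr;         /* virtual mapping, if any */
        uint8_t flags;      /* e.g. PINNED, RESERVED, MAPPED */
    };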
Anas Nashif 8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 0417b97257 Revert "kernel: add k_mem_map() interface"
This reverts commit 69d39af5e6.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 6b82664a5a Revert "mmu: add k_mem_free_get()"
This reverts commit 9111ec2c19.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif d887e078f9 Revert "mmu: ensure gperf data is mapped"
This reverts commit e9bfd64110.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 752934b3c8 Revert "kernel: add demand paging implementation"
This reverts commit 2fe1fc53c8.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 75ebe4c7bd Revert "mmu: pin the whole kernel"
This reverts commit a45486e1d5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif c2c87c99c7 Revert "kernel: add z_num_pagefaults_get()"
This reverts commit d7e6bc3e84.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 5e978d237c Revert "mmu: backing stores reserve page fault room"
This reverts commit 7a642f81ab.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif ef17f889dc Revert "mmu: promote public APIs"
This reverts commit 63fc93e21f.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie 63fc93e21f mmu: promote public APIs
These are application-facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 7a642f81ab mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie d7e6bc3e84 kernel: add z_num_pagefaults_get()
A simple counter of the number of page faults successfully handled
by the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie a45486e1d5 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2fe1fc53c8 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie e9bfd64110 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure that data is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 9111ec2c19 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 69d39af5e6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2ca5fb7e06 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie d2ad783a97 mmu: rename z_mem_map to z_phys_map
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
POSIX mmap(), which might confuse people.

The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.

Parameter names to z_phys_map were adjusted slightly to be more
consistent with names used in other memory mapping functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-16 08:55:55 -05:00
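A usage sketch under the assumed signature z_phys_map(uint8_t **virt_ptr, uintptr_t phys, size_t size, uint32_t flags); the MMIO address and the K_MEM_PERM_RW flag below are assumptions for illustration:

    void map_device(void)
    {
        uint8_t *regs;

        /* 0x40001000 stands in for some device's MMIO base. */
        z_phys_map(&regs, (uintptr_t)0x40001000, 0x1000, K_MEM_PERM_RW);

        /* regs now points at the chosen virtual mapping. */
    }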
Krzysztof Chruscinski 3ed8083dc1 kernel: Cleanup logger setup in kernel files
Most kernel files were declaring the os log module without
providing a log level. Because of that, the default log level was
used instead of CONFIG_KERNEL_LOG_LEVEL.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2020-11-27 09:56:34 -05:00
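The pattern the cleanup implies: declare the shared os log module with an explicit level so CONFIG_KERNEL_LOG_LEVEL takes effect (a sketch, assuming LOG_MODULE_DECLARE is the mechanism used):

    #include <logging/log.h>

    /* Before: LOG_MODULE_DECLARE(os); -- falls back to the default
     * log level. After: pass the kernel's configured level explicitly. */
    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);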
Andrew Boie ea6e4ad098 kernel: support non-identity RAM mapping
Some platforms may have multiple RAM regions which are
discontiguous in the physical memory map. We really want
these to occupy a contiguous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.

We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.

Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE, now indicate the bounds of this region
in virtual memory.

We are currently only memory-mapping physical device driver
MMIO regions, so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes, an architecture interface
will be defined for this.

Platforms which just have one RAM region may continue to
identity-map it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-09 20:19:13 -05:00
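A hedged configuration sketch (the values are made up; the option names follow the commit message above):

    # Place the kernel's virtual region explicitly instead of relying
    # on an identity mapping of a single SRAM bank.
    CONFIG_KERNEL_VM_BASE=0x80000000
    CONFIG_KERNEL_RAM_SIZE=0x00800000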
Carlo Caione 47c592cd32 mmu: Fix mapping_pos calculation
In the MMU code, mapping_pos is miscalculated when, for example,
SRAM_BASE_ADDRESS==0x40000000 and KERNEL_VM_SIZE==0xc0000000,
yielding a mapping_pos of 0x0.

The problem is that the two values must be cast to uintptr_t
before the addition, and only then the result cast to a pointer,
to avoid the rollover to 0.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-10-23 16:24:04 -04:00
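An illustrative reconstruction of the bug (not the exact source line), assuming a 64-bit uintptr_t: the 32-bit sum 0x40000000 + 0xc0000000 wraps to 0 before the pointer cast.

    /* Broken: the addition happens in 32-bit arithmetic and wraps. */
    uint8_t *mapping_pos =
        (uint8_t *)(CONFIG_SRAM_BASE_ADDRESS + CONFIG_KERNEL_VM_SIZE);

    /* Fixed: widen both operands before adding, then cast the result. */
    uint8_t *mapping_pos_fixed =
        (uint8_t *)((uintptr_t)CONFIG_SRAM_BASE_ADDRESS +
                    (uintptr_t)CONFIG_KERNEL_VM_SIZE);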
Andrew Boie 5e0b55c30e kernel: demote k_mem_map to z_mem_map
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing at this time.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie 7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, of a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie e433494f40 kernel: mmu: implement virtual mappings
These will grow downward from CONFIG_KERNEL_VM_LIMIT.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
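A minimal sketch of the downward-growing allocation this describes; names other than CONFIG_KERNEL_VM_LIMIT are hypothetical, and bounds checks are omitted:

    #include <stddef.h>
    #include <stdint.h>

    /* Next free virtual address, starting at the top of the region. */
    static uint8_t *mapping_pos = (uint8_t *)CONFIG_KERNEL_VM_LIMIT;

    static void *virt_region_get(size_t size)
    {
        mapping_pos -= size;   /* grow downward */
        return mapping_pos;
    }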
Andrew Boie 06cf6d27f7 kernel: add k_mem_map() and related defines
This will be the interface for mapping memory in the kernel's
part of the address space, which is guaranteed to be persistent
regardless of what thread is scheduled.

Further code for specifically managing virtual memory will end up in
kernel/mmu.c.

Further definitions for memory management in general will end up
in sys/mem_manage.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00