Commit graph

23 commits

Author SHA1 Message Date
Daniel Leung
ae86519819 kernel: mmu: collect more demand paging statistics
This adds more code to gather statistics on demand paging,
e.g. clean vs. dirty pages evicted, the number of page faults
taken with IRQs locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
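
A hedged sketch of how such counters might be inspected from application
code; the k_mem_paging_stats_get() call and the field names below are
assumptions for illustration, not taken from this commit:

    #include <kernel.h>
    #include <sys/mem_manage.h>
    #include <sys/printk.h>

    /* Illustrative only: struct field names are assumed, not verbatim. */
    void dump_paging_stats(void)
    {
        struct k_mem_paging_stats_t stats;

        k_mem_paging_stats_get(&stats);

        printk("page faults: %lu (IRQ locked %lu, unlocked %lu)\n",
               stats.pagefaults.cnt, stats.pagefaults.irq_locked,
               stats.pagefaults.irq_unlocked);
        printk("evictions: clean %lu, dirty %lu\n",
               stats.eviction.clean, stats.eviction.dirty);
    }
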
Andrew Boie
acda9bf9ce linker-tool-gcc: revise for MMU support
We need to do a few things differently if we are to support
a virtual memory map, i.e. CONFIG_MMU where CONFIG_KERNEL_VM_BASE
is not the same as CONFIG_SRAM_BASE_ADDRESS.

 - All sections must be specified with a VMA and LMA, where
   VMA is the virtual address and LMA is the physical memory
   location.
 - All sections must be specified with ALIGN_WITH_INPUT to
   keep VMAs and LMAs synchronized.

To do this, the existing linker macros need some adjustment:

 - GROUP_LINK_IN is undefined when CONFIG_KERNEL_VM_BASE is not
   the same as CONFIG_SRAM_BASE_ADDRESS.
 - New macro GROUP_ROM_LINK_IN for text/rodata sections
 - New macro GROUP_NOLOAD_LINK_IN for bss/noinit sections
 - Implicit ALIGN_WITH_INPUT for all sections

GROUP_FOLLOWS_AT has been unused anywhere in the kernel for years
and has now been removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
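
A rough sketch of the idea behind the new macros; the bodies below are
illustrative, not the verbatim definitions from
include/linker/linker-tool-gcc.h. When the kernel's virtual base differs
from the physical SRAM base, each output section names both a VMA region
and an LMA region:

    /* Illustrative only -- not the exact Zephyr definitions. */
    #if defined(CONFIG_MMU) && \
        (CONFIG_KERNEL_VM_BASE != CONFIG_SRAM_BASE_ADDRESS)
    /* text/rodata: VMA in the virtual region, LMA in the load region */
    #define GROUP_ROM_LINK_IN(vregion, lregion)    > vregion AT > lregion
    /* bss/noinit: occupies virtual address space, nothing loaded */
    #define GROUP_NOLOAD_LINK_IN(vregion, lregion) > vregion
    #else
    /* identity-mapped / non-MMU case: link straight into the regions */
    #define GROUP_ROM_LINK_IN(vregion, lregion)    > lregion
    #define GROUP_NOLOAD_LINK_IN(vregion, lregion) > vregion
    #endif
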
Andrew Boie
5ca6f22cba kernel: mmu: restore address conversion functions
This reverts commit 7d32e9f9a5.

These functions are needed for linking the kernel in a virtual
address space that differs from the physical address space.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
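
The conversion itself is a constant offset between the two address
spaces. A minimal sketch, assuming hypothetical helper names (the real
macros live in the kernel's memory management headers):

    #include <stdint.h>

    /* Illustrative sketch: the permanently mapped SRAM sits at a fixed
     * offset between its virtual and physical addresses.
     */
    #define VM_OFFSET  (CONFIG_KERNEL_VM_BASE - CONFIG_SRAM_BASE_ADDRESS)

    /* Hypothetical helper names, for illustration only */
    #define PHYS_TO_VIRT(phys) ((void *)((uintptr_t)(phys) + VM_OFFSET))
    #define VIRT_TO_PHYS(virt) ((uintptr_t)(virt) - VM_OFFSET)
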
Andrew Boie
6c97ab3167 mmu: promote public APIs
These are application-facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
b9bbef2a2c kernel: add app-facing demand paging APIs
Routines to evict memory, page in memory, and set pinned state
at runtime.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
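
A brief usage sketch; the names and signatures below (k_mem_page_out(),
k_mem_page_in(), k_mem_pin(), k_mem_unpin()) are assumed from the
"evict, page in, pin" description and may not match the final API exactly:

    #include <kernel.h>
    #include <sys/mem_manage.h>
    #include <sys/printk.h>

    /* Assumed API names, for illustration */
    void paging_example(void *buf, size_t size)
    {
        /* Evict the region's data pages to the backing store;
         * this can fail, e.g. if the backing store is full.
         */
        if (k_mem_page_out(buf, size) != 0) {
            printk("could not page out region\n");
            return;
        }

        /* Fault the region back in ahead of use */
        k_mem_page_in(buf, size);

        /* Pin it while doing work that must not page fault */
        k_mem_pin(buf, size);
        /* ... time-critical work ... */
        k_mem_unpin(buf, size);
    }
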
Andrew Boie
5db615bb38 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
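
For example (assuming the function takes no arguments and returns a byte
count, as the description suggests):

    #include <kernel.h>
    #include <sys/mem_manage.h>
    #include <sys/printk.h>

    void report_free_anon_memory(void)
    {
        /* Physical memory still available for anonymous mappings */
        size_t free_bytes = k_mem_free_get();

        printk("anonymous memory remaining: %zu bytes\n", free_bytes);
    }
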
Andrew Boie
8ccec8eba6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
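
A minimal usage sketch following the mmap() analogy; the K_MEM_PERM_RW
flag name is an assumption for illustration:

    #include <kernel.h>
    #include <sys/mem_manage.h>
    #include <sys/printk.h>

    void *grow_data_space(size_t size)
    {
        /* Request an anonymous, readable/writable mapping. NULL is
         * returned if address space or physical memory is exhausted.
         */
        void *mem = k_mem_map(size, K_MEM_PERM_RW);

        if (mem == NULL) {
            printk("k_mem_map() failed for %zu bytes\n", size);
        }
        return mem;
    }
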
Andrew Boie
e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
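
Conceptually, the "page frame ontology" is per-page bookkeeping for all
of physical RAM; a hypothetical sketch of the idea (structure and field
names are invented for illustration, not the kernel's actual definitions):

    #include <stdint.h>

    /* Illustrative only: one descriptor per page of physical RAM,
     * indexed by (physical address - SRAM base) / page size.
     */
    struct page_frame {
        uint8_t flags;       /* e.g. pinned, reserved, mapped */
        void   *mapped_addr; /* virtual address it backs, if any */
    };

    #define NUM_PAGE_FRAMES \
        ((CONFIG_SRAM_SIZE * 1024U) / CONFIG_MMU_PAGE_SIZE)

    static struct page_frame frames[NUM_PAGE_FRAMES];

    static inline struct page_frame *phys_to_frame(uintptr_t phys)
    {
        return &frames[(phys - CONFIG_SRAM_BASE_ADDRESS) /
                       CONFIG_MMU_PAGE_SIZE];
    }
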
Anas Nashif
8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
0417b97257 Revert "kernel: add k_mem_map() interface"
This reverts commit 69d39af5e6.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
6b82664a5a Revert "mmu: add k_mem_free_get()"
This reverts commit 9111ec2c19.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
495c6c72ab Revert "kernel: add app-facing demand paging APIs"
This reverts commit d5b8fe16ad.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
ef17f889dc Revert "mmu: promote public APIs"
This reverts commit 63fc93e21f.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie
63fc93e21f mmu: promote public APIs
These are application-facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
d5b8fe16ad kernel: add app-facing demand paging APIs
Routines to evict memory, page in memory, and set pinned state
at runtime.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
9111ec2c19 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
69d39af5e6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
2ca5fb7e06 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
d2ad783a97 mmu: rename z_mem_map to z_phys_map
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
POSIX mmap(), which might confuse people.

The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.

Parameter names to z_phys_map were adjusted slightly to be more
consistent with names used in other memory mapping functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-16 08:55:55 -05:00
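
A hedged sketch of how the renamed function might be used to map a device
MMIO range; the signature (output virtual pointer, physical base, size,
flags) and the addresses are assumptions for illustration:

    #include <kernel.h>
    #include <sys/mem_manage.h>

    /* Hypothetical MMIO range, made up for this example */
    #define UART_MMIO_PHYS 0xfe001000UL
    #define UART_MMIO_SIZE 0x1000

    static uint8_t *uart_regs;

    void uart_map_regs(void)
    {
        /* Map the physical register block into the kernel's virtual
         * address space; uart_regs receives the virtual address.
         */
        z_phys_map(&uart_regs, UART_MMIO_PHYS, UART_MMIO_SIZE,
                   K_MEM_PERM_RW /* plus cache/IO attributes as needed */);
    }
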
Andrew Boie
5e0b55c30e kernel: demote k_mem_map to z_mem_map
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie
7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will result in only a subset of system RAM being
a fixed identity mapping instead of all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie
bcc69f944d mem_manage: add private conversion APIs
Private macros/functions for converting a virtual address
to a physical address. Only works for a specific range
of virtual addresses (the permanent SRAM mappings).

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00
Andrew Boie
06cf6d27f7 kernel: add k_mem_map() and related defines
This will be the interface for mapping memory in the kernel's
part of the address space, which is guaranteed to be persistent
regardless of what thread is scheduled.

Further code for specifically managing virtual memory will end up in
kernel/mmu.c.

Further definitions for memory management in general will end up
in sys/mem_manage.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00