kernel: support non-identity RAM mapping

Some platforms may have multiple RAM regions which are
discontinuous in the physical memory map. We really want
these to be in a continuous virtual region, and we need to
stop assuming that there is just one SRAM region that is
identity-mapped.

We no longer use CONFIG_SRAM_BASE_ADDRESS and CONFIG_SRAM_SIZE
as the bounds of kernel RAM, and no longer assume in the core
kernel that these are identity mapped at boot.

Two new Kconfigs, CONFIG_KERNEL_VM_BASE and
CONFIG_KERNEL_RAM_SIZE, now indicate the bounds of this region
in virtual memory.

We are currently only memory-mapping physical device driver
MMIO regions, so we do not need virtual-to-physical calculations
to re-map RAM yet. When the time comes, an architecture interface
will be defined for this.

Platforms which just have one RAM region may continue to
identity-map it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Commit ea6e4ad098 by Andrew Boie, 2020-11-04 13:31:14 -08:00, committed by Anas Nashif.
3 changed files with 68 additions and 16 deletions

@@ -575,6 +575,51 @@ config SRAM_REGION_PERMISSIONS
If not enabled, all SRAM mappings will allow supervisor mode to
read, write, and execute. User mode support requires this.
+config KERNEL_VM_BASE
+hex "Base virtual address to link the kernel"
+default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
+help
+Define the base virtual memory address for the core kernel.
+The kernel expects mappings for all physical RAM regions starting at
+this virtual address, with any unused space up to the size denoted by
+KERNEL_VM_SIZE available for memory mappings. This base address denotes
+the start of the RAM mapping and may not be the base address of the
+kernel itself, but the kernel's offset from it will be the same as its
+offset from the beginning of physical memory where it was loaded.
+If there are multiple physical RAM regions which are discontinuous in
+the physical memory map, they should all be mapped in a continuous
+virtual region, with bounds defined by KERNEL_RAM_SIZE.
+By default, this is the same as the DT_CHOSEN_Z_SRAM physical base SRAM
+address from DTS, in which case RAM will be identity-mapped. Some
+architectures may require RAM to be mapped in this way; they may have
+just one RAM region, and doing this makes linking much simpler, since
+at least when the kernel boots, all virtual RAM addresses are the same
+as their physical addresses (demand paging at runtime may later modify
+this for some subset of non-pinned pages).
+Otherwise, if RAM isn't identity-mapped:
+1. It is the architecture's responsibility to transition the
+instruction pointer to virtual addresses at early boot before
+entering the kernel at z_cstart().
+2. The underlying architecture may impose constraints on the bounds of
+the kernel's address space, such as not overlapping physical RAM
+regions, or the virtual and physical base addresses being aligned
+to some common value (which allows double-linking of paging structures
+to make the instruction pointer transition simpler).
+config KERNEL_RAM_SIZE
+hex "Total size of RAM mappings in bytes"
+default $(dt_chosen_reg_size_hex,$(DT_CHOSEN_Z_SRAM))
+help
+Indicates to the kernel the total size of RAM that is mapped. The
+kernel expects that all physical RAM has a memory mapping in the virtual
+address space, and that these RAM mappings are all within the virtual
+region [KERNEL_VM_BASE..KERNEL_VM_BASE + KERNEL_RAM_SIZE).
config KERNEL_VM_SIZE
hex "Size of kernel address space in bytes"
default 0xC0000000
@@ -582,10 +627,17 @@ config KERNEL_VM_SIZE
Size of the kernel's address space. Constraining this helps control
how much total memory can be used for page tables.
-The area defined by SRAM_BASE_ADDRESS to SRAM_BASE_ADDRESS +
-KERNEL_VM_SIZE must have enough room to map system RAM, plus any driver
-mappings. Further mappings may be made at runtime depending on
-configuration options (such as memory-mapping stacks, VDSO pages, etc).
+The difference between KERNEL_RAM_SIZE and KERNEL_VM_SIZE indicates the
+size of the virtual region for runtime memory mappings. This is needed
+for mapping driver MMIO regions, as well as special RAM mapping use-cases
+such as VDSO pages, memory-mapped thread stacks, and anonymous memory
+mappings.
+The system currently assumes all RAM can be mapped in the virtual address
+space. Systems with very large amounts of memory (such as 512M or more)
+will want to use a 64-bit build of Zephyr; there are no plans to
+implement a notion of "high" memory in Zephyr to work around physical
+RAM which can't have a boot-time mapping due to a too-small address space.
endif # MMU
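
The KERNEL_VM_BASE help above states that the kernel's offset within the RAM mapping equals its offset from the start of physical RAM. The following sketch shows the arithmetic that implies for the simple case of a single RAM block mapped at a constant offset; VM_OFFSET, phys_to_virt() and virt_to_phys() are illustrative names and assumed values, not an interface defined by this commit, which deliberately defers such an architecture API.

#include <stdint.h>

#define CONFIG_SRAM_BASE_ADDRESS 0x10000000UL /* assumed physical RAM base */
#define CONFIG_KERNEL_VM_BASE    0x80000000UL /* assumed virtual RAM base */

/* Constant distance between the virtual RAM mapping and physical RAM;
 * zero when RAM is identity-mapped (the default).
 */
#define VM_OFFSET ((intptr_t)CONFIG_KERNEL_VM_BASE - \
		   (intptr_t)CONFIG_SRAM_BASE_ADDRESS)

static inline void *phys_to_virt(uintptr_t phys)
{
	return (void *)(phys + VM_OFFSET);
}

static inline uintptr_t virt_to_phys(const void *virt)
{
	return (uintptr_t)virt - VM_OFFSET;
}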

@@ -20,14 +20,17 @@ LOG_MODULE_DECLARE(os);
static struct k_spinlock mm_lock;
/*
-* Overall virtual memory map. System RAM is identity-mapped:
+* Overall virtual memory map. When the kernel starts, it is expected that all
+* memory regions are mapped into one large virtual region beginning at
+* CONFIG_KERNEL_VM_BASE. Unused virtual memory up to the limit noted by
+* CONFIG_KERNEL_VM_SIZE may be used for runtime memory mappings.
*
-* +--------------+ <- CONFIG_SRAM_BASE_ADDRESS
+* +--------------+ <- CONFIG_KERNEL_VM_BASE
* | Mapping for |
* | all RAM |
* | |
* | |
-* +--------------+ <- CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_SIZE
+* +--------------+ <- CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_RAM_SIZE
* | Available | also the mapping limit as mappings grow downward
* | virtual mem |
* | |
@@ -39,7 +42,7 @@ static struct k_spinlock mm_lock;
* | ... |
* +--------------+
* | Mapping |
-* +--------------+ <- CONFIG_SRAM_BASE_ADDRESS + CONFIG_KERNEL_VM_SIZE
+* +--------------+ <- CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_SIZE
*
* At the moment we just have one area for mappings and they are permanent.
* This is under heavy development and may change.
@@ -50,21 +53,18 @@ static struct k_spinlock mm_lock;
* z_mem_map() mappings start at the end of the address space, and grow
* downward.
*
-* TODO: If we ever encounter a board with RAM in high enough memory
-* such that there isn't room in the address space, define mapping_pos
-* and mapping_limit such that we have mappings grow downward from the
-* beginning of system RAM.
+* All of this is under heavy development and is subject to change.
*/
static uint8_t *mapping_pos =
-(uint8_t *)((uintptr_t)CONFIG_SRAM_BASE_ADDRESS +
+(uint8_t *)((uintptr_t)CONFIG_KERNEL_VM_BASE +
(uintptr_t)CONFIG_KERNEL_VM_SIZE);
/* Lower-limit of virtual address mapping. Immediately below this is the
* permanent identity mapping for all SRAM.
*/
static uint8_t *mapping_limit =
-(uint8_t *)((uintptr_t)CONFIG_SRAM_BASE_ADDRESS +
-KB((size_t)CONFIG_SRAM_SIZE));
+(uint8_t *)((uintptr_t)CONFIG_KERNEL_VM_BASE +
+(size_t)CONFIG_KERNEL_RAM_SIZE);
size_t k_mem_region_align(uintptr_t *aligned_addr, size_t *aligned_size,
uintptr_t phys_addr, size_t size, size_t align)
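
The comment above describes z_mem_map() mappings starting at the end of the address space and growing downward toward the RAM mapping. Below is a self-contained sketch of just that virtual-address bookkeeping, with assumed CONFIG_* values; the locking, page alignment, and actual page-table programming are omitted.

#include <stddef.h>
#include <stdint.h>

#define CONFIG_KERNEL_VM_BASE  0x80000000UL /* assumed example value */
#define CONFIG_KERNEL_RAM_SIZE 0x10000000UL /* assumed example value */
#define CONFIG_KERNEL_VM_SIZE  0x40000000UL /* assumed example value */

/* Next free virtual address; starts at the top of the kernel's address
 * space and moves down as regions are reserved.
 */
static uintptr_t mapping_pos =
	CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_SIZE;

/* Lower limit: immediately below this is the permanent RAM mapping. */
static const uintptr_t mapping_limit =
	CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_RAM_SIZE;

/* Reserve 'size' bytes of virtual address space below the previous
 * reservation, failing once the RAM mapping would be overlapped.
 */
static void *reserve_virt(size_t size)
{
	if (mapping_pos - mapping_limit < size) {
		return NULL;
	}
	mapping_pos -= size;
	return (void *)mapping_pos;
}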

@@ -17,7 +17,7 @@
#define FAULTY_ADDRESS 0x0FFFFFFF
#elif CONFIG_MMU
/* Just past the permanent RAM mapping should be a non-present page */
-#define FAULTY_ADDRESS (CONFIG_SRAM_BASE_ADDRESS + (CONFIG_SRAM_SIZE * 1024UL))
+#define FAULTY_ADDRESS (CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_RAM_SIZE)
#else
#define FAULTY_ADDRESS 0xFFFFFFF0
#endif
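
For context, a sketch of how a fault-expecting test might touch FAULTY_ADDRESS. This is illustrative only, not the actual test body, and assumes the surrounding test file's headers and its usual mechanism for marking the fault as expected.

static void touch_faulty_address(void)
{
	volatile uint32_t *p = (volatile uint32_t *)FAULTY_ADDRESS;

	/* On MMU targets this address lies just past the RAM mapping, so the
	 * page should be non-present and the read should fault.
	 */
	(void)*p;
}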