Commit graph

26 commits

Author SHA1 Message Date
Daniel Leung db9d3134c5 kernel: mm: rename Z_MEM_PHYS/VIRT_ADDR to K_MEM_*
This is part of a series to move the memory management functions
out of the z_ namespace and into their own namespace. Also makes
the documentation available via Doxygen.
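For illustration, a minimal sketch of the rename (the macro names
follow this commit's title; the surrounding usage is hypothetical):

    phys = K_MEM_PHYS_ADDR(virt);   /* was Z_MEM_PHYS_ADDR(virt) */
    virt = K_MEM_VIRT_ADDR(phys);   /* was Z_MEM_VIRT_ADDR(phys) */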

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
frei tycho 4b7e09230c arch: x86: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED macros
- added a (void) cast where the ARG_UNUSED macro is not available
  (see the sketch below)
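A minimal sketch of both patterns (the function names are
hypothetical):

    void handler(const void *arg)
    {
        ARG_UNUSED(arg);    /* macro available: expands to a void cast */
    }

    static void early_stub(int unused)
    {
        (void)unused;       /* macro not available: cast directly */
    }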

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 22:53:13 +01:00
Daniel Leung ac5835565b x86: synchronize usage of CONFIG_X86_STACK_PROTECTION
Most places use CONFIG_X86_STACK_PROTECTION, but a few places
use CONFIG_HW_STACK_PROTECTION. Synchronize them all to use
CONFIG_X86_STACK_PROTECTION instead.
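After this change, guards consistently take the form (a sketch;
the guarded call is hypothetical):

    #ifdef CONFIG_X86_STACK_PROTECTION   /* not CONFIG_HW_STACK_PROTECTION */
        set_stack_guard(thread);         /* hypothetical guard setup */
    #endif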

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are public kernel APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to Doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the docs.
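After the move, a consumer of the public API includes the kernel
header (a sketch; the mapping call is illustrative):

    #include <zephyr/kernel/mm.h>   /* public k_mem_* APIs */

    void *page = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);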

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Gerard Marull-Paretas 16811660ee arch: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
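For example, the scripted conversion turns

    #include <kernel.h>

into

    #include <zephyr/kernel.h>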

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:57:22 +02:00
Andrew Boie 348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate instruction pointer
  transition, with logic to clean this up once the transition completes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung d39012a590 x86: use Z_MEM_*_ADDR instead of Z_X86_*_ADDR
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86-specific
versions.
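A sketch of the substitution (surrounding usage is hypothetical):

    phys = Z_MEM_PHYS_ADDR(virt);   /* was Z_X86_PHYS_ADDR(virt) */
    virt = Z_MEM_VIRT_ADDR(phys);   /* was Z_X86_VIRT_ADDR(phys) */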

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig, CONFIG_SRAM_OFFSET, to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.
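Conceptually (a sketch of the address math, not literal source):

    /* physical start of the kernel image */
    kernel_phys = CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET;

    /* virtual start, using the analogous VM offset */
    kernel_virt = CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET;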

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Andrew Boie ed22064e27 x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.
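Among the hooks implemented (signatures abbreviated; consult the
kernel arch interface headers for the authoritative forms):

    void arch_mem_page_out(void *addr, uintptr_t location);
    void arch_mem_page_in(void *addr, uintptr_t phys);
    void arch_mem_scratch(uintptr_t phys);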

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 69355d13a8 arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.
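A sketch of the resulting translation (illustrative; the real
macros fold these constants together):

    virt = phys - CONFIG_SRAM_BASE_ADDRESS
                + CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET;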

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Anas Nashif 6f61663695 Revert "arch: add KERNEL_VM_OFFSET"
This reverts commit fd2434edbd.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif adff757c72 Revert "x86: implement demand paging APIs"
This reverts commit 7711c9a82d.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie 7711c9a82d x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie fd2434edbd arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 0a791b7a09 x86: mmu: clarify physical/virtual conversions
The page table implementation requires conversion between virtual
and physical addresses when creating and walking page tables. Add
phys_addr() and virt_addr() functions instead of hard-casting
these values, plus a macro for doing the same in ASM code.

Currently, all pages are identity mapped and VIRT_OFFSET = 0, but
this will still work when virtual and physical addresses differ.
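A plausible shape for the helpers (names per this commit; the
bodies are a sketch assuming a VIRT_OFFSET-style constant):

    static inline uintptr_t phys_addr(void *virt)
    {
        return (uintptr_t)virt - Z_X86_VIRT_OFFSET;
    }

    static inline void *virt_addr(uintptr_t phys)
    {
        return (void *)(phys + Z_X86_VIRT_OFFSET);
    }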

The assembly code was also updated for 32-bit. Comments were left
in the 64-bit code, as long mode semantics don't allow use of the
Z_X86_PHYS_ADDR macro; this can be revisited later.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-15 14:16:51 -05:00
Andrew Boie 3605832682 x86: fix z_x86_thread_page_tables_get()
This was reporting the wrong page tables for supervisor
threads with KPTI enabled.

Analysis of existing use of this API revealed no problems
caused by this issue, but someone may trip over it eventually.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-10 18:22:58 -05:00
Andrew Boie 7a6cb633c0 x86: add MMU page fault codes to header
We will soon need these for demand paging.
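The error code pushed by the CPU on a page fault encodes, among
other bits (bit positions are architectural; the macro names here
are illustrative):

    #define PF_P   BIT(0)   /* 1 = protection violation, 0 = not-present */
    #define PF_W   BIT(1)   /* access was a write */
    #define PF_US  BIT(2)   /* access originated in user mode */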

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie 5b5bdb5fbd x86: add inline function to fetch cr2
It is better to encapsulate asm operations in inline functions
than to embed them directly in other C code.
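A plausible shape for such a helper (the actual name and operand
size may differ):

    static inline uintptr_t cr2_get(void)
    {
        uintptr_t cr2;

        __asm__ volatile("mov %%cr2, %0" : "=r" (cr2));
        return cr2;
    }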

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-03 17:33:39 -05:00
Andrew Boie d2a72273b7 x86: add support for common page tables
We provide an option for low-memory systems to use a single set
of page tables for all threads. This is only supported if
KPTI and SMP are disabled. This configuration saves a considerable
amount of RAM, especially if multiple memory domains are used,
at the cost of context switching overhead.

Some caching techniques are used to reduce the number of context
switch updates: the page tables aren't updated when switching to
a supervisor thread, and the page table configuration of the last
user thread switched in is cached.
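A sketch of that caching decision (the helper names are assumed):

    static struct k_thread *last_user_thread;

    void swap_update_page_tables(struct k_thread *incoming)
    {
        if (!is_user(incoming) || incoming == last_user_thread) {
            return;   /* supervisor thread or repeat: skip the update */
        }
        apply_domain_us_bits(incoming);   /* hypothetical */
        last_user_thread = incoming;
    }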

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie b8242bff64 x86: move page tables to the memory domain level
- z_x86_userspace_enter() for both 32-bit and 64-bit now
  calls into C code to clear the stack buffer and set the
  US bits in the page tables for the memory range.

- Page tables are now associated with memory domains,
  instead of having separate page tables per thread.
  A spinlock protects write access to these page tables,
  and read/write access to the list of active page
  tables.

- arch_mem_domain_init() implemented, allocating and
  copying page tables from the boot page tables.

- struct arch_mem_domain defined for x86. It has
  a page table link and also a list node for iterating
  over them (see the sketch after this list).
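A plausible shape for the per-domain data (field names follow the
description above; the exact definition may differ):

    struct arch_mem_domain {
        pentry_t *ptables;    /* this domain's page tables */
        sys_snode_t node;     /* entry in the list of active page tables */
    };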

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie bd76bdb8ff x86: define some unused PTE bits
These are ignored by the CPU and may be used for OS
accounting.
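For instance, bits 9-11 of a PTE are ignored by the MMU and free
for the OS to claim (the macro names here are illustrative):

    #define PTE_OS_AVAIL0  BIT64(9)    /* free for OS accounting */
    #define PTE_OS_AVAIL1  BIT64(10)
    #define PTE_OS_AVAIL2  BIT64(11)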

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie d31f62a955 x86: smp: add TLB shootdown logic
This will be needed when we support memory un-mapping, or
the same user mode page tables on multiple CPUs. Neither
is implemented yet.
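Conceptually (a sketch; the IPI broadcast helper is hypothetical):

    void tlb_shootdown_page(void *virt)
    {
        __asm__ volatile("invlpg (%0)" :: "r" (virt) : "memory");
        broadcast_tlb_ipi();   /* remote CPUs invalidate in the handler */
    }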

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-05 09:33:40 -05:00
Andrew Boie f6c64e92ce x86: fix arch_user_mode_enter locking
This function iterates over the thread's memory domain
and updates page tables based on it. We need to be holding
z_mem_domain_lock while this happens.
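The shape of the fix (a sketch; the real function does more
between lock and unlock):

    k_spinlock_key_t key = k_spin_lock(&z_mem_domain_lock);

    /* walk the thread's memory domain and update page tables */

    k_spin_unlock(&z_mem_domain_lock, key);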

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Wentong Wu 50252bbf92 arch: x86: mmu: use z_x86_kernel_ptables as array.
Use z_x86_kernel_ptables as an array to make Coverity happy.

Coverity-CID: 212957.
Fixes: 27832.
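The gist (a sketch): declare the symbol with array type so the
analyzer sees an object with extent rather than a single element:

    extern pentry_t z_x86_kernel_ptables[];   /* was a non-array extern */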

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-09-22 16:00:03 -05:00
Andrew Boie 7d32e9f9a5 mmu: support only identity RAM mapping
We no longer plan to support a split address space with
the kernel in high memory and per-process address spaces.
Because of this, we can simplify some things. System RAM
is now always identity mapped at boot.

We no longer require any virtual-to-physical translation
for page tables, and can remove the dual-mapping logic
from the page table generation script since we won't need
to transition the instruction pointer off of physical
addresses.

CONFIG_KERNEL_VM_BASE and CONFIG_KERNEL_VM_LIMIT
have been removed. The kernel's address space always
starts at CONFIG_SRAM_BASE_ADDRESS, with a fixed size
specified by CONFIG_KERNEL_VM_SIZE.

Driver MMIOs and other uses of k_mem_map() are still
virtually mapped, and the later introduction of demand
paging will mean that only a subset of system RAM has
a fixed identity mapping, rather than all of it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Andrew Boie 38e17b68e3 x86: paging code rewrite
The x86 paging code has been rewritten to support another paging mode
and non-identity virtual mappings.

 - Paging code now uses an array of paging level characteristics and
   walks tables using for loops (see the sketch after this list). This
   is opposed to having different functions for every paging level and
   lots of #ifdefs. The code is now more concise, and adding new paging
   modes should be trivial.

 - We now support 32-bit, PAE, and IA-32e page tables.

 - The page tables created by gen_mmu.py are now installed at early
   boot. There are no longer separate "flat" page tables. These tables
   are mutable at any time.

 - The x86_mmu code now has a private header. Many definitions that did
   not need to be in public scope have been moved out of mmustructs.h
   and either placed in the C file or in the private header.

 - Improvements to dumping page table information, with the physical
   mapping and flags all shown

 - arch_mem_map() implemented

 - x86 userspace/memory domain code ported to use the new
   infrastructure.

 - add logic for physical -> virtual instruction pointer transition,
   including cleaning up identity mappings after this takes place.
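A flavor of the table-driven walk (a sketch only: the names and
the 32-bit non-PAE constants are illustrative, large pages are
ignored, and a flat phys = virt mapping is assumed when following
entries):

    struct paging_level {
        unsigned int shift;     /* low bit of this level's index field */
        unsigned int entries;   /* entries per table at this level */
    };

    /* 32-bit, non-PAE: page directory, then page table */
    static const struct paging_level levels[] = {
        { .shift = 22, .entries = 1024 },
        { .shift = 12, .entries = 1024 },
    };

    /* return a pointer to the terminal PTE for a virtual address */
    static pentry_t *pte_get(pentry_t *top_table, uintptr_t virt)
    {
        pentry_t *table = top_table;

        for (int i = 0; i < ARRAY_SIZE(levels) - 1; i++) {
            size_t idx = (virt >> levels[i].shift) % levels[i].entries;

            table = (pentry_t *)(uintptr_t)(table[idx] & ~0xFFFUL);
        }

        const struct paging_level *last = &levels[ARRAY_SIZE(levels) - 1];

        return &table[(virt >> last->shift) % last->entries];
    }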

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-25 15:49:59 -04:00