Commit graph

30 commits

Author SHA1 Message Date
Nicolas Pitre e9a47d932c kernel: mmu: shrink and align struct z_page_frame
The struct z_page_frame is marked __packed to avoid extra padding, as
such padding may represent significant memory waste when many page
frames are used. However, this is a bad strategy.

The code contained this somewhat dubious comment and code in
free_page_frame_list_put():

	/* The structure is packed, which ensures that this is true */
	void *node = pf;
	sys_slist_append(&free_page_frame_list, node);

This is bad for many reasons:

- type checking is completely bypassed;

- if the sys_snode_t node member is no longer located at the front of
  struct z_page_frame then the code will still compile and possibly run
  but be broken with memory corruption as a likely outcome;

- the sys_slist_append() code is completely unaware of the packed
  attribute, which breaks architectures with alignment restrictions.

Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.
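
As a rough illustration of the idea (a minimal sketch with made-up names
and a fixed page size, not the actual Zephyr definitions): page frames
track page-aligned virtual addresses, so the low bits of the stored
address are always zero and can carry the flags while the structure
stays naturally aligned.

```
#include <stdint.h>

#define PAGE_SIZE   4096U
#define FLAGS_MASK  ((uintptr_t)(PAGE_SIZE - 1U))

/* Naturally aligned, no __packed attribute needed. */
struct page_frame {
	uintptr_t va_and_flags;   /* mapped VA in the high bits, flags in the low bits */
};

static inline void *frame_virt(const struct page_frame *pf)
{
	return (void *)(pf->va_and_flags & ~FLAGS_MASK);
}

static inline void frame_set_flag(struct page_frame *pf, uintptr_t flag)
{
	pf->va_and_flags |= (flag & FLAGS_MASK);
}
```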

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Nicolas Pitre 57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.
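
A minimal sketch of what such accessors could look like (the field and
flag types here are assumptions):

```
#include <stdint.h>

struct z_page_frame {
	uint8_t flags;
	/* ... other members ... */
};

static inline void z_page_frame_set(struct z_page_frame *pf, uint8_t flags)
{
	pf->flags |= flags;
}

static inline void z_page_frame_clear(struct z_page_frame *pf, uint8_t flags)
{
	pf->flags &= (uint8_t)~flags;
}
```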

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Hess Nathan e05c4a8786 coding guidelines: comply with MISRA Rule 11.8
- modified parameter types to receive a const pointer when a
  non-const pointer is not needed

- avoided redundant casts
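
For instance (a hypothetical illustration, not a specific function from
this change), a parameter that is only read through can be declared
const, so no cast is needed at the call site:

```
#include <stdbool.h>
#include <stdint.h>

/* Only reads through the pointer, so it takes a const pointer. */
static bool frame_is_reserved(const uint8_t *flags)
{
	return (*flags & 0x01U) != 0U;
}
```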

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-10 14:45:14 -05:00
Hess Nathan 32af724fbb coding guidelines: comply with MISRA C:2012 Rule 11.2
Avoid converting pointers to an incomplete type by using a pointer to
the first item instead.
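
One way to read this in the context of the page frame free list (an
illustrative sketch under that assumption, not the exact change) is to
pass the address of the first member instead of casting the enclosing
struct pointer:

```
#include <zephyr/sys/slist.h>

struct page_frame {
	sys_snode_t node;   /* kept as the first member */
};

static sys_slist_t free_list;

static void free_list_put(struct page_frame *pf)
{
	sys_slist_append(&free_list, &pf->node);   /* no pointer cast */
}
```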

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:53:20 -04:00
Simon Hein bcd1d19322 kernel: add closing comments to config endifs
Add a closing comment to the endif with the configuration
information to which the endif belongs.
This makes the code clearer if the configs need adaptation.
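
For example (the option name is only illustrative):

```
#ifdef CONFIG_DEMAND_PAGING
/* ... demand paging declarations ... */
#endif /* CONFIG_DEMAND_PAGING */
```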

Signed-off-by: Simon Hein <Shein@baumer.com>
2024-03-25 18:03:31 -04:00
Daniel Leung a7dccc4475 kernel: mmu: mitigate range check overflow issue
It is possible that address + size overflows the available
address space so that the pointer wraps around back to zero. Some
of these cases have been fixed in previous commits. This fixes
the remaining ones with regard to Z_PHYS_RAM_START/_END
and Z_VIRT_RAM_START/_END.
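
A sketch of the general pattern (an illustrative helper, not the exact
macros touched here): checking `size <= end - addr` avoids computing
`addr + size`, which could wrap:

```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool in_region(uintptr_t addr, size_t size,
		      uintptr_t start, uintptr_t end)
{
	return (addr >= start) && (addr <= end) &&
	       (size <= (size_t)(end - addr));
}
```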

Fixes #65542

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-20 11:37:17 -05:00
Daniel Leung fa561ccd59 kernel: mmu: no need to expose z_free_page_count
z_free_page_count is only used in one file, so there is
no need to expose it, even to other parts of the kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-12 18:46:21 +00:00
Daniel Leung c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Peter Mitsis bd5839ec9e kernel: Fix wrap-around check in kernel/mmu.h
Fixes the buffer wrap-around check so that it will not be ignored
by the GNU C compiler.
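
The underlying issue, sketched with an illustrative helper rather than
the literal diff: pointer arithmetic overflow is undefined behaviour, so
GCC may delete a check written as `buf + size < buf`; doing the
comparison on unsigned integers keeps it:

```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

static bool range_wraps(const void *buf, size_t size)
{
	uintptr_t start = (uintptr_t)buf;

	/* True if start + size would wrap around the address space. */
	return (UINTPTR_MAX - start) < (uintptr_t)size;
}
```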

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-08-01 09:51:33 +02:00
Flavio Ceolin df4be07b26 kernel: mmu: Fix Xtensa memory alignment issue
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed it is 5 bytes long, which
causes a memory alignment problem on Xtensa.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-23 08:54:29 +02:00
Gerard Marull-Paretas cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
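
For example:

```
#include <zephyr/kernel.h>   /* previously: #include <kernel.h> */
```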

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need
to be implemented by the eviction algorithm and the application
outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Anas Nashif bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to literal values when computing and comparing against
unsigned variables, along with other related fixes for the same MISRA
rule (10.4).
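
For instance (illustrative):

```
#include <stdint.h>

void check(uint32_t n_pages)
{
	if (n_pages > 0U) {   /* was: n_pages > 0 */
		/* ... */
	}
}
```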

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung ec21c0b92f kernel: mmu: fix boot address translation macros
The Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() address
translation macros are flipped in their calculations.
The calculation is supposed to be:

  virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
                 SRAM_BASE_ADDRESS)

So fix them.
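
A sketch of the corrected macros, derived directly from the formula
above (not the verbatim Zephyr definitions):

```
#define Z_BOOT_PHYS_TO_VIRT(phys) \
	((phys) + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) - SRAM_BASE_ADDRESS))

#define Z_BOOT_VIRT_TO_PHYS(virt) \
	((virt) - ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) - SRAM_BASE_ADDRESS))
```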

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Nicolas Pitre f9461d1ac4 mmu: fix ARM64 compilation by removing z_mapped_size usage
The linker script defines `z_mapped_size` as follows:

```
	z_mapped_size = z_mapped_end - z_mapped_start;
```

This is done with the belief that precomputed values at link time will
make the code smaller and faster.

On Aarch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.

Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:

```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```

The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the Aarch64 relocation
overflow:

```
   text    data     bss     dec     hex filename
   1216       8  294936  296160   484e0 mmu.c.obj.arm64.before
   1212       8  294936  296156   484dc mmu.c.obj.arm64.after
   1110       8    9244   10362    287a mmu.c.obj.x86-64.before
   1106       8    9244   10358    2876 mmu.c.obj.x86-64.after
```
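
A sketch of the source-level definition described above (illustrative,
not the exact header contents):

```
/* Linker-provided section boundary symbols. */
extern char z_mapped_start[];
extern char z_mapped_end[];

/* Computed at the point of use instead of relying on the link-time
 * z_mapped_size symbol, avoiding the PC-relative relocation overflow. */
#define Z_KERNEL_VIRT_SIZE ((size_t)(z_mapped_end - z_mapped_start))
```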

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-05 17:19:56 -05:00
Andrew Boie 14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.
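
Conceptually (a minimal sketch with made-up names and sizes, not
Zephyr's actual backing store API), one storage location is withheld
from voluntary eviction so a real page fault can always be written out:

```
#include <stdbool.h>
#include <stddef.h>

#define NUM_SLOTS 16U

static size_t slots_used;

static bool slot_alloc(bool page_fault)
{
	/* Keep the last slot back unless servicing an actual fault. */
	size_t limit = page_fault ? NUM_SLOTS : (NUM_SLOTS - 1U);

	if (slots_used >= limit) {
		return false;
	}
	slots_used++;
	return true;
}
```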

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 60d306642e kernel: add z_num_pagefaults_get()
Simple counter of the number of page faults successfully handled by
the core kernel.
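
A minimal sketch of the idea (exact type, name and locking are
assumptions):

```
static unsigned long z_num_pagefaults;

/* Bumped by the fault handler on each successfully handled fault. */
unsigned long z_num_pagefaults_get(void)
{
	return z_num_pagefaults;
}
```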

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 431b7c0fe5 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Anas Nashif 8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif cd0beca292 Revert "kernel: add demand paging internal interfaces"
This reverts commit 3e51a7a775.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif c2c87c99c7 Revert "kernel: add z_num_pagefaults_get()"
This reverts commit d7e6bc3e84.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 5e978d237c Revert "mmu: backing stores reserve page fault room"
This reverts commit 7a642f81ab.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie 7a642f81ab mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie d7e6bc3e84 kernel: add z_num_pagefaults_get()
Simple counter of the number of page faults successfully handled by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 3e51a7a775 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2ca5fb7e06 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00