Commit graph

40 commits

Daniel Leung
95d260e77e xtensa: mmu/ptables: rename flags to attrs under arch_mem_map()
arch_mem_map() takes in some flags describing the permissions
and cache status of the to-be-mapped memory regions. When the
flags are translated, they become attributes in the PTEs. So for
functions called by arch_mem_map() and below, rename flags to
attrs to better describe their purpose.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
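
A minimal sketch of the rename described above; the internal helper
name is an assumption, not the actual source:

    /* Public API: callers pass generic mapping flags. */
    void arch_mem_map(void *virt, uintptr_t phys, size_t size, uint32_t flags);

    /* Internal helper (name assumed): by this point the flags have
     * been translated into PTE attribute bits, hence "attrs".
     */
    static void map_memory(uintptr_t vaddr, uintptr_t paddr, uint32_t attrs);
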
Daniel Leung
84ade183ed xtensa: mmu: cosmetic changes to page table variable names
In functions which manipulate both L1 and L2 tables, make
the variable names obvious by prefixing them with l1_ or l2_.
This is mainly done to avoid confusion when reading through
those functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
Daniel Leung
e709cbe3c8 xtensa: mmu: fix __arch_mem_map assert message
The assert error messages when l2_page_table_map() fails are not
correct. It returns false when it cannot allocate a new L2
table, not because the address has already been mapped. So
correct the error messages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
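
Roughly the shape of the fix, with assumed names; the assert must
blame the allocation failure, not a duplicate mapping:

    /* l2_page_table_map() (signature assumed) returns false only
     * when a new L2 table cannot be allocated, so say exactly that.
     */
    bool ok = l2_page_table_map(l1_table, vaddr, paddr, attrs);

    __ASSERT(ok, "%s: unable to allocate new L2 page table", __func__);
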
Daniel Leung
6877bc8644 xtensa: userspace: handle DTLB multihit exception
This adds a function to handle the DTLB multihit exception when
userspace is enabled, as this exception may be raised when the
same page is accessible to both kernel and user threads. Under
some circumstances the auto-refill DTLBs may contain two entries
for the same page, one for kernel and one for user. We need to
invalidate those existing DTLB entries so that hardware can
reload from the page table.

This is an alternative to the kconfig that invalidates DTLBs on
every swap, CONFIG_XTENSA_MMU_FLUSH_AUTOREFILL_DTLBS_ON_SWAP.
Although this approach adds no extra processing per context
switch, exception handling itself has a high cost, so care needs
to be taken when deciding whether to enable that kconfig.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
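
A hedged sketch of the recovery path; the way count and selector
layout are assumptions based on common Xtensa MMU configurations,
not the exact Zephyr code:

    #include <zephyr/sys/util.h>

    /* On a DTLB multihit exception, drop every auto-refill entry
     * that could map the faulting page, then let hardware refill
     * from the page table. Ways 0-3 are the auto-refill DTLB ways.
     */
    static void handle_dtlb_multihit(uintptr_t fault_addr)
    {
            uintptr_t page = ROUND_DOWN(fault_addr, CONFIG_MMU_PAGE_SIZE);

            for (unsigned int way = 0; way < 4; way++) {
                    /* idtlb invalidates the entry selected by
                     * VPN + way; dsync makes the update visible.
                     */
                    __asm__ volatile ("idtlb %0\n\tdsync"
                                      :: "r"(page | way) : "memory");
            }
    }
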
Daniel Leung
d3a126cb5d xtensa: userspace: handle load/store ring exception
When a page can be accessed by both kernel and user threads,
the auto-refill DTLB may contain an entry for the kernel thread.
This results in a load/store ring exception when the page is
later accessed by a user thread. In this case, we need to
invalidate all TLB entries associated with kernel access so that
hardware can reload the correct permissions for the user thread
from the page table.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
Daniel Leung
c76b338ec4 xtensa: mmu: properly restore PTE attributes via reset_region()
The software bits inside the PTE are used to store the original
PTE attributes and ring value, and those bits are used to
restore the PTE to its previous state.

This modifies reset_region() to properly restore the attributes
and ring value when resetting memory regions to the same state
as when the system boots.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-15 11:47:25 -04:00
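
Illustrative only — the masks and shift below are assumed, not the
real PTE layout — but this is the idea: stash the original ring and
attribute bits in the PTE's software-defined field at map time, and
pull them back out in reset_region():

    #include <stdint.h>

    #define PTE_ATTR_MASK 0x0000000FU /* attribute bits (assumed layout) */
    #define PTE_RING_MASK 0x00000030U /* ring bits (assumed layout) */
    #define PTE_SW_SHIFT  6           /* software-defined bits (assumed) */

    /* Stash a copy of the original attrs + ring in the software bits. */
    static uint32_t pte_save_original(uint32_t pte)
    {
            uint32_t orig = pte & (PTE_ATTR_MASK | PTE_RING_MASK);

            return pte | (orig << PTE_SW_SHIFT);
    }

    /* Restore attrs + ring from the software bits, as reset_region()
     * does when returning a region to its boot-time state.
     */
    static uint32_t pte_restore_original(uint32_t pte)
    {
            uint32_t orig = (pte >> PTE_SW_SHIFT)
                            & (PTE_ATTR_MASK | PTE_RING_MASK);

            return (pte & ~(PTE_ATTR_MASK | PTE_RING_MASK)) | orig;
    }
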
Guennadi Liakhovetski
c6b6c62c21 arch: xtensa: (cosmetic) simplify function prototypes
Several static functions in ptables.c always return 0; make them
void to improve readability.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2025-09-17 19:01:35 +02:00
Peter Mitsis
dde9462666 arch: tweak xtensa_mem_kernel_has_access() API
Adds 'const' to the address pointer as the memory it points to
is not modified.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-06-19 00:03:00 +02:00
Daniel Leung
d31ee53b60 xtensa: allow flushing auto-refill DTLBs on page table swap
This adds a new kconfig and corresponding code to allow flushing
auto-refill data TLBs when page tables are swapped (e.g. during
context switching). This is mainly used to avoid the multi-hit
TLB exception raised by a certain memory access pattern. If
memory is only marked for user mode access but is not inside a
memory domain, accessing that page in kernel mode results in a
TLB entry being filled with the kernel ASID. When going back
into user mode, accessing the same memory results in another TLB
entry being filled with the user mode ASID. Now there are two
entries for the same memory page, and the multi-hit TLB
exception is raised when that page is accessed. This type of
access is better served by using memory partitions and memory
domains to share data; it is not prohibited, but it is highly
discouraged. The code is wrapped in a kconfig simply because of
the execution penalty from the unnecessary TLB refills it
causes, so only enable it if necessary.

Fixes #88772

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-05-28 20:01:58 +02:00
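
A sketch of what the kconfig enables, under assumed TLB geometry
(4 auto-refill ways with 4 entries each; the selector layout is
also an assumption): flush every auto-refill DTLB entry at page
table swap time so no stale kernel/user pair can multi-hit.

    static inline void flush_autorefill_dtlbs(void)
    {
            for (unsigned int entry = 0; entry < 4; entry++) {
                    for (unsigned int way = 0; way < 4; way++) {
                            /* Entry index lives in the VPN bits above
                             * the page offset (assumed 4 KB pages).
                             */
                            uint32_t sel = (entry << 12) | way;

                            __asm__ volatile ("idtlb %0"
                                              :: "r"(sel) : "memory");
                    }
            }
            __asm__ volatile ("dsync");
    }
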
Daniel Leung
277fa9e8ac xtensa: userspace: swap page tables via assembly code
Since the necessary register values are now pre-computed and
stored in the memory domain struct, we can use them directly
in various assembly locations, thus replacing the function
call to xtensa_swap_update_page_tables().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-04-17 00:57:19 +02:00
Daniel Leung
0dce0bdc0b xtensa: userspace: pre-compute MMU registers at domain init
Instead of computing all the needed register values when
swapping page tables, we can pre-compute those values when
the memory domain is first initialized. This should save some
time during context switching.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-04-17 00:57:19 +02:00
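
Illustrative struct shape — the field names here are assumptions,
not the actual Zephyr definitions — showing where the pre-computed
values live so the swap path (including the assembly path in the
commit above) can simply load and write them:

    #include <stdint.h>

    struct arch_mem_domain {
            uint32_t *ptables;     /* this domain's L1 page table */
            uint32_t reg_asid;     /* pre-computed RASID register value */
            uint32_t reg_ptevaddr; /* pre-computed PTEVADDR register value */
            uint32_t reg_ptepin;   /* pre-computed page-table pin value */
    };
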
Daniel Leung
a4367eb514 xtensa: remove CONFIG_XTENSA_INVALIDATE_MEM_DOMAIN_TLB_ON_SWAP
Remove CONFIG_XTENSA_INVALIDATE_MEM_DOMAIN_TLB_ON_SWAP as it is
a remnant from early MMU enabling work and is no longer needed:
the page table code differs from the early version, where the
PTEVADDR would be the same for all memory domains.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-04-17 00:57:19 +02:00
Daniel Leung
833bb667b2 xtensa: fix thread_page_tables_get being unused
thread_page_tables_get() is only used when userspace is
enabled. So move it inside the userspace #ifdef, or else the
compiler complains about it being unused.

Fixes #88421

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-04-15 19:09:52 +02:00
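
The shape of the fix (signature and field access assumed): the
helper only exists when userspace — its only caller — is compiled in.

    #ifdef CONFIG_USERSPACE
    /* Only used by userspace code paths; defining it unconditionally
     * triggers an unused-function warning.
     */
    static uint32_t *thread_page_tables_get(const struct k_thread *thread)
    {
            return thread->arch.ptables; /* body assumed/elided */
    }
    #endif /* CONFIG_USERSPACE */
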
Daniel Leung
c925b0ecd5 xtensa: mmu: fix incorrect caching attrs on double mapping
Inside map_memory() with double mapping enabled, we should not
map the memory with the incoming attributes as-is, since the
incoming address may be in an uncached region while carrying a
cached attribute. So we need to sanitize the attributes
according to the incoming address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-03-14 01:01:32 +01:00
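
A sketch of the sanitizing step; IN_UNCACHED_REGION() is a stand-in
for however the region is detected, and the attribute flag name is
an assumption:

    static uint32_t sanitize_attrs(uintptr_t vaddr, uint32_t attrs)
    {
            /* A mapping for an address in the uncached region must
             * not carry a cached attribute.
             */
            if (IN_UNCACHED_REGION(vaddr)) {
                    attrs &= ~XTENSA_MMU_CACHED_WB; /* flag name assumed */
            }

            return attrs;
    }
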
Nicolas Pitre
46aa6717ff Revert "arch: deprecate _current"
Mostly a revert of commit b1def7145f ("arch: deprecate `_current`").

This commit was part of PR #80716, whose initial purpose was to
provide an architecture-specific optimization for _current. The
actual deprecation was sneaked in later on without proper
discussion.

The Zephyr core always used _current before and that was fine. It is quite
prevalent as well and the alternative is proving rather verbose.
Furthermore, as a concept, the "current thread" is not something that is
necessarily architecture specific. Therefore the primary abstraction
should not carry the arch_ prefix.

Hence this revert.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-01-10 07:49:08 +01:00
Yong Cong Sin
b1def7145f arch: deprecate _current
`_current` is now functionally equal to `arch_curr_thread()`.
Remove its usage in-tree and deprecate it instead of removing it
outright, as it has been with us since forever.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-11-23 20:12:24 -05:00
Daniel Leung
b0185189ad xtensa: mmu: fix page table initialization
xtensa_mmu_init() is called really early in the boot process,
where the _kernel struct has not yet been initialized, and thus
we cannot use it to determine if the current CPU is the boot
CPU. In some cases, this may skip the call to initialize the
page tables, which leaves us with incorrect page table entries.
Fix it by using a static variable to track whether the page
tables have been initialized, so we only do it once per boot.

Fixes #76909

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-14 09:36:19 +02:00
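
The shape of the fix (helper name assumed): a function-local static
replaces the _kernel-based boot CPU check, which is not usable this
early in boot.

    #include <stdbool.h>

    void xtensa_mmu_init(void)
    {
            /* _kernel is not initialized yet, so track this ourselves. */
            static bool ptables_initialized;

            if (!ptables_initialized) {
                    init_page_tables(); /* assumed helper */
                    ptables_initialized = true;
            }

            /* per-CPU MMU register setup continues either way */
    }
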
Daniel Leung
8c10964435 xtensa: mmu: clear ZSR_DEPC_SAVE at boot
ZSR_DEPC_SAVE is used to determine whether we are faulting
inside a double exception when it is non-zero. It is possible
that the boot ROM or custom startup code leaves this non-zero,
which would result in a fake triple fault. So clear it at boot.
Note that the zeroing is done in the MMU init code as these
triple faults are not actual hardware ones but only semantic,
and will only occur once the MMU is enabled.

Fixes #75194

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-28 20:53:37 -04:00
Daniel Leung
79939e3279 xtensa: mmu: mpu: add xtensa_mem_kernel_has_access()
This adds a new function xtensa_mem_kernel_has_access() to
determine if a memory region can be accessed by kernel threads.
This allows checking for validly mapped memory before accessing
it, instead of relying on page faults to detect invalid
accesses.

Also fixed an issue with arch_buffer_validate() on MPU where it
may report success even if the incoming memory region has no
corresponding entry in the MPU table.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
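
A hedged usage sketch (exact signature assumed): probe the mapping
first instead of letting a page fault flag the invalid access.

    #include <errno.h>
    #include <string.h>

    static int copy_from_mapped(void *dst, const void *addr, size_t size)
    {
            if (!xtensa_mem_kernel_has_access(addr, size, 0 /* read */)) {
                    return -EFAULT; /* not mapped for kernel access */
            }

            memcpy(dst, addr, size); /* safe: mapping verified above */
            return 0;
    }
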
Daniel Leung
61ec0d15d5 xtensa: mmu: arch_buffer_validate is only for user thread
arch_buffer_validate() is only meant to verify that user threads
have access to a memory region. It should not be used to verify
whether a kernel thread has access (which it does anyway). So
change the logic.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
54af5dda84 kernel: mm: rename z_page_frame_* to k_mem_page_frame_*
Also any demand paging and page frame related bits are
renamed.

This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00
Flavio Ceolin
f74a84b251 xtensa: mmu: MMU re-initialization API
When power management is enabled, depending on the SoC power
state used when idle, the MMU may lose context and may need to
be re-initialized. When re-initializing the MMU, we must not
re-create the page tables, because that may overwrite changes
made during execution, but we still need to set the ASID and
page table for the current context.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-04 16:27:55 -05:00
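
Sketch of the split, with assumed names: full init builds the page
tables once, while re-init after a context-losing power state only
reloads the per-context registers.

    void xtensa_mmu_reinit(void)
    {
            /* Do not rebuild the page tables here: mappings created
             * at runtime would be overwritten. Only reload the ASID
             * and page table registers for the current context.
             */
            set_asid_and_ptables(current_domain); /* both names assumed */
    }
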
Flavio Ceolin
d6d3c1098d xtensa: mmu: dup_table does not need parameter
The only page table duplicated is the kernel page table. This function
does not need a parameter.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
0012725d85 xtensa: mmu: Avoid k_mem_domain_default duplication
We can use some extra bits available for software use to save
the original permissions and avoid duplicating the kernel page
tables for the default memory domain.

When duplicating the page table into a new domain, we just make
sure to restore the original mapping.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Flavio Ceolin
dcaceda39b xtensa: mmu: Simplify memory map
Simplify the logic around the shared attribute: check whether a
memory region should be shared only in the function that
actually maps the memory.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-14 15:54:55 +02:00
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
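
Roughly the shape of the new accessors (the exact definitions may
differ); callers stop touching pf->flags directly, so the structure
can evolve underneath them:

    static inline void z_page_frame_set(struct z_page_frame *pf,
                                        uint8_t flags)
    {
            pf->flags |= flags;
    }

    static inline void z_page_frame_clear(struct z_page_frame *pf,
                                          uint8_t flags)
    {
            pf->flags &= ~flags;
    }
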
Hess Nathan
ed12a0cc35 coding guidelines: comply with MISRA Rule 11.8
Modified parameter types to receive a const pointer when a
non-const pointer is not needed.

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-09 10:28:44 +02:00
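
A hypothetical example of the pattern: the function only reads
through the pointer, so giving the parameter the right type removes
any need for a qualifier-dropping cast (MISRA C:2012 Rule 11.8).

    /* was: bool region_is_mapped(uint8_t *addr, size_t size); */
    bool region_is_mapped(const uint8_t *addr, size_t size);
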
Flavio Ceolin
d8635905e9 xtensa: mmu: Avoid unnecessary mapping
When duplicating a page table, we don't need to copy
the mapping to the kernel l1 page table virtual address.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-04-27 11:05:45 +03:00
Flavio Ceolin
69b831ddcb xtensa: mmu: Remove unused __data_*
__data_start and __data_end are not needed in ptables.c

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-26 10:38:02 +01:00
Flavio Ceolin
25292c8d38 xtensa: mmu: Remove unused _bss_*
_bss_start and _bss_end are not used in ptables.c
Just remove them.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-26 10:38:02 +01:00
Daniel Leung
33f586f312 xtensa: mmu: no need to ignore array bound for SoC MMU ranges
The #pragma to ignore array bounds for xtensa_soc_mmu_ranges[]
was a remnant from before code refactoring during review. Since
this array is no longer declared __weak and as a zero-length
array, the #pragma to ignore array bounds is no longer needed.
So remove it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-22 16:17:22 -05:00
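
For reference, the removed pattern looked roughly like this; it is
only needed while the array is a zero-length __weak placeholder:

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Warray-bounds"
    /* ... accesses to xtensa_soc_mmu_ranges[] ... */
    #pragma GCC diagnostic pop
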
Anas Nashif
d7678f1694 xtensa: move to use system cache API support for coherency
Remove custom implementation and use system cache interface instead.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-02-03 13:42:33 -05:00
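
A hedged sketch of the replacement: Zephyr's generic system cache
API instead of arch-private cache routines (the wrapper functions
here are illustrative, not from the commit).

    #include <zephyr/cache.h>

    static void publish_buffer(void *buf, size_t size)
    {
            /* Write back dirty lines so other observers see the data. */
            sys_cache_data_flush_range(buf, size);
    }

    static void reread_buffer(void *buf, size_t size)
    {
            /* Drop stale lines before reading data written elsewhere. */
            sys_cache_data_invd_range(buf, size);
    }
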
Daniel Leung
fa25c0b0b8 xtensa: mmu: invalidate mem domain TLBs during page table swap
This adds a kconfig to enable invalidating the TLBs related to
the incoming thread's memory domain during page table swaps.
It provides a workaround, if needed, to clear out stale TLB
entries used by the thread being swapped out. Those stale
entries may contain incorrect permissions and rings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-27 15:59:05 +00:00
Johan Hedberg
3fbf12487c kernel: Introduce a way to specify minimum system heap size
There are several subsystems and boards which require a relatively large
system heap (used by k_malloc()) to function properly. This became even
more notable with the recent introduction of the ACPICA library, which
causes ACPI-using boards to require a system heap of up to several
megabytes in size.

Until now, subsystems and boards have tried to solve this by having
Kconfig overlays which modify the default value of HEAP_MEM_POOL_SIZE.
This works ok, except when applications start explicitly setting values
in their prj.conf files:

$ git grep CONFIG_HEAP_MEM_POOL_SIZE= tests samples|wc -l
     157

The vast majority of values set by current sample or test
applications are much too small for subsystems like ACPI, which
results in the application not being able to run on such boards.

To solve this situation, we introduce support for subsystems to specify
their own custom system heap size requirement. Subsystems do
this by defining Kconfig options with the prefix HEAP_MEM_POOL_ADD_SIZE_.
The final value of the system heap is the sum of the custom
minimum requirements, or the value of the existing
HEAP_MEM_POOL_SIZE option, whichever is greater.

We also introduce a new HEAP_MEM_POOL_IGNORE_MIN Kconfig option
which applications can use to force a lower value than what
subsystems have specified; however, this behavior is disabled by
default.

Whenever the minimum is greater than the requested value a CMake warning
will be issued in the build output.

This patch ends up modifying several places outside of kernel
code, since the presence of the system heap is no longer
detected using a non-zero CONFIG_HEAP_MEM_POOL_SIZE value;
rather, it's now detected using a new K_HEAP_MEM_POOL_SIZE value
that's evaluated at build time.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2023-12-20 11:01:42 +01:00
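
The detection change, roughly — presence of the system heap is now
keyed off the computed K_HEAP_MEM_POOL_SIZE rather than the raw
Kconfig symbol. A minimal sketch:

    #include <zephyr/kernel.h>

    void *maybe_alloc(size_t n)
    {
    #if K_HEAP_MEM_POOL_SIZE > 0
            return k_malloc(n); /* system heap is present */
    #else
            ARG_UNUSED(n);
            return NULL;        /* no system heap configured */
    #endif
    }
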
Daniel Leung
a819bfb2d5 xtensa: rename z_xtensa to simply xtensa
Rename the remaining z_xtensa stuff as these are (mostly)
under arch/xtensa.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung
8bf20ee975 xtensa: mmu: rename prefix z_xtensa to xtensa_mmu
This follows the effort to remove the z_ prefix. Since the MMU
code has a large number of these, separate out these changes
into one commit to ease review effort.

Since these no longer have the z_ prefix, they need proper
doxygen docs. So add them too.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Flavio Ceolin
0a7251e365 xtensa: mmu: Fix tlb shootdown
Fix the way we read the current L1 page table set in the MMU.
We use it to check whether the current page table differs from
that of the running thread.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-12-12 10:58:03 +01:00
Flavio Ceolin
7382d7052b xtensa: mmu: Fix partition permission
The ring field in the PTE mapping a memory partition should be
based on the partition attribute and not on the domain ASID,
which is used only to set the ASID (in RASID) position for the
user ring.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-12-12 10:57:57 +01:00
Flavio Ceolin
c47880af0d arch/xtensa: Add new MMU layer
Andy Ross's re-implementation of the MMU layer with some subtle
changes, like re-using existing macros and fixing the page table
cache property when direct-mapping it in the TLB.

From Andy's original commit message:

This is a reworked MMU layer, sitting cleanly below the page table
handling in the OS.  Notable differences from the original work:

+ Significantly smaller code and simpler API (just three functions to
  be called from the OS/userspace/ptable layer).

+ Big README-MMU document containing my learnings over the process, so
  hopefully fewer people need to go through this in the future.

+ No TLB flushing needed.  Clean separation of ASIDs, just requires
  that the upper levels match the ASID to the L1 page table page
  consistently.

+ Vector mapping is done with a 4k page and not a 4M page, leading to
  much more flexibility with hardware memory layout.  The original
  scheme required that the 4M region containing vecbase be mapped
  virtually to a location other than the hardware address, which makes
  confusing linkage with call0 and difficult initialization
  constraints where the exception vectors run at different addresses
  before and after MMU setup (effectively forcing them to be PIC
  code).

+ More provably correct initialization, all MMU changes happen in a
  single asm block with no memory accesses which would generate a
  refill.

Signed-off-by: Andy Ross <andyross@google.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin
8dd84bc181 arch: xtensa: Rename xtensa_mmu.c to ptables.c
Initial work to split page table manipulation from
mmu hardware interaction.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Renamed from arch/xtensa/core/xtensa_mmu.c