Commit graph

512 commits

Johan Hedberg 3fbf12487c kernel: Introduce a way to specify minimum system heap size
There are several subsystems and boards which require a relatively large
system heap (used by k_malloc()) to function properly. This became even
more notable with the recent introduction of the ACPICA library, which
causes ACPI-using boards to require a system heap of up to several
megabytes in size.

Until now, subsystems and boards have tried to solve this by having
Kconfig overlays which modify the default value of HEAP_MEM_POOL_SIZE.
This works ok, except when applications start explicitly setting values
in their prj.conf files:

$ git grep CONFIG_HEAP_MEM_POOL_SIZE= tests samples|wc -l
     157

The vast majority of values set by current sample or test applications
is much too small for subsystems like ACPI, which results in the
application not being able to run on such boards.

To solve this situation, we introduce support for subsystems to specify
their own custom system heap size requirement. Subsystems do
this by defining Kconfig options with the prefix HEAP_MEM_POOL_ADD_SIZE_.
The final value of the system heap is the sum of the custom
minimum requirements, or the value of the existing HEAP_MEM_POOL_SIZE
option, whichever is greater.

We also introduce a new HEAP_MEM_POOL_IGNORE_MIN Kconfig option which
applications can use to force a lower value than what subsystems have
specified; however, this behavior is disabled by default.

Whenever the minimum is greater than the requested value a CMake warning
will be issued in the build output.

This patch ends up modifying several places outside of kernel code,
since the presence of the system heap is no longer detected using a
non-zero CONFIG_HEAP_MEM_POOL_SIZE value, but rather using a new
K_HEAP_MEM_POOL_SIZE value that's evaluated at build time.
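
A rough sketch (function name is illustrative) of how code can now key
off the computed K_HEAP_MEM_POOL_SIZE instead of a non-zero
CONFIG_HEAP_MEM_POOL_SIZE:

    #include <zephyr/kernel.h>

    static void *alloc_scratch(size_t len)
    {
    #if K_HEAP_MEM_POOL_SIZE > 0
        /* Heap present: sized as the greater of the summed
         * HEAP_MEM_POOL_ADD_SIZE_* requirements and HEAP_MEM_POOL_SIZE.
         */
        return k_malloc(len);
    #else
        return NULL; /* no system heap configured */
    #endif
    }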

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2023-12-20 11:01:42 +01:00
Luca Burelli 4d86162989 llext: merge llext_mem and llext_section enums
The only differences between the two enums are some entries related to
relocation sections. However, these entries are not used in the
code, so they can be safely removed, along with the mapping function.

Use LLEXT_MEM_* to avoid confusion with low-level "section" names.

Signed-off-by: Luca Burelli <l.burelli@arduino.cc>
2023-12-14 19:06:55 +00:00
Anas Nashif e25f31ab78 arch: guard more code with CONFIG_EXCEPTION_DEBUG
It should be possible to disable exception debug, which is enabled by
default, to reduce image size. Add the missing guards now that the
option is cross-architecture.
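
A hypothetical sketch of the guard pattern (report_fault() and its
contents are illustrative):

    #include <zephyr/sys/printk.h>

    static void report_fault(unsigned int reason)
    {
    #ifdef CONFIG_EXCEPTION_DEBUG
        /* verbose diagnostics are compiled in only when exception
         * debug is enabled; dropping them shrinks the image
         */
        printk("fatal fault, reason %u\n", reason);
    #endif
    }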

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-14 09:32:27 +01:00
Anas Nashif a7e8391e31 debug: coredump: handle xtensa coredump like everyone else
There should not be any special handling for coredump with xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-14 09:32:27 +01:00
Daniel Leung 0636e52eff xtensa: rename xtensa_asm2.c to vector_handlers.c
Rename xtensa_asm2.c to a more meaningful name that actually
reflects the content of the file. This file is mostly about
handling interrupts and exceptions (via the predefined vectors
in Xtensa core).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung b2f20d65b3 xtensa: remove z_arch_get_next_switch_handle
Fold z_arch_get_next_switch_handle() into return_to(). This is
not exactly an arch interface, and is simple enough to be
moved into return_to().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung a819bfb2d5 xtensa: rename z_xtensa to simply xtensa
Rename the remaining z_xtensa stuff as these are (mostly)
under arch/xtensa.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 6d5e0c25a6 xtensa: rename z_xtensa_irq to simply xtensa_irq
This gets rid of the z_ prefix.

Note that z_xt_*() are being used by the HAL so they cannot be
renamed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 8bf20ee975 xtensa: mmu: rename prefix z_xtensa to xtensa_mmu
This follows the idea of removing the z_ prefix. Since the MMU code
has a large number of these, separate these changes out into one
commit to ease review effort.

Since these no longer have the z_ prefix, they need proper doxygen
documentation, so add that too.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 711622cbba xtensa: remove get_sreg macro from fatal.c
There is no in-file user. So remove it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 004e68ccea xtensa: move exception handling func to arch internal header
z_xtensa_dump_stack() and z_xtensa_exccause() are both arch
internal functions that should not be exposed in public API.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 43b0b48de7 xtensa: move files under core/include/ into include/
Header files under arch/xtensa/include are considered internal
to the architecture. There is really no need for two places to
house architecture-internal header files.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung d9b34c6108 xtensa: move irq management stuff into irq_manage.c...
... from xtensa_asm2.c. Other architectures have
z_irq_spurious() and *_irq_is_enabled() in irq_manage.c,
so follow that convention here.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 264391fe88 xtensa: refactor thread related stuff into its own file...
... from xtensa_asm2.c.

Everything has been stuffed inside xtensa_asm2.c where
it is all mangled together. So extract the thread related
code into its own file.

Note that arch_float_*() may not be thread related but
most other architectures put them into thread.c. So we
also do it here.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung aa64b1f98e xtensa: move arch_spin_relax into smp.c
arch_spin_relax() does not really fit into the scheme of
xtensa_asm2.c, as that file is mainly about handling interrupts
and exceptions. So move it into smp.c, similar to other
architectures which have arch_spin_relax() defined there.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 106061b307 xtensa: rename files with hyphens to underscores
Simply to provide some consistency in file naming under
arch/xtensa.

These are all internally used files and are not public.
So there is no need to provide a deprecation path for
them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 43990d1c0e xtensa: remove xtensa-asm2.h
xtensa-asm2.h only contains the function declaration of
xtensa_init_stack(), which is only used in one file. So
make the actual implementation a static function in that
file. Also, there is really no need to expose the stack init
function as a public arch API. So remove xtensa-asm2.h.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung 6694390ac8 xtensa: cleanup Kconfig file
* Fix wording on CONFIG_SIMULATOR_XTENSA
* Remove "default n" as the default is n anyway.
* Remove some tabs as we almost never indent inside an if block
  in Zephyr.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung fc7ef47ed5 xtensa: remove CONFIG_XTENSA_NO_IPC
There is no in-tree user. Also, it is misleading as we use
SCOMPARE1 for spinlock too, not just IPC.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Daniel Leung d17524b86c xtensa: remove CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC from arch
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC should be defined at the SoC
or the board level since Xtensa cores are highly configurable.
The default is just for ISS (Instruction Set Simulator). So
remove it from the arch level.

The xt-sim board is the only one in tree that is targeting
the ISS, so add it there.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-12-13 09:41:24 +01:00
Anas Nashif 8e5c3ce616 arch: xtensa: check for xt-clang instead of xcc-clang
Check against new variant name. xcc-clang usage is deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-12 18:48:14 +00:00
Flavio Ceolin 0a7251e365 xtensa: mmu: Fix tlb shootdown
Fix the way we read the current L1 page table set in the MMU.
We use it to check whether the current page table is different from
that of the running thread.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-12-12 10:58:03 +01:00
Flavio Ceolin a19d415c35 xtensa: mmu: Fix xtensa_ptevaddr_get
We use ptevaddr_get to know the address at which the page table is
set. It turns out that for this use we should just use the ptebase
field of the ptevaddr register.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-12-12 10:58:03 +01:00
Flavio Ceolin 7382d7052b xtensa: mmu: Fix partition permission
The ring field in the PTE mapping a memory partition should
be based on the partition attribute, not on the domain ASID,
which is only used to set the ASID position (in RASID) for the
user ring.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-12-12 10:57:57 +01:00
Guennadi Liakhovetski 03519afb84 llext: xtensa: add support for local symbol relocations
Add support for relocating local symbols, as specified in the
.rela.dyn section.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2023-12-01 10:08:12 -05:00
Flavio Ceolin 683accaef0 xtensa: mmu: Include missing header
This file uses inline functions declared in cache.h and consequently
has to include that file.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-27 19:57:46 +01:00
Flavio Ceolin c47880af0d arch/xtensa: Add new MMU layer
Andy Ross's re-implementation of the MMU layer with some subtle
changes, like re-using existing macros and fixing the page table cache
property when direct-mapping it in the TLB.

From Andy's original commit message:

This is a reworked MMU layer, sitting cleanly below the page table
handling in the OS.  Notable differences from the original work:

+ Significantly smaller code and simpler API (just three functions to
  be called from the OS/userspace/ptable layer).

+ Big README-MMU document containing my learnings over the process, so
  hopefully fewer people need to go through this in the future.

+ No TLB flushing needed.  Clean separation of ASIDs, just requires
  that the upper levels match the ASID to the L1 page table page
  consistently.

+ Vector mapping is done with a 4k page and not a 4M page, leading to
  much more flexibility with hardware memory layout.  The original
  scheme required that the 4M region containing vecbase be mapped
  virtually to a location other than the hardware address, which makes
  confusing linkage with call0 and difficult initialization
  constraints where the exception vectors run at different addresses
  before and after MMU setup (effectively forcing them to be PIC
  code).

+ More provably correct initialization, all MMU changes happen in a
  single asm block with no memory accesses which would generate a
  refill.

Signed-off-by: Andy Ross <andyross@google.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 8dd84bc181 arch: xtensa: Rename xtensa_mmu.c to ptables.c
Initial work to split page table manipulation from
mmu hardware interaction.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Andy Ross e7e8f6655c arch/xtensa: #include cleanup
This file doesn't need the asm2 header, and the preprocessor logic
around whether to include the backtrace header is needless (all it
does is declare functions).

Signed-off-by: Andy Ross <andyross@google.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin b7d96ab42a xtensa: userspace: Warning about impl security
Add a Kconfig option (and build warning) alerting about the problem
of the kernel spilling registers on behalf of userspace.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin dd36389879 arch: xtensa: Not use TLS to store current thread
Xtensa clears out threadptr during ISRs when userspace
is enabled.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 1247f8465c xtensa: userspace: Supports tls on userspace
Use thread local storage to check whether or not a thread is running
in user mode. This allows us to use threadptr to properly support TLS.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 0d79481540 xtensa: userspace: only write 0xAA to stack if INIT_STACKS
Only fill the user stack with 0xAA if CONFIG_INIT_STACKS is
enabled. Otherwise, write 0x00 as if the stack were in BSS.
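
A minimal sketch of the resulting fill logic (names illustrative):

    #include <string.h>

    static void init_user_stack(char *start, size_t size)
    {
    #ifdef CONFIG_INIT_STACKS
        memset(start, 0xAA, size); /* poison pattern for stack usage checks */
    #else
        memset(start, 0x00, size); /* as if the stack lived in BSS */
    #endif
    }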

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 0e7def1977 xtensa: selectively init interrupt stack at boot
During arch_kernel_init(), the interrupt stack is initialized.
However, if the stack currently in use is the interrupt stack, this
would wipe all the data up to that point in the stack and might result
in a crash. So skip initializing the interrupt stack if the current
stack pointer is within its boundaries.
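
Roughly, the boundary check amounts to the following (illustrative
sketch, not the actual implementation):

    #include <string.h>

    static void init_irq_stack(char *irq_stack, size_t size)
    {
        char probe;
        char *sp = &probe; /* approximate current stack pointer */

        if (sp >= irq_stack && sp < irq_stack + size) {
            return; /* already on the interrupt stack: don't wipe it */
        }
        memset(irq_stack, 0xAA, size);
    }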

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 6252fcfccf xtensa: userspace: simplify syscall helper
Consolidate all syscall helpers into one function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung d9f643d007 xtensa: mmu: do not map heap if not using heap
Do not map the heap area by default if we are not using
the heap at all.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 9a33c400a1 xtensa: mmu: Fix possible race condition on tlb shootdown
We need to use the mmu spin lock when invalidating the cache during
tlb shootdown, otherwise it is possible that this happens when another
thread is updating the page tables.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 156f1d4436 xtensa: mmu: Flush cache when altering pages
When the target has a cache way size (cache size / number of cache
ways) bigger than the page size, we have cache aliasing, since the
number of bits required by the cache index is bigger than the number
of bits in the page offset.

To avoid this problem we flush the whole cache on context switch or when
the current page table is changed.
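
A worked example with hypothetical numbers (a 32 KiB data cache with
4 ways gives an 8 KiB way size, exceeding a 4 KiB page):

    #define DCACHE_SIZE     (32 * 1024) /* hypothetical values */
    #define DCACHE_WAYS     4
    #define MMU_PAGE_SIZE   (4 * 1024)

    #define DCACHE_WAY_SIZE (DCACHE_SIZE / DCACHE_WAYS) /* 8 KiB */

    #if DCACHE_WAY_SIZE > MMU_PAGE_SIZE
    /* The cache index uses virtual-address bits above the page offset,
     * so the same physical page can alias in the cache; hence flush the
     * whole cache on context switch or page table change.
     */
    #endif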

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 7a5d2a2d81 xtensa: userspace: swap page tables at context restore
Swap page tables at exit of the exception handler if we are going to
be restored to another thread context. Otherwise we would be using
the outgoing thread's page tables, which would not work
correctly due to mappings and permissions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung c9c88a4368 xtensa: mmu: cache and TLB actions when adding thread to domain
When adding a thread to a memory domain, we need to also update
the mapped page table if it is the current running thread on
the same CPU. If it's not on the same CPU, we need to notify
the other CPUs in case the thread is running in one of them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 81ea43692c xtensa: mmu: send IPI to invalidate TLBs on other CPUs
After changing the contents of page table(s), the other CPUs need to
be notified that the page table(s) have changed so they can take the
necessary steps to use the updated version. Note that the actual way
to send an IPI is SoC specific, as Xtensa does not have a common way
to do this at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung eb546a8d87 xtensa: rework kernel oops exception path
When a kernel OOPS is raised, we need to actually go through
the process of terminating the offending thread, instead of
simply printing the stack and continuing to run. This change
employs a mechanism similar to xtensa_arch_except(), using an
illegal instruction to raise a hardware exception and going
through the fatal exception path.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 586bb92049 xtensa: userspace: Add syscall for user exception
Triggering an exception on Xtensa requires kernel privileges. Add
a new syscall that is used when ARCH_EXCEPT is invoked from userspace.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin 75936d8db2 xtensa: userspace: Implement arch_syscall_oops
This function is needed by userspace.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung 716efb2e40 xtensa: extract printing of fatal exception into its own func
This extracts the printing of fatal exception information into
its own function to declutter xtensa_excint1_c().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung e9c449a737 xtensa: mmu: do not fault for known exceptions
There are known exceptions which are not fatal, and we need to
handle them properly by returning to the fixup addresses as
indicated. This adds the code necessary in the exception
handler for this situation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung bc0656a92e xtensa: mmu: allocate scratch registers for MMU
When the MMU is enabled, we need some scratch registers to preload
page table entries. So update gen_zsr.py to allocate them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Daniel Leung c4706a3823 xtensa: mmu: handle page faults in double exception handler
This changes TLB miss handling back to the assembly code in the user
exception vector, with any page faults during TLB misses handled in
the double exception handler. This should speed up simple TLB miss
handling as we don't have to go all the way to the C handler.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin a651862b30 xtensa: Enable userspace
Userspace support for Xtensa architecture using Xtensa MMU.

Some considerations:

- Syscalls are not inline functions like in other architectures because
  of compiler issues when using multiple registers to pass parameters
  to the syscall. So here we have a function call so we can use
  registers as we need.
- TLS is not supported by xcc on xtensa, and reading the PS register is
  a privileged instruction. So we have to use threadptr to know if a
  thread is a user mode thread.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin fff91cb542 xtensa: mmu: Simplify initialization
Simplify the logic around xtensa_mmu_init.

- Do not have a different path to init part of kernel
- Call xtensa_mmu_init from C

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Flavio Ceolin a1bb2b9c64 xtensa: mmu: Simplify autorefill TLB helpers
Replace all autorefill helpers with a single one that invalidates
both DTLB and ITLB, since that is what is really needed.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-11-21 15:49:48 +01:00
Andy Ross 080e14f0f4 arch/xtensa: Rename "ALLOCA" ZSR to "A0SAVE"
This register alias was originally introduced to allow A0 to be used
as a scratch register when handling exceptions from MOVSP
instructions. (It replaced some upstream code from Cadence that
hard-coded EXCSAVE1).  Now the MMU code is using it too, and for
exactly the same purpose.

Calling it "ALLOCA" is only confusing.  Rename it to make it clear
what it's doing.

Signed-off-by: Andy Ross <andyross@google.com>
2023-11-21 15:49:48 +01:00
Rander Wang 954901296c arch/xtensa: clean up arch_cpu_idle function
Some workarounds were introduced for the Intel cavs2.5 platform bring
up. They are not general, so move them to platform code.

Signed-off-by: Rander Wang <rander.wang@intel.com>
2023-11-20 11:14:41 +01:00
Daniel Leung c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
Daniel Leung b485cd717b xtensa: remove unused z_mp_entry declaration
z_mp_entry has been removed from Xtensa architecture.
So there is no need for a function declaration. Remove it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-17 18:23:06 -05:00
Daniel Leung cdd4d84703 xtensa: add custom mem range check functions
This provides custom memory range check functions as
it gets a bit complicated with cached/uncached regions.
These functions are marked as __weak so SoC or board
can override these if needed.
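
The override mechanism looks roughly like this (the function name and
address range are illustrative, not the actual API):

    #include <zephyr/toolchain.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* arch/xtensa default, weak so SoC/board code can replace it */
    __weak bool xtensa_mem_range_is_valid(uintptr_t addr, size_t size)
    {
        return true; /* permissive fallback */
    }

    /* SoC/board override (in a separate file), e.g. accepting only one
     * RAM window while accounting for cached/uncached aliases
     */
    bool xtensa_mem_range_is_valid(uintptr_t addr, size_t size)
    {
        return (addr >= 0x60000000UL) && ((addr + size) <= 0x70000000UL);
    }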

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-10-20 15:08:34 +02:00
Daniel Leung b4da11f929 gdbstub: xtensa: add support for dc233c core
This adds support for using coredump with the Xtensa DC233C core,
which is being used by qemu_xtensa and qemu_xtensa_mmu.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-09-27 19:30:15 -05:00
Daniel Leung ba6c9c2136 xtensa: dc233c: enable backtrace support
Adds the necessary bits to enable backtrace support
for Xtensa DC233C core.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-09-26 08:37:43 +02:00
Daniel Leung 1194a35aa2 xtensa: cast char* to void* during stack dump with %p
cbprintf_package() warns about using char* for %p. So cast
it to void* to avoid the warning.
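
I.e., roughly (variable names illustrative):

    #include <zephyr/sys/printk.h>

    static void dump_sp(char *stack_ptr)
    {
        /* %p with a char * trips the cbprintf_package() warning,
         * so cast to void * first
         */
        printk("stack pointer: %p\n", (void *)stack_ptr);
    }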

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-09-26 08:37:29 +02:00
Daniel Leung fcf22e59b8 xtensa: mark arch_switch ALWAYS_INLINE
arch_switch() is basically an alias to xtensa_switch() so
we can mark arch_switch() as ALWAYS_INLINE to avoid another
function call, especially when no optimization is used when
debugging.
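
The wrapper is essentially just this (sketch):

    #include <zephyr/toolchain.h>

    void xtensa_switch(void *switch_to, void **switched_from);

    static ALWAYS_INLINE void arch_switch(void *switch_to, void **switched_from)
    {
        xtensa_switch(switch_to, switched_from);
    }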

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-09-26 08:37:29 +02:00
Daniel Leung ca23a5f0cf xtensa: mmu: allow SoC to do additional MMU init steps
This adds a function arch_xtensa_mmu_post_init() which can
be implemented on the SoC layer to perform additional MMU
initialization steps if necessary.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 088a31e2bf xtensa: mmu: preload ITLB for VECBASE before restoring...
...VECBASE during MMU initialization. This is to make sure
that we can use the TLB miss handling in the exception
vector after we have moved VECBASE back during MMU
initialization. Otherwise we would be stuck forever in ITLB
misses, because the exception vectors are not in the TLB and
we cannot populate the TLB since the code that would do so
(the vectors themselves) is not in the TLB either.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 40f2486b68 xtensa: mmu: rename MMU_KERNEL_RING to Z_XTENSA_KERNEL_RING...
...and move it to xtensa_mmu_priv.h.

This would allow the SoC layer to use the RING number if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 18eb17f4cd xtensa: mmu: add arch_reserved_pages_update
This adds arch_reserved_pages_update() which is called in
k_mem_manage_init() to reserve some physical pages so they
are not re-mapped. This is due to Zephyr's linker scripts
for Xtensa, which often put something before z_mapped_start
(aka .text), for example vecbase. That space needs to be
reserved, or else k_mem_map() could map it, resulting in
faults.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 4778c13bbe xtensa: mmu: handle all data TLB misses in double exception
Instead of only handling data TLB misses for VECBASE, change it
to handle all data TLB misses in the double exception handler.
This is because we may encounter data TLB misses when trying to
preload page table entries inside the user exception handler.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-08-26 16:50:40 -04:00
Flavio Ceolin c723d8b8d3 xtensa: Add missing synchronization
An rsync is needed after writing the MISC0..3 or EXCSAVE1..7
registers and before reading them back.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 24148718fc xtensa: mmu: cache common data and heap if !XTENSA_RPO_CACHE
If CONFIG_XTENSA_RPO_CACHE is not enabled, it can be assumed
that memory is not double mapped in hardware for cached and
uncached access. So we can specify those regions to have
cache via TLB.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung b6ccbae58d xtensa: mmu: use _image_ram_start/end for data region
Simply using __data_start and __data_end is not enough, as
it leaves out the kobject regions which are supposed to be
near the .data section. So use _image_ram_start and
_image_ram_end instead to enclose data, bss and various
kobject regions (among others).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 257404a143 xtensa: mmu: init: only clear enough entries in way 6
During MMU initialization, we clear TLB way 6 to remove all
identity mappings. Depending on CPU configuration, there is a
certain number of entries per way. So use the number from
core-isa.h to clear the right number of entries instead of the
hard-coded number 8. Specifying an entry number outside the
permitted range may result in the CPU reacting in weird ways, so
better to avoid that.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 614e64325d xtensa: mmu: no longer identity map the first 512MB
This removes the identity map of the first 512MB in TLB way 6.
Otherwise it would interfere with mapped entries, resulting in
double-mapped TLB exceptions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung b5016714b0 xtensa: mmu: handle TLB misses during user exception
This adds code to deal with TLB misses, as these come in as
level 1 interrupts.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 98ffd1addd xtensa: crt1: call z_xtensa_mmu_init
The MMU needs to be initialized before going into C, so
z_xtensa_mmu_init() is called in crt1.S before the call
to z_cstart(). Note that this is the default case, and
crt1.S can be disabled if the board and SoC desire to do so.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
Daniel Leung 38d4b78724 xtensa: mmu: remove printing vaddr registers during exception
It turns out not all MMU-enabled Xtensa cores have vaddrstatus,
vaddr0 and vaddr1, and there does not seem to be a way to
determine whether they are available. So remove them from
the exception printout for now.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-26 16:50:40 -04:00
YuLong Yao 823e6b70d2 arch: xtensa: Implement arch_float_enable&disable
Every arch must have arch_float_enable&disable functions.
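
For reference, the interface being implemented looks like this (sketch;
see the arch interface headers for the authoritative declarations):

    #include <zephyr/kernel.h>

    /* Enable/disable FP context handling for a thread; return 0 on
     * success or a negative errno (e.g. -ENOTSUP) if unsupported.
     */
    int arch_float_enable(struct k_thread *thread, unsigned int options);
    int arch_float_disable(struct k_thread *thread);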

Signed-off-by: YuLong Yao <feilongphone@gmail.com>
2023-08-21 10:10:06 +02:00
Marek Matej 6b57b3b786 soc: xtensa,riscv: esp32xx: refactor folder structure
Refactor the ESP32 target SoCs together with
all related boards. Most breaking changes include:

- changing the CONFIG_SOC_ESP32* to refer to
  the actual soc line (esp32,esp32s2,esp32s3,esp32c3)
- replacing CONFIG_SOC with the CONFIG_SOC_SERIES
- creating CONFIG_SOC_FAMILY_ESP32 to embrace all
  the ESP32 across all used architectures
- introducing CONFIG_SOC_PART_NUMBER_* to
  provide a SOC model config
- introducing the 'common' folder to hide all
  commonly used configs and files.
- updating west.yml to reflect previous changes in hal

Signed-off-by: Marek Matej <marek.matej@espressif.com>
2023-07-25 18:12:33 +02:00
Daniel Leung e6d8926857 xtensa: set no optimization for arch_cpu_idle() with xt-clang
xt-clang likes to remove any run of more than 8 consecutive NOPs. So
we need to force the function to be built without optimization to
avoid this behavior and retain all those NOPs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-07-24 11:07:30 -04:00
Daniel Leung a458d0443a xtensa: allow arch-specific arch_spin_relax() with more NOPs
This adds a Kconfig to introduce the Xtensa specific
arch_spin_relax() which can do more NOPs. Some Xtensa SoCs
may need more NOPs after failure to lock a spinlock,
especially under SMP. This gives the bus extra time to
propagate the RCW transactions among CPUs.
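
A sketch of such an arch_spin_relax() (the Kconfig symbol name below is
made up for illustration):

    static inline void arch_spin_relax(void)
    {
        /* extra NOPs give the bus time to settle the RCW traffic
         * before the spinlock is retried
         */
        for (int i = 0; i < CONFIG_XTENSA_SPIN_RELAX_NOPS; i++) {
            __asm__ volatile ("nop");
        }
    }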

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-07-20 10:47:47 +00:00
Lucas Tamborrino eb028ccf55 debug: coredump: xtensa: Add esp32s3
Add coredump support for esp32s3.

Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
2023-06-21 16:06:06 -04:00
Lucas Tamborrino ba3766a75f debug: coredump: xtensa: add esp32s2
Add coredump support for esp32s2.

Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
2023-06-21 16:06:06 -04:00
Marek Matej 1c130d0060 arch: xtensa: Enable builds without the multithreading
Allow builds that have CONFIG_MULTITHREADING disabled.
This reduces code footprint, which is handy for
constrained targets such as bootloaders.

Signed-off-by: Marek Matej <marek.matej@espressif.com>
2023-05-25 16:15:54 +02:00
Marek Matej 4796746b5e soc: esp32: MCUboot support
This makes MCUboot build as a Zephyr application,
providing an optional 2nd-stage bootloader in addition
to the IDF bootloader, which is used by default.
This provides more flexibility when building
and loading multiple images and aims to bring
a better developer experience to users via sysbuild.
MCUboot and applications now have separate
linker scripts.

Signed-off-by: Marek Matej <marek.matej@espressif.com>
2023-05-25 16:15:54 +02:00
Rander Wang 445f4e8877 arch/xtensa: undefine NOP32
It should not be NOP16, since that is not defined by this file.

Signed-off-by: Rander Wang <rander.wang@intel.com>
2023-05-25 04:49:14 -04:00
Daniel Leung e444cc9fb9 xtensa: mmu: always map data TLB for VECBASE
This adds code to always map the data TLB for VECBASE so that
we deal with fewer data TLB misses during exception handling.
With VECBASE always mapped, there is no need to pre-load it
anymore.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-05-23 08:54:29 +02:00
Daniel Leung c3d1fa2138 xtensa: mmu: handle TLB misses in C exception handler
This moves the TLB miss handling to the C exception handler.
This also allows us to handle page faults (for example,
unmapped pages) at that point, as any further exceptions
raised in the C handler will not trigger the double exception
handler but re-enter the same C handler.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-05-23 08:54:29 +02:00
Daniel Leung ce1bf365b6 xtensa: make CONFIG_XTENSA_MMU_PTEVADDR based on choice
Instead of being able to arbitrarily set the PTEVADDR for the page
table, this provides choices (currently just one). This is in
preparation for handling memory management exceptions in C code. For
that to work, we will need to pre-load the page table address
(PTEVADDR) for the memory pages containing exception code and data
(containing jump addresses), and the various stacks. This is to
preempt any TLB misses during handling of the level 1 interrupt code.
If a TLB miss is encountered while handling a level 1 interrupt, we
would be thrown into the double exception handling code and get stuck
in an infinite loop. However, in order to pre-load the page table
entries, PTEVADDR needs to be calculated. This requires the PTEVADDR
base, which cannot be loaded via l32r, as that may cause a data TLB
miss. So we must be able to derive the PTEVADDR base address strictly
within code, without any data load. Therefore, change
CONFIG_XTENSA_MMU_PTEVADDR to be based on a choice so we can have a
pre-defined bit shift value for the shift operation. This shift value
will be used in the exception handling code.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-05-23 08:54:29 +02:00
Daniel Leung dfc87e2754 xtensa: gen_zsr: add _STR for extra registers
This also generates the corresponding _STR entries for the
extra registers.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-05-23 08:54:29 +02:00
Flavio Ceolin d091740b00 xtensa: mmu: Add option to map memory in cached/uncached
Add a build option to tell whether memory should be mapped into cached
and uncached regions.

If the memory is in neither the cached nor the uncached region, it is
not double mapped.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-23 08:54:29 +02:00
Flavio Ceolin 020df54ba4 xtensa: mmu: Initial implementation
Initial support for Xtensa MMU version 3. It uses a two-level page
table, based on the fact that the page table is in virtual space. Only
the top level (page directory) is wired (mapped) in the TLB to avoid
second-level page misses.

The mapped memory is completely fragmented in multiple sections; maybe
we will find a better way in the future.

The exception handler is where we effectively map the memory; the way
it works is:

1) SW tries to access some memory address
2) The address is not mapped, so the MMU will try the auto-refill,
   looking up the page table
3) The page table content is not mapped (remember, just the top-level
   page is mapped)
4) An exception is triggered; in the exception handler we try to read
   the portion of the page table that maps the original address
5) The address is not mapped, so the MMU will try the auto-refill
   again. This time though, the address is mapped by the top-level
   page, which is properly mapped. (The top-level page maps the page
   table itself.)

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-23 08:54:29 +02:00
Flavio Ceolin c4025f026f arch: xtensa: Remove unecessary logic in backtrace
In z_xtensa_backtrace_print the parameter depth is checked for <= 0.
There is no need to check it again later; also, since the variable is
not used after the while loop, we can use the parameter directly
without an additional variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-12 18:31:13 -04:00
Flavio Ceolin f3bec2ffee xtensa: tls: Fix invalid reference
bsa is not defined there. It should be accessed through the frame
pointer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-11 17:38:16 -04:00
Andy Ross e31ae60058 arch/xtensa: Fix nested interrupt entry
The "cross stack call" mechanism has intermediate states where the
stack frames are not valid for our own interrupt entry code, which
causes corruption if an interrupt races at exactly the right time.
Leave interrupts masked until just before the call.

The fix is mildly complicated by the fact that we RELY on nested window
exception frames to spill registers from the interruptee, so we have to
do the masking with PS.INTLEVEL, which requires a register to save its
contents, which we don't have since everything needs to happen in one
4-register window.  But thankfully our Zephyr-reserved EPS register is
guaranteed to be available through this process.

Fixes #57009

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-08 16:56:17 -04:00
Kumar Gala f8ddd3d77f xtensa: limit special exit() to XT_SIMULATOR
Use the common exit() provided by libc so we get standard behavior
across all architectures.  So only implement a special exit when
XT_SIMULATOR is defined.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2023-05-08 09:59:54 +02:00
Anas Nashif 6388f5f106 xtensa: use sys_cache API instead of custom interfaces
Use sys_cache instead of custom and internal APIs.
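
E.g., replacing an arch-private flush helper with the portable call
(function and buffer names illustrative):

    #include <zephyr/cache.h>

    static void publish_buffer(void *buf, size_t len)
    {
        /* write back dirty lines so other agents see the data */
        sys_cache_data_flush_range(buf, len);
    }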

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-04-26 07:31:22 -04:00
Daniel Leung 5aa1aaddc2 xtensa: fatal: no backtrace if no stack is passed in
The backtrace requires a valid stack pointer to start
printing backtraces. So if there is no stack pointer
being passed in, skip printing backtraces.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-04-21 16:27:50 +02:00
Daniel Leung 5def4ab915 xtensa: fix inline assembly of rsil in exception code for XCC
Commit 408472673e added inline
assembly to lock interrupts. However, XCC doesn't like the syntax
using STRINGIFY, nor an empty clobber section. So parameterize
the second argument to rsil, and remove the last colon.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-04-20 08:14:35 -04:00
Daniel Leung 1e9d4602ab xtensa: add some structs for interrupt stack frames
This adds some structs for interrupt stack frames to make it
easier to access individual elements, and ultimately to get
rid of magic array element numbers in the code. Hopefully,
this will aid debugging, where you can view the whole
struct in a debugger.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-04-20 04:45:52 -04:00
Aastha Grover 408472673e arch: xtensa: Fix xtensa error handler
In case of recoverable fatal errors, execution should
switch to another thread. This ensures the current_cpu nested
count is reset when there is a context switch.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-04-11 14:48:51 -04:00
Daniel Leung db495a5ebe xtensa: stop execution under simulator for double exception
If running under the Xtensa simulator, it is good to tell the
simulator to stop execution once we reach a double exception, as the
current double exception handler is simply an endless loop. If tracing
is turned on in the simulator, the output file would contain endless
iterations of this loop, and the simulator would need to be stopped
manually before the file size gets out of control. So tell the
simulator to stop once we reach this point instead of looping forever.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-01-23 10:09:18 +00:00
Lucas Tamborrino 9e289c1b20 arch: xtensa: save FPU register in context switching
Save the FP user registers and the FP register file during context
switch.

This change enables shared FP registers mode using CONFIG_FPU_SHARING.

Since there is no lazy stacking, the FPU registers will be saved
regardless of whether floating point calculations are performed in the
threads when CONFIG_FPU_SHARING is enabled. This requires 72 additional
bytes of stack memory.
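
The 72 bytes break down as 16 single-precision registers plus the two
FP user registers, roughly (struct name illustrative):

    #include <stdint.h>

    struct xtensa_fpu_ctx {
        uint32_t fr[16]; /* f0..f15: 64 bytes */
        uint32_t fcr;    /* FP control register */
        uint32_t fsr;    /* FP status register */
    };                   /* 64 + 4 + 4 = 72 bytes per thread */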

Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
2022-12-27 13:23:17 +01:00
honglin leng 676441e0ca xtensa: remove xtensa asm unused header
1. This header is of no use to asm.

2. If xclib is used, this header includes xclib's stdbool and expands
   to a typedef.

Signed-off-by: honglin leng <a909204013@gmail.com>
2022-11-22 12:45:33 +09:00