If running under the Xtensa simulator, it is good to tell the
simulator to stop execution once we reach the double exception
vector, as the current double exception handler is simply an endless
loop. If tracing is turned on in the simulator, the output file would
contain endless iterations of this loop, and the simulator would have
to be stopped manually before the file size gets out of control. So
tell the simulator to stop once execution reaches this point instead
of spinning forever.
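A sketch of the mechanism, assuming the common Xtensa semihosting
convention of an "exit" simcall (the call number and register ABI
below are assumptions; they are simulator-specific):

    /* Ask the simulator to stop instead of spinning. Sketch only;
     * simcall is only meaningful under the ISS/QEMU. */
    static inline void sim_exit(void)
    {
        __asm__ volatile("movi a2, 1\n\t"  /* assumed: exit call number */
                         "movi a3, 1\n\t"  /* assumed: exit code        */
                         "simcall"
                         : : : "a2", "a3");
    }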
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Save FP user register and FP register file during context switch.
This change enables shared FP registers mode using CONFIG_FPU_SHARING.
Since there is no lazy stacking, the FPU registers will be saved regardless
of whether floating point calculations are performed in the threads when
CONFIG_FPU_SHARING is enabled. This requires 72 additional bytes of
stack memory.
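For reference, a sketch of where the 72 bytes go, assuming the usual
single-precision FPU option (the struct name and field order are
illustrative, not the actual save area layout):

    #include <stdint.h>

    /* 16 x 4-byte FP registers + FCR + FSR = 64 + 4 + 4 = 72 bytes */
    struct xtensa_fpu_context {
        uint32_t fcr;     /* floating-point control user register */
        uint32_t fsr;     /* floating-point status user register  */
        uint32_t fr[16];  /* register file f0..f15                */
    };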
Signed-off-by: Lucas Tamborrino <lucas.tamborrino@espressif.com>
1. This header is of no use to assembly code.
2. If xclib is used, this header pulls in xclib's stdbool, which
expands to a typedef.
Signed-off-by: honglin leng <a909204013@gmail.com>
Triggers a CPU exception with an illegal instruction when
z_except_reason is called (e.g. in k_panic, k_oops). Creates an
exception stack frame for use by coredump. Adds a unique cause code
for ARCH_EXCEPT. Disables the test case failure for qemu_xtensa.
Without an ARCH_EXCEPT implementation, z_except_reason calls
z_fatal_error directly with a null ESF, bypassing
xtensa_excint1_c's error logging. An ESF is required to generate a
coredump.
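A minimal sketch of the idea, assuming the reason code travels to the
handler in a2 and that the "ill" opcode supplies the trapping
instruction (the actual patch may differ in both respects):

    #include <stdint.h>
    #include <zephyr/toolchain.h>

    /* Raise IllegalInstructionCause with the reason code in a2. */
    #define ARCH_EXCEPT(reason_p) do {                             \
        register uint32_t a2 __asm__("a2") = (uint32_t)(reason_p); \
        __asm__ volatile("ill" : : "r"(a2) : "memory");            \
        CODE_UNREACHABLE;                                          \
    } while (false)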
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Assembler files were not migrated to the new <zephyr/...> include
prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
We had a similar sequence for interrupt return, where we were
selecting (actually only for the benefit of qemu) the highest priority
EPCn/EPSn registers for our RFI instruction. That works much better
in Python than in the preprocessor.
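This is the sort of assignment the generator script emits; the names
and register choices below are hypothetical examples, not the script's
actual output:

    /* zsr.h (generated): symbolic Zephyr uses -> free scratch SRs */
    #define ZSR_CPU       MISC0  /* pointer to this CPU's struct       */
    #define ZSR_FLUSH     MISC1  /* stack-flush marker (next patch)    */
    #define ZSR_RFI_LEVEL 6      /* highest usable EPCn/EPSn level     */
    #define ZSR_EPC       EPC6   /* EPCn register matching that level  */
    #define ZSR_EPS       EPS6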
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The kernel coherence cache flush code was using a scratch register to
mark the top of the stack. Likewise a good candidate for ZSR use.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is actually Cadence-authored code, but its use of EXCSAVE1 as a
sideband input to the exception handler is very much in the same
family of tricks. Use ZSR assignments here too.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds basic support for GDB stub on Xtensa. Note that
this only provides the common bits on the architecture side.
SoC support is also required to fully enable GDB stub on
each Xtensa SoC.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
On CPU startup, when we reach the cache flush code in arch_switch(),
the outgoing thread is a dummy. The behavior of the existing code was
to leave the existing value in the SR unchanged (probably NULL at
startup). Then the context switch would walk from that address up to
the top of the outgoing stack, flushing everything in between. That's
wrong, because the outgoing stack is a real pointer (generally the
interrupt stack of the current CPU), and we're flushing everything in
memory underneath it.
This also reverts commit 29abc8adc0 ("xtensa: fix booting secondary
cores on the dummy thread"), which appears to have been an early
attempt to address this issue. It worked (modulo all the extra and
potentially incorrect flushing) on cavs v1.5/1.8 because of the way
the entry code worked there. But on 2.5 we now hit the first context
switch in a case where those extra lines are in address space already
marked unwritable by the CPU, so the flush explodes.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When we reach this code in interrupt context, our upper GPRs contain a
cross-stack call that may still include some registers from the
interrupted thread. Those need to go out to memory before we can do
our cache coherence dance here.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
XCC doesn't like the "rsr.<reg name>" style assembly,
so fix that to use the other style.
Also, XCC doesn't like _CONCAT() with the EPC/EPS
registers, so all of them need to be spelled out.
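For illustration, the two spellings side by side in inline assembly
(EPC1 chosen arbitrarily; the wrapper function is not from the patch):

    #include <stdint.h>

    static inline uint32_t read_epc1(void)
    {
        uint32_t v;

        /* __asm__ volatile("rsr.epc1 %0" : "=r"(v));   XCC rejects */
        __asm__ volatile("rsr %0, epc1" : "=r"(v));  /* both accept */
        return v;
    }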
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There may be Xtensa SoCs which don't have interrupt levels high
enough for EPC6/EPS6 to exist in _restore_context. So change these
to registers which should be available according to the ISA
configuration file.
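A sketch of the guard, assuming XCHAL_NUM_INTLEVELS from core-isa.h is
the deciding macro (the actual patch may key off a different XCHAL_*
symbol):

    #include <stdint.h>
    #include <xtensa/config/core-isa.h>

    static inline uint32_t read_scratch_epc(void)
    {
        uint32_t v;

    #if XCHAL_NUM_INTLEVELS >= 6
        __asm__ volatile("rsr %0, epc6" : "=r"(v));
    #else
        /* a lower level, assumed present on this config */
        __asm__ volatile("rsr %0, epc2" : "=r"(v));
    #endif
        return v;
    }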
Fixes #30126
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Since the tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions.
This avoids duplicating code just to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Adds the necessary bits to initialize TLS in the stack
area and to set up CPU registers during context switch.
Note that this does not enable TLS for all Xtensa SoCs.
Xtensa SoCs are highly configurable, so each SoC can be
considered an architecture of its own. TLS therefore needs
to be enabled at the SoC level, instead of at the arch level.
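On the register side this amounts to pointing THREADPTR at the
incoming thread's TLS area on every switch; a minimal sketch,
assuming the core is configured with the THREADPTR user register:

    #include <stdint.h>

    /* Install the incoming thread's TLS pointer on context switch. */
    static inline void tls_pointer_set(uintptr_t tls_area)
    {
        __asm__ volatile("wur.threadptr %0" : : "r"(tls_area));
    }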
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Implement the kernel "coherence" API on top of the linker
cached/uncached mapping work.
Add Xtensa handling for the stack coherence API.
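A sketch of the underlying pointer trick, assuming the usual Xtensa
setup where the same RAM is visible through both a cached and a
bypass region (the region bases below are hypothetical):

    #include <stdint.h>

    #define CACHED_BASE   0x60000000UL  /* hypothetical cached alias */
    #define UNCACHED_BASE 0x40000000UL  /* hypothetical bypass alias */

    /* Coherent (uncached) view of a normally cached object. */
    static inline void *uncached_ptr(void *p)
    {
        return (void *)((uintptr_t)p - CACHED_BASE + UNCACHED_BASE);
    }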
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This code had one purpose only: feeding timing information into a
test. It was not used by anything else. The custom trace points
unfortunately were not accurate, and this test was delivering
information that conflicted with other tests we have, due to the
placement of such trace points in the architecture and kernel code.
For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics without polluting the
architecture and kernel code with additional tracing and timing code.
Furthermore, much of the assembly code used had issues.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Xtensa uses two instructions to perform an atomic compare-and-set:
first a write to the comparison register, then the actual
compare-and-set instruction. There is a potential for a context
switch between these two instructions, and a restored context may
then have the wrong value in the comparison register. So we need to
save and restore the comparison register during context switching.
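The sequence in question, sketched as inline assembly (the C wrapper
is illustrative; only the two-instruction pattern is the point):

    #include <stdint.h>

    /* Atomically set *addr to newval iff *addr == expected. */
    static inline uint32_t cas(uint32_t *addr, uint32_t expected,
                               uint32_t newval)
    {
        uint32_t old = newval;

        __asm__ volatile(
            "wsr %[exp], scompare1\n\t" /* comparison value -> SCOMPARE1 */
            /* a context switch here loses SCOMPARE1 unless it is
             * saved/restored with the rest of the thread context */
            "s32c1i %[old], %[addr], 0" /* store iff *addr == SCOMPARE1;
                                         * old value lands in %[old] */
            : [old] "+r"(old)
            : [exp] "r"(expected), [addr] "r"(addr)
            : "memory");

        return old;
    }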
Fixes #21800
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
We rewrote the xtensa arch code, but never got around
to purging the old implementation.
Removed those boards which hadn't been moved to the new
arch code. These were all xt-sim simulator targets, not
real hardware.
Fixes: #18138
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This adds a simple infinite loop that runs when a double exception is
raised. Without this, if a double exception occurs, the CPU would
execute arbitrary code.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This allows Kconfig to specify which special register is used
to store the pointer to the _kernel.cpu struct.
Since the SoC itself is highly configurable, MISC0 is sometimes not
available, so this adds the ability to use other special registers.
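A sketch of how the choice plugs in, with a hypothetical
Kconfig-derived macro standing in for the selected register's
assembler name:

    /* Hypothetical: the Kconfig choice expands to the SR's name. */
    #define KERNEL_CPU_PTR_SR "misc0"

    static inline void *kernel_cpu_ptr_get(void)
    {
        void *cpu;

        __asm__ volatile("rsr %0, " KERNEL_CPU_PTR_SR : "=r"(cpu));
        return cpu;
    }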
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This patch provides the support needed to get timing related
information from Xtensa based SoCs.
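On Xtensa the natural timing source is the CCOUNT cycle counter; a
minimal sketch (the function name is illustrative, not the API added
by the patch):

    #include <stdint.h>

    /* Read the CPU cycle counter (CCOUNT special register). */
    static inline uint32_t timing_cycles_get(void)
    {
        uint32_t ccount;

        __asm__ volatile("rsr %0, ccount" : "=r"(ccount));
        return ccount;
    }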
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these. Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
When not in SMP mode, the first CPU's fields are unioned with the
first _cpu_t record. This permits compatibility with legacy
assembly on other platforms. Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.
Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
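A sketch of the arrangement described above (field types and the
exact union spelling are assumptions; the field names are from the
text):

    #include <stdint.h>

    struct k_thread;

    typedef struct _cpu {
        uint32_t nested;          /* interrupt nesting depth on this CPU */
        char *irq_stack;          /* this CPU's interrupt stack          */
        struct k_thread *current; /* thread running on this CPU          */
    } _cpu_t;

    struct _kernel {
        union {
            struct {              /* legacy names alias CPU 0's record   */
                uint32_t nested;
                char *irq_stack;
                struct k_thread *current;
            };
            _cpu_t cpus[1];
        };
        /* ... */
    };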
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Xtensa register windows have a special exception that happens when the
stack pointer needs to be moved, but the caller function has already
spilled its registers below it.
I thought these were unexercised in Zephyr code, but they turn out to
be thrown by the existing mem_pool tests when run in the 32-register
qemu environment (but not on 64-register hardware). Because the effect
of the exception is to unspill the caller, there is no good way to
handle this in a traditional handler. Instead, put a 5-instruction
stub in front of the user exception handler (i.e. incurring that cost
on every trap and every L1 interrupt) to check for this case before
doing the normal entry.
This works, but it would be nicer to optimize it in the future so
that only true alloca exceptions take that cost.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This macro was already available; add an external symbol so C code
can access it (via CALL0 -- it's not and can't be an actual function).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds vectors for all interrupt levels defined by core-isa.h.
Modify the entry code a little bit to select the correct linker
sections (levels 1, 6 and 7 get special names for... no particularly
good reason) and to construct the interrupted PS value correctly
(there is no EPS1 register, since exceptions must have interrupted
level 0 code and thus the PS differs only in the EXCM bit).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
SMP needs a new context switch primitive (to disentangle _swap() from
the scheduler) and new interrupt entry behavior (to be able to take a
global spinlock on behalf of legacy drivers). The existing code is
very obtuse, and working with it led me down a long path of "this
would be so much better if..." So this is a new context and entry
framework, intended to replace the code that exists now, at least on
SMP platforms.
New features:
* The new context switch primitive is xtensa_switch(), which takes a
"new" context handle as an argument instead of getting it from the
scheduler, returns an "old" context handle through a pointer
(e.g. to save it to the old thread context), and restores the lock
state (PS register) exactly as it was at entry instead of taking it
as an argument. (A sketch of its shape appears after this list.)
* The register spill code understands wrap-around register windows and
can avoid spilling A4-A15 registers when they are unused by the
interrupted function, saving as much as 48 bytes of stack space on
the interrupted stacks.
* The "spill register windows" routine is entirely different, using a
different mechanism, and is MUCH FASTER (to the tune of almost 200
cycles). See notes in comments.
* Even better, interrupt entry can be done via a clever "cross stack
call" I worked up, meaning that the interrupted thread's registers
do not need to be spilled at all until they are naturally pushed out
by the interrupt handler or until we return from the interrupt into
a different thread. This is a big efficiency win for tiny
interrupts (e.g. timers), and a big latency win for all interrupts.
* Interrupt entry is 100% symmetric with respect to medium/high
interrupts, avoiding the problems seen with hooking high priority
interrupts with the current code (e.g. ESP-32's watchdog driver).
* Much smaller code size. No cut and paste assembly. No use of HAL
calls.
* Assumes "XEA2" interrupt architecture, the register window extension
(i.e. no CALL0 ABI), and the "high priority interrupts" extension.
Does not support the legacy processor variants for which we have no
targets. The old code has some stuff in there to support this, but
it seems bitrotten, untestable, and I'm all but certain it doesn't
work.
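For reference, a sketch of the primitive's shape as described in the
first bullet (the context handles are opaque; void * is an assumption
here, not the committed signature):

    /* Switch to the context named by switch_to; the outgoing
     * context's handle is stored through switched_from.
     * PS/interrupt-lock state is preserved across the call rather
     * than passed as an argument. */
    void xtensa_switch(void *switch_to, void **switched_from);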
Note that this simply adds the primitives to the existing tree in a
form where they can be unit tested. It does not replace the existing
interrupt/exception handling or _Swap() implementation.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>