Commit graph

13 commits

Iuliana Prodan
f9810ccbe1 arch: xtensa: modify asm for interrupt sections
For IMX, the timer interrupt was not handled by the correct
handler because the handlers were not at the expected addresses.
On IMX, each interrupt vector table entry is limited to 0x1C
bytes of code, less than usual.

I've added a small indirection to bypass this size constraint
and moved the default handlers to the end of the vector table,
renaming them to _Level\LVL\()VectorHelper.

Signed-off-by: Iuliana Prodan <iuliana.prodan@nxp.com>
2021-08-28 23:27:02 -04:00
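
As a rough sketch of the indirection described above (the macro, section
and label placement below are assumptions for illustration, not the
commit's exact assembly), each size-limited vector slot holds nothing
more than a jump, with the real handler body parked past the end of the
vector table:

    __asm__(
        ".macro DEF_LVL_VECTOR LVL\n"
        /* the 0x1C-byte vector slot: a bare jump easily fits */
        ".section .Level\\LVL\\()InterruptVector.text, \"ax\"\n"
        ".global _Level\\LVL\\()Vector\n"
        "_Level\\LVL\\()Vector:\n"
        "    j _Level\\LVL\\()VectorHelper\n"
        /* the renamed default handler, placed after the vector table */
        ".section .iram.text, \"ax\"\n"
        "_Level\\LVL\\()VectorHelper:\n"
        "    /* original vector entry code goes here */\n"
        ".endm\n"
    );
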
Andy Ross
ae4f7a1a06 arch/xtensa: Remember to spill windows in arch_cohere_stacks()
When we reach this code in interrupt context, our upper GPRs contain a
cross-stack call that may still include some registers from the
interrupted thread.  Those need to go out to memory before we can do
our cache coherence dance here.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Shubham Kulkarni
26efef4f94 arch: xtensa: Fix backtrace from ISR
a0 is used as a scratch register. Restore the value of a0 (the
return address) from the stack frame before spilling registers
onto the stack.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-03-03 13:02:57 +01:00
Shubham Kulkarni
8b7da334d5 arch: xtensa: Print backtrace from panic handler
This change uses the stack frame to print a backtrace once an
exception occurs. Printing a backtrace helps identify the cause
of the exception.

Signed-off-by: Shubham Kulkarni <shubham.kulkarni@espressif.com>
2021-01-23 08:43:10 -05:00
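
For context, a minimal sketch of the kind of stack-frame walk this
enables, based on the standard Xtensa windowed ABI rather than on the
commit's actual code (the frame-layout assumptions and depth handling
are illustrative only):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bt_frame {
        uint32_t pc;      /* PC of the frame currently reported          */
        uint32_t next_pc; /* return address into the next (older) frame  */
        uint32_t sp;      /* SP of the frame currently reported          */
    };

    /* a0 carries the window-call size in its top two bits; rebuild a
     * plausible PC by borrowing those bits from a known code address. */
    static uint32_t ret_to_pc(uint32_t ret, uint32_t code_addr)
    {
        return (ret & 0x3fffffffU) | (code_addr & 0xc0000000U);
    }

    void print_backtrace(struct bt_frame f, int depth)
    {
        printf("Backtrace: 0x%08" PRIx32, f.pc);
        while (depth-- > 0 && f.next_pc != 0) {
            uint32_t callee_sp = f.sp;

            f.pc = ret_to_pc(f.next_pc, f.pc);
            /* The caller's a0 and a1 are spilled (by window overflow)
             * into the base save area just below the callee's stack
             * pointer; a real implementation must sanity-check these
             * loads before dereferencing them. */
            f.next_pc = *(uint32_t *)(uintptr_t)(callee_sp - 16);
            f.sp = *(uint32_t *)(uintptr_t)(callee_sp - 12);
            printf(" 0x%08" PRIx32, f.pc);
        }
        printf("\n");
    }
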
Daniel Leung
f8a909dad1 xtensa: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch.

Note that this does not enable TLS for all Xtensa SoCs.
This is because Xtensa SoCs are highly configurable, to the
point that each SoC can be considered its own architecture.
So TLS needs to be enabled at the SoC level, instead of at
the arch level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
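
A hedged sketch of the switch-in half of this, assuming the SoC
configuration provides the THREADPTR user register; the structure and
function names are illustrative, not the commit's:

    #include <stdint.h>

    struct thread_tls {
        uintptr_t tls_base;  /* set up in the stack area at thread creation */
    };

    /* On switch-in, point THREADPTR (which the compiler uses for
     * __thread accesses) at the incoming thread's TLS block. */
    static inline void tls_switch_in(const struct thread_tls *t)
    {
        __asm__ volatile("wur.threadptr %0" :: "r"(t->tls_base));
    }
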
Daniel Leung
bf50aae693 xtensa: save/restore scompare1 during context switch
Xtensa uses two instructions to perform an atomic compare-and-set:
one to load the comparison register, then the actual
compare-and-set instruction. There is a potential for a context
switch to happen between these two instructions, so a restored
context may have the wrong value in the comparison register.
We therefore need to save and restore the comparison register
during context switching.

Fixes #21800

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-02-27 12:42:26 +02:00
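
A minimal sketch of the idea, assuming the core has the S32C1I option;
the real work happens in the context-switch assembly, and the field
placement in the comment is illustrative:

    #include <stdint.h>

    static inline uint32_t scompare1_get(void)
    {
        uint32_t val;

        __asm__ volatile("rsr %0, SCOMPARE1" : "=r"(val));
        return val;
    }

    static inline void scompare1_set(uint32_t val)
    {
        __asm__ volatile("wsr %0, SCOMPARE1" :: "r"(val));
    }

    /* On context save:    thread->arch.scompare1 = scompare1_get();
     * On context restore: scompare1_set(thread->arch.scompare1);     */
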
Andrew Boie
dafd3485bf xtensa: mask interrupts earlier
When coming out of an exception, we need to mask interrupts
to avoid races when decrementing the nested count. Move
the instruction that does this earlier.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-14 10:11:05 -07:00
Flavio Ceolin
67ca176754 headers: Fix headers across the project
Any identifier that starts with an underscore followed by an
uppercase letter, or by a second underscore, is reserved
according to C99.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
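
For reference, the identifiers covered by that rule look like this
(the names below are made up for illustration):

    int _Reserved;    /* reserved: leading underscore + uppercase letter */
    int __reserved;   /* reserved: two leading underscores               */
    int sys_counter;  /* fine: no leading underscore                     */
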
Andy Ross
285c5b26dd xtensa/asm2: Save shift/loop registers on exception entry
This was a little embarrassing.  The swap code got this right, and the
interrupt exit path got it right, but on entry we weren't ever saving
the shift and loop registers for the interrupted context.

This almost always worked anyway as the loop registers aren't ever
used in any Zephyr code (gcc won't generate this style of loop AFAICT)
and the SAR shift amount register is generally used only in two pairs
of adjacent instructions, making the chance of hitting that exact cycle
quite low in general.

But of course we have shift-happy crypto code in our tests, so this
got caught, thankfully.

See https://github.com/zephyrproject-rtos/zephyr/issues/6470

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-06 14:13:56 -08:00
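
The registers in question, sketched in C for clarity (the real save is
done in the assembly entry path; the structure name is illustrative, and
LBEG/LEND/LCOUNT assume the zero-overhead loop option):

    #include <stdint.h>

    struct shift_loop_regs {
        uint32_t sar;     /* shift-amount register */
        uint32_t lbeg;    /* loop begin address    */
        uint32_t lend;    /* loop end address      */
        uint32_t lcount;  /* loop iteration count  */
    };

    static inline void save_shift_loop_regs(struct shift_loop_regs *r)
    {
        __asm__ volatile("rsr %0, SAR"    : "=r"(r->sar));
        __asm__ volatile("rsr %0, LBEG"   : "=r"(r->lbeg));
        __asm__ volatile("rsr %0, LEND"   : "=r"(r->lend));
        __asm__ volatile("rsr %0, LCOUNT" : "=r"(r->lcount));
    }
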
Andy Ross
bdac8d070f xtensa/asm2: Fix stack pointer during preemption spills
When returning into a different thread than we interrupted, we
obviously need to spill all the existing register windows to make sure
all their values are in the old thread's stack.  But the code to do
this forgot to reset the current stack pointer to the value it had at
interrupt time (it was still pointing to the saved context below
that), so the caller of the interrupted function was spilling to the
wrong spot.

This wouldn't show up as an instant failure; it would only happen when
switching BACK to the improperly-spilled thread.  And even then it
would be a no-op if the original interrupt handler was deep enough to
have spilled that function naturally.

In practice, this happened only in some instances on ESP-32 (which has
more windowed registers than qemu) when interrupting the idle thread
(which is very shallow) with a (very simple) timer interrupt.  Trivial
to see, hard to find.

See https://github.com/zephyrproject-rtos/zephyr/issues/6346 for more
detail.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-05 19:06:07 -05:00
Andy Ross
865bbd6b69 xtensa-asm2: Handle alloca/movsp exceptions
Xtensa register windows have a special exception that happens when the
stack pointer needs to be moved, but the caller function has already
spilled its registers below it.

I thought these were unexercised in Zephyr code, but they turn out to
be thrown by the existing mem_pool tests when run in the 32-register
qemu environment (but not on 64-register hardware).  Because the effect
of the exception is to unspill the caller, there is no good way to
handle this in a traditional handler.  Instead, put a 5-instruction
stub in front of the user exception handler (i.e. incurring that cost
on every trap and every L1 interrupt) to test for it before doing the
normal entry.

Works, but would be nicer to optimize this in the future so that only
true alloca exceptions take that cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
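
The decision the stub makes, written out in C for clarity (the real stub
is five assembly instructions; the handler names here are illustrative,
and cause code 5 is the architectural AllocaCause raised by MOVSP):

    #include <stdint.h>

    #define EXCCAUSE_ALLOCA 5U

    void xtensa_alloca_handler(void);  /* unspills the caller, then retries */
    void xtensa_user_exc_entry(void);  /* the normal exception/IRQ entry    */

    static inline void user_exc_front_stub(void)
    {
        uint32_t cause;

        __asm__ volatile("rsr %0, EXCCAUSE" : "=r"(cause));
        if (cause == EXCCAUSE_ALLOCA) {
            xtensa_alloca_handler();
        } else {
            xtensa_user_exc_entry();
        }
    }
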
Andy Ross
bf2139331c xtensa: Add exception/interrupt vectors in asm2 mode
This adds vectors for all interrupt levels defined by core-isa.h.

Modify the entry code a little bit to select the correct linker
sections (levels 1, 6 and 7 get special names for... no particularly
good reason) and to construct the interrupted PS value correctly
(there is no EPS1 register for exceptions, since they must have
interrupted level 0 code and thus differ only in the EXCM bit).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
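
For the exception case, that reconstruction amounts to clearing EXCM
(bit 4 of PS) from the live value, since the interrupted code must have
been running at level 0.  A small illustration, with the mask name made
up for the sketch:

    #define PS_EXCM_BIT 0x10U  /* PS.EXCM is bit 4 */

    static inline unsigned int interrupted_ps_from_exception(unsigned int live_ps)
    {
        /* No EPS1 exists: the pre-exception PS is the live PS minus EXCM. */
        return live_ps & ~PS_EXCM_BIT;
    }
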
Andy Ross
a34f884f23 xtensa: New asm layer to support SMP
SMP needs a new context switch primitive (to disentangle _swap() from
the scheduler) and new interrupt entry behavior (to be able to take a
global spinlock on behalf of legacy drivers).  The existing code is
very obtuse, and working with it led me down a long path of "this
would be so much better if..."  So this is a new context and entry
framework, intended to replace the code that exists now, at least on
SMP platforms.

New features:

* The new context switch primitive is xtensa_switch(), which takes a
  "new" context handle as an argument instead of getting it from the
  scheduler, returns an "old" context handle through a pointer
  (e.g. to save it to the old thread context), and restores the lock
  state (PS register) exactly as it is at entry instead of taking it as
  an argument.

* The register spill code understands wrap-around register windows and
  can avoid spilling A4-A15 registers when they are unused by the
  interrupted function, saving as much as 48 bytes of stack space on
  the interrupted stacks.

* The "spill register windows" routine is entirely different, using a
  different mechanism, and is MUCH FASTER (to the tune of almost 200
  cycles).  See notes in comments.

* Even better, interrupt entry can be done via a clever "cross stack
  call" I worked up, meaning that the interrupted thread's registers
  do not need to be spilled at all until they are naturally pushed out
  by the interrupt handler or until we return from the interrupt into
  a different thread.  This is a big efficiency win for tiny
  interrupts (e.g. timers), and a big latency win for all interrupts.

* Interrupt entry is 100% symmetric with respect to medium/high
  interrupts, avoiding the problems seen with hooking high priority
  interrupts with the current code (e.g. ESP-32's watchdog driver).

* Much smaller code size.  No cut and paste assembly.  No use of HAL
  calls.

* Assumes "XEA2" interrupt architecture, the register window extension
  (i.e. no CALL0 ABI), and the "high priority interrupts" extension.
  Does not support the legacy processor variants for which we have no
  targets.  The old code has some stuff in there to support this, but
  it seems bitrotten, untestable, and I'm all but certain it doesn't
  work.

Note that this simply adds the primitives to the existing tree in a
form where they can be unit tested.  It does not replace the existing
interrupt/exception handling or _Swap() implementation.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
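
A plausible C-level view of the primitive from the first bullet,
inferred from this description rather than copied from the tree:

    /* Resume the context identified by switch_to; the outgoing context's
     * handle is written through switched_from (typically a pointer into
     * the old thread's bookkeeping) so it can be resumed later.  PS/lock
     * state is preserved across the call rather than passed in. */
    void xtensa_switch(void *switch_to, void **switched_from);
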