Commit graph

19400 commits

Author SHA1 Message Date
Andy Ross 3f5027f835 xtensa: Fix noreturn attribute on error handlers in asm2
In asm2, the machine exception handler runs in interrupt context (this
is good: it allows us to defer the test against exception type until
after we have done the stack switch and dispatched any true
interrupts), but that means that the user error handler needs to be
invoked and then return through the interrupt exit code.

So the __attribute__((__noreturn__)) that it was being decorated with
was incorrect.  And actually fatal: with gcc, xtensa will crash when
trying to return from a noreturn call.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross c3c4ea730d nios2: Add include for _check_stack_sentinel()
This API moved into kswap.h and the resulting warning on this arch got
missed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 245b54ed56 kernel/include: Missed nano_internal.h -> kernel_internal.h spots
Update header naming given the recent rename

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 59cdfe6e44 tests/kernel: SMP test
Simple SMP test to validate that two threads can be simultaneously
scheduled.  Arranges things such that both threads are at different
priorities and never yield the CPU, so on a uniprocessor build they
cannot be fairly scheduled.  Checks that both are nonetheless making
progress.
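
In sketch form, the test looks something like this (illustrative code,
not the actual test source; priorities and the wait time are arbitrary):

    /* One spinner thread plus the test thread itself, at different
     * priorities, neither ever yielding.  On a uniprocessor the spinner
     * would never run; with a second CPU the counter still advances. */
    #include <zephyr.h>
    #include <ztest.h>

    K_THREAD_STACK_DEFINE(spin_stack, 1024);
    static struct k_thread spin_thread;
    static volatile unsigned long counter;

    static void spin_fn(void *a, void *b, void *c)
    {
        ARG_UNUSED(a); ARG_UNUSED(b); ARG_UNUSED(c);

        while (1) {
            counter++;              /* never yields, never sleeps */
        }
    }

    void test_smp_progress(void)
    {
        k_thread_create(&spin_thread, spin_stack, 1024, spin_fn,
                        NULL, NULL, NULL, K_PRIO_PREEMPT(5), 0, K_NO_WAIT);

        unsigned long before = counter;

        k_busy_wait(100000);        /* this thread hogs its own CPU too */

        zassert_true(counter != before, "spinner thread made no progress");
    }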

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 564f59060c kernel: SMP timer integration
In SMP, the system timer is used for timeslicing on auxiliary CPUs,
but the base system timekeeping via _nano_sys_clock_tick_announce() is
still done on CPU0 only (because the framework isn't prepared for
asynchronous notification yet).  Skip processing on CPU1+.

Also, due to a hardware interaction* that is difficult to work around,
timer initialization on the auxiliary CPUs is done at the very end of
the CPU bringup, just before the swap into the scheduler.  A
smp_timer_init() API has been added for this purpose.

* On ESP-32, enabling the timer seems to result in a near-synchronous
  interrupt being delivered despite my best attempts to keep it
  masked, then blowing things up because the CPU record isn't set up
  to handle it yet.
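
In sketch form, the ISR ends up shaped like this (assuming an "id"
field in the per-CPU record; set_next_compare() is a stand-in for the
driver's re-arm logic):

    /* Sketch: every CPU re-arms its own compare interrupt for
     * timeslicing, but only CPU 0 announces ticks to the kernel. */
    static void timer_isr(void *arg)
    {
        ARG_UNUSED(arg);

        set_next_compare();                  /* re-arm for the next tick */

        if (_arch_curr_cpu()->id != 0) {
            return;                          /* CPU1+: timeslice only */
        }

        _nano_sys_clock_tick_announce();     /* CPU0 owns timekeeping */
    }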

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bdcd18a744 kernel: Enable SMP
Now that all the pieces are in place, enable SMP for real:

Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().
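
In outline, the sequence looks roughly like this (a sketch: the release
flag, the per-CPU interrupt stacks, and the _arch_start_cpu() argument
list are all simplifications):

    static volatile int smp_release;

    /* Auxiliary CPU: spin until released, then swap into the scheduler. */
    static void smp_init_top(void *arg)
    {
        ARG_UNUSED(arg);

        while (!smp_release) {
            /* wait for the boot CPU to finish kernel initialization */
        }

        smp_timer_init();       /* late timer init, per the SMP timer patch */
        _Swap(irq_lock());      /* enter a runnable thread; never returns */
    }

    /* Boot CPU, at the end of kernel init: launch and release the others. */
    void smp_init(void)
    {
        for (int i = 1; i < CONFIG_MP_NUM_CPUS; i++) {
            _arch_start_cpu(i, aux_irq_stack[i], AUX_STACK_SIZE,
                            smp_init_top, NULL);
        }
        smp_release = 1;
    }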

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 85557b011e kernel: Simplified idle for SMP auxiliary CPUs
A pure timer-based idle won't work well in SMP.  Without an IPI to
wake up idle CPUs out of the scheduler they will sleep far too long
and the main CPU will do all the scheduling of wake-up-and-sleep
processes.  Instead just have the auxiliary CPUs do a traditional
busy-wait scheduler in their idle loop.

We will need to revisit this and find an architecture that allows
both wait-for-timer-interrupt idle and SMP.
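
As a sketch (assuming an "id" field in the per-CPU record; the real
idle thread has more bookkeeping around it):

    void idle(void *p1, void *p2, void *p3)
    {
        ARG_UNUSED(p1); ARG_UNUSED(p2); ARG_UNUSED(p3);

        if (_arch_curr_cpu()->id != 0) {
            /* Auxiliary CPU: busy-wait scheduler.  k_yield() re-enters
             * the scheduler and runs anything that has become ready. */
            while (1) {
                k_yield();
            }
        }

        /* CPU 0: traditional timer-driven idle, as before */
        while (1) {
            k_cpu_idle();
        }
    }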

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 53eceffb7f esp32: Set CPU pointer on app cpu at startup
It's not enough to wait for a context switch; lots of things need this
early.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross b0b9b3d16a xtensa: Report CPU number in exceptions
In SMP contexts it's good to know which CPU blew up.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.
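
For illustration, the SMP next-thread choice reduces to something like
this (a sketch; the queue-walking helpers and per-thread field names
are illustrative, not the real ones):

    /* Walk the ready threads in priority order and return the first one
     * that is not already running on a different CPU. */
    struct k_thread *_get_next_ready_thread(void)
    {
        int this_cpu = _arch_curr_cpu()->id;

        for (int prio = 0; prio < K_NUM_PRIORITIES; prio++) {
            for (struct k_thread *t = first_ready_at(prio); t != NULL;
                 t = next_ready(t)) {
                if (t->base.active && t->base.cpu != this_cpu) {
                    continue;       /* busy on another CPU; skip it */
                }
                return t;
            }
        }
        return NULL;                /* nothing else runnable on this CPU */
    }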

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 364cbae412 kernel: Make irq_{un}lock() APIs into a global spinlock in SMP mode
In SMP mode, the idea of a single "IRQ lock" goes away.  Long term,
all usage needs to migrate to spinlocks (which become simple IRQ locks
in the uniprocessor case).  For the near term, we can ease the
migration (at the expense of performance) by providing a compatibility
implementation around a single global lock.

Note that one complication is that the older lock was recursive, while
spinlocks will deadlock if you try to lock them twice.  So we
implement a simple "count" semantic to handle multiple locks.
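
In sketch form (simplified; the per-CPU nesting-count field name is
illustrative):

    /* Mask interrupts locally, take the single global lock only on the
     * outermost call, and count nesting so recursive legacy callers of
     * irq_lock()/irq_unlock() don't deadlock on the spinlock. */
    static atomic_t global_irq_lock;

    unsigned int irq_lock(void)
    {
        unsigned int key = _arch_irq_lock();        /* local CPU mask */

        if (_arch_curr_cpu()->global_lock_count++ == 0) {
            while (!atomic_cas(&global_irq_lock, 0, 1)) {
                /* spin: another CPU holds the global lock */
            }
        }
        return key;
    }

    void irq_unlock(unsigned int key)
    {
        if (--_arch_curr_cpu()->global_lock_count == 0) {
            atomic_clear(&global_irq_lock);         /* outermost release */
        }
        _arch_irq_unlock(key);
    }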

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 780ba23eb8 kernel: Create idle threads and interrupt stacks for SMP processors
Simple implementation that caps at 4 CPUs.  Long term we should use
some linker magic to define as many as needed and loop over them
without needlessly increasing data or code size for the tracking.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross e694656345 kernel: Move per-cpu _kernel_t fields into separate struct
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these.  Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.

When not in SMP mode, the first CPU's fields are unioned with the
first _cpu_t record.  This permits compatibility with legacy
assembly on other platforms.  Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.

Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
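
In outline (a sketch; field order, the union spelling, and the _kernel
layout shown here are illustrative):

    /* Per-CPU state moves into its own struct... */
    struct _cpu {
        int nested;                   /* interrupt nesting count */
        char *irq_stack;              /* interrupt stack pointer */
        struct k_thread *current;     /* thread currently running here */
    };

    typedef struct _cpu _cpu_t;

    _cpu_t *_arch_curr_cpu(void);     /* returns the executing CPU's record */

    /* ...and in non-SMP builds the legacy field names alias the first
     * (and only) CPU record so existing assembly offsets still work. */
    struct _kernel {
    #ifndef CONFIG_SMP
        union {
            struct _cpu cpus[1];
            struct {
                int nested;
                char *irq_stack;
                struct k_thread *current;
            };
        };
    #else
        struct _cpu cpus[CONFIG_MP_NUM_CPUS];
    #endif
        /* ...remaining kernel-wide fields are unchanged... */
    };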

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 73678c4388 tests/kernel: Add spinlock test
Simple test of spinlock semantics.  Bounce between two CPUs locking
and releasing, validating that nothing changes at unexpected times.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 7a023cfb89 kernel: Simple spinlock API
Minimal spinlock API based on the existing atomic.h layer.  Usage
works just like irq_lock(), but takes a pointer to a specific struct
k_spinlock_t to lock/unlock.  No attempt at implementing fairness or
backoff semantics, and no attempt at architecture-specific assembly.

When CONFIG_SMP is not enabled, this code falls back to a zero-size
struct and becomes functionally identical to irq_lock/unlock().
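
The SMP flavor, built on atomic_cas() from atomic.h, is roughly this
(a sketch; function and type spellings follow the description above
rather than the exact patch):

    struct k_spinlock_t {
        atomic_t locked;
    };

    static inline unsigned int k_spin_lock(struct k_spinlock_t *l)
    {
        unsigned int key = _arch_irq_lock();    /* mask on this CPU first */

        while (!atomic_cas(&l->locked, 0, 1)) {
            /* spin until whoever holds it on another CPU lets go */
        }
        return key;
    }

    static inline void k_spin_unlock(struct k_spinlock_t *l, unsigned int key)
    {
        atomic_clear(&l->locked);
        _arch_irq_unlock(key);
    }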

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross d3376f2781 kernel, esp32: Add SMP kconfig flag and MP_NUM_CPUS variable
Simply define the Kconfig variables in this patch so they can be used
in later patches.  Define MP_NUM_CPUS correctly on esp32.  No code
changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 86ff14824d tests/kernel: Simple test for multiprocessor start API
Starts the second CPU and verifies that it can set a variable while we
spin on the first.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross e717267abf kernel, esp32: Add _arch_start_cpu API
This is a mostly-internal API to start a secondary system CPU, with an
implementation for the ESP-32 "APP" cpu.  Exposed in kernel.h because
it's plausibly useful for asymmetric MP code managed by an app.
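
Usage looks roughly like this (the _arch_start_cpu() parameter list is
approximated from the description above, and the entry function and
sizes are arbitrary; this mirrors what the test commit above does):

    /* Start CPU 1 on its own stack and have it set a flag the boot CPU
     * can watch while spinning. */
    K_THREAD_STACK_DEFINE(cpu1_stack, 1024);

    static volatile int cpu1_alive;

    static void cpu1_entry(void *arg)
    {
        *(volatile int *)arg = 1;       /* visible to the spinning CPU 0 */
        while (1) {
        }
    }

    void start_and_wait_for_cpu1(void)
    {
        _arch_start_cpu(1, cpu1_stack, K_THREAD_STACK_SIZEOF(cpu1_stack),
                        cpu1_entry, (void *)&cpu1_alive);

        while (!cpu1_alive) {
            /* spin on CPU 0 until CPU 1 checks in */
        }
    }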

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 00f3d2e53a esp-32, qemu_xtensa: Use asm2 by default
Set these SoCs to use asm2 by default

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 865bbd6b69 xtensa-asm2: Handle alloca/movsp exceptions
Xtensa register windows have a special exception that happens when the
stack pointer needs to be moved, but the caller function has already
spilled its registers below it.

I thought these were unexercised in Zephyr code, but they turn out to
be thrown by the existing mem_pool tests when run in the 32-register
qemu environment (but not on 64-register hardware).  Because the effect
of the exception is to unspill the caller, there is no good way to
handle this in a traditional handler.  Instead put a 5-instruction
stub in front of the user exception handler (i.e. incurring that cost
on every trap and every L1 interrupt) to test before doing the normal
entry.

Works, but would be nicer to optimize this in the future so that only
true alloca exceptions take that cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bbd7912a6b xtensa-asm2: Exception/interrupt handler should check stack sentinel
This got forgotten.  Note that this function is empty if
CONFIG_STACK_SENTINEL is unconfigured.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 60932d1427 xtensa: Add hook to do register window spills
This macro was already available; add an external symbol so C code can
access it (via CALL0 -- it's not, and can't be, an actual function).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2867bfc1eb xtensa asm2: Fixup stack alignment at runtime
The API allows any byte count for the stack size, and tests in fact
check that a 499-byte stack works correctly.  No choice: we have to do
the alignment at runtime.
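
i.e. something on the order of this sketch, where stack_memory and
stack_size stand for the caller-provided buffer:

    /* Round the initial stack pointer down to a 16-byte boundary, since
     * the stack buffer handed to us can have any size and alignment. */
    char *top = (char *)stack_memory + stack_size;

    top = (char *)(((uintptr_t)top) & ~(uintptr_t)0xf);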

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 02b2fe1c9e xtensa: THREAD_MONITOR hooks for asm2
You'd think this feature would be portable, but it's arch-specific.
Initialize the CONFIG_THREAD_MONITOR stuff, placing the __thread_entry
struct (which AFAICT is dead: nothing in the tree actually reads it)
at the top of the stack.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 7707fcfa51 xtensa: asm2 needs to honor thread preemption
Forgot to check the thread preemption status when fetching the stack
to restore.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 63ad74f833 xtensa: Fix thread entry point
The stack initialization was calling the user-provided entry function
directly, which works fine until that function returns, at which point
it will try to unspill A0-A3 from the 16 bytes above the allocated
stack and then "return" to a NULL pointer.

The kernel provides a _thread_entry() function that does cleanup
properly, so use that.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2c1449bc81 kernel, xtensa: Switch-specific thread return value
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bf2139331c xtensa: Add exception/interrupt vectors in asm2 mode
This adds vectors for all interrupt levels defined by core-isa.h.

Modify the entry code a little bit to select the correct linker
sections (levels 1, 6 and 7 get special names for... no particularly
good reason) and to construct the interrupted PS value correctly (no
EPS1 register for exceptions, since they must have interrupted level 0
code and thus differ only in the EXCM bit).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 7de010b5e5 xtensa: Interrupt generator script and output for qemu & esp32
This python script reads the core-isa.h interrupt definitions (via
running a template file through the toolchain preprocessor to generate
an input file) and emits fully populated, optimized C handling code
that binary searches only the declared interrupts at a given level and
correctly detects spurious interrupts (and/or incorrect core-isa.h
definitions).

The generated code, alas, turns out not to be any faster than simply
searching the interrupt mask with CLZ (er, NSAU in xtensese), though
it could be faster in theory if the compiler made different choices,
see comments.  But I like this for the robustness of the fully
populated search trees and the checking of level vs. mask.
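
For a level that declares, say, interrupts 5, 12 and 22, the generated
handler is shaped like this (a conceptual sketch, not actual generator
output; run_isr() and report_spurious_irq() are stand-ins):

    static void xtensa_level2_dispatch(unsigned int mask)
    {
        /* Binary-search only the interrupt lines core-isa.h declares at
         * this level; anything else pending is a spurious interrupt. */
        if (mask & (BIT(5) | BIT(12))) {
            if (mask & BIT(5)) {
                run_isr(5);
            } else {
                run_isr(12);
            }
        } else if (mask & BIT(22)) {
            run_isr(22);
        } else {
            report_spurious_irq(mask);
        }
    }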

This simply commits the script output into the source tree, including
some checking code to force a build error if the toolchain changes the
headers incompatibly.  It would be better long term to have these
headers be generated at build time, but that requires more cmake fu
than I have.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bba98a9c38 drivers/timer/xtensa_sys_timer: Add init/update hooks for asm2
The earlier xtensa layer put the timer initialization and update
directly into the interrupt handler, which is... weird.  Under asm2,
it's just a regular ISR and needs to do the work in the driver.

Really, this driver needs a bunch of cleanup.  The xtensa CPU timer is
two registers and one ISR: a global cycle count register, and a
compare register that will fire the IRQ when they match.  There is
*way* too much code here.
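
For reference, the entire mechanism fits in a few lines (a sketch; the
choice of CCOMPARE1 is arbitrary):

    #include <stdint.h>

    /* The IRQ fires when the free-running CCOUNT cycle counter matches
     * the CCOMPAREn register, so "set a timeout" is one add, one write. */
    static inline uint32_t ccount(void)
    {
        uint32_t v;

        __asm__ volatile("rsr %0, CCOUNT" : "=r"(v));
        return v;
    }

    static inline void set_next_timeout(uint32_t cycles_from_now)
    {
        uint32_t next = ccount() + cycles_from_now;

        __asm__ volatile("wsr %0, CCOMPARE1\n\trsync" : : "r"(next));
    }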

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross c761ae9695 xtensa: Add Kconfig for asm2 layer
The asm2 layer will build alongside the traditional assembly, but the
reverse is not true.  Add a CONFIG_XTENSA_ASM2 to force its use at
runtime and disable the older code.

Note that the older assembly had an initialization function that is
properly part of the timer driver.  Move a C equivalent into the timer
driver itself for now to prevent a build breakage.  Long term we need
to clean that driver up in a bunch of other ways.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 6f3036091a xtensa: Implement _xt_ints_on/off for asm2
Legacy xtensa had a rather complicated implementation of en/disabling
interrupts, owing to the "software priority" feature (which plays
games with INTENABLE and INTLEVEL to allow for interrupts to interrupt
each other outside their normal priorities).  But that's not a Zephyr
feature, it's enabled by a XT_USE_SWPRI value that comes from platform
headers and isn't enabled on any of our boards.  Dead code, basically.

Replace with the obvious implementation when asm2 is in use.
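
The obvious implementation is a read-modify-write of INTENABLE, roughly
as follows (a sketch; note the read-modify-write is not atomic on its
own):

    void _xt_ints_on(unsigned int mask)
    {
        unsigned int val;

        __asm__ volatile("rsr %0, INTENABLE" : "=r"(val));
        val |= mask;                               /* enable these lines */
        __asm__ volatile("wsr %0, INTENABLE\n\trsync" : : "r"(val));
    }

    void _xt_ints_off(unsigned int mask)
    {
        unsigned int val;

        __asm__ volatile("rsr %0, INTENABLE" : "=r"(val));
        val &= ~mask;                              /* disable these lines */
        __asm__ volatile("wsr %0, INTENABLE\n\trsync" : : "r"(val));
    }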

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross a0dd2de6fd xtensa: Remove _xt_set_exception_handler()
This was a dead API.  Nothing ever used it, it wasn't exposed in any
API headers.  It never appeared in documentation.  It's not
particularly clear why a Zephyr app would want to hook
architecture-specific exceptions instead of simply using the portable
error framework anyway. And it's not supported by asm2.  Delete.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bd72f71ece xtensa: Remove arch-specific (and empty) offsets.h
The xtensa arch code had this empty offsets.h header sitting around.
Its name collides with the autogenerated offsets.h, making it
dangerously dependent on include file path order.  Seems to be benign,
but it's freaking me out.  Remove.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross b2c74e017e xtensa/asm2: Add a _new_thread implementation for asm2/switch
Implement _new_thread in terms of the asm2 switch mechanism.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 042d8ecca9 kernel: Add alternative _arch_switch context switch primitive
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness.  This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:

    void _arch_switch(void *handle, void **old_handle_out);

The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.

The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y".  A new C _Swap() implementation based
on this primitive is then added which operates compatibly.
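
That C _Swap() reduces to roughly the following (a sketch; the
switch_handle and return-value fields are named illustratively):

    int _Swap(unsigned int key)
    {
        struct k_thread *old_thread = _current;
        struct k_thread *new_thread = _get_next_ready_thread();

        if (new_thread != old_thread) {
            _current = new_thread;

            /* The stack switch happens here; we resume when some other
             * thread eventually switches back into old_thread. */
            _arch_switch(new_thread->switch_handle,
                         &old_thread->switch_handle);
        }

        irq_unlock(key);
        return _current->swap_retval;
    }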

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 8ac9c082e6 kernel: Move some macros
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places.  Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking.  Arguably they are a better fit there
anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 837dd99a0e samples/xtensa-asm2: Unit test for new Xtensa assembly primitives
This sample suite (which should eventually become a proper test)
builds up from simple applications of the new primitives to a full
context switch test and an interrupt handling suite (based on the
CPU-internal CCOMPARE2 timer).

It's been extraordinarily useful for finding regressions as the asm2
code gets modified, and should probably stick around as long as
possible.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross a34f884f23 xtensa: New asm layer to support SMP
SMP needs a new context switch primitive (to disentangle _swap() from
the scheduler) and new interrupt entry behavior (to be able to take a
global spinlock on behalf of legacy drivers).  The existing code is
very obtuse, and working with it led me down a long path of "this
would be so much better if..."  So this is a new context and entry
framework, intended to replace the code that exists now, at least on
SMP platforms.

New features:

* The new context switch primitive is xtensa_switch(), which takes a
  "new" context handle as an argument instead of getting it from the
  scheduler, returns an "old" context handle through a pointer
  (e.g. to save it to the old thread context), and restores the lock
  state (PS register) exactly as it is at entry instead of taking it as
  an argument.

* The register spill code understands wrap-around register windows and
  can avoid spilling A4-A15 registers when they are unused by the
  interrupted function, saving as much as 48 bytes of stack space on
  the interrupted stacks.

* The "spill register windows" routine is entirely different, using a
  different mechanism, and is MUCH FASTER (to the tune of almost 200
  cycles).  See notes in comments.

* Even better, interrupt entry can be done via a clever "cross stack
  call" I worked up, meaning that the interrupted thread's registers
  do not need to be spilled at all until they are naturally pushed out
  by the interrupt handler or until we return from the interrupt into
  a different thread.  This is a big efficiency win for tiny
  interrupts (e.g. timers), and a big latency win for all interrupts.

* Interrupt entry is 100% symmetric with respect to medium/high
  interrupts, avoiding the problems seen with hooking high priority
  interrupts with the current code (e.g. ESP-32's watchdog driver).

* Much smaller code size.  No cut and paste assembly.  No use of HAL
  calls.

* Assumes "XEA2" interrupt architecture, the register window extension
  (i.e. no CALL0 ABI), and the "high priority interrupts" extension.
  Does not support the legacy processor variants for which we have no
  targets.  The old code has some stuff in there to support this, but
  it seems bitrotten, untestable, and I'm all but certain it doesn't
  work.

Note that this simply adds the primitives to the existing tree in a
form where they can be unit tested.  It does not replace the existing
interrupt/exception handling or _Swap() implementation.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 8dca7ae587 xtensa: Make high priority interrupts optional
Xtensa has a "high priority" class of interrupt levels which ignore
the EXCM bit and can thus interrupt running exception handlers.  These
can't be used for C handlers in the general case[1] because C code
needs to be able to throw window over/underflow exceptions, which are
not reentrant.

But the high priority interrupts might be useful to a carefully
designed application, or to unit tests of low level architecture code.
So make their generation optional with this kconfig option.

[1] ESP-32 has a high priority interrupt for its watchdog, apparently.
    Which is sort of OK given that it never needs to return to the
    interrupted code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 88538e77c1 xtensa: Move register window exception handlers into a separate file
No behavior changes, just code motion.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 39f78b9b85 drivers/timer/xtensa_sys_timer: Correctly declare ISR
The existing hand-written interrupt code is manually calling the timer
ISR, which is sort of silly and about to be replaced.  Correctly
declare the ISR with IRQ_CONNECT() so that a conventional interrupt
handling implementation can find it.  With current code this is a
noop.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Wayne Ren 078259dc7f tests: modify the user space test codes for ARC
Both em_starterkit_em7d and em_starterkit_em7d_v22 are
tested.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00
Wayne Ren bb50a88045 arch: arc: apply the new thread stack layout
The new thread stack layout is as follows:

|---------------------|
|  user stack         |
|---------------------|
| stack guard (opt.)  |
|---------------------|
|  privilege stack    |
-----------------------

For MPUv2
  * the user stack is aligned to the power of 2 of the user stack size
    (see the sketch below)
  * the stack guard is 2048 bytes
  * the default size of the privilege stack is 256 bytes
  For a user thread, the following MPU regions are needed:
    * one region for the user stack (no stack guard is needed for the
      user stack)
    * one region for the stack guard, when the stack guard is enabled
    * regions for the memory domain
  For a kernel thread, the stack guard region will be at the top, and
  the user stack and privilege stack will be merged.

MPUv3 uses the same layout as v2, except there is no need for power of
2 alignment.
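
The power-of-2 requirement effectively rounds the requested user stack
size up, roughly as follows (an illustrative helper, not the actual
arch code):

    #include <stddef.h>

    /* MPUv2 regions must be power-of-2 sized and aligned, so a requested
     * user stack size is rounded up before the region is programmed. */
    static size_t mpu_v2_region_size(size_t requested)
    {
        size_t size = 1;

        while (size < requested) {
            size <<= 1;
        }
        return size;     /* e.g. 1500 -> 2048 */
    }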

* reimplement the user mode enter function.  Now it's possible for a
kernel thread to drop privileges and become a user thread.

* add a separate entry point for user threads

* fix bugs in the cleanup of registers when entering user mode

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00
Wayne Ren e445ab6f21 arch: arc: handle exception in privilege task when USERSPACE enabled
When USERSPACE is enabled, exceptions are handled on the privilege
stack of the thread.  This makes thread context switching possible in
the exception handler, which is useful in some cases, e.g. tests.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00
Wayne Ren 4433f81b47 arch: arc: MPUv2 enables MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT
enable MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT for ARC MPU v2

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00
Wayne Ren 0a71b106d1 arch: arc: save user thread's context into privilege stack
disable the U bit of irq.ctrl, so the user thread's context will
be saved onto the privilege stack when interrupts/exceptions arrive.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00
Wayne Ren 8d284116b0 arch: arc: modify the linker template for APPLICAITON Memory
The application memory area has address alignment requirements,
especially when the MPU requires power-of-2 alignment.

Modify the linker template to generate the application memory address
alignment.

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-02-16 12:20:16 +01:00