This avoids asm files having to explicitly define the _ASMLANGUAGE
symbol themselves.
Change-Id: I71f5a169f75d7443a58a0365a41c55b20dae3029
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace _scs_relocate_vector_table with direct CMSIS register access and
use of the __ISB/__DSB routines. We also clean up the code a bit so that
there is just one implementation of relocate_vector_table() on ARMv7-M.
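A minimal sketch of the CMSIS-based version, assuming the standard CMSIS
SCB definitions and a _vector_table symbol provided by the linker script:

static inline void relocate_vector_table(void)
{
    /* point VTOR at the image's vector table, then synchronize */
    SCB->VTOR = ((uint32_t)_vector_table) & SCB_VTOR_TBLOFF_Msk;
    __DSB();
    __ISB();
}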
Jira: ZEP-1568
Change-Id: I088c30e680a7ba198c1527a5822114b70f10c510
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
CMSIS provides a complete implementation for reboot; we can utilize it
directly and reduce Zephyr-specific code.
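For instance, sys_arch_reboot() can collapse to the standard CMSIS helper
(a sketch; ARG_UNUSED is the usual Zephyr macro):

void sys_arch_reboot(int type)
{
    ARG_UNUSED(type);
    NVIC_SystemReset(); /* CMSIS: requests SYSRESETREQ via SCB->AIRCR */
}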
Jira: ZEP-1568
Change-Id: Ia9d1abd5c1e02e724423b94867ea452bc806ef79
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
As a first step towards removing the custom ARM Cortex-M core code
present in Zephyr in favor of using CMSIS, this change replaces
the use of the custom core code with CMSIS macros in
enable_floating_point().
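The CMSIS equivalent of the core-register poking is roughly the following
(a sketch; bits 20-23 of SCB->CPACR grant full access to coprocessors
CP10/CP11, i.e. the FPU):

/* grant full access to CP10 and CP11 (the FPU) */
SCB->CPACR |= (3UL << 20) | (3UL << 22);
__DSB();
__ISB(); /* flush the pipeline so the new access rights take effect */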
Jira: ZEP-1568
Change-id: I544a712bf169358c826a3b2acd032c6b30b2801b
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
To support using CMSIS defines and functions, we either pull the expected
defines/enums from the SoC HAL layers via <soc.h>, or we provide a
default set based on whether __NVIC_PRIO_BITS is defined.
We provide defaults for:
IRQn_Type enum
*_REV define (set to 0)
__MPU_PRESENT define (set to 0 - no MPU)
__NVIC_PRIO_BITS define (set to CONFIG_NUM_IRQ_PRIO_BITS)
__Vendor_SysTickConfig (set to 0 - standard SysTick)
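A sketch of what the fallback block might look like (the exception
numbers shown are the architectural ones; a real <soc.h> supplies the
full, vendor-specific IRQn_Type list, and __CM4_REV stands in for
whichever *_REV define applies):

#if !defined(__NVIC_PRIO_BITS) /* no CMSIS definitions from <soc.h> */
typedef enum {
    Reset_IRQn          = -15,
    NonMaskableInt_IRQn = -14,
    HardFault_IRQn      = -13,
    PendSV_IRQn         = -2,
    SysTick_IRQn        = -1,
} IRQn_Type;
#define __CM4_REV              0 /* core revision unknown: use 0 */
#define __MPU_PRESENT          0 /* no MPU */
#define __NVIC_PRIO_BITS       CONFIG_NUM_IRQ_PRIO_BITS
#define __Vendor_SysTickConfig 0 /* standard SysTick */
#endif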
Jira: ZEP-1568
Change-Id: Ibc203de79f4697b14849b69c0e8c5c43677b5c6e
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
In preparation for removing the scb/scs layers and using CMSIS directly,
let's remove all the _Scb* and _Scs* functions that are not currently
used.
Jira: ZEP-1568
Change-Id: If4641fb9a6de616b4b8793d4678aaaed48e794bc
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
On other targets, CONFIG_TEXT_SECTION_OFFSET allows the entire image to
be moved in memory to make room for some type of header. The Mynewt
project bootloader prepends a small header, and this config option needs
to be supported for that to work.
The specific alignment requirements of the vector table are chip
specific, and generally will be a power of two larger than the size of
the vector table.
Change-Id: I631a42ff64fb8ab86bd177659f2eac5208527653
Signed-off-by: David Brown <david.brown@linaro.org>
It is called before early SoC initialization, so remove the duplicated
code from other boards and just set it by default when using XIP.
This can later be used when adding bootloader support, as an
additional option could be created to move the VTOR offset to a
different address.
Change-Id: Ia1f5d9a066de61858ee287215cefdd58596b6b1c
Signed-off-by: Ricardo Salveti <ricardo.salveti@linaro.org>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that already
had an SPDX tag, where we either got a duplicate or missed updating.
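The replacement tag is a single comment line at the top of each file,
e.g.:

/* SPDX-License-Identifier: Apache-2.0 */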
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
The cortex-m7 is an implementation of armv7-m. Adjust the Kconfig
support for cortex-m7 to reflect this and drop the unnecessary explicit
conditional compilation.
Change-Id: I6ec20e69c8c83c5a80b1f714506f7f9e295b15d5
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Precursor patches have arranged that conditional compilation hanging
on CONFIG_CPU_CORTEX_M3_M4 provides support for ARMv7-M; rename the
config variable to reflect this.
Change-Id: Ifa56e3c1c04505d061b2af3aec9d8b9e55b5853d
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Precursor patches have arranged all conditional compilation hanging on
CONFIG_CPU_CORTEX_M0_M0PLUS such that it actually represents support
for ARMv6-M; rename the config variable to reflect this.
Change-Id: I553fcf3e606b350a9e823df31bac96636be1504f
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
The ARM code base provides for three mutually exclusive ARM
architecture related conditional compilation choices: M0_M0PLUS,
M3_M4 and M7. Throughout the code base we have conditional
compilation gated around these three choices. Adjust the form of this
conditional compilation to adopt a uniform structure. The uniform
structure always selects code based on the definition of an
appropriate config option rather than the absence of a definition.
Removing the extensive use of #else ensures that when support for
other ARM architecture versions is added we get hard compilation
failures rather than attempting to compile inappropriate code for the
added architecture with unexpected runtime consequences.
Adopting this uniform structure makes it straightforward to replace
the ad hoc CPU_CORTEX_M3_M4 and CPU_CORTEX_M0_M0PLUS configuration
variables with ones that directly represent the actual underlying ARM
architectures we provide support for. This change also paves the way
for folding the ad hoc conditional compilation related to CPU_CORTEX_M7
directly into support for ARMv7-M.
This change is mechanical in nature involving two transforms:
1)
#if !defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
is transformed to:
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
#elif defined(CONFIG_CPU_CORTEX_M3_M4) || defined(CONFIG_CPU_CORTEX_M7)
...
2)
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
#else
...
#endif
is transformed to:
#if defined(CONFIG_CPU_CORTEX_M0_M0PLUS)
...
#elif defined(CONFIG_CPU_CORTEX_M3_M4) || defined(CONFIG_CPU_CORTEX_M7)
...
#else
#error Unknown ARM architecture
#endif
Change-Id: I7229029b174da3a8b3c6fb2eec63d776f1d11e24
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
Adjust the layout of various ARM assembler files to conform to the norm
used in the majority of files.
Change-Id: Ia5007628be5ad36ef587946861c6ea90a8062585
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.
By putting them end-to-end, a thread is non-preemptible if the bundled
value is greater than or equal to 0x0080. This is the only thing the
interrupt exit code has to check to decide whether to try a reschedule.
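A sketch of the layout and the resulting check (field names are close to
the kernel's thread struct but not guaranteed; the two bytes must be
ordered by endianness so that prio lands in the low byte):

#include <stdint.h>

struct preempt_info {
    union {
        struct {
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
            uint8_t sched_locked; /* high byte: nonzero means locked */
            int8_t prio;          /* low byte: 0x80-0xff means coop */
#else
            int8_t prio;          /* low byte: 0x80-0xff means coop */
            uint8_t sched_locked; /* high byte: nonzero means locked */
#endif
        };
        uint16_t preempt; /* bundled 16-bit view of both fields */
    };
};

/* the single check done on interrupt exit (sketch) */
static inline int is_preemptible(const struct preempt_info *p)
{
    return p->preempt < 0x0080;
}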
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.
- prio can fit in one byte, limiting the priority range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover most
probably results from a logic error
- flags are split into execution flags and thread states; 8 bits is
enough for each of them currently, with at worst two states and four
flags to spare (on x86; on other archs there are six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.
Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The Cortex-M0(+) and in general processors that support only the ARMv6-M
instruction set have a reduced set of registers and fields compared to
the ARMv7-M compliant processors.
This change goes through all core registers and disables or removes
everything that is not part of the ARMv6-M architecture when compiling
for Cortex-M0.
Jira: ZEP-1497
Change-id: I13e2637bb730e69d02f2a5ee687038dc69ad28a8
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Replace include of <nanokernel.h> with <kernel.h> everywhere and also
fix any remaining mentions of nanokernel.
Keep the legacy samples/tests as is.
Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Obsolete, replaced by _set_thread_return_value().
Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.
Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
That module is not used anymore: it was introduced pre-Zephyr to add
some kind of awareness when debugging ARM Cortex-M3 code with GDB but
was never really used by anyone. It has bitrotted, and with the recent
move of the tTCS and tNANO data structures to common _kernel and
k_thread, it does not even compile anymore.
Jira: ZEP-1284, ZEP-951
Change-Id: Ic9afed00f4229324fe5d2aa97dc6f1c935953244
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
And also remove now obsolete ARCH_HAS_TASK_ABORT.
ARC does not need the options either.
Change-Id: Ie52d63178a367ce12b911dacfe2d389f4f75ed2d
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.
The kernel internals now make use of these APIs instead of the legacy
ones.
Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.
However, this caused two problems:
1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.
2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache, in which case it had to find out what thread to
run the long way.
To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even in the case the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
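On interrupt exit, choosing the next thread thus reduces to something
like the following sketch (_kernel.ready_q.cache is the real field; the
helper name is illustrative):

struct k_thread *next = _kernel.ready_q.cache; /* always hot now */
if (next != _kernel.current) {
    /* illustrative: arm the PendSV-based context switch */
    switch_to(next);
}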
On the ARM FRDM K64F board, this improvement is seen:
Before:
1- Measure time to switch from ISR back to interrupted task
switching time is 215 tcs = 1791 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 315 tcs = 2625 nsec
After:
1- Measure time to switch from ISR back to interrupted task
switching time is 130 tcs = 1083 nsec
2- Measure time from ISR to executing a different task (rescheduled)
switch time is 225 tcs = 1875 nsec
These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.
Fixes ZEP-1401.
Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Cortex-M0/M0+ have no faults other than the hard fault at priority
-1, so they do not need to reserve a priority to allow exceptions to
trigger during handling of ISRs.
Change-Id: I479e439f7bcac70b4b2b787bcd744a4c65437e80
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This allows using it in _EXC_PRIO() instead of hardcoding 2 and 3.
Change-Id: I3549be54602643e06823ba63beb6a6992f39f776
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use it to flag which CPUs can do zero latency interrupts, which depend
on being able to lock up to a specific interrupt priority.
Change-Id: I09f71366ea1d05486e38c513a09abc270884879f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Add a "memory" clobber to inline asm SVC call to ensure the compiler
does not reorder the instruction relative to other memory accesses.
The issue was found by inspection of the source code. There is no
evidence to suggest that this bug will manifest for any current ARM
target using a current compiler.
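The change amounts to adding "memory" to the clobber list, e.g.:

/* before: the compiler may move memory accesses across the SVC */
__asm__ volatile ("svc #0");
/* after: the "memory" clobber orders it against all memory accesses */
__asm__ volatile ("svc #0" ::: "memory");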
Change-Id: I32b1e5ede02a6dbea02bb8f98729fff1cca1ef2a
Signed-off-by: Marcus Shawcroft <marcus.shawcroft@arm.com>
This patch fixes the unused parameter warnings found in the
arch/arm/core/fault.c and arch/arm/soc/st_stm32/stm32f1/soc_gpio.c
files.
Change-Id: I5b3013c1514cff30f4e98feb31169fb28546c534
Signed-off-by: Flavio Santes <flavio.santes@intel.com>
Initializing the interrupt stack before initializing (turning off) the
watchdog on the FRDM board pushed the initialization of the watchdog too
late, causing it to fire and reset the board. The board would be kept in
a reboot loop.
Move the initialization of the watchdog earlier: it now runs on the main
stack instead of the interrupt stack, i.e. on the same stack the
interrupt stack initialization code runs on.
Change-Id: Ic0006f4f4f4090393571d8355a80dc9390c9fbc6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Make the systick feature optional so that it can be selected by the SoC.
Change-Id: I4a405640b84daecc17fc1882743d3cafb78ff861
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
An IRQ would always register as a ZIL interrupt.
Change-Id: If82a85f472a60512745652aacc7e8b7dfacaa268
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The assembler was passed immediate values that are too large for the
limited Cortex-M0 Thumb instruction set. Load values into registers
instead of using immediate values.
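For example, ARMv6-M movs only takes an 8-bit immediate, so a large
constant must come from the literal pool; in inline asm terms (the
constant is illustrative):

/* rejected by the assembler on ARMv6-M: immediate too wide
 *     movs r3, #0x12345678
 * load it through a register via the literal pool instead:
 */
__asm__ volatile ("ldr r3, =0x12345678" ::: "r3");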
Change-Id: Ib5541c92dea03e0efb1b88ab91eeb408d151a71b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
This patch enables REBOOT when RUNTIME_NMI is selected via the defconfig
file. This is required to prevent compilation errors.
Change-Id: I67c18b2860ac34ba8f96e780737b4857a6063ece
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
If CORTEX_M_SYSTICK is not selected, do not reference
_timer_int_handler. The SoC will need to define a custom system
clock implementation.
Change-Id: I655f3abf66953e434fef69ed16db2d9c2dcc486e
Signed-off-by: Vinayak Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The Zephyr kernel is unable to compile when CONFIG_RUNTIME_NMI is
enabled in defconfig on ARM architectures.
This patch addresses the following issues:
* In nmi.c, _DefaultHandler() references a function
(_ScbSystemReset()) that is not defined in Zephyr. It has been replaced
with sys_arch_reboot.
* nmi.h is included in ASM files and, due to the usage of "extern", the
compilation ends with an error. Added an _ASMLANGUAGE guard to
prevent the problem, as sketched below.
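The guard wraps the C declarations so that ASM files including nmi.h
never see them (a sketch; the declaration shown is illustrative):

#ifndef _ASMLANGUAGE
extern void _NmiHandlerSet(void (*pHandler)(void));
#endif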
Jira: ZEP-1319
Change-Id: I7623ca97523cde04e4c6db40dc332d93ca801928
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
Move _thread_base initialization to _init_thread_base(), remove mentions
of "nano" in timeout init, and move timeout init to _init_thread_base().
Initialize all base fields via _init_thread_base() in the semaphore
groups code.
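A sketch of the consolidated helper (close to the real code, but names
and the exact set of fields are not guaranteed):

static void _init_thread_base(struct _thread_base *thread_base,
                              int priority, uint32_t initial_state,
                              unsigned int options)
{
    thread_base->user_options = (uint8_t)options;
    thread_base->thread_state = (uint8_t)initial_state;
    thread_base->prio = priority;
    thread_base->sched_locked = 0;
    /* timeout init now lives here as well */
    _init_thread_timeout(thread_base);
}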
Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Use the main stack during very early boot so that we can call memset on
the interrupt stack. Initialize the interrupt stack before it is used
for the rest of the pre-kernel initialization.
Change-Id: I6fcc9a08678afdb82e83465cda1c7a2a8c849c9b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
The ARM Cortex-M early boot was using a custom stack at the end of the
SRAM instead of the interrupt stack. This works as long as no static
data that needs a known initial value occupies that stack space. This
has probably not been an issue because the .noinit section is at the
very end of the image, but it was still wrong to use that region of
memory for that initial stack.
To be able to use the interrupt stack during early boot, the stack has
to be released before an interrupt can happen. Since ARM Cortex-M uses
PendSV as a very low priority exception for context switching, if a
device driver installs and enables an interrupt during the PRE_KERNEL
initialization points, an interrupt could take precedence over PendSV
while the initial dummy thread has not yet been context switched out and
thus released the interrupt stack. To address this, rather than using
_Swap() and thus triggering PendSV, the initialization logic switches to
the main stack and branches to _main() directly instead.
Fixes ZEP-1309
Change-Id: If0b62cc66470b45b601e63826b5b3306e6a25ae9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.
Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Prio should be an int, since values are small integers, not a fixed-size
int32_t. This aligns with the prio parameters of the other APIs.
Stack size should be size_t.
Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
There was a possible race condition when setting the return value of a
thread that is pending, from an ISR.
A kernel function causes a thread to pend, with the following series of
steps:
- disable interrupts
- move current thread to wait_q
- call _Swap
Depending on whether it is running on M3/4 or M0+, _Swap will either
issue a svc #0, or pend PendSV directly. The same problem exists in both
cases.
M3/4:
__svc will:
- enable interrupts
- trigger __pendsv
M0+:
_Swap() will enable interrupts.
__pendsv will:
- save register context including PSP into the thread struct
If an interrupt occurs between interrupts being enabled and
__pendsv saving PSP, and the ISR sets the pending thread's return value,
the ISR:
- sees the thread in a wait_q
- removes it
- makes it ready
- calls _set_thread_return_value
- _set_thread_return_value looks at the thread's saved PSP to poke
the value
In this scenario, PSP hasn't yet been updated by __pendsv, so it is a
stale value from the previous context switch, resulting in an
unpredictable word on the stack getting set to the return value.
There is no way to fix this issue and still have the return value
delivered directly in the pending thread's exception stack frame in the
M0+ case: there will always be a window between the unlocking of
interrupts and PendSV being handled. On M3/4 it could be fixed with
the mix of SVC and PendSV, since the exception stack frame is created in
the __svc handler. However, because we want to keep the two
implementations as close as possible, and there have been talks of
moving M3/4 to using PendSV only to save an exception, the approach
taken solves both cases.
The approach taken is similar to the ARC and Nios2 ports, where
there is a field in the thread structure that holds the return value.
_Swap() then loads r0/a1 with that value just before returning.
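A sketch of the two halves (the field name follows the ARC/Nios2
convention and is assumed, not confirmed; the _Swap() side is pseudo-C
standing in for the actual assembly):

/* ISR side: stash the value in the thread struct instead of poking
 * the exception stack frame through a possibly stale saved PSP.
 */
static inline void _set_thread_return_value(struct k_thread *thread,
                                            unsigned int value)
{
    thread->arch.swap_return_value = value;
}

/* _Swap() side (pseudo-C): just before returning to the incoming
 * thread, load r0/a1 from that field, so it no longer matters when
 * PendSV saved PSP.
 */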
Fixes ZEP-1289.
Change-Id: Iee7e06fe3f8ded84aff918fd43408c7f589344d9
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
When a thread dies, at least print the pointer to it, so we can debug
better.
Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
This reverts commit
"kernel/arm: add comment about _is_next_thread_current"
and fixes the interrupt locking issue.
The comment would have been right if only reads were done on the ready
queue, but that is not the case. It turns out that the comment was written
ignoring the fact that _is_next_thread_current() updates the next thread
cache when fetching the next thread.
Change-Id: I21c9230f85f4f87a6bbf14fd4a9eb7e19b59f8c5
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Normally, _is_next_thread_current() must be called with interrupts
locked, but the ARM interrupt exit code does not have to do that. Add an
explanation of why.
Change-Id: Id383b47a055fdd6fbd5afffa52772e92febde98f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>