Commit graph

249 commits

Author SHA1 Message Date
Andrew Boie 4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.
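
For example (variable name is illustrative):

        u32_t fill_count = 0U;

        if (fill_count < 16U) {
                fill_count += 1U;
        }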

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Anas Nashif 42f4538e40 kernel: do not use k_busy_wait when on single thread
k_busy_wait() does not work when multithreading is disabled, so do not
try to wait during boot.
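
A hedged sketch of the resulting guard (the exact condition in the tree
may differ):

        #ifdef CONFIG_MULTITHREADING
                k_busy_wait(CONFIG_BOOT_DELAY * USEC_PER_MSEC);
        #endif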

Fixes #14454

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-03-26 20:09:07 -04:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed, since the renamed forms would collide with the
existing variants that have two or three leading underscores.

Various generator scripts have also been updated, as well as perf,
linker and USB files. These are:
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andrew Boie 62fad96802 userspace: zero app memory bss earlier
Some init tasks may use some bss app memory areas and
expect them to be zeroed out. Do this much earlier
in the boot process, before any of the init tasks
run.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-05 08:27:20 -05:00
Andrew Boie 4ce652e4b2 userspace: remove APP_SHARED_MEM Kconfig
This is an integral part of userspace and cannot be used
on its own. Fold into the main userspace configuration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-23 07:43:55 -05:00
Andrew Boie 01100eadb8 kernel: add stack canary to libc partition
User mode needs to be able to read this value in
compiler generated function prologues/epilogues.

Special handling is needed in init.c for arches that use
_data_copy, which happens before _Cstart() gets
called. We need to make sure that the compiler
stack canary checks in _data_copy itself do not
fail.
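
A hedged sketch of the placement this describes (identifiers as in the
userspace support; the guard's exact type varies by arch):

        #ifdef CONFIG_USERSPACE
        K_APP_DMEM(z_libc_partition) uintptr_t __stack_chk_guard;
        #else
        __noinit uintptr_t __stack_chk_guard;
        #endif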

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-22 18:50:43 -05:00
Aurelien Jarno 992f29a1bc arch: make __ramfunc support transparent
Instead of having to enable ramfunc support manually, just make it
transparently available to users, keeping the MPU region disabled if
unused so as not to waste an MPU region. This however costs 24 bytes of
code area when the MPU is disabled and 48 bytes when it is enabled, plus
probably a dozen CPU cycles during boot. I believe this is acceptable.

Note that when XIP is not used, code is already in RAM, so the __ramfunc
keyword does nothing, but does not generate an error.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
Aurelien Jarno eb097bd095 arch: arm: mpu: get the __ramfunc region size from the linker
The linker file defines the __ramfunc_ram_size symbol to give the size
of the __ramfunc_ram section. Use that instead of computing the value at
runtime from the start and end symbols. This saves 16 bytes of code with
CONFIG_RAM_FUNCTION=y.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
qianfan Zhao e1cc657941 arm: Place functions tagged with __ramfunc into '.ramfunc'
Use __ramfunc to place a function in RAM instead of flash.
Code that, for example, reprograms flash at runtime can't execute
from flash; in that case the code must be placed in RAM.

This commit creates a new section named '.ramfunc' in the linker
scripts; all functions tagged with the __ramfunc keyword are saved
in that section and are loaded from flash to SRAM after the system
boots.
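
Example usage (function name is illustrative; the keyword is real):

        __ramfunc static void flash_sector_erase(u32_t addr)
        {
                /* runs from '.ramfunc' in SRAM, so it is safe to
                 * execute while the flash controller is busy
                 */
        }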

Fixes: #10253

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
2019-02-22 11:36:50 -08:00
Andy Ross 1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reason, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.
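
An illustrative sketch of the call-site simplification:

        /* before: grabs the global lock just to release it in the swap */
        _reschedule(irq_lock());

        /* after: same effect, without spamming the global lock on SMP */
        _reschedule_unlocked();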

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.
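
A hedged sketch of the resulting pair of signatures:

        /* legacy: atomically releases an irq_lock() key */
        int _Swap_irqlock(unsigned int key);

        /* new: atomically releases a held spinlock */
        int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);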

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Kumar Gala bfaaa6bbe9 dts: Convert CONFIG_CCM to DT_CCM
Since we now do DTS before Kconfig we should try and remove dts from
creating Kconfig namespaced symbols and leave that to Kconfig.  So
rename CONFIG_CCM_<FOO> to DT_CCM_<FOO>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Andrew Boie 41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andy Ross ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
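
A hedged usage sketch (the thread must not be runnable while its mask is
changed, e.g. created with K_FOREVER):

        k_thread_cpu_mask_clear(tid);      /* forbid all CPUs... */
        k_thread_cpu_mask_enable(tid, 0);  /* ...then allow only CPU 0 */
        k_thread_start(tid);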

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
Andy Ross 6d9106f288 kernel/init: Fix dummy thread initialization on SMP systems
When under SMP, _current is a macro that indirects to a CPU-specific
address, and that trick won't work until kernel_arch_init() has
returned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 19:10:08 -05:00
Andy Ross 1763a017b4 kernel/sched: Simplify init-time dummy thread & scheduling predicate
For historical reasons, some architectures had a valid _current thread
pointer at initialization time and others didn't.  So the scheduler
logic had a test that checks _current vs. NULL every time it needed to
check preemption, even though this was only a workaround for the
initialization state.

Fix things so that there is a dummy thread always (and clean up the
code to do a struct assignment instead of a memset of bare memory),
and we can remove that test from the scheduler hot path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Anas Nashif c0ea505b2c kernel: fix typo in kconfig name
CONFIG_MULTITHREDING -> CONFIG_MULTITHREADING

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-30 13:30:17 -05:00
Adithya Baglody 71e90f98fd Gcov: Enable Code coverage reporting over UART.
This patch provides support for generating Code coverage reports.
The prj.conf needs to enable CONFIG_COVERAGE. Once enabled, the
code coverage data dump now comes via UART.
This data dump on the UART is triggered once the main
thread exits.

The next step is to save this data dump to a file, then run
scripts/gen_gcov_files.py with the serial console log as argument.

The last step is to run gcovr. Use the following command:
 gcovr -r . --html -o gcov_report/coverage.html --html-details

Currently supported architectures are ARM and x86.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-16 06:12:33 -05:00
Flavio Ceolin b82a339813 kernel: init: Add nop instruction in main
The main function is just a weak function that can be overridden by
applications if they need it. Add a nop instruction to state explicitly
that this function does nothing.

MISRA-C rule 2.2
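
A minimal sketch of the result (the exact nop mechanism is
arch/toolchain dependent):

        void __weak main(void)
        {
                /* NOP: makes the "does nothing" intent explicit */
                __asm__ volatile ("nop");
        }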

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-14 13:17:36 +01:00
Adithya Baglody 91c5b84cd5 kernel: init.c: Added required hooks for the relocation
This patch splits the text section into 2 parts. The first section
will have some info regarding vector tables and debug info. The
second section will have the complete text section.
This is needed to force the required functions and data variables
into the correct locations.
This is due to the behavior of the linker: it will only link
once, and hence the text section had to be split to make room
for the generated linker script.

Added a new Kconfig CODE_DATA_RELOCATION which when enabled will
invoke the script, which does the required relocation.

Added hooks inside init.c for bss zeroing and data copy operations.
Needed when we have to copy data from ROM to required memory type.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-12-07 10:32:41 -05:00
Peter A. Bigot 1fa00f3b36 kernel: remove outdated comment in _Cstart
The comment explaining why _IntLibInit was being invoked was left in
place after the invocation itself was removed.  Remove it too.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2018-12-03 09:18:06 -08:00
Flavio Ceolin 46715faa5c kernel: Remove _IntLibInit function
There were many platforms where this function was doing nothing. Just
merge its functionality into the _PrepC function.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-28 14:59:10 -08:00
Flavio Ceolin ac14685211 kernel: Delimiting the scope of some variables
According to MISRA-C, an object should be defined in a block scope if
it is used in a single function.

MISRA-C rule 8.9

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin a3dddedab6 kernel: Use distinct macro names
There is both a struct and a macro called _ready_q, which is error
prone. Just remove it.

MISRA-C rule 5.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Marek Pieta e87193896a subsys: debug: tracing: Fix thread tracing
This change fixes an issue with thread execution tracing.

Signed-off-by: Marek Pieta <Marek.Pieta@nordicsemi.no>
2018-10-29 22:09:12 -04:00
Anas Nashif 0a0c8c831f kernel: move to new logger
Use the new logger framework for kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-08 17:49:12 -04:00
Flavio Ceolin 6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
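
For example (requires CONFIG_THREAD_MONITOR; the name is illustrative):

        k_thread_name_set(k_current_get(), "sensor_poll");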

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Sebastian Bøe c0287695fb kconfig: Remove remnants of unimplemented BUILD_TIMESTAMP feature
The Kconfig option CONFIG_BUILD_TIMESTAMP became unused when
BUILD_VERSION was introduced, but its option and parts of its
implementation were not completely cleaned from the repository.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-09-26 22:18:29 +05:30
Anas Nashif 0a73ea04fa kernel: remove deprecated k_call_stacks_analyze
This API was deprecated and is not being used in the tree anymore, so
remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-21 10:33:05 -04:00
Flavio Ceolin b3d9202704 kernel: Using boolean constants instead of 0 or 1
MISRA C requires that every controlling expression of an if or while
statement have a boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Flavio Ceolin da49f2e440 coccinelle: Ignore return of memset
The return of memset is never checked. This patch explicitly ignores
the return to avoid MISRA-C violations.

The only excluded directory was ext/*, since it contains
only imported code.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 5884c7f54b kernel: Explicitly ignoring _Swap return
Ignore the _Swap return value where no handling is needed or there is
nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Anas Nashif 6b6ecc0803 Revert "kernel: Enable interrupts for MULTITHREADING=n on supported arch's"
This reverts commit 17e9d623b4.

Single-thread mode kept introducing more issues, so we decided to remove
the feature completely and push any required changes to after 1.13.

See #9808

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-06 13:09:26 -04:00
Andy Ross 8daafd4fba kernel: Final spin in !MULTITHREADING should be locked
Now that we call main() with interrupts enabled in !MULTITHREADING, we
need to disable them again for the final fallback "loop-forever
because user code returned" state.  Otherwise some architectures will
toss interrupts into a context where we obviously aren't prepared.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-30 13:29:09 -04:00
Andy Ross 17e9d623b4 kernel: Enable interrupts for MULTITHREADING=n on supported arch's
Some applications have a use case for a tiny MULTITHREADING=n build
(which lacks most of the kernel) but still want special-purpose
drivers in that mode that might need to handle interrupts.  This
creates a chicken and egg problem, as arch code (for obvious reasons)
runs _Cstart() with interrupts disabled, and enables them only on
switching into a newly created thread context.  Zephyr does not have a
"turn interrupts on now, please" API at the architecture level.

So this creates one as an arch-specific wrapper around
_arch_irq_unlock().  It's implemented as an optional macro the arch
can define to enable this behavior, falling back to the previous
scheme (and printing a helpful message) if it doesn't find it defined.
Only ARM and x86 are enabled in this patch.
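
A hypothetical sketch of the described fallback pattern (the macro name
is invented for illustration; the real one is defined by the arch):

        #ifdef ARCH_IRQ_ENABLE_FOR_MAIN      /* hypothetical macro name */
                ARCH_IRQ_ENABLE_FOR_MAIN();  /* wraps _arch_irq_unlock() */
        #else
                printk("NOTE: main() running with interrupts masked\n");
        #endif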

Fixes #8393

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-27 16:15:10 -04:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Flavio Ceolin 6699423a2f kernel: Explicitly ignoring memcpy return
memcpy always returns a pointer to dest, so its return value can be
ignored. Make this explicit so compilers will never raise
warnings/errors about it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Shawn Mosley 573f32b6d2 userspace: compartmentalized app memory organization
Summary: revised attempt at addressing issue 6290.  The
following provides an alternative to using
CONFIG_APPLICATION_MEMORY by compartmentalizing data into
Memory Domains.  Dependent on MPU limitations, supports
compartmentalized Memory Domains for 1...N logical
applications.  This is considered an initial attempt at
designing flexible compartmentalized Memory Domains for
multiple logical applications and, with the provided python
script and edited CMakeLists.txt, provides support for power
of 2 aligned MPU architectures.

Overview: The current patch uses qualifiers to group data into
subsections.  The qualifier usage allows for dynamic subsection
creation and affords the developer a large amount of flexibility
in the grouping, naming, and size of the resulting partitions and
domains that are built on these subsections. By additional macro
calls, functions are created that help calculate the size,
address, and permissions for the subsections and enable the
developer to control application data in specified partitions and
memory domains.

Background: Initial attempts focused on creating a single
section in the linker script that then contained internally
grouped variables/data to allow MPU/MMU alignment and protection.
This did not provide additional functionality beyond
CONFIG_APPLICATION_MEMORY as we were unable to reliably group
data or determine their grouping via exported linker symbols.
Thus, the resulting decision was made to dynamically create
subsections using the current qualifier method. An attempt to
group the data by object file was tested, but found that this
broke applications such as ztest where two object files are
created: ztest and main.  This also creates an issue of grouping
the two object files together in the same memory domain while
also allowing for compartmenting other data among threads.

Because it is not possible to know a) the name of the partition
and thus the symbol in the linker, b) the size of all the data
in the subsection, nor c) the overall number of partitions
created by the developer, it was not feasible to align the
subsections at compile time without using dynamically generated
linker script for MPU architectures requiring power of 2
alignment.

In order to provide support for MPU architectures that require a
power of 2 alignment, a python script is run at build prior to
when linker_priv_stacks.cmd is generated.  This script scans the
built object files for all possible partitions and the names given
to them. It then generates a linker file (app_smem.ld) that is
included in the main linker.ld file.  This app_smem.ld allows the
compiler and linker to then create each subsection and align to
the next power of 2.

Usage:
 - Requires: app_memory/app_memdomain.h .
 - _app_dmem(id) marks a variable to be placed into a data
section for memory partition id.
 - _app_bmem(id) marks a variable to be placed into a bss
section for memory partition id.
 - These are seen in the linker.map as "data_smem_id" and
"data_smem_idb".
 - To create a k_mem_partition, call the macro
app_mem_partition(part0) where "part0" is the name then used to
refer to that partition. This macro only creates a function and
necessary data structures for the later "initialization".
 - To create a memory domain for the partition, the macro
app_mem_domain(dom0) is called where "dom0" is the name then
used for the memory domain.
 - To initialize the partition (effectively adding the partition
to a linked list), init_part_part0() is called. This is followed
by init_app_memory(), which walks all partitions in the linked
list and calculates the sizes for each partition.
 - Once the partition is initialized, the domain can be
initialized with init_domain_dom0(part0) which initializes the
domain with partition part0.
 - After the domain has been initialized, the current thread
can be added using add_thread_dom0(k_current_get()).
 - The code used in ztests and kernel/init has been added under
a conditional #ifdef to isolate the code from other tests.
The userspace test CMakeLists.txt file has commands to insert
the CONFIG_APP_SHARED_MEM definition into the required build
targets.
  Example:
        /* create partition at top of file outside functions */
        app_mem_partition(part0);
        /* create domain */
        app_mem_domain(dom0);
        _app_dmem(dom0) int var1;
        _app_bmem(dom0) static volatile int var2;

        int main()
        {
                init_part_part0();
                init_app_memory();
                init_domain_dom0(part0);
                add_thread_dom0(k_current_get());
                ...
        }

 - If multiple partitions are being created, a variadic
preprocessor macro can be used as provided in
app_macro_support.h:

        FOR_EACH(app_mem_partition, part0, part1, part2);

or, for multiple domains, similarly:

        FOR_EACH(app_mem_domain, dom0, dom1);

Similarly, the init_part_* can also be used in the macro:

        FOR_EACH(init_part, part0, part1, part2);

Testing:
 - This has been successfully tested on qemu_x86 and the
ARM frdm_k64f board.  It compiles and builds power of 2
aligned subsections for the linker script on the 96b_carbon
boards.  These power of 2 alignments have been checked by
hand and are viewable in the zephyr.map file that is
produced during build. However, due to a shortage of
available MPU regions on the 96b_carbon board, we are unable
to test this.
 - When run on the 96b_carbon board, the test suite will
enter execution, but each individual test will fail due to
an MPU FAULT.  This is expected as the required number of
MPU regions exceeds the number allowed due to the static
allocation. As the MPU driver does not detect this issue,
the fault occurs because the data being accessed has been
placed outside the active MPU region.
 - This now compiles successfully for the ARC boards
em_starterkit_em7d and em_starterkit_em7d_v22. However,
as we lack ARC hardware to run this build on, we are unable
to test this build.

Current known issues:
1) While the script and edited CMakeLists.txt creates the
ability to align to the next power of 2, this does not
address the shortage of available MPU regions on certain
devices (e.g. 96b_carbon).  In testing the APB and PPB
regions were commented out.
2) checkpatch.pl lists several issues regarding the
following:
a) Complex macros. The FOR_EACH macros as defined in
app_macro_support.h are listed as complex macros needing
parentheses.  Adding parentheses breaks their
functionality, and we have otherwise been unable to
resolve the reported error.
b) __aligned() preferred. The _app_dmem_pad() and
_app_bmem_pad() macros give warnings that __aligned()
is preferred. Prior iterations had this implementation,
which resulted in errors due to "complex macros".
c) Trailing semicolon. The macro init_part(name) has
a trailing semicolon as the semicolon is needed for the
inlined macro call that is generated when this macro
expands.

Update: updated to be an alternative to CONFIG_APPLICATION_MEMORY.
Added config option CONFIG_APP_SHARED_MEM to enable a new section
app_smem to contain the shared memory component.  This commit
separates the Kconfig definition from the definition used for the
conditional code.  The change is in response to changes in the
way the build system treats definitions.  The python script used
to generate a linker script for app_smem was also modified to
simplify the alignment directives.  A default linker script
app_smem.ld was added to remove the conditional includes dependency
on CONFIG_APP_SHARED_MEM.  By adding the default linker script,
the prebuild stages link properly prior to the python script running.

Signed-off-by: Joshua Domagalski <jedomag@tycho.nsa.gov>
Signed-off-by: Shawn Mosley <smmosle@tycho.nsa.gov>
2018-07-25 12:02:01 -07:00
Anas Nashif eda3e16ac7 coverage: exclude k_call_stacks_analyze from coverage
Do not count deprecated functions.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-18 10:39:48 -04:00
Krzysztof Chruscinski 6b01c89935 logging: Add log initialization to system startup
The log API can be used before the user can explicitly initialize the
logger. In order to ensure that the logger core is ready to buffer log
messages, it must be initialized as early as possible. Initialization
does not include the default backend, since its driver may not be ready
yet and a backend is needed only when log messages are processed.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2018-07-14 08:32:44 -04:00
Andy Ross 3d14615f56 kernel: Restore CONFIG_MULTITHREADING=n behavior
The prepare_multithreading()/switch_to_main_thread() steps were being
done unconditionally, when with multithreading disabled we want to jump
straight into the main thread on the existing stack.

Needless to say, that doesn't work well.  Fixes #8361.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-13 17:23:05 -04:00
Carles Cufi b54644913d kernel: Use ISR-specific entropy function when available
During the early boot process, in prepare_multithreading(), the kernel
structures and scheduler are not ready yet. In order to obtain entropy
for early work such as stack randomization, optionally use, when present,
the ISR-specific function that some drivers provide.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-05-24 15:13:13 -07:00
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Leandro Pereira 389c36439a kernel: init: Use entropy API directly to initialize stack canary
Some sys_rand32_get() implementations will use shared state and protect
that using some synchronization primitive such as a mutex or a
semaphore.  It's too early in the boot process to use any of them,
which causes some issues.

Use the entropy API directly to set up the stack canaries.

This doesn't completely solve the problem, as some drivers will use the
same synchronization primitives anyway.  Some drivers (e.g.  the NRF5
entropy driver) provide an API to be used by ISRs that might be
suitable here, but not all drivers do that.
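
A hedged sketch of querying the entropy driver directly (error handling
abbreviated; names from the entropy API of this era):

        struct device *dev = device_get_binding(CONFIG_ENTROPY_NAME);
        u32_t canary;

        if (dev != NULL &&
            entropy_get_entropy(dev, (u8_t *)&canary, sizeof(canary)) == 0) {
                __stack_chk_guard = canary;
        }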

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-23 14:42:49 -07:00
Andrew Boie 982d5c8f55 init: run kernel_arch_init() earlier
This was in prepare_multithreading(), which was moved
to after driver initialization and not before it.
The function now really just prepares system threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 17:22:19 -04:00
Andrew Boie 4afc6c9ff2 kernel: remove STACK_ALIGN checks
STACK_ALIGN has somewhat different semantics across our arches,
particularly ARC.

These checks are unnecessary, _new_thread() is required
to properly align stack sizes anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 15:05:15 -05:00
Andrew Boie 72c7ded561 kernel: prepare threads after PRE_KERNEL*
prepare_multithreading() was done very early as it had a call
to initialize the interrupt subsystem. This was causing problems
with stack pointer randomization as any HW-based entropy drivers
had not been initialized.

Move the call to initialize the interrupt system out of
prepare_multithreading(), which now really does just prepare
the system to start threads. This is now done after the PRE_KERNEL
phases.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-22 15:59:07 -07:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Andy Ross eb258706e0 kernel: Move SMP initialization to start of main thread
The smp_init() call was too early.  Device and subsystem
initialization doesn't happen until after the main thread starts
running.  Starting extra CPUs and allowing them to schedule threads
before their drivers are alive is a bad idea, even if it works in a
unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Anas Nashif daf7716ddd build: use git version and hash for boot banner
This uses the version and hash (git describe) and replaces the timestamp
currently used in the boot banner. This works much better than using
timestamps. It lets us point to the exact commit being used to run a
certain application or test.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-10 10:57:50 -04:00
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Leandro Pereira a1ae8453f7 kernel: Name of static functions should not begin with an underscore
Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-10 08:39:10 -05:00
Andy Ross 245b54ed56 kernel/include: Missed nano_internal.h -> kernel_internal.h spots
Update header naming given the recent rename.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bdcd18a744 kernel: Enable SMP
Now that all the pieces are in place, enable SMP for real:

Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 780ba23eb8 kernel: Create idle threads and interrupt stacks for SMP processors
Simple implementation that caps at 4 CPUs.  Long term we should use
some linker magic to define as many as needed and loop over them
without needlessly increasing data or code size for the tracking.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Erwin Rol 1dc41d19b3 kernel: init: initialize stm32 ccm sections
Initialize the ccm_bss section to zero.
Copy the ccm_data section from the rom section.
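
A hedged sketch of the two steps (CCM linker symbol names as used in the
tree; casts abbreviated):

        (void)memset(&__ccm_bss_start, 0,
                     (u32_t)&__ccm_bss_end - (u32_t)&__ccm_bss_start);
        (void)memcpy(&__ccm_data_start, &__ccm_data_rom_start,
                     (u32_t)&__ccm_data_end - (u32_t)&__ccm_data_start);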

Signed-off-by: Erwin Rol <erwin@erwinrol.com>
2018-02-13 12:36:22 -06:00
Adithya Baglody 7bb40bd9ca kernel: init: mem_domain structure is initialized for dummy thread.
For the dummy thread, the contents of the mem_domain structure
are insignificant, so it is set to NULL.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Adithya Baglody edd072e730 tests: benchmarking: cleanup of the benchmarking code.
The kernel will no longer reference the code written in the
test folder.

GH-1236

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-02 09:01:06 -04:00
Leandro Pereira adce1d1888 subsys: Add random subsystem
Some "random" drivers are not drivers at all: they just implement the
function `sys_rand32_get()`.  Move those to a random subsystem in
preparation for a reorganization.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-11-01 08:26:29 -04:00
Youvedeep Singh 9644f6782e kernel: boot_delay: change to busy wait instead of wait
The intention of CONFIG_BOOT_DELAY is to delay booting of the system for
a certain time. Currently it only delays the start of the _main thread,
as the delay is created using k_sleep. This puts the _main thread into
the timeout queue while the kernel boot continues, causing undesirable
effects in some test automation use cases.
This patch changes k_sleep to k_busy_wait, which delays the OS boot
instead of delaying the start of _main.

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-28 14:20:25 -04:00
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Anas Nashif 8920cf127a cleanup: Move #include directives
Move all #include directives to the very top of the file, before any
code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-11 12:41:07 -04:00
Andrew Boie 0a85eaad05 init: initialize dummy thread stack info
Garbage values here could wreak havoc on the initial switch to main
depending on how arch-specific _Swap() manages memory permissions when
switching threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:34:41 -07:00
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Inaky Perez-Gonzalez 1abd064ce7 boot: move boot banner and delay before SYS_INIT_LEVEL_APPLICATION
Fixes https://github.com/zephyrproject-rtos/zephyr/issues/1280, but
also many other failures, where output was garbled due to this. Other
similarly affected issues are missing first benchmark (context) in
latency benchmark and some net tests.

Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-09-07 18:29:05 -05:00
Youvedeep Singh 76b577e180 tests: benchmark: timing_info: Change API/variable Name.
The API/variable names in timing_info look very specific to a
platform (like systick etc.), whereas these variables are used
across platforms (nrf/arm/quark).
So this patch:
1. changes API/variable names to generic ones.
2. creates some macros whose implementation is platform
dependent.

Jira: ZEP-2314

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-31 14:25:31 -04:00
Anas Nashif 83088a235c kernel: init: print boot banner before static threads
The boot banner is being printed after static threads have started, for
example this is visible with tests using ztest.
This puts the banner message before starting any threads.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-24 10:51:04 -04:00
Wayne Ren f8d061faf7 arch: arc: add nested interrupt support
* add nested interrupt support for interrupts
   + use a variable exc_nest_count to track nested interrupts and
     exceptions
   + regular interrupts can be nested by regular interrupts and fast
     interrupts
   + fast interrupts have the highest priority and cannot be nested
* remove the firq stack and exception stack
   + remove the corresponding kconfig option
   + all interrupts (normal and fast) and exceptions will be handled
     on the same stack (the _interrupt stack)
   + the pros are a smaller memory footprint (no firq stack), simpler
     stack management, simpler code, etc. The cons are a possible
     10-15 instruction overhead for the case where a fast irq nests
     a regular irq
* add the ARC case to test/kernel/gen_isr_table

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-08-10 12:47:15 -04:00
Andrew Boie 0fab8a6dc5 x86: page-aligned stacks with guard page
Subsequent patches will set this guard page as unmapped,
triggering a page fault on access. If this is due to
stack overflow, a double fault will be triggered,
which we are now capable of handling with a switch to
a know good stack.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Inaky Perez-Gonzalez c51f73f77f boot: add CONFIG_BOOT_DELAY option
Introduce a configurable boot delay option (defaulting to none) that
happens right after printing a boot delay banner, before calling
main() in kernel/init.c:_main(), before taking timestamps for _main()
and once all the infrastructure is in place. Move also the boot banner
to happen after this delay.

The rationale for this is some boards will boot really fast and print
out some test case output in the serial port before the system that is
monitoring the serial port is able to read from the serial port.

This happens in MCUs whose serial port is embedded in a USB connection
which also is used to power the MCU board. When powering it on by
powering the USB port, there is a time it takes the host system to
detect the USB connection, enumerate the serial port, configure it and
load, start and read from the serial port. By this time, the MCU might
already have printed its output to the serial port.

While manually it is possible to press a reset button, on automation
setups this adds a lot of overhead and cabling or modifications to the
MCU that are easier (and cheaper) to overcome with this delay. Other
options (like using a separate serial line) might not be possible or
add a lot of cabling and cost, plus it'd also add extra build
configuration.

Change-Id: I2f4d1ba356de6cefa19b4ef5c9f19f87885d4dfd
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2017-07-18 08:31:45 +03:00
Andrew Boie bf5228ea56 kernel: add early init routines for app RAM
Applications will have their own BSS and data sections which
will need to be additionally copied.

This covers the common C implementation of these functions.
Arches which implement their own optimized versions will need
to be updated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-29 07:46:58 -04:00
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Adithya Baglody be1cb961ad tests: benchmark: boot_time: Reading time stamps made arch agnostic
1. Changed _tsc_read() to k_cycle_get_32(). Thus reading the
time stamp will be agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.

JIRA: ZEP-1426

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-16 07:37:37 -05:00
Andrew Boie dc5d935d12 kernel: introduce stack definition macros
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.
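
A hedged example of the resulting declaration-macro style (this is the
name Zephyr settled on):

        /* instead of: static char __stack my_stack[1024]; */
        K_THREAD_STACK_DEFINE(my_stack, 1024);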

Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-09 18:53:28 -04:00
Andrew Boie 50a533f7a5 kernel: init: mark initial dummy thread
The initial dummy thread context used for the initial __swap to
the main thread at early kernel initialization was not marked as a dummy
thread as it ought to be.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
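
A hedged usage example (entry function name is illustrative):

        static struct k_thread worker;        /* lives outside the stack */
        K_THREAD_STACK_DEFINE(worker_stack, 1024);

        k_tid_t tid = k_thread_create(&worker, worker_stack,
                                      K_THREAD_STACK_SIZEOF(worker_stack),
                                      worker_entry, NULL, NULL, NULL,
                                      5, 0, K_NO_WAIT);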

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
Adithya Baglody d03b2496cd test: benchmarking: Timing metrics for the kernel
JIRA: ZEP-1822, ZEP-1823, ZEP-1825

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-05-03 08:46:30 -04:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.
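
For illustration, the conversion pattern:

        /* before (C99): uint32_t ticks; int64_t uptime; */
        u32_t ticks;
        s64_t uptime;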

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Anas Nashif 8df439b40b kernel: rename nanoArchInit->kernel_arch_init
Change-Id: I094665e583f506cc71185cb6b8630046b2d4b2f8
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Anas Nashif 45a7e5d076 kernel: remove legacy.h and MDEF support
Change-Id: I953797f6965354c5b599f4ad91d63901401d2632
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Anas Nashif dac15b9b71 samples: shell: fix testcase.ini to be more inclusive
Filter was wrong and sample was not being built on any boards. Exclude
platforms that do not support interrupt based UART drivers.

Jira: ZEP-2014
Change-Id: I84a690e7c93fae52335434830b83086019cfd00d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-11 03:14:25 +00:00
Andrew Boie de38141898 kernel: remove deprecated init levels
Change-Id: Id69ec05d9f3417dcfe5ef7ff170681a0a40f3fe7
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-07 17:45:34 +00:00
Mazen NEIFER e2bbad9600 kernel: init: use C implementation for STACK_CANARY_INIT
Due to a limitation on XCC, the inline assembly does not
produce the expected instructions. This results in a wrong
code sequence. On the other hand, plain C code works well.
The note about compilers seems to not be an issue on any of
our currently supported compilers.

Change-Id: I9d2ab0fbf8a48d9dad51da3fd54453f205516d74
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-02-07 22:18:21 +00:00
Benjamin Walsh ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh 867f8ee371 kernel: move K_ESSENTIAL from thread_state to execution_flags
The execution_flags will store the user-facing states of a thread.

This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().

Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
David B. Kinder ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Jean-Paul Etienne 4c6ab7cfcd unified: added _MOVE_INSTR for RISCV32 architecture
added _MOVE_INSTR for RISCV32 architecture

The store instruction has a different syntax in RISC-V,
compared to the other architectures. Hence, for each
architecture, specify the entire store instruction within
the _MOVE_INSTR variable.

Change-Id: Iedc421e73411876abd8b698f7d4b46081b473d79
Signed-off-by: Jean-Paul Etienne <fractalclone@gmail.com>
2017-01-13 19:53:57 +00:00
Carles Cufi cb0cf9f5f4 kernel: profiling: Expose an API call to analyze call stacks
The main, idle, interrupt and workqueue call stack definitions are not
available to applications to call stack_analyze() on, but they often need
to be measured empirically to tune their sizes for particular applications
and use cases.
This exposes a new k_call_stacks_analyze() API call that allows the
application to measure the used call stack space for the 4
kernel-defined call stacks.
Additionally for the ARC architecture the FIRQ stack is also profiled.

Change-id: I0cde149c7366cb6c4bbe8f9b0ab1cc5b56a36ed9
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2017-01-11 15:19:18 +00:00
Benjamin Walsh f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare (on x86, on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif dc3d73bf58 kernel: fix all nanokernel usage in comments
Also include kernel.h instead of nanokernel.h

Change-Id: I65dc5e31b5409b809397296817e2d5e7adf28892
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:03 +00:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Renamed from kernel/unified/init.c