Commit graph

471 commits

Anas Nashif 10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Nicolas Pitre 659fa0d57d lifo/fifo: first word is not always first 4 bytes
The first word is used as a pointer, meaning it is 64 bits on 64-bit
systems. To reserve it, it has to be either a pointer, a long, or an
intptr_t, not an int nor a u32_t.
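
For illustration, a minimal sketch of a queue item under this rule
(item type and field names hypothetical):

        struct my_item {
                void *fifo_reserved; /* first word: pointer-sized, not u32_t */
                int payload;
        };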

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-26 09:08:42 -04:00
David B. Kinder 8de9cc7079 doc: use @rst/@endrst for ReST in headers
Folks found the use of @rststar/@endrststar non-intuitive (they wanted
to type @rststart).  The "star" was there to indicate the doxygen
comment lines had a leading asterisk that needed to be stripped, but
since our commenting convention is to use the leading asterisk on
continuation lines, the leading asterisk is always there.  So, change
the doxygen alias to the more expected @rst/@endrst.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2019-06-25 23:33:55 -04:00
Wentong Wu 5611e92347 kernel: add futex support
A k_futex is a lightweight mutual exclusion primitive designed
to minimize kernel involvement. Uncontended operation relies
only on atomic access to shared memory. The k_futex structure lives
in application memory, and when using futexes, the majority of
the synchronization operations are performed in user mode. A
user-mode thread employs the futex wait system call only when
it is likely that the program has to block for a longer time
until the condition becomes true. When the condition becomes true,
the futex wake operation is used to wake up one or more threads
waiting on that futex.

This patch implements two futex operations: k_futex_wait and
k_futex_wake. For k_futex_wait, the comparison with the expected
value and the start of the sleep are performed atomically to prevent
lost wake-ups. If a different context changes the futex's value after
the calling user-mode thread has decided to block based on
the old value, the comparison will detect the change and the
thread will not start to sleep. k_futex_wake wakes at most
num_waiters of the threads sleeping on that futex. No guarantee
is made about which threads are woken; scheduling priority is
not taken into consideration.
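
A minimal usage sketch (the val field and the second argument of
k_futex_wake are assumptions based on the description above):

        struct k_futex flag;            /* lives in application memory */

        void wait_for_flag(void)
        {
                while (atomic_get(&flag.val) == 0) {
                        /* sleeps only if the value is still 0 */
                        k_futex_wait(&flag, 0, K_FOREVER);
                }
        }

        void raise_flag(void)
        {
                atomic_set(&flag.val, 1);
                k_futex_wake(&flag, false); /* wake a single waiter */
        }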

Fixes: #14493.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-06-24 15:38:21 -07:00
Nicolas Pitre 465b2cf31b mempool: fix corruption of the free block bitmap and beyond
In z_sys_mem_pool_block_alloc() the size of the first level block
allocation is rounded up to the next 4-byte boundary. This means one
or more of the trailing blocks could overlap the free block bitmap.

Let's consider this code from kernel.h:

  #define K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) \
       char __aligned(align) _mpool_buf_##name[_ALIGN4(maxsz * nmax) \
                              + _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \

The static pool allocation rounds up the product of maxsz and nmax, not
the size of individual blocks. If we have, say, maxsz = 10 and nmax = 20,
the result of _ALIGN4(10 * 20) is 200. That's the offset at which the
free block bitmap will be located.

However, because z_sys_mem_pool_block_alloc() does this:

        lsizes[0] = _ALIGN4(p->max_sz);

Individual level 0 blocks will have a size of 12, not 10. That means
the 17th block will extend up to offset 204, 18th block up to 216, 19th
block to 228, and 20th block to 240. So 4 out of the 20 blocks are
overflowing the static pool area and 3 of them are even located
completely outside of it.

In this example, we have only 20 blocks that can't be split so there is
no extra free block bitmap allocation beyond the bitmap embedded in the
sys_mem_pool_lvl structure. This means that memory corruption will
happen in whatever data is located alongside the _mpool_buf_##name
array. But even with, say, 40 blocks, or larger blocks, the extra bitmap
size would be small compared to the extent of the overflow, and it would
get corrupted too of course.

And the data corruption will happen even without allocating any memory
since z_sys_mem_pool_base_init() stores free_list pointer nodes into
those blocks, which in turn may get corrupted if that other data is
later modified instead.

Fixing this issue is simple: rounding on the static pool allocation is
"misparenthesized". Let's turn

	_ALIGN4(maxsz * nmax)

into

	_ALIGN4(maxsz) * nmax

But that's not sufficient.

In z_sys_mem_pool_base_init() we have:

        size_t buflen = p->n_max * p->max_sz, sz = p->max_sz;
        u32_t *bits = (u32_t *)((u8_t *)p->buf + buflen);

Considering the same parameters as above, here we're locating the extra
free block bitmap at offset `buflen`, which is 20 * 10 = 200, again within
the reach of the last 4 memory blocks. If the number of blocks gets past
the size of the embedded bitmap, it will overlap memory blocks.

Also, the block_ptr() call used here to initialize the free block linked
list uses unrounded p->max_sz, meaning that it is initially not locating
dlist nodes within the same block boundaries as what is expected from
z_sys_mem_pool_block_alloc(). This opens the possibility for allocated
adjacent blocks to overwrite dlist nodes, leading to random crashes in
the future.

So a complete fix must round up p->max_sz here too.

Given that runtime usage of max_sz should always be rounded up, it is
then preferable to round it up once at compile time instead and avoid
further mistakes of that sort. The existing _ALIGN4() usages on p->max_sz
at run time then become redundant.
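
For concreteness, with the example values above (assuming _ALIGN4()
rounds its argument up to the next multiple of 4):

        _ALIGN4(10 * 20)   /* == 200: pool too small, 4 blocks overflow */
        _ALIGN4(10) * 20   /* == 240: room for 20 blocks of 12 bytes    */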

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-24 12:10:09 -07:00
Nicolas Pitre 46cd5a0330 mem_slab: enforce minimum alignment on statically allocated slabs
There is no point in allowing smaller alignments. And on 64-bit systems the
minimum becomes 8 rather than 4, so let's adjust things automatically.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-20 08:42:45 -04:00
Anas Nashif f2cb20c772 docs: fix misspelling across the tree
Found a few annoying typos and figured I'd better run the script and
fix anything it could find; here are the results...

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-19 15:34:13 -05:00
Nicolas Pitre 3d51f7c266 k_stack: make it 64-bit compatible
The k_stack data type cannot be u32_t on a 64-bit system as it is
often used to store pointers. Let's define a dedicated type for stack
data values, namely stack_data_t, which can be adjusted accordingly.
For now it is defined as uintptr_t, the integer type large
enough to hold a pointer, meaning it is equivalent to u32_t on 32-bit
systems and u64_t on 64-bit systems.
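
A sketch of the round-trip this enables (stack and some_object assumed
declared and initialized elsewhere):

        extern struct k_stack my_stack;
        void *in = some_object, *out;
        stack_data_t data;

        k_stack_push(&my_stack, (stack_data_t)in);  /* store a pointer  */
        k_stack_pop(&my_stack, &data, K_FOREVER);
        out = (void *)data;                         /* recovered intact */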

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-14 05:46:29 -04:00
Ioannis Glaropoulos a6cb8b06db kernel: introduce k_float_disable system call
We introduce the k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-12 09:17:45 -07:00
Anas Nashif 240c516316 doc: generate documentation of ifdef'ed APIs
Enable generation of doxygen documentation for kernel APIs that are
behind Kconfig options and add a note about the option needed to enable
the APIs.

Enable both CONFIG_SCHED_CPU_MASK and CONFIG_SCHED_DEADLINE in doxygen
config file.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-10 19:37:29 -04:00
David B. Kinder 00c41ea893 doc: fix doxygen comments with embedded reST
Doxygen comments can include doxygen-specific markup tags.  If other
markup tags are used (e.g., reStructuredText), we need to indicate that
in the doxygen comments (via @rststar/@endrststar tags).

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2019-06-10 18:16:12 -04:00
Nicolas Pitre b1d3742ce2 linker generated list: introduce Z_STRUCT_SECTION_ITERABLE()
This convenience macro wraps Z_DECL_ALIGN() and __in_section() to
simplify static definitions of structure instances gathered in dedicated
sections. Most of the time those go together, and the section name is
already closely related to the struct type, so abstracting things behind
a simpler interface reduces the probability of mistakes and makes the code
clearer. A few input section names have been adjusted accordingly.
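
A before/after sketch of a static definition (type and initializer
illustrative):

        /* before: alignment and section attributes spelled out by hand */
        Z_DECL_ALIGN(struct k_timer) my_timer
                __in_section(_k_timer, static, my_timer) = { /* ... */ };

        /* after: one wrapper, same layout */
        Z_STRUCT_SECTION_ITERABLE(k_timer, my_timer) = { /* ... */ };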

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Nicolas Pitre 8bb1f2a947 linker generated list: explicit alignment on data definitions
The alignment fix on struct device definitions should be done to all
such linker list tricks. Let's abstract the declaration plus alignment
with a macro and apply it to all concerned cases.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-06 14:21:32 -07:00
Nicolas Pitre 58d839bc3c misc: memory address type conversions
The uintptr_t type is more appropriate to represent memory addresses
than u32_t.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-03 21:14:57 -04:00
Nicolas Pitre a04a2ca76c k_fifo/k_lifo macros: avoid unnecessary casts
The fifo/lifo API is implemented on top of the queue API with macros
that blindly force a cast to struct k_queue. Providing a reference to
the _queue member from the k_fifo structure is much cleaner as it lets
the compiler perform pointer type checking. Generated code is identical.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-03 21:11:42 -04:00
Andy Ross 92ce767048 kernel/fatal: Clean up z_except_reason() fallback implementation
Architectures that lack implementations of synchronous traps (via
Z_ARCH_EXCEPT()) end up using a z_except_reason() implementation that
doesn't actually trap at all.  It just invokes
z_NanoFatalErrorHandler() in the current thread context.

That has two problems:

First, it was just blindly assuming that the error handling invoked
would abort the current thread, swap away, and never return.  But that
can be application code in z_SysFatalErrorHandler that we can't
control.

Second, it was too broad with this assumption and stuffed a
CODE_UNREACHABLE hint in for the compiler.  But in fact
z_except_reason() may be invoked in interrupt context (for example the
stackprot check) where it may NOT swap away and WILL return
synchronously from the call.  This doesn't seem to have caused a
miscompilation in production code, but it made a total voodoo hash out
of my debugging around this macro for an hour or so until I figured
out why my logging was being optimized out.

Do the abort unconditionally instead of relying on the app, and remove
the incorrect compiler hint.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-06-03 12:03:48 -07:00
Charles E. Youse a567831bed kernel/sched.c: add k_usleep() API function
Add k_usleep() API, analogous to k_sleep(), except that the argument
is in microseconds rather than milliseconds.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-05-21 23:09:16 -04:00
Peter A. Bigot 773bd98c73 doc: clarify behavior of k_poll_signal_raise on error
k_poll_signal_raise() returns an error code to indicate that the raise
was too late to notify an expiring poll.  Make clear that this does not
mean that the signal was lost: a subsequent poll will find it and expire
immediately.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-05-01 10:40:19 -04:00
Flavio Ceolin 4f99a38b06 arch: all: Remove not used struct _caller_saved
The struct _caller_saved is not used. Most architectures automatically
push the registers onto the stack; in other architectures the
exception code does it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Andrew Boie 4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm; we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
Patrik Flykt 7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Patrik Flykt 97b3bd11a7 drivers: Rename reserved function names
Rename reserved function names in drivers/ subdirectory. Update
function macros concatenating function names with '##'. As
there is a conflict between the existing gpio_sch_manage_callback()
and _gpio_sch_manage_callback() names, leave the latter unmodified.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie ac3dcc1106 doc: clarify k_queue_alloc_append and related APIs
The data isn't copied; there's just an additional implicit
allocation.

Fixes: #15090

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-01 18:27:57 -04:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Andrew Boie 575abc0150 kernel: add K_THREAD_STACK_RESERVED
This is used to have each arch canonically state how much
room in the stack object is reserved for non-thread use.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-20 13:59:26 -07:00
Charles E. Youse 6d01f67f8a kernel/msg_q: reorder _K_MSGQ_INITIALIZER() initializers
This is a trivial change to satisfy C++, which requires that designated
initializers appear in the same order as the members they initialize.
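
A generic illustration of the rule (not the actual k_msgq members):

        struct pair { int first; int second; };

        struct pair ok  = { .first = 1, .second = 2 }; /* C and C++ */
        struct pair bad = { .second = 2, .first = 1 }; /* C only: rejected
                                                          by C++ */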

Fixes: #14540

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-03-18 17:00:04 -07:00
Andy Ross c0183fdedd kernel/work_q: Fix locking across multiple queues
The code detected racing insertions of k_delayed_work items into
different queues and flagged them as a user error (honestly I don't
see much value there -- Zephyr doesn't as a general rule protect
against errors like this, and work_q's are inherently kernel things
that don't require userspace-style checking).

This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more.  As it happens, that was needless.  The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion.  So go back to a global lock to
preserve the original behavior.

Fixes #14104

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-12 18:37:41 +01:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andy Ross d7ae2a817d kernel/work_q: Clarify docs for k_delayed_work_cancel()
You can't cancel what hasn't been submitted.  Clarification added
following minor bikeshedding on GitHub.  Fixes #14105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-08 16:43:56 -05:00
Andy Ross 2611e2dade include/kernel.h: Remove unsupported return values from k_work APIs
Nothing in the code actually returns -EINPROGRESS, and in the case of
k_work_init() I don't see how that can even be done in a reliable way.
Don't claim we do what we don't.  Fixes #14109.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-08 06:52:00 -05:00
Charles E. Youse 0ad4022e51 kernel/timeout: fix k_timer_remaining_get() when tickless
In some circumstances (e.g., a tickless kernel), k_timer_remaining_get()
would not account for time passed that didn't involve clock interrupts.
This adds a simple fix for that, and adds a test case.  In addition, the
return value of k_timer_remaining_get() is clamped at 0 in the case of
overdue timers and the API description is adjusted to reflect this.

Fixes: #13353

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-03-01 14:53:33 -08:00
Krzysztof Chruscinski be06327d39 kernel: Reworking _K_TIMER_INITIALIZER and _K_PIPE_INITIALIZER for C++
Modify macros for struct initialization to compile in C++.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2019-02-13 11:06:56 -06:00
Andy Ross f582b55dd6 kernel/pipe: Spinlockify
One spinlock per pipe object.  Also removed some vestigial locking
around _ready_thread().  That call is internally synchronized now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross be03dbd4c7 kernel/msg_q: Spinlockify
One lock per msgq.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross f0933d0ded kernel/stack: Spinlockify
One lock per stack.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 9eeb6b8779 kernel/mbox: Spinlockify
Straightforward per-struct-k_mbox lock.  Nothing changes in locking
strategy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross 603ea42764 kernel/queue: Spinlockify
Straightforward port.  Each struct k_queue object gets a spinlock to
control obvious data ownership.

Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock.  But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled.  The spinlock API detects that as a recursive lock when
asserts are enabled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross a37a981b21 kernel/work_q: Spinlockify
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock.  Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andrew Boie 41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Paul Sokolovsky 65d51fd423 kernel: Document resolution of k_uptime_get*()
While k_uptime_get() and k_uptime_get32() return time in
milliseconds, they don't need to have millisecond resolution.
Resolution with default Zephyr settings is 10ms.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2019-02-04 17:56:55 -05:00
Andy Ross ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
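
A pinning sketch using the new calls (tid assumed obtained from
k_thread_create() and not yet started):

        /* restrict 'tid' to CPU 0 before making it runnable */
        k_thread_cpu_mask_clear(tid);      /* disable all CPUs for it */
        k_thread_cpu_mask_enable(tid, 0);  /* allow CPU 0 only        */
        k_thread_start(tid);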

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
Anas Nashif 4bcb294f45 doc: move usermode API documentation
Move API reference to the main documentation section under the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-24 09:16:03 -05:00
Anas Nashif 29f37f0ddb doc: threads: merge into one document
Merge kernel sections into one single document and remove the
intermediate page grouping objects.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-24 09:16:03 -05:00
Anas Nashif 30c3cff842 kernel: add cpu_idle functions to a doxy group
Added cpu_idle APIs to a doxygen group; otherwise they would be missing
from the project documentation.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-24 09:16:03 -05:00
Peter A. Bigot b4ece0ad44 kernel: timeout: detect inactive timeouts using dnode linked state
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state.  This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.

Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.

Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot 82ad0d24ca kernel: thread: correct type of dlist node
Although sys_dnode_t and sys_dlist_t are aliases, their roles are
different and they appear in different positions in dlist API calls.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Peter A. Bigot bfad9721d2 kernel: remove k_alert API
This API was used in only one place in non-test code.  See whether we
can remove it.

Closes #12232

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-16 21:34:07 -05:00
Flavio Ceolin d1ed336b9b kernel: Do not use zero-length array
A zero-length array is a GNU extension that works as a header for a
variable length object. The portable solution for this is using a
flexible array member, but that can be used only at the end of a
struct declaration, and this violates MISRA-C rule 18.8.

The easiest way to get rid of this is to make the macro expand to
nothing, but then we would have a trailing semicolon, which is not
allowed in C99. So the macro was changed to automatically add the
semicolon when needed. This may break code indentation in some editors
but it is a fair price to pay to have portability and compliance.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-08 09:58:24 -05:00
Flavio Ceolin 6a4a86e413 kernel: Change k_is_in_isr to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin 09e362e0d0 kernel: Change _is_thread_essential to return bool
Change this function to return a boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Adithya Baglody 392219eab8 kernel: Change the prototype of k_thread_access_grant.
This API was using a variable number of arguments, which is not
allowed according to MISRA C guidelines (Rule 17.1). Hence this API
is made into a macro, using the util macro FOR_EACH_FIXED_ARG
to get the same functionality.

There is one deviation from the old function: the last argument
shouldn't be NULL.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-03 12:35:14 -08:00
David B. Kinder 06d78354ae doc: regular misspelling scan
Fix misspellings in documentation (.rst, Kconfig help text, and .h
doxygen API comments), missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-12-26 13:27:14 -05:00
Flavio Ceolin f1e5303468 kernel: Change function signature
Change k_timer_remaining_get to return unsigned.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-11 14:37:10 -08:00
Anas Nashif 50e3acd2c5 doc: kernel: fix obj param in doxygen comment
Use same argument name for doxygen.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-10 20:38:09 -05:00
Flavio Ceolin 4b35dd2628 misra: Fixes for MISRA-C rule 8.2
C90 introduced the function prototype, which allows argument types
to be checked against parameter types, though it is not necessary to
specify names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.

MISRA-C rule 8.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Ioannis Glaropoulos 39bf24a9bd kernel: expose k_mem_partition_attr_t outside User mode
This commit exposes k_mem_partition_attr_t outside User Mode, so
we can use struct k_mem_partition for defining memory partitions
outside the scope of user space (for example, to describe thread
stack guards or non-cacheable MPU regions). A requirement is that
the Zephyr build supports Memory protection. To signify this, a
new hidden, all-architecture Kconfig symbol is defined (MPU). In
the wake of exposing k_mem_partition_attr_t, the commit exposes
the MPU architecture-specific access permission attribute macros
outside the User space context (for all ARCHs), so they can be
used in a more generic way.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-12-05 15:15:07 -05:00
Ioannis Glaropoulos 293247e879 kernel: remove MEM_PARTITION_ENTRY macro
MEM_PARTITION_ENTRY is problematic, as it assumes that
struct k_mem_partition contains a k_mem_partition_attr_t
field, which is only true if Memory Protection is supported.
Additionally, it works with k_mem_partition_attr_t being a
single element object (scalar or single element structure).
This commit removes the macro function and updates macro
K_MEM_PARTITION_DEFINE() (MEM_PARTITION_ENTRY has only been
used in that macro function definition).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-12-05 15:15:07 -05:00
Flavio Ceolin 82ef4f8ec4 kernel: Make boolean function return bool
MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Andrew Boie 2b1d54e897 kernel: add user mode work_q capability
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Andrew Boie c2e01dff3f workqueues: don't put k_work in special section
There's no current need for this and it makes work items
declared with K_WORK_DEFINE() inaccessible to user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Andrew Boie 8acf899a0d workqueues: remove object init calls
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Sathish Kuttan 3efd8e17bd kernel: Add k_msgq_peek() API
Add an API to peek into a message queue and read the first message
without removing the message from the queue.
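
A short sketch (message type hypothetical; queue assumed initialized
elsewhere; 0 assumed returned on success):

        struct my_msg peeked;

        if (k_msgq_peek(&my_msgq, &peeked) == 0) {
                /* 'peeked' holds a copy of the head message;
                 * the queue itself is unchanged
                 */
        }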

Signed-off-by: Sathish Kuttan <sathish.k.kuttan@intel.com>
2018-11-19 17:53:22 -05:00
Andrew Boie 42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Flavio Ceolin aecd4ecb8d kernel: Change k_poll_signal api
k_poll_signal was being used by both a struct and a function. Besides
being extremely error prone, this is also a MISRA-C violation.
Change the function to contain a verb, since it performs an action,
while the struct remains a noun. This pattern must be formalized and
followed across the project.

MISRA-C rules 5.7 and 5.9

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin 61a1057ea5 kernel: Remove redundant type name
struct k_thread already has a pointer type, k_tid_t; there is no need
for the additional tcs definition.

Less symbols/names make the code cleaner and more readable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Adithya Baglody 2a78b8d86f kernel: queue: MISRA C compliance.
This patch fixes a few issues in queue.c. This patch also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Piotr Zięcik 7700eb2a15 kernel: sched: Make k_sleep() similar to POSIX equivalent
This commit introduces a k_sleep() return value, which provides
information about the actual sleep time. If the returned value is
nonzero, the thread slept shorter than requested, which is
only possible if the thread has been woken up by a k_wakeup() call.
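
A sketch of the new contract:

        s32_t left = k_sleep(1000);  /* request 1000 ms */

        if (left != 0) {
                /* woken early by k_wakeup(); 'left' ms were not slept */
        }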

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-10-30 18:27:31 +01:00
Benoit Leforestier 26e0f9a9e1 Build: Improve C++ support
Can choose the C++ standard (C++98/11/14/17/2a)
Can link with standard C++ library (libstdc++)
Add support of C++ exceptions
Add support of C++ RTTI
Add C++ options to subsys/cpp/Kconfig
Implements new and delete using k_malloc and k_free
if CONFIG_HEAP_MEM_POOL_SIZE is defined

Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
2018-10-29 09:15:04 -04:00
Adithya Baglody 4b066212b6 kernel: sem: Fix few MISRA C violations.
This patch fixes a few of the violations inside sem.c

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Adithya Baglody 28080d3896 kernel: MISRA C: Fixes a few MISRA C issues.
MISRA C guideline compliance for various rules.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Adithya Baglody d591588ab5 kernel: MISRA C guideline compliance for rule 11.6
This patch removes the typecast (void *). This can be better
handled by casting to the actual typedef. This fixes MISRA
rule 11.6 for alert.

Part of GH-10042.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Andy Ross cfe62038d2 kernel: Checkpatch fixups
I was pretty careful, but these snuck in.  Most of them are due to
overbroad string replacements in comments.  The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed.  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one-liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to scheme in the rest of the timeout API.  And rename it to
a more standard zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross d61b1f8ef8 kernel/timeout: Remove timeout wait_q field
Per previous patch, this is known to be identical with
thread->pended_on.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 15d520819d kernel/timeout: Prepare unification of timeout/thread wait_q fields
The existing timeout API wants to store a wait_q on which the thread
is waiting, but it only uses that value in one spot (and there only as
a boolean flag indicating "this thread is waiting on a wait_q").

As it happens threads can already store their own backpointers to a
wait_q (needed for the SCALABLE scheduler backend), so we should use
that instead.

This patch doesn't actually perform that unification yet.  It
reorganizes things such that the pended_on field is always set at the
point of timeout interaction, and adds a bunch of asserts to make 100%
sure the logic is correct.  The next patch will modify the API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross b8ffd9acd6 sys_clock: Make clock_always_on true by default
This flag is an indication to the timer driver that the OS doesn't
care about rollover conditions of the tick count while idling, so the
system doesn't need to wake up once per counter flip[1].  Obviously in
that circumstance values returned from k_uptime_get_32() are going to
be wrong, so the implementation had an assert to check for misuse.

But no one understood that from the docs, so the only place these APIs
were used in practice was as "guards" around code that needed to call
k_uptime_get_32(), even though that's 100% wrong per docs!

Clarify the docs.  Remove the incorrect guards.  Change the flag to
initialize to true so that uptime isn't broken-by-default in tickless
mode.  Also move the implementations of the functions out of the
header, as there's no good reason for these to need to be inlined.

[1] Which can be significant.  A 100MHz ARM using the 24 bit SysTick
    counter rolls over at about 6 Hz, and if it had to come out of
    idle at that rate it would be a significant power issue that would
    swamp the gains from tickless.  Obviously systems with slow
    counters like nRF or 64 bit ones like RISC-V or x86's TSC aren't
    as affected.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 0d1228af36 kernel.h: Header hygiene, move clock/timer handling
The kernel.h file had a bunch of internal APIs for timeout/clock
handling mixed in.  Move these to sys_clock.h, which it always
included (in a weird location, so move THAT to kernel_includes.h with
everything else).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Anas Nashif c77c043071 kernel: remove deprecated k_thread_cancel
Remove deprecated function k_thread_cancel. We now use k_thread_abort.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-09 13:58:01 -04:00
Dhananjay Gundapu Jayakrishnan 24bfa40964 kernel: k_queue: extend k_queue API to append unique element
k_queue has a k_queue_append API which does not check if the element's
address already exists. This creates a problem if the same element
address is appended to the queue: it forms a circular list, causing
unintended behaviour for the application using the queue. The proposed
API k_queue_find_and_append takes care of checking if the element
exists already before appending. This API is complementary to
k_queue_remove, which checks if the queue element is present before
removing.
Signed-off-by: Dhananjay Gundapu Jayakrishnan <dhananjay.jayakrishnan@proglove.de>
2018-10-08 12:59:12 -04:00
Mark Ruvald Pedersen 9960bd9545 portability: Ensure no C99-illegal semicolons exists in structs
The macro _OBJECT_TRACING_NEXT_PTR expands to a member or to nothing.
It is used in a number of places, like:

        struct k_stack {
                .. omitted ..
                _OBJECT_TRACING_NEXT_PTR(k_stack);
                u8_t flags;
        };

When the macro expands to nothing, a lonesome semi would remain. This is
illegal in C99, but permitted in GCC with GNU extensions.

Rather than expand to empty, we now expand to a zero-length array.
This means we can retain the trailing semis across structs wherein the
macro is used.

Note that zero-length array (foo[0]) != flexible array member (foo[]):
 * zero-length array: Is GNU+Clang extension. Anywhere in struct.
 * flexible array member: Is C99. Only in end of struct.

Thus we have really only traded off one portability issue for
another, more acceptable one.

Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
2018-09-28 07:57:28 +05:30
Flavio Ceolin 02ed85bd82 kernel: sched: Change boolean APIs to return bool
Change APIs that essentially return a boolean expression  - 0 for
false and 1 for true - to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Ioannis Glaropoulos 12c02448aa arch: arm: style fixes in documentation of MPU region types
Some minor style fixes and rewording of the documentation
for ARM MPU region types.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-09-27 08:10:02 -05:00
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Added k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Anas Nashif 3a117c220a kernel: remove unused macro parameter
The group parameter was not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Anas Nashif 0a73ea04fa kernel: remove deprecated k_call_stacks_analyze
This API was deprecated and is not being used in the tree anymore, so
remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-21 10:33:05 -04:00
Flavio Ceolin 67ca176754 headers: Fix headers across the project
Any word starting with an underscore followed by an uppercase letter
or a second underscore is reserved according to C99.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
Vinayak Kariappa Chettimada c7d2734455 kernel: Improve precision of ticks and ms conversions
The following 2 improvements are contained in this patch:

- When converting from ms to ticks, instead of using hardware cycles
  per tick, use hardware cycles per second. This ensures that the
  multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
  and cycles per sec, use ticks per sec. This too increases the
  precision.

The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
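
A generic illustration of the ordering (hypothetical values: 1500 ms
at 100 ticks per second):

        u32_t ms = 1500, ticks_per_sec = 100;
        u32_t ticks;

        /* divide first, truncating: (1500 / 1000) * 100 == 100 ticks */
        ticks = (ms / MSEC_PER_SEC) * ticks_per_sec;

        /* multiply first, keeping precision: (1500 * 100) / 1000 == 150 */
        ticks = ((u64_t)ms * ticks_per_sec) / MSEC_PER_SEC;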

Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-08-31 11:14:39 -04:00
Paul Sokolovsky 45c0b20470 kernel: k_poll: Introduce separate status for cancelled events
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll() returned -EINTR.
But it did not set the event->state field (to anything else but
K_POLL_STATE_NOT_READY), so in the case of waiting on multiple queues
it was not possible to differentiate which of them was cancelled.

This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.

This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is now set for a cancelled queue
(the -EINTR return remains the same).

This change also elaborates the docstrings of the functions mentioned,
to document this behavior.
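
A sketch of the resulting check (event setup omitted):

        struct k_poll_event events[NUM_EVENTS]; /* initialized elsewhere */

        int ret = k_poll(events, NUM_EVENTS, K_FOREVER);

        if (ret == -EINTR) {
                for (int i = 0; i < NUM_EVENTS; i++) {
                        if (events[i].state == K_POLL_STATE_CANCELLED) {
                                /* this queue's wait was cancelled */
                        }
                }
        }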

Fixes: #9032

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-08-30 09:28:29 -04:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Daniel Leung fc182430c0 kernel: userspace: reserve stack space to store local data
This enables reserving a little space at the top of the stack to store
data local to the thread when CONFIG_USERSPACE is enabled. The first
customer of this is errno.

Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common way.

Fixes: #9067

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-17 09:40:52 -07:00
Flavio Ceolin 8aec087268 kernel: Fix bitwise operators with unsigned operands
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Flavio Ceolin f23a8cdd2d kernel: Fix k_*_sys_clock_always_on macro
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation when
TICKLESS_KERNEL is not available, but this macro does not return the
same thing the functions return, so its use may result in a
compilation error.

Another point to consider is that if one is using this function
without the feature enabled, it is better to return a proper error
like ENOTSUP, explicitly saying that this is not supported.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Piotr Zięcik 3c7f990367 kernel: Do not use sys_clock_ticks_per_sec in _ms_to_ticks()
The value of sys_clock_ticks_per_sec is obtained using
simple integer division with rounding toward zero. As a result,
using this variable in _ms_to_ticks() introduces some error.

This commit eliminates sys_clock_ticks_per_sec from equation
used in _ms_to_ticks() removing introduced error.

Also, this commit fixes #8895.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-08-14 07:18:44 -07:00
Kumar Gala 8777ff1304 Fix compile errors related to errno.h
Because errno.h is defined in terms of a syscall we can get into trouble
when one syscall/<FOO.h> ends up including another syscall/<BAR.h>.

Moving errno.h from kernel_includes.h to kernel.h breaks the possible
inclusion issue on some ARM platforms (where arm_mpu.h ends up including
soc.h, which ends up including kernel_includes.h, which would include
errno.h).

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-07-25 23:38:13 -04:00
Piotr Zięcik 96aa0d2133 kernel: Use accurate tick/ms conversion if clock rate is set at runtime
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if system clock rate is determined at runtime.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-20 10:17:47 -04:00
Andrew Boie 7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since this is guaranteed to be read/writable by
a user thread.

The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
Piotr Zięcik 77f42f8312 kernel: Move _ms_to_ticks() and __ticks_to_ms() close to each other.
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik 91fe22ec7d kernel: Improve tick <-> ms conversion.
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick rates
(like 100). As a result, on some SoCs time calculations were not
correct, producing strange effects (invalid sleep times, incorrect
k_uptime_get(), etc.).

This commit enables accurate, but costly (using 64-bit math) tick <-> ms
conversion if the selected tick interval is not exact due to hardware
limitations.

Also, this commit fixes tests in which removed _ms_per_tick were used.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik e995c27b42 kernel: Do not use fixed list of "good" sys_clock_ticks_per_sec values.
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values whose usage results in an integer _ms_per_tick value.

Instead of using the list, simply check if MSEC_PER_SEC could be divided
without remainder by sys_clock_ticks_per_sec.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik fe2ac39bf2 kernel: Cleanup _ms_to_ticks().
This commit moves all implementations of the _ms_to_ticks() into
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Andy Ross 225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Rajavardhan Gundi d4dd928eaa kernel/stack: Introduce K_THREAD_STACK_LEN macro
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. This is necessitated because of some
internal padding (e.g. for MPU scenarios). This is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
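
A sketch of the intended use (sizes illustrative):

        K_THREAD_STACK_ARRAY_DEFINE(my_stacks, 4, 1024);

        /* per-element length, internal padding included */
        size_t elem_len = K_THREAD_STACK_LEN(1024);
        /* the i-th stack starts at (char *)my_stacks + i * elem_len */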

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-07-03 08:44:09 -07:00
Ioannis Glaropoulos 92b8a41f20 include: create kernel_includes.h header to hold kernel includes
This commit creates a new header file (kernel_includes.h) that
contains all header files to be included by kernel.h.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-06-21 22:28:00 +02:00
Andy Ross 55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll.  The disadvantage is the 4
bytes of thread space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so no need for one of the irq lock/unlock pairs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira 0e23ad889e kernel: k_work: k_work_init() should initialize all fields
k_work_init() was not initializing all fields in the k_work struct.

Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member.  The `_reserved` member was never initialized.

With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.

This fixes the Valgrind issues found in #7478.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-06-05 10:26:59 -04:00
Andrew Boie b85ac3e58f kernel: clarify thread->stack_info documentation
ARC/ARM are not properly doing this at the moment but this will
be corrected in later patches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Andrew Boie e2d779159f kernel: update stack macro documentation
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Anas Nashif 47420d04f0 doc: add requirement IDs
Add requirement IDs for traceability.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif a541e93d9a doc: document thread options
Add doxygen documentation to thread options.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif ce78d16b73 doc: document kernel APIs with doxygen
Document a few structs and cleanup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Andrew Boie df55524d6a userspace: align _k_object to 4 bytes
We want the struct to be packed to conserve space, but the
perms field needs to always be on a 4-byte boundary since
we do bitfield operations on it; arches like ARC require
that the sys_bitfield_* operations be aligned to a 4-byte
boundary.

Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.

Fixes: #7776

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-25 13:25:04 -07:00
Anas Nashif c8e0d0cebc kernel: add requirement Ids to implementation
Add requirement ID place holders based on APIS. The requirements will
appear as a list in doxygen documentation. The IDs will be expanded with
more details somewhere else, probably a requirement catalog on GH or
some other requirement management tool. This is still TBD.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-23 18:58:03 -04:00
David B. Kinder fcbd8fb631 doc: fix misspellings in API doxygen comments
Found some misspellings missed during normal code reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-23 15:28:01 -05:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
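
A usage sketch (the deadline unit is assumed to be hardware cycles, as
a delta from "now"):

        /* among ready threads of equal priority, the one whose
         * deadline expires soonest is selected first
         */
        k_thread_deadline_set(k_current_get(), 1000);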

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 3772f77119 k_poll: expose to user mode
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array; this can be large enough to make allocating it on the stack
not preferable.

k_poll_signal are now proper kernel objects. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie 2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
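A sketch of the allocating put (my_fifo and item are illustrative
names):

  K_FIFO_DEFINE(my_fifo);

  static u32_t item = 42;

  /* No reserved list-node word inside the data item is needed; the
   * container struct comes from the caller's resource pool instead,
   * so the put can fail with -ENOMEM. */
  if (k_fifo_alloc_put(&my_fifo, &item) != 0) {
          /* resource pool exhausted */
  }
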
Kumar Gala 85699f7c6f kernel: Fix compile warning with _impl_k_object_alloc
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:

include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
 static inline void *_impl_k_object_alloc(enum k_objects otype)
                                                         ^~~~~
The simple fix is to mark otype with ARG_UNUSED().

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-17 13:06:48 -05:00
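The fix, roughly, for the CONFIG_DYNAMIC_OBJECTS=n stub shown above:

  static inline void *_impl_k_object_alloc(enum k_objects otype)
  {
          ARG_UNUSED(otype);

          /* Dynamic objects are disabled in this configuration. */
          return NULL;
  }
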
Andrew Boie f3bee951b1 kernel: stacks: add k_stack_alloc() init
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.

Fixes #7285

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
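Sketch of the replacement call (my_stack is illustrative; the buffer
for 32 entries is drawn from the caller's resource pool):

  struct k_stack my_stack;

  if (k_stack_alloc_init(&my_stack, 32) != 0) {
          /* resource pool could not satisfy the buffer allocation */
  }
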
Andrew Boie 0fe789ff2e kernel: add k_msgq_alloc_init()
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
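Sketch of the new call (my_msgq and the sizes are illustrative):

  struct k_msgq my_msgq;

  /* The ring buffer for 16 messages of 8 bytes each comes from the
   * calling thread's resource pool, not from user-supplied memory. */
  if (k_msgq_alloc_init(&my_msgq, 8, 16) != 0) {
          /* allocation failed */
  }
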
Andrew Boie 44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
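Sketch of the new call (my_pipe and the size are illustrative):

  struct k_pipe my_pipe;

  /* The 256-byte internal buffer is allocated from the caller's
   * resource pool rather than trusted from user memory. */
  if (k_pipe_alloc_init(&my_pipe, 256) != 0) {
          /* allocation failed */
  }
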
Andrew Boie 97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel object allocation is no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
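Sketch of wiring a pool to a thread (app_pool and child_tid are
illustrative names):

  K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

  /* Kernel-side allocations made on this thread's behalf now draw
   * from app_pool instead of the system heap. */
  k_thread_resource_pool_assign(child_tid, &app_pool);
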
Andrew Boie e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as also reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
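The self-revocation path, sketched (assuming the new call is
k_object_release(); my_sem is an object the caller was granted):

  /* Drop only the calling thread's own access; other threads'
   * permissions are untouched. */
  k_object_release(&my_sem);
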
Andrew Boie a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
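Sketch of the pool-directed malloc (my_pool and the sizes are
illustrative; freeing should work as for k_malloc() pointers):

  K_MEM_POOL_DEFINE(my_pool, 16, 256, 8, 4);

  void *buf = k_mem_pool_malloc(&my_pool, 100);

  if (buf != NULL) {
          /* ... use buf ... */
          k_free(buf);
  }
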
Ramakrishna Pallala 149a3296ab kernel: Deprecate k_call_stacks_analyze() API
Deprecate the k_call_stacks_analyze() API, as it only
dumps (prints) the statically defined main, idle,
work and ISR stacks.

Use the k_thread_foreach() API instead, which is a generic way
to iterate over threads.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Ramakrishna Pallala 110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
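Sketch of the iteration (dump_cb is an illustrative callback):

  static void dump_cb(const struct k_thread *thread, void *user_data)
  {
          ARG_UNUSED(user_data);

          /* inspect thread state, priority, stack info, etc. here */
  }

  k_thread_foreach(dump_cb, NULL);
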
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Rajavardhan Gundi 68040c8d78 kernel: sem: Modify the way BUILD_ASSERT is used
The BUILD_ASSERT() macro makes use of __COUNTER__, which may not be
supported by some compilers (like xcc). Multiple uses of
BUILD_ASSERT() in the same scope are therefore not possible for such
compilers. Instead, the expressions passed to BUILD_ASSERT() can be
"&&"ed together to achieve the same purpose.

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-04-30 16:46:14 -04:00
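The pattern, with illustrative conditions (CONFIG_FOO is invented):

  /* Two assertions in one scope expand __COUNTER__ twice: */
  BUILD_ASSERT(CONFIG_FOO >= 1);
  BUILD_ASSERT(CONFIG_FOO <= 8);

  /* One "&&"ed assertion works on compilers without __COUNTER__: */
  BUILD_ASSERT((CONFIG_FOO >= 1) && (CONFIG_FOO <= 8));
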
Leandro Pereira c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
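The shape of such a check, sketched with an invented my_driver_api
(not the commit's actual handler code):

  static inline int my_read(struct device *dev)
  {
          const struct my_driver_api *api = dev->driver_api;

          /* Fail cleanly instead of branching to a NULL (i.e.
           * 0x00000000) function pointer on a user thread's behalf. */
          if (api->read == NULL) {
                  return -ENOTSUP;
          }

          return api->read(dev);
  }
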
Andrew Boie 31bdfc014e userspace: add support for dynamic kernel objects
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.

Currently, k_object_alloc() and k_object_free() are supervisor-only.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-24 12:27:54 -07:00
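Sketch of the new allocator pair (supervisor mode only for now):

  /* Allocate a semaphore object whose metadata lands in the
   * kernel's red-black tree of dynamic objects. */
  struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

  if (sem != NULL) {
          k_sem_init(sem, 0, 1);
          /* ... */
          k_object_free(sem);
  }
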
Leandro Pereira f5f95ee3a9 kernel: sem: Ensure that initial count is less than or equal to the limit
Ensure this value during static initialization (with build assertions),
and dynamic initializations through system calls.

If the initial count is larger than the limit, it's possible for the
count to wrap around, causing locking issues.

Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:04:36 +05:30
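Illustrative effect of the static check:

  K_SEM_DEFINE(ok_sem, 0, 1);   /* count <= limit: compiles */
  K_SEM_DEFINE(bad_sem, 2, 1);  /* count > limit: now a build error */
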
Andy Ross 3f55dafebc kernel: Deprecate k_thread_cancel() API
The only difference between this call and k_thread_abort() (beyond
some minor performance deltas) is that "cancel" will act as a no-op
and return an error in cases where the thread has already begun
execution.
"Abort" always succeeds, of course.  That is inherently racy when used
as a "stop the thread" API: there's no way in general (or at all in
SMP situations) to know that you're calling this function "early
enough" to catch the thread before it starts.

Effectively, all k_thread_cancel() gives you that k_thread_abort()
doesn't is an indication about whether or not a thread has started.
There are many other ways to get that information that don't require
dangerous kernel APIs.

Deprecate this function.  Zephyr's own code never used it except for
its own unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Michael Hope 5f67a6119d include: improve compatibility with C++ apps.
This patch lets a C++ application use more of Zephyr by adding guards
and changing some constructs to their C++11 equivalents.

Changes include:

- Adding guards
- Switching to static_assert
- Switching to a template for ARRAY_SIZE as g++ doesn't have the
  builtin.
- Re-ordering designated initialisers to match the struct field order
  as G++ only supports simple designated initialisers.

Signed-off-by: Michael Hope <mlhx@google.com>
2018-04-09 23:21:52 -04:00
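The guard pattern being added, for reference:

  #ifdef __cplusplus
  extern "C" {
  #endif

  /* ... C declarations usable from C++ ... */

  #ifdef __cplusplus
  }
  #endif
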
David B. Kinder 3314c3675f doc: misspellings in public API doxygen comments
occasional spelling-check pass found some misspellings

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-04-05 19:16:24 -04:00
Andrew Boie aa6de29c4b lib: user mode compatible mempools
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.

However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.

The sys_mem_pool implementation has the following differences:

* The alloc/free APIs work directly with pointers; no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.

* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since this kind
of concurrency control has no implications for system responsiveness.

* sys_mem_pools do not support the notion of timeouts for requesting
memory.

* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-05 07:03:05 -07:00
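Sketch of the pointer-only API (my_pool is assumed to be defined
elsewhere via SYS_MEM_POOL_DEFINE(), whose argument list is omitted
here):

  extern struct sys_mem_pool my_pool;

  void *p = sys_mem_pool_alloc(&my_pool, 64);

  if (p != NULL) {
          /* freeing needs only the pointer, not the pool */
          sys_mem_pool_free(p);
  }
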
Youvedeep Singh 188c1ab5ca kernel: msg_q: Add routine to fetch basic attrs from message queue.
For the POSIX layer implementation of message queues, we need to
fetch the basic attributes of a message queue. Currently such a
routine is not present in Zephyr, so add one to the message queue API.

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
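Sketch of the fetch routine (my_msgq is illustrative; the attrs
field names are assumed from the contemporaneous k_msgq_attrs
struct):

  struct k_msgq_attrs attrs;

  k_msgq_get_attrs(&my_msgq, &attrs);
  /* attrs.msg_size, attrs.max_msgs and attrs.used_msgs now mirror
   * the queue's configuration and occupancy. */
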
Ramakrishna Pallala 2b8cf4c98e include: kernel: Fix documentation for TICKLESS_KERNEL APIs
Fix documentation scope for TICKLESS_KERNEL APIs.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Ramakrishna Pallala 92489ea4dd include: kernel: Fix typo in fifo API description
Fix typo FIF -> FIFO in API description of k_fifo_peek_head

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Anas Nashif 954d550364 kernel: api: mark internal functions as such
Add @internal doxygen command to mark internal functions.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif 585fd1faec doc: kernel: capitalize Fifo/Lifo
Capitalise Fifo and Lifo in documentation; those are acronyms and need
to be all in caps.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif 166f5194ae kernel: api: fix doxygen group ending
Extra comments after @} were showing up in the next section details.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Adithya Baglody 3a6d72ecde kernel: mem_domain: k_mem_partition is now placed in kernel memory.
The k_mem_partition structs need to be placed in the kernel memory.
This patch ensures that these structs are placed correctly.
Also, when a struct k_mem_domain is declared, it is advised to add
the __kernel attribute.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-03-20 09:19:59 -07:00
Leandro Pereira 08de658eb9 kernel: mem_domain: Use u8_t for number of partitions in struct
During system initialization, the global static variable (to
mem_domain.c) is initialized with the number of maximum partitions per
domain.  This variable is of u8_t type.

Assertions throughout the code will check ranges and test for overflow
by relying on implicit type conversion.

Use a u8_t instead of a u32_t to avoid any doubt.  Also, reorder the
k_mem_partition struct to remove the alignment hole created by reducing
sizeof(num_partitions).

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-02 07:08:49 +01:00