Commit graph

464 commits

Author SHA1 Message Date
Ioannis Glaropoulos 293247e879 kernel: remove MEM_PARTITION_ENTRY macro
MEM_PARTITION_ENTRY is problematic, as it assumes that
struct k_mem_partition contains a k_mem_partition_attr_t
field, which is only true if Memory Protection is supported.
Additionally, it only works when k_mem_partition_attr_t is a
single-element object (a scalar or a single-element structure).
This commit removes the macro function and updates
K_MEM_PARTITION_DEFINE(), the only place where MEM_PARTITION_ENTRY
was used.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-12-05 15:15:07 -05:00
Flavio Ceolin 82ef4f8ec4 kernel: Make boolean function return bool
MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Andrew Boie 2b1d54e897 kernel: add user mode work_q capability
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Andrew Boie c2e01dff3f workqueues: don't put k_work in special section
There's no current need for this and it makes work items
declared with K_WORK_DEFINE() inaccessible to user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Andrew Boie 8acf899a0d workqueues: remove object init calls
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-29 09:21:18 -08:00
Sathish Kuttan 3efd8e17bd kernel: Add k_msgq_peek() API
Add an API to peek into a message queue and read the first message
without removing the message from the queue.
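
A minimal usage sketch (the queue name and message layout are illustrative, not part of the commit):

        #include <zephyr.h>
        #include <stdbool.h>

        struct sensor_msg {
                u32_t timestamp;
                s16_t reading;
        };

        K_MSGQ_DEFINE(sensor_q, sizeof(struct sensor_msg), 16, 4);

        /* Copy the head of the queue without consuming it; a later
         * k_msgq_get() still returns the same message.
         */
        bool peek_next_reading(s16_t *reading)
        {
                struct sensor_msg msg;

                if (k_msgq_peek(&sensor_q, &msg) != 0) {
                        return false;          /* queue is empty */
                }

                *reading = msg.reading;
                return true;
        }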

Signed-off-by: Sathish Kuttan <sathish.k.kuttan@intel.com>
2018-11-19 17:53:22 -05:00
Andrew Boie 42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
Flavio Ceolin aecd4ecb8d kernel: Change k_poll_signal api
The name k_poll_signal was being used by both a struct and a function.
Besides being extremely error prone, this is also a MISRA-C violation.
Change the function name to contain a verb, since it performs an action,
while the struct keeps the noun. This pattern must be formalized and
followed across the project.

MISRA-C rules 5.7 and 5.9
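
A short sketch of the renamed call (the signal object and result value are illustrative):

        #include <zephyr.h>

        static struct k_poll_signal done_signal;

        void producer_init(void)
        {
                k_poll_signal_init(&done_signal);
        }

        void producer_done(int status)
        {
                /* The function name now carries a verb ("raise"); the struct
                 * keeps the noun name, struct k_poll_signal.
                 */
                k_poll_signal_raise(&done_signal, status);
        }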

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin 61a1057ea5 kernel: Remove redundant type name
struct k_thread already has a pointer type, k_tid_t; there is no need
for the additional tcs definition.

Fewer symbols/names make the code cleaner and more readable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Adithya Baglody 2a78b8d86f kernel: queue: MISRA C compliance.
This patch fixes a few issues in queue.c. This patch also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-31 08:44:47 -04:00
Piotr Zięcik 7700eb2a15 kernel: sched: Make k_sleep() similar to POSIX equivalent
This commit introduces a k_sleep() return value, which provides
information about the actual sleep time. If the returned value is
non-zero, the thread slept for less time than requested, which is
only possible if the thread was woken up by a k_wakeup() call.
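
A sketch of using the new return value, assuming the millisecond-based k_sleep() signature of this era:

        #include <zephyr.h>

        s32_t wait_for_next_cycle(void)
        {
                /* Sleep for up to 500 ms.  A non-zero return means another
                 * thread called k_wakeup() on us first, and tells us how many
                 * milliseconds of the request were left.
                 */
                return k_sleep(500);
        }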

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-10-30 18:27:31 +01:00
Benoit Leforestier 26e0f9a9e1 Build: Improve C++ support
Can choose the C++ standard (C++98/11/14/17/2a)
Can link with standard C++ library (libstdc++)
Add support of C++ exceptions
Add support of C++ RTTI
Add C++ options to subsys/cpp/Kconfig
Implements new and delete using k_malloc and k_free
if CONFIG_HEAP_MEM_POOL_SIZE is defined

Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
2018-10-29 09:15:04 -04:00
Adithya Baglody 4b066212b6 kernel: sem: Fix few MISRA C violations.
This patch fixes a few of the violations inside sem.c

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Adithya Baglody 28080d3896 kernel: MISRA C: Fixes a few MISRA C issues.
MISRA C guideline compliance for various rules.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Adithya Baglody d591588ab5 kernel: MISRA C guideline compliance for rule 11.6
This patch removes the typecast to (void *). This is better
handled by casting to the actual typedef. This fixes MISRA
rule 11.6 for alert.

Part of GH-10042.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Andy Ross cfe62038d2 kernel: Checkpatch fixups
I was pretty careful, but these snuck in.  Most of them are due to
overbroad string replacements in comments.  The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed (see the sketch after the
advantages list below).  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one-liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.
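
The delta-list idea can be sketched as follows; the types and names are purely illustrative and are not the kernel's internal code:

        #include <zephyr/types.h>

        struct timeout_node {
                struct timeout_node *next;
                s32_t delta;    /* ticks after the previous node expires */
                void (*fn)(struct timeout_node *t);
        };

        /* Announce that 'ticks' ticks have elapsed: pop every head whose
         * delta has been consumed and run its callback, then charge the
         * remainder to the new head.
         */
        static void announce(struct timeout_node **list, s32_t ticks)
        {
                while (*list != NULL && ticks >= (*list)->delta) {
                        struct timeout_node *t = *list;

                        ticks -= t->delta;
                        *list = t->next;
                        t->fn(t);
                }

                if (*list != NULL) {
                        (*list)->delta -= ticks;
                }
        }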

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to the scheme used in the rest of the timeout API.  And
rename it to a more standard Zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross d61b1f8ef8 kernel/timeout: Remove timeout wait_q field
Per previous patch, this is known to be identical with
thread->pended_on.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 15d520819d kernel/timeout: Prepare unification of timeout/thread wait_q fields
The existing timeout API wants to store a wait_q on which the thread
is waiting, but it only uses that value in one spot (and there only as
a boolean flag indicating "this thread is waiting on a wait_q").

As it happens threads can already store their own backpointers to a
wait_q (needed for the SCALABLE scheduler backend), so we should use
that instead.

This patch doesn't actually perform that unification yet.  It
reorganizes things such that the pended_on field is always set at the
point of timeout interaction, and adds a bunch of asserts to make 100%
sure the logic is correct.  The next patch will modify the API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross b8ffd9acd6 sys_clock: Make clock_always_on true by default
This flag is an indication to the timer driver that the OS doesn't
care about rollover conditions of the tick count while idling, so the
system doesn't need to wake up once per counter flip[1].  Obviously in
that circumstance values returned from k_uptime_get_32() are going to
be wrong, so the implementation had an assert to check for misuse.

But no one understood that from the docs, so the only place these APIs
were used in practice was as "guards" around code that needed to call
k_uptime_get_32(), even though that's 100% wrong per the docs!

Clarify the docs.  Remove the incorrect guards.  Change the flag to
initialize to true so that uptime isn't broken-by-default in tickless
mode.  Also move the implementations of the functions out of the
header, as there's no good reason for these to need to be inlined.

[1] Which can be significant.  A 100MHz ARM using the 24 bit SysTick
    counter rolls over at about 6 Hz, and if it had to come out of
    idle at that rate it would be a significant power issue that would
    swamp the gains from tickless.  Obviously systems with slow
    counters like nRF or 64 bit ones like RISC-V or x86's TSC aren't
    as affected.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 0d1228af36 kernel.h: Header hygiene, move clock/timer handling
The kernel.h file had a bunch of internal APIs for timeout/clock
handling mixed in.  Move these to sys_clock.h, which kernel.h always
included anyway (in a weird location, so move THAT include to
kernel_includes.h with everything else).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Anas Nashif c77c043071 kernel: remove deprecated k_thread_cancel
Remove deprecated function k_thread_cancel. We now use k_thread_abort.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-10-09 13:58:01 -04:00
Dhananjay Gundapu Jayakrishnan 24bfa40964 kernel: k_queue: extend k_queue API to append unique element
k_queue has a k_queue_append API which does not check whether the
element's address already exists in the queue. This creates a problem
if the same element address is appended: it forms a circular list,
causing unintended behaviour for the application using the queue. The
proposed API k_queue_find_and_append takes care of checking whether the
element already exists before appending. This API is complementary to
k_queue_remove, which checks whether the queue element is present
before removing it.
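
A hedged sketch of the proposed call, assuming it mirrors the k_queue_append() signature and reports whether the element was actually appended:

        #include <zephyr.h>

        struct work_item {
                void *queue_reserved;   /* first word reserved for the queue */
                int id;
        };

        static struct k_queue pending;
        static struct work_item item_a;

        void enqueue_once(void)
        {
                k_queue_init(&pending);
                k_queue_append(&pending, &item_a);

                /* A second plain append of the same node would create a cycle
                 * in the list; the checking variant refuses the duplicate.
                 * (Return convention assumed here: non-zero when appended.)
                 */
                if (!k_queue_find_and_append(&pending, &item_a)) {
                        /* already queued, nothing to do */
                }
        }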

Signed-off-by: Dhananjay Gundapu Jayakrishnan <dhananjay.jayakrishnan@proglove.de>
2018-10-08 12:59:12 -04:00
Mark Ruvald Pedersen 9960bd9545 portability: Ensure no C99-illegal semicolons exists in structs
Macro _OBJECT_TRACING_NEXT_PTR expands to a member or to nothing.
Macro _OBJECT_TRACING_NEXT_PTR is used in a number of places, like:

        struct k_stack {
                .. omitted ..
                _OBJECT_TRACING_NEXT_PTR(k_stack);
                u8_t flags;
        };

When the macro expands to nothing, a lonesome semi would remain. This is
illegal in C99, but permitted in GCC with GNU extensions.

Rather than expand to empty, we now expand to a zero-length array.
This means we can retain the trailing semis across structs wherein the
macro is used.

Note that zero-length array (foo[0]) != flexible array member (foo[]):
 * zero-length array: Is GNU+Clang extension. Anywhere in struct.
 * flexible array member: Is C99. Only in end of struct.

Thus we have really only traded off one portability issue for
another, more acceptable one.
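
A simplified sketch of the pattern (not the exact Zephyr macro): the disabled branch expands to a zero-length array so the trailing semicolon stays legal:

        #include <zephyr/types.h>

        #ifdef CONFIG_OBJECT_TRACING
        #define _OBJECT_TRACING_NEXT_PTR(type) struct type *__next
        #else
        /* zero-length array keeps "MACRO(...);" valid when tracing is off */
        #define _OBJECT_TRACING_NEXT_PTR(type) u8_t __object_tracing_unused[0]
        #endif

        struct my_traced_obj {
                _OBJECT_TRACING_NEXT_PTR(my_traced_obj);
                u8_t flags;
        };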

Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
2018-09-28 07:57:28 +05:30
Flavio Ceolin 02ed85bd82 kernel: sched: Change boolean APIs to return bool
Change APIs that essentially return a boolean expression  - 0 for
false and 1 for true - to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Ioannis Glaropoulos 12c02448aa arch: arm: style fixes in documentation of MPU region types
Some minor style fixes and rewording of the documentation
for ARM MPU region types.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-09-27 08:10:02 -05:00
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Added k_thread_name_set() and enabled thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
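
A sketch of naming a dynamically created thread (stack size, priority and entry function are illustrative):

        #include <zephyr.h>

        #define WORKER_STACK_SIZE 1024

        K_THREAD_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);
        static struct k_thread worker_thread;

        static void worker_entry(void *p1, void *p2, void *p3)
        {
                ARG_UNUSED(p1);
                ARG_UNUSED(p2);
                ARG_UNUSED(p3);
        }

        void start_worker(void)
        {
                k_tid_t tid = k_thread_create(&worker_thread, worker_stack,
                                              K_THREAD_STACK_SIZEOF(worker_stack),
                                              worker_entry, NULL, NULL, NULL,
                                              5, 0, K_NO_WAIT);

                /* Only effective when CONFIG_THREAD_MONITOR is enabled */
                k_thread_name_set(tid, "worker");
        }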

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Anas Nashif 3a117c220a kernel: remove unused macro parameter
Group parameter was not used anywhere.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
Anas Nashif 0a73ea04fa kernel: remove deprecated k_call_stacks_analyze
This API was deprecated and is no longer used in the tree, so
remove it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-21 10:33:05 -04:00
Flavio Ceolin 67ca176754 headers: Fix headers across the project
Any identifier starting with an underscore followed by an uppercase
letter or a second underscore is reserved according to C99.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
Vinayak Kariappa Chettimada c7d2734455 kernel: Improve precision of ticks and ms conversions
The following 2 improvements are contained in this patch:

- When converting from ms to ticks, instead of using hardware cycles
  per tick, use hardware cycles per second. This ensures that the
  multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
  and cycles per sec, use ticks per sec. This too increases the
  precision.

The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
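
The ordering matters because integer division truncates; a sketch of the two orderings (the helper names are illustrative, not the kernel's internal functions):

        #include <zephyr/types.h>

        /* Naive: dividing first throws the fractional part away early (and
         * divides by zero once there are more than 1000 ticks per second).
         */
        static u32_t ms_to_ticks_naive(u32_t ms, u32_t cyc_per_sec,
                                       u32_t cyc_per_tick)
        {
                u32_t ms_per_tick = 1000U / (cyc_per_sec / cyc_per_tick);

                return ms / ms_per_tick;
        }

        /* Precise: multiply first, divide last, so truncation happens only
         * once at the very end.
         */
        static u32_t ms_to_ticks_precise(u32_t ms, u32_t cyc_per_sec,
                                         u32_t cyc_per_tick)
        {
                return (u32_t)(((u64_t)ms * cyc_per_sec) /
                               ((u64_t)cyc_per_tick * 1000U));
        }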

Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-08-31 11:14:39 -04:00
Paul Sokolovsky 45c0b20470 kernel: k_poll: Introduce separate status for cancelled events
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or subclass like fifo), and wait was cancelled on queue's
side using k_queue_cancel_wait(), k_poll returned -EINTR. But it
did not set event->state field (to anything else but
K_POLL_STATE_NOT_READY), so in case of waiting on multiple queues,
it was not possible to differentiate which of them was cancelled.

This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.

This situation is now resolved with introduction of explicit
K_POLL_STATE_CANCELLED state, which is now set for cancelled queue
(-EINTR return remains the same).

This change also elaborates docstring for the functions mentioned, to
document this behavior.
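
A sketch of the distinction this enables when waiting on two queues (names are illustrative):

        #include <zephyr.h>

        void wait_on_two(struct k_queue *q1, struct k_queue *q2)
        {
                struct k_poll_event events[2];

                k_poll_event_init(&events[0], K_POLL_TYPE_DATA_AVAILABLE,
                                  K_POLL_MODE_NOTIFY_ONLY, q1);
                k_poll_event_init(&events[1], K_POLL_TYPE_DATA_AVAILABLE,
                                  K_POLL_MODE_NOTIFY_ONLY, q2);

                if (k_poll(events, 2, K_FOREVER) != -EINTR) {
                        return;
                }

                /* -EINTR alone no longer hides which side was cancelled */
                for (int i = 0; i < 2; i++) {
                        if (events[i].state == K_POLL_STATE_CANCELLED) {
                                /* queue i was ended via k_queue_cancel_wait() */
                        }
                }
        }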

Fixes: #9032

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-08-30 09:28:29 -04:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Daniel Leung fc182430c0 kernel: userspace: reserve stack space to store local data
This enables reserving a little space at the top of the stack to store
data local to the thread when CONFIG_USERSPACE is enabled. The first
customer of this is errno.

Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common way.

Fixes: #9067

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-17 09:40:52 -07:00
Flavio Ceolin 8aec087268 kernel: Fix bitwise operators with unsigned operands
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Flavio Ceolin f23a8cdd2d kernel: Fix k_*_sys_clock_always_on macro
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation when
TICKLESS_KERNEL is not available, but this macro does not return the
same value the functions return, so its use may result in a
compilation error.

Another point to consider is that if one is using this function
without the feature being enabled, it is better to return a proper
error like -ENOTSUP, explicitly saying that this is not supported.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Piotr Zięcik 3c7f990367 kernel: Do not use sys_clock_ticks_per_sec in _ms_to_ticks()
The value of sys_clock_ticks_per_sec is obtained using
simple integer division with rounding toward zero. As a result,
using this variable in _ms_to_ticks() introduces some error.

This commit eliminates sys_clock_ticks_per_sec from the equation
used in _ms_to_ticks(), removing the introduced error.

Also, this commit fixes #8895.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-08-14 07:18:44 -07:00
Kumar Gala 8777ff1304 Fix compile errors related to errno.h
Because errno.h is defined in terms of a syscall, we can get into
trouble when one syscall/<FOO.h> ends up including another
syscall/<BAR.h>.

Moving errno.h from kernel_includes.h to kernel.h breaks the possible
inclusion cycle on some ARM platforms (where arm_mpu.h ends up
including soc.h, which ends up including kernel_includes.h, which would
include errno.h).

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-07-25 23:38:13 -04:00
Piotr Zięcik 96aa0d2133 kernel: Use accurate tick/ms conversion if clock rate is set at runtime
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if system clock rate is determined at runtime.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-20 10:17:47 -04:00
Andrew Boie 7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since this is guaranteed to be read/writable by
a user thread.

The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
Piotr Zięcik 77f42f8312 kernel: Move _ms_to_ticks() and __ticks_to_ms() close to each other.
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik 91fe22ec7d kernel: Improve tick <-> ms conversion.
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick rates
(like 100). As a result, on some SoCs time calculations were not
correct, producing strange effects (invalid sleep times, incorrect
k_uptime_get(), etc.).

This commit enables accurate, but costly (using 64-bit math) tick <-> ms
conversion if the selected tick interval is not exact due to hardware
limitations.

Also, this commit fixes tests in which the removed _ms_per_tick was used.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik e995c27b42 kernel: Do not use fixed list of "good" sys_clock_ticks_per_sec values.
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values whose usage results in an integer _ms_per_tick value.

Instead of using the list, simply check whether MSEC_PER_SEC can be
divided without remainder by sys_clock_ticks_per_sec.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik fe2ac39bf2 kernel: Cleanup _ms_to_ticks().
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Andy Ross 225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Rajavardhan Gundi d4dd928eaa kernel/stack: Introduce K_THREAD_STACK_LEN macro
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. This is necessitated because of some
internal padding (e.g. for MPU scenarios). This is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
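
A sketch of the use case described, keeping per-element bookkeeping consistent with the padded element size (names are illustrative):

        #include <zephyr.h>

        #define NUM_WORKERS      3
        #define WORKER_STACK_SZ  1024

        K_THREAD_STACK_ARRAY_DEFINE(worker_stacks, NUM_WORKERS, WORKER_STACK_SZ);

        struct worker_pool {
                struct k_thread threads[NUM_WORKERS];
                /* Each element of worker_stacks really occupies
                 * K_THREAD_STACK_LEN(WORKER_STACK_SZ) bytes once internal
                 * (e.g. MPU) padding is added, so offsets must use it too.
                 */
                u8_t *stack_base[NUM_WORKERS];
        };

        void pool_init(struct worker_pool *pool)
        {
                for (int i = 0; i < NUM_WORKERS; i++) {
                        pool->stack_base[i] = (u8_t *)worker_stacks +
                                i * K_THREAD_STACK_LEN(WORKER_STACK_SZ);
                }
        }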

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-07-03 08:44:09 -07:00
Ioannis Glaropoulos 92b8a41f20 include: create kernel_includes.h header to hold kernel includes
This commit creates a new header file (kernel_includes.h) that
contains all header files to be included by kernel_init.h.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-06-21 22:28:00 +02:00
Andy Ross 55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll.  The disadvantage is the 4
bytes of thread space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so no need for one of the irq lock/unlock pairs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira 0e23ad889e kernel: k_work: k_work_init() should initialize all fields
k_work_init() was not initializing all fields in the k_work struct.

Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member.  The `_reserved` member was never initialized.

With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.

This fixes the Valgrind issues found in #7478.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-06-05 10:26:59 -04:00
Andrew Boie b85ac3e58f kernel: clarify thread->stack_info documentation
ARC/ARM are not properly doing this at the moment but this will
be corrected in later patches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Andrew Boie e2d779159f kernel: update stack macro documentation
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Anas Nashif 47420d04f0 doc: add requirement IDs
Add requirement IDs for traceability.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif a541e93d9a doc: document thread options
Add doxygen documentation to thread options.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif ce78d16b73 doc: document kernel APIs with doxygen
Document a few structs and cleanup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Andrew Boie df55524d6a userspace: align _k_object to 4 bytes
We want the struct to be packed to conserve space, but the
perms field needs to always be on a 4-byte boundary since
we do bitfield operations on it; arches like ARC require
that the sys_bitfield_* operations be aligned to a 4-byte
boundary.

Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.

Fixes: #7776

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-25 13:25:04 -07:00
Anas Nashif c8e0d0cebc kernel: add requirement Ids to implementation
Add requirement ID placeholders based on APIs. The requirements will
appear as a list in doxygen documentation. The IDs will be expanded with
more details somewhere else, probably a requirement catalog on GH or
some other requirement management tool. This is still TBD.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-23 18:58:03 -04:00
David B. Kinder fcbd8fb631 doc: fix misspellings in API doxygen comments
Found some misspellings missed during normal code reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-23 15:28:01 -05:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
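
A sketch of the single new call (deadline scheduling enabled in Kconfig; the cycle count is illustrative):

        #include <zephyr.h>

        /* Among ready threads of equal priority, the one with the nearest
         * deadline (a delta from "now", in hardware cycles) is picked first.
         */
        void run_soonest(k_tid_t tid, int cycles_from_now)
        {
                k_thread_deadline_set(tid, cycles_from_now);
        }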

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 3772f77119 k_poll: expose to user mode
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array; this can be large enough to make allocating it on the stack
not preferable.

k_poll_signal are now proper kernel objects. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.
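
A sketch of the two new signal calls described above (check the state and result, then reset so the next raise is seen as new):

        #include <zephyr.h>

        void consume_signal(struct k_poll_signal *sig)
        {
                unsigned int signaled;
                int result;

                k_poll_signal_check(sig, &signaled, &result);
                if (signaled) {
                        /* use 'result', then clear the signaled state */
                        k_poll_signal_reset(sig);
                }
        }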

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie 2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Kumar Gala 85699f7c6f kernel: Fix compile warning with _impl_k_object_alloc
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:

include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
 static inline void *_impl_k_object_alloc(enum k_objects otype)
                                                         ^~~~~
Simple fix is to ARG_UNUSED otype.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-17 13:06:48 -05:00
Andrew Boie f3bee951b1 kernel: stacks: add k_stack_alloc() init
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.

Fixes #7285

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 0fe789ff2e kernel: add k_msgq_alloc_init()
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
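
A sketch of the allocating initializer; the ring buffer comes from the calling thread's resource pool rather than from a caller-supplied pointer:

        #include <zephyr.h>

        static struct k_msgq rx_q;

        int rx_q_setup(void)
        {
                /* 32 messages of 4 bytes each; returns 0 on success or a
                 * negative errno if the resource pool cannot cover the buffer.
                 */
                return k_msgq_alloc_init(&rx_q, sizeof(u32_t), 32);
        }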

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as also reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.
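
A sketch of the pool-directed allocation (the pool geometry is illustrative):

        #include <zephyr.h>

        /* min block 64 B, max block 4096 B, 4 max-size blocks, 4-byte aligned */
        K_MEM_POOL_DEFINE(app_pool, 64, 4096, 4, 4);

        void *grab(size_t n)
        {
                /* Like k_malloc(), but drawing from app_pool instead of the
                 * kernel heap; the block is still released with k_free().
                 */
                return k_mem_pool_malloc(&app_pool, n);
        }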

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Ramakrishna Pallala 149a3296ab kernel: Deprecate k_call_stacks_analyze() API
Deprecated the k_call_stacks_analyze() API as it only dumps
(prints) the statically defined main, idle, work and ISR stacks.

Use k_thread_foreach() API which is a generic API
to iterate over threads.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Ramakrishna Pallala 110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.
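
A sketch of iterating over all threads with the new API (the counting callback is illustrative):

        #include <zephyr.h>

        static void count_one(const struct k_thread *thread, void *user_data)
        {
                int *count = user_data;

                ARG_UNUSED(thread);
                (*count)++;
        }

        int thread_count(void)
        {
                int count = 0;

                k_thread_foreach(count_one, &count);
                return count;
        }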

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Rajavardhan Gundi 68040c8d78 kernel: sem: Modify the way BUILD_ASSERT is used
The BUILD_ASSERT() macro makes use of __COUNTER__, which may not be
supported by some compilers (like xcc), so multiple uses of
BUILD_ASSERT() in the same scope are not possible for such compilers.
Instead, the expressions passed to BUILD_ASSERT() can be "&&"ed to
achieve the same purpose.

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-04-30 16:46:14 -04:00
Leandro Pereira c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andrew Boie 31bdfc014e userspace: add support for dynamic kernel objects
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.

Currently, k_object_alloc() and k_object_free() are supervisor-only.
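
A sketch of dynamic object allocation as described (supervisor-only at this point):

        #include <zephyr.h>

        struct k_sem *make_sem(void)
        {
                /* Metadata lands in the kernel's red-black tree; the memory
                 * itself comes from the system heap.
                 */
                struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

                if (sem != NULL) {
                        k_sem_init(sem, 0, 1);
                }

                return sem;
        }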

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-24 12:27:54 -07:00
Leandro Pereira f5f95ee3a9 kernel: sem: Ensure that initial count is lesser or equal than limit
Ensure this invariant during static initialization (with build
assertions) and in dynamic initializations through system calls.

If initial count is larger than the limit, it's possible for the count
to wraparound, causing locking issues.

Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:04:36 +05:30
Andy Ross 3f55dafebc kernel: Deprecate k_thread_cancel() API
The only difference between this call and k_thread_abort() (beyond
some minor performance deltas) is that "cancel" will act as a noop in
cases where the thread has begun execution and will return an error.
"Abort" always succeeds, of course.  That is inherently racy when used
as a "stop the thread" API: there's no way in general (or at all in
SMP situations) to know that you're calling this function "early
enough" to catch the thread before it starts.

Effectively, all k_thread_cancel() gives you that k_thread_abort()
doesn't is an indication about whether or not a thread has started.
There are many other ways to get that information that don't require
dangerous kernel APIs.

Deprecate this function.  Zephyr's own code never used it except for
its own unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Michael Hope 5f67a6119d include: improve compatibility with C++ apps.
This patch lets a C++ application use more of Zephyr by adding guards
and changing some constructs to the C++11 equivalent.

Changes include:

- Adding guards
- Switching to static_assert
- Switching to a template for ARRAY_SIZE as g++ doesn't have the
  builtin.
- Re-ordering designated initialisers to match the struct field order
  as G++ only supports simple designated initialisers.

Signed-off-by: Michael Hope <mlhx@google.com>
2018-04-09 23:21:52 -04:00
David B. Kinder 3314c3675f doc: misspellings in public API doxygen comments
occasional spelling-check pass found some misspellings

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-04-05 19:16:24 -04:00
Andrew Boie aa6de29c4b lib: user mode compatible mempools
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.

However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.

The sys_mem_pool implementation has the following differences:

* The alloc/free APIs work directly with pointers, no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.

* k_mem_pool uses IRQ locks and required very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there aren't
implications for system responsiveness with this kind of concurrency
control.

* sys_mem_pools do not support the notion of timeouts for requesting
memory.

* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-05 07:03:05 -07:00
Youvedeep Singh 188c1ab5ca kernel: msg_q: Add routine to fetch basic attrs from message queue.
For the POSIX layer implementation of message queues, we need to fetch
the basic attributes of a message queue. Currently this routine is not
present in Zephyr, so add it to the message queue API.

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
Ramakrishna Pallala 2b8cf4c98e include: kernel: Fix documentation for TICKLESS_KERNEL API's
Fix documentation scope for TICKLESS_KERNEL API's.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Ramakrishna Pallala 92489ea4dd include: kernel: Fix typo in fifo API description
Fix typo FIF -> FIFO in API description of k_fifo_peek_head

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Anas Nashif 954d550364 kernel: api: mark internal functions as such
Add @internal doxygen command to mark internal functions.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif 585fd1faec doc: kernel: capitalize Fifo/Lifo
Capitalise Fifo and Lifo in documentation; those are acronyms and need
to be in all caps.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif 166f5194ae kernel: api: fix doxygen group ending
Extra comments after @} were showing up in the next section details.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Adithya Baglody 3a6d72ecde kernel: mem_domain: k_mem_partition is now placed in kernel memory.
The k_mem_partition structs need to be placed in kernel memory.
This patch ensures that these structs are placed correctly.
Also, when a struct k_mem_domain is declared, it is advised to add
__kernel.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-03-20 09:19:59 -07:00
Leandro Pereira 08de658eb9 kernel: mem_domain: Use u8_t for number of partitions in struct
During system initialization, the global static variable (local to
mem_domain.c) is initialized with the maximum number of partitions per
domain.  This variable is of u8_t type.

Assertions throughout the code will check ranges and test for overflow
by relying on implicit type conversion.

Use a u8_t instead of a u32_t to avoid doubts.  Also, reorder the
k_mem_partition struct to remove the alignment hole created by reducing
sizeof(num_partitions).

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-02 07:08:49 +01:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross e717267abf kernel, esp32: Add _arch_start_cpu API
This is a mostly-internal API to start a secondary system CPU, with an
implementation for the ESP-32 "APP" cpu.  Exposed in kernel.h because
it's plausibly useful for asymmetric MP code managed by an app.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 042d8ecca9 kernel: Add alternative _arch_switch context switch primitive
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness.  This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:

    void _arch_switch(void *handle, void **old_handle_out);

The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.

The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y".  A new C _Swap() implementation based
on this primitive is then added which operates compatibly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 03c1d28e6e work_q: Correctly clear pending flag in delayed work queue, update docs
As discovered in https://github.com/zephyrproject-rtos/zephyr/issues/5952

...a duplicate call to k_delayed_work_submit_to_queue() on a work item
whose timeout had expired but which had not yet executed (i.e. it was
pending in the queue for the active work queue thread) would fail,
because the cancellation step wouldn't clear the PENDING bit, causing
the resubmission to see the object in an invalid state.  Trivially
fixed by adding a bit clear.

It also turns out that the behavior of the code doesn't match the
docs, which state that a PENDING work item is not supposed to be
cancelled at all.  Fix the docs to remove that.

And on yet further review, it turns out that there's no way to make a
test like the one in the linked bug threadsafe.  The work queue does
no synchronization by design, so if the user code does no external
synchronization it might very well clobber the running handler.  Added
a sentence to the docs to reflect this gotcha.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-13 18:08:57 -05:00
Andrew Boie ce6c8f347b dma: add system calls for dma_start/dma_stop
As per current policy of requiring supervisor mode to register
callbacks, dma_config() is omitted.

A note added about checking the channel ID for start/stop, current
implementations already do this but best make it explicitly
documented.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-02-12 19:24:25 -05:00
Johan Hedberg 7d887cb615 mempool: Add k_mem_pool_free_id API
The k_mem_pool_free API has no use for the full k_mem_block struct. In
particular, it only needs the k_mem_block_id. Introduce a new API
which takes only this essential struct. This paves the way to
simplify & improve the k_malloc/k_free implementation a bit.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2018-01-12 08:05:08 -05:00
Anas Nashif 7b9d89971b kernel: remove empty string in assert statement
This was failing with compiler warnings. Looks like latest compilers
enable warnings by default that we do not have in the current SDK.

This was failing with unit tests being built natively.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-01-09 11:13:08 -05:00
Paul Sokolovsky e25df54eae various: Update/fix some textual material and code comments.
Of these, only struct net_ipv6_nbr_data::send_ns is a descriptive
change:

send_ns is used for timing Neighbor Solicitations in general, not
just for DAD.

The rest are typo/grammar fixes.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-12-29 09:45:39 -05:00
Anas Nashif fb4eecaf5f kernel: threads: remove thread groups
We removed this feature when we moved to the unified kernel. Those
functions existed to support migration from the old kernel and can go
now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-09 08:48:51 -06:00
Kumar Gala a2caf36103 kernel: Remove deprecated k_mem_pool_defrag code
Remove references to k_mem_pool_defrag and any related bits associated
with mem_pool defrag that don't make sense anymore.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-11-28 15:23:22 -05:00
Luiz Augusto von Dentz 8beb5862c5 poll: k_poll: Document -EINTR return
In case K_POLL_STATE_NOT_READY is set, the return value will be set to
-EINTR, indicating that the poll was interrupted.

Fixes #5026

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-11-21 06:54:51 -05:00
Andy Ross 8cf7ff5e2a kernel/mem_pool: Correct n_levels computation for small blocks
The new mem pool implementation has a hard minimum block size of 8
bytes, but the macros to statically compute the number of levels
didn't clamp, leading to invalid small allocations being allowed,
which would then corrupt the list pointers of nearby blocks and/or
overflow the buffer entirely and corrupt other memory.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-11-14 09:47:19 -08:00
Andrew Boie 7f95e83361 mempool: add k_calloc()
This uses the kernel heap to implement traditional calloc()
semantics.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-13 09:50:15 -08:00
Adithya Baglody 83bedcc912 ARM: MPU: Arch specific memory domain APIs
Added architecture specific support for memory domain destroy
and remove partition for arm and nxp. An optimized version of
remove partition was also added.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Andrew Boie 43263fcf2e kernel.h: move includes to the top
We need to start enforcing everywhere that kernel.h depends on
arch/cpu.h and any header included in the arch/cpu.h space cannot
depend on kernel.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-02 13:25:01 -07:00
Leandro Pereira da9b0ddf5b drivers: Rename random to entropy
This should clear up some of the confusion with random number
generators and drivers that obtain entropy from the hardware.  Also,
many hardware number generators have limited bandwidth, so it's natural
for their output to be only used for seeding a random number generator.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-11-01 08:26:29 -04:00
Leandro Pereira adce1d1888 subsys: Add random subsystem
Some "random" drivers are not drivers at all: they just implement the
function `sys_rand32_get()`.  Move those to a random subsystem in
preparation for a reorganization.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-11-01 08:26:29 -04:00
Andrew Boie e5b3918a9f userspace: remove some driver object types
Use-cases for these  subsystems appear to be limited to board/SOC
code, network stacks, or other drivers, no need to expose to
userspace at this time. If we change our minds it's easy enough
to add them back.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-30 13:20:19 -07:00
Alberto Escolar Piedras 427397216f kernel: Preprocessor Undef warning fix in kernel.h
_POLL_NUM_TYPES & _POLL_NUM_STATES are values of an enum, which the
preprocessor does not know about.
But the first of the removed lines needs to be evaluated by the
preprocessor using them.

The result is that the preprocessor will treat _POLL_NUM_TYPES
and _POLL_NUM_STATES as 0 in that expression, which does not seem to be
the intended behavior. It will also produce 2 warnings about this in
each file which includes kernel.h (lots).

=> lines 3779-3781 are removed.

--------- The compiler warning:
include/kernel.h:3774:11: warning: "_POLL_NUM_TYPES" is not defined [-W
         + _POLL_NUM_TYPES \
           ^
include/kernel.h:3779:5: note: in expansion of macro '_POLL_EVENT_NUM_U
     ^
include/kernel.h:3775:11: warning: "_POLL_NUM_STATES" is not defined [-
         + _POLL_NUM_STATES \
           ^
include/kernel.h:3779:5: note: in expansion of macro '_POLL_EVENT_NUM_U
     ^
--------

Signed-off-by: Alberto Escolar Piedras <alpi@oticon.com>
2017-10-27 10:18:26 -07:00
Andrew Boie e12857aabf kernel: add k_thread_access_grant()
This is a runtime counterpart to K_THREAD_ACCESS_GRANT().
This function takes a thread and a NULL-terminated list of kernel
objects and runs k_object_access_grant() on each of them.
This function doesn't require any special permissions and doesn't
need to become a system call.

__attribute__((sentinel)) added to warn users if they omit the
required NULL termination.
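
A sketch of the runtime grant described above (the objects are illustrative; note the NULL terminator the sentinel attribute checks for):

        #include <zephyr.h>

        K_SEM_DEFINE(my_sem, 0, 1);
        K_MUTEX_DEFINE(my_mutex);

        void grant_worker_access(struct k_thread *worker)
        {
                /* Grant 'worker' access to each listed object; the list must
                 * end with NULL.
                 */
                k_thread_access_grant(worker, &my_sem, &my_mutex, NULL);
        }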

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-18 07:37:38 -07:00
Andrew Boie 877f82e847 userspace: add K_THREAD_ACCCESS_GRANT()
It's possible to declare static threads that start up as K_USER,
but these threads can't do much since they start with permissions on
no kernel objects other than their own thread object.

Rather than do some run-time synchronization to have some other thread
grant the necessary permissions, we introduce macros
to conveniently assign object permissions to these threads when they
are brought up at boot by the kernel. The tables generated here
are constant and live in ROM when possible.

Example usage:

K_THREAD_DEFINE(my_thread, STACK_SIZE, my_thread_entry,
                NULL, NULL, NULL, 0, K_USER, K_NO_WAIT);

K_THREAD_ACCESS_GRANT(my_thread, &my_sem, &my_mutex, &my_pipe);

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-18 07:37:38 -07:00
Andrew Boie c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer. However, this
isn't correct: stacks are defined as arrays. Extern references to
k_thread_stack_t don't work properly, as the compiler treats the symbol
as a pointer to the stack array rather than the array itself.

Declaring it as an unsized array of k_thread_stack_t doesn't work
well either. The least confusing approach is to leave out the
pointer/array status completely, use pointers in function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions of all functions and structs that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
Andrew Boie 662c345cb6 kernel: implement k_thread_create() as a syscall
User threads can only create other nonessential user threads
of equal or lower priority and must have access to the entire
stack area.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
Andrew Boie bca15da650 userspace: treat thread stacks as kernel objects
We need to track permission on stack memory regions like we do
with other kernel objects. We want stacks to live in a memory
area that is outside the scope of memory domain permission
management. We need to be able to track what stacks are in use,
and what stacks may be used by user threads trying to call
k_thread_create().

Some special handling is needed because thread stacks appear as
variously-sized arrays of struct _k_thread_stack_element which is
just a char. We need the entire array to be considered an object,
but also properly handle arrays of stacks.

Validation of stacks also requires that the bounds of the stack
are not exceeded. Various approaches were considered. Storing
the size in some header region of the stack itself would not allow
the stack to live in 'noinit'. Having a stack object be a data
structure that points to the stack buffer would confound our
current APIs for declaring stacks as arrays or struct members.
In the end, the struct _k_object was extended to store this size.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
Andrew Boie 41bab6e360 userspace: restrict k_object_access_all_grant()
This is too powerful for user mode; the other access APIs
require explicit permissions on the threads that are being
granted access.

The API is no longer exposed as a system call and hence will
only be usable by supervisor threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie a811af337b userspace: use unsigned types for k_object fields
Fixes issues where these were getting sign-extended when
dumped out, resulting in (for example) "ffffffff" being
printed when it ought to be "ff".

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 10:52:31 -07:00
Andrew Boie a89bf01192 kernel: add k_object_access_revoke() system call
Does the opposite of k_object_access_grant(); the provided thread will
lose access to that kernel object.

If invoked from userspace, the caller must have sufficient access
to that object and permission on the thread whose access is being revoked.

Fix documentation for k_object_access_grant() API to reflect that
permission on the thread parameter is needed as well.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 15:08:40 -07:00
Andrew Boie 47f8fd1d4d kernel: add K_INHERIT_PERMS flag
By default, threads are created only having access to their own thread
object and nothing else. This new flag to k_thread_create() gives the
thread access to all objects that the parent had at the time it was
created, with the exception of the parent thread itself.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 12:17:13 -07:00
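A sketch of how the flag might be passed to k_thread_create() (the entry
function, stack size and priority are illustrative):

K_THREAD_STACK_DEFINE(child_stack, 1024);
static struct k_thread child_thread;
static void child_entry(void *p1, void *p2, void *p3);

static void spawn_child(void)
{
        /* The child starts with every object permission the parent held
         * at creation time, except access to the parent thread itself.
         */
        k_thread_create(&child_thread, child_stack,
                        K_THREAD_STACK_SIZEOF(child_stack),
                        child_entry, NULL, NULL, NULL,
                        5, K_USER | K_INHERIT_PERMS, K_NO_WAIT);
}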
Andrew Boie a73d3737f1 kernel: add k_uptime_get() as a system call
Uses new infrastructure for system calls with a 64-bit return value.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:25:00 -07:00
Andrew Boie 8e3e6d0d79 k_stack_init: num_entries should be unsigned
Allowing negative values here is a great way to get the kernel to
explode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 15:09:30 -07:00
Andrew Boie 7e3d3d782f kernel: userspace.c code cleanup
- Error message dumping split out of _k_object_validate(), to avoid spam
  in test cases that are expected to fail.

- _k_object_find() prototype moved to syscall_handler.h

- Clean up k_object_access() implementation to avoid double object
  lookup and use a single validation function

- Added comments, minor whitespace changes

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie cee72411e4 userspace: move _k_object_validate() definition
This API only gets used inside system call handlers and a specific test
case dedicated to it. Move definition to the private kernel header along
with the rest of the defines for system call handlers.

A non-userspace inline variant of this function is unnecessary and has
been deleted.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 756f907274 misc: userspace support for printk()
To avoid making a system call for every character emitted, there is now
a small line buffer if userspace is enabled. The interface to the kernel
is a new system call which takes a sized buffer of console data.

If userspace is not enabled this works like before.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 09:23:57 -07:00
Andrew Boie c74983e8b4 kernel: remove some kernel objects from tracking
These are removed as the APIs that use them are not suitable for
exporting to userspace.

- Kernel workqueues run in supervisor mode, so it would not be
appropriate to allow user threads to submit work to them. A future
enhancement may extend or introduce a parallel API where the workqueue
threads may run in user mode (or leave this as an exercise to the user).

- Kernel slabs store private bookkeeping data inside the
user-accessible slab buffers themselves. Alternate APIs are planned
here for managing slabs of kernel objects, implemented within the
runtime library and not the kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 82edb6e806 kernel: convert k_msgq APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie e8734463a6 kernel: convert stack APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie a354d49c4f kernel: convert timer APIs to system calls
k_timer_init() registers callbacks that run in supervisor mode and is
excluded.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie b9a0578777 kernel: convert pipe APIs to system calls
k_pipe_block_put() will be done in another patch; we need to design
handling for the k_mem_block object.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 468190a795 kernel: convert most thread APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 76c04a21ee kernel: implement some more system calls
These are needed to run the Philosophers demo with threads
running in user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 2f7519bfd2 kernel: convert mutex APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 310e987dd5 kernel: convert alert APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 743e4686a0 kernel: add syscalls for k_object_access APIs
These modify kernel object metadata and are intended to be callable from
user threads, so a privilege elevation is needed for them to work.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 12:53:41 -04:00
Andrew Boie 3b5ae804ad kernel: add k_object_access_all_grant() API
This is a helper API for objects that are intended to be globally
accessible.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 12:53:41 -04:00
Andrew Boie 217017c924 kernel: rename k_object_grant_access()
The Zephyr naming convention is to have the verb last.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 12:53:41 -04:00
Andrew Boie 93eb603f48 kernel: expose API when userspace not enabled
We want applications to be able to enable and disable userspace without
changing any code. k_thread_user_mode_enter() now just jumps into the
entry point if CONFIG_USERSPACE is disabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-04 13:00:03 -04:00
Andrew Boie 990bf16206 kernel: abolish __syscall_inline
This used to exist because in earlier versions of the system call
interfaces, an "extern" declaration of the system call implementation
function would precede the real inline version of the implementation.
The compiler would not like this and would throw "static declaration
of ‘foo’ follows non-static declaration". So alternate macros were
needed which declare the implementation function as 'static inline'
instead of extern.

However, currently the inline versions of these system call
implementations appear first; the K_SYSCALL_DECLARE() macros appear in
the header generated by gen_syscalls.py, which is always included at the
end of the header file. The compiler does not complain if a
static inline function is followed by an extern prototype of the
same function. This lets us simplify the generated system call
macros and just use __syscall everywhere.

The disassembly of this was checked on x86 to ensure that for
kernel-only or CONFIG_USERSPACE=n scenarios, everything is still being
inlined as expected.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-03 16:16:03 -04:00
Chunlin Han e9c9702818 kernel: add memory domain APIs
Add the following application-facing memory domain APIs:

k_mem_domain_init() - to initialize a memory domain
k_mem_domain_destroy() - to destroy a memory domain
k_mem_domain_add_partition() - to add a partition into a domain
k_mem_domain_remove_partition() - to remove a partition from a domain
k_mem_domain_add_thread() - to add a thread into a domain
k_mem_domain_remove_thread() - to remove a thread from a domain

A memory domain contains some number of memory partitions.
A memory partition is a memory region (it might be RAM, peripheral
registers, flash...) with specific attributes (access permissions,
e.g. privileged read/write, unprivileged read-only, execute never...).
Memory partitions are backed by a set of MPU regions or MMU tables
underneath.
A thread can only belong to a single memory domain at any point in time,
but a memory domain can contain multiple threads.
Threads in the same memory domain have the same access permissions
to the memory partitions belonging to that domain.

The memory domain APIs are used by unprivileged threads to share data
with threads in the same domain and to protect sensitive data from
threads outside their domain. This not only improves security but is
also useful for debugging (an unexpected access causes an exception).

Jira: ZEP-2281

Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
2017-09-29 16:48:53 -07:00
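A sketch of the APIs above in use: one partition over an application buffer,
wrapped in a domain and attached to a user thread. The attribute constant is
architecture-specific; K_MEM_PARTITION_P_RW_U_RW and the buffer alignment are
assumptions for illustration.

static u8_t __aligned(32) app_buf[256];

static struct k_mem_partition app_part = {
        .start = (u32_t)app_buf,
        .size  = sizeof(app_buf),
        .attr  = K_MEM_PARTITION_P_RW_U_RW,   /* arch-specific attribute */
};

static struct k_mem_partition *app_parts[] = { &app_part };
static struct k_mem_domain app_domain;

static void setup_domain(k_tid_t user_thread)
{
        k_mem_domain_init(&app_domain, ARRAY_SIZE(app_parts), app_parts);
        k_mem_domain_add_thread(&app_domain, user_thread);
}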
Andrew Boie 9928023421 kernel: make 'static inline' implicit to __syscall
The fact that these are all static inline functions internally is an
implementation detail.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 15:09:44 -07:00
Andrew Boie 5bd891d3b6 gen_kobject_list.py: device driver support
Device drivers need to be treated like other kernel objects, with
thread-level permissions and validation of struct device pointers passed
in from userspace when making API calls.

However, it's not sufficient to identify an object as a driver; we need
to know what subsystem it belongs to (if any) so that userspace cannot,
for example, make Ethernet driver API calls using a UART driver object.

Upon encountering a variable representing a device struct, we look at
the value of its driver_api member. If that corresponds to an instance
of a driver API struct belonging to a known subsystem, the proper
K_OBJ_DRIVER_* enumeration type will be associated with this device in
the generated gperf table.

If there is no API struct or it doesn't correspond to a known subsystem,
the device is omitted from the table; it's presumably used internally
by the kernel or is a singleton with specific APIs for it that do not
take a struct device parameter.

The list of kobjects and subsystems in the script is simplified since
the enumeration type name is strongly derived from the name of the data
structure.

A device object is marked as initialized after its init function has
been run at boot.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 13:25:58 -07:00
Andrew Boie fa94ee7460 syscalls: greatly simplify system call declaration
To define a system call, it's now sufficient to simply tag the inline
prototype with "__syscall" or "__syscall_inline" and include a special
generated header at the end of the header file.

The system call dispatch table and enumeration of system call IDs is now
automatically generated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 13:02:20 -07:00
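Per the description above, a declaration in a public header now reduces to
roughly this (the k_sem_take() prototype and the generated include for
kernel.h are shown as an example):

/* in include/kernel.h */
__syscall int k_sem_take(struct k_sem *sem, s32_t timeout);

/* ... remainder of the header ... */

#include <syscalls/kernel.h>   /* generated; provides the dispatch glue */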
Andrew Boie fc273c0b23 kernel: convert k_sem APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
Andrew Boie 13ca6fe284 syscalls: reorganize headers
- syscall.h now contains those APIs needed to support invoking calls
  from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
  system call handler functions. This header is not pulled in by
  kernel.h and is intended to be used by C files implementing kernel
  system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
  list is expected to grow quite large so it is put in its own header.
  This is now an enumerated type instead of defines to make things
  easier as we introduce system calls over the next few months. In the
  fullness of time, when we desire to have a fixed userspace/kernel ABI,
  this can always be converted to defines.

Some new code added:

- _SYSCALL_MEMORY() macro added to check memory regions passed up from
  userspace in handler functions
- _syscall_invoke{7...10}() inline functions declared for invoking system
  calls with more than 6 arguments. 10 was chosen as the limit, as that
  corresponds to the largest arg list we currently have,
  which is for k_thread_create()

Other changes

- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
  strong need to generate it and doing so would require the introduction
  of a third build phase.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
David B. Kinder 8065dbc314 doc: fix misspelling in kernel.h API doc
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-09-21 19:01:19 -04:00
Andrew Boie 1956f09590 kernel: allow up to 6 arguments for system calls
A quick look at "man syscall" shows that in Linux, all architectures
support at least 6-argument system calls, with a few supporting 7. We
can at least do 6 in Zephyr.

The x86 port is modified to use the EBP register to carry the 6th system
call argument.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-20 09:18:59 -07:00
Andrew Boie 3f091b5dd9 kernel: add common functions for user mode
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c are now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 5cfa5dc8db kernel: add K_USER flag and _is_thread_user()
Indicates that the thread is configured to run in user mode.
Delete stub function in userspace.c

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 1f32d09bd8 kernel: specify arch functions for userspace
Any arches that support userspace will need to implement these
functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, either a private _thread_entry_t or the full prototype
was being used. Be consistent and use the same typedef everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00
Andrew Boie f2c83acafc kernel: remove k_thread_spawn()
This API was deprecated in 1.8, we can remove for 1.10.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 12:30:51 -04:00
Andrew Boie 8749c26555 kernel: fix K_THREAD_DEFINE wrt application memory
The generated struct k_thread could end up in the wrong memory space
if CONFIG_APPLICATION_MEMORY is enabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:36 -07:00
Andrew Boie 7d627c5971 k_thread_create(): allow K_FOREVER delay
It's now possible to instantiate a thread object, but delay its
execution indefinitely. This was already supported with K_THREAD_DEFINE.

A new API, k_thread_start(), now exists to start threads that are in
this state.

The intended use-case is to initialize a thread with K_USER, then grant
it various access permissions, and only then start it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:04 -07:00
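A sketch of the intended flow (entry function, stack size and priority are
illustrative; the permission grants are elided):

K_THREAD_STACK_DEFINE(u_stack, 1024);
static struct k_thread u_thread;
static void user_entry(void *p1, void *p2, void *p3);

static void spawn_user_thread(void)
{
        /* Instantiate the thread but delay its execution indefinitely */
        k_tid_t tid = k_thread_create(&u_thread, u_stack,
                                      K_THREAD_STACK_SIZEOF(u_stack),
                                      user_entry, NULL, NULL, NULL,
                                      7, K_USER, K_FOREVER);

        /* ... grant tid access to the objects it will need here ... */

        k_thread_start(tid);   /* only now does the thread become runnable */
}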
Andrew Boie 945af95f42 kernel: introduce object validation mechanism
All system calls made from userspace which involve pointers to kernel
objects (including device drivers) will need to have those pointers
validated; userspace should never be able to crash the kernel by passing
it garbage.

The actual validation with _k_object_validate() will be in the system
call receiver code, which doesn't exist yet.

- CONFIG_USERSPACE introduced. We are somewhat far away from having an
  end-to-end implementation, but at least need a Kconfig symbol to
  guard the incoming code with. Formal documentation doesn't exist yet
  either, but will appear later down the road once the implementation is
  mostly finalized.

- In the memory region for RAM, the data section has been moved last,
  past bss and noinit. This ensures that inserting generated tables
  with addresses of kernel objects does not change the addresses of
  those objects (which would make the table invalid)

- The DWARF debug information in the generated ELF binary is parsed to
  fetch the locations of all kernel objects and pass this to gperf to
  create a perfect hash table of their memory addresses.

- The generated gperf code doesn't know that we are exclusively working
  with memory addresses and uses memory inefficiently. A post-processing
  script process_gperf.py adjusts the generated code before it is
  compiled to work with pointer values directly and not strings
  containing them.

- _k_object_init() calls inserted into the init functions for the set of
  kernel object types we are going to support so far

Issue: ZEP-2187
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:33 -07:00
Luiz Augusto von Dentz 7d01c5ecb7 poll: Enable multiple threads to use k_poll in the same object
This is necessary in order for k_queue_get to work properly, since it
is used with buffer pools which might be shared by multiple threads asking
for buffers.

Jira: ZEP-2553

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:00:46 -04:00
Luiz Augusto von Dentz c1fa82b3c6 work_q: Make k_delayed_work_cancel cancel work already pending
This had been a limitation of k_fifo, which could only remove
items from the beginning, but with the change to use k_queue in
k_work_q it is now possible to remove items from any position with
the use of k_queue_remove.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Luiz Augusto von Dentz adb581be8e work: Convert usage of k_fifo to k_queue
Make use of k_queue directly since it has a more flexible API.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Luiz Augusto von Dentz 84db641de6 queue: Use k_poll if enabled
This makes use of POLL_EVENT in case k_poll is enabled, which is
preferable over wait_q as it allows objects to be removed from the
data_q at any time.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Luiz Augusto von Dentz 50b9377b45 queue: Add k_queue_remove
k_queue_remove can be used to remove an element from any
position in the queue.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-15 08:49:09 -04:00
Andrew Boie 507852a4ad kernel: introduce opaque data type for stacks
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data; they were also
declared as arrays of the desired stack size.

This is no longer the case. Certain architectures will create a much
larger memory region to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.

We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(); this is no longer treated by the compiler
as a character pointer, even though it really is.

To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.

This should catch a bunch of programming mistakes at build time:

- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
  passing it to K_THREAD_CREATE
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
  which is not actually the memory desired and may trigger a CPU
  exception

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-01 16:43:15 -07:00
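A sketch of the distinction being drawn, using the K_THREAD_STACK_DEFINE() /
K_THREAD_STACK_BUFFER() macro pair (the size and the stack-painting use are
illustrative):

K_THREAD_STACK_DEFINE(my_stack, 1024);   /* opaque k_thread_stack_t storage */

static void paint_stack(void)
{
        /* my_stack itself may include MPU/MMU guard space and must not be
         * treated as a plain char buffer.  The usable buffer is obtained
         * explicitly:
         */
        char *buf = K_THREAD_STACK_BUFFER(my_stack);

        buf[0] = 0xaa;   /* e.g. mark the stack for high-water measurement */
}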
Andrew Boie befb0695ba kernel.h: add note about K_THREAD_STACK_SIZEOF()
Each member of the array may need to have a padding size added
such that the base address of each array element corresponds to
the desired stack alignment.

This would mean that sizeof(some array element) would return
a larger size than what was originally provided.

This won't cause problems at runtime since the space is really
there, but users who are only enabling this padding for
debug features may be surprised when their stacks are
effectively smaller than when this was enabled.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Paul Sokolovsky cfef979363 include: kernel: Fix use of K_POLL_MODE_INFORM_ONLY in docstring
K_POLL_MODE_INFORM_ONLY was renamed to K_POLL_MODE_NOTIFY_ONLY, but
a stale use remained in a docstring.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-07-19 09:59:55 +03:00
Andrew Boie 65a9d2a94a kernel: make K_.*_INITIALIZER private to kernel
Upcoming memory protection features will be placing some additional
constraints on kernel objects:

- They need to reside in memory owned by the kernel and not the
application
- Certain kernel object validation schemes will require some run-time
initialization of all kernel objects before they can be used.

Per Ben, these initializer macros were never intended to be public. It is
not forbidden to use them, but doing so requires care: the memory being
initialized must reside in kernel space, and extra runtime
initialization steps may need to be performed before they are fully
usable as kernel objects. In particular, kernel subsystems or drivers
whose objects are already in kernel memory may still need to use these
macros if they define kernel objects as members of a larger data
structure.

It is intended that application developers instead use the
K_<object>_DEFINE macros, which will automatically put the objects in the
right memory and add them to a section which can be iterated over at
boot to complete initialization.

There was no K_WORK_DEFINE() macro for creating struct k_work objects,
this is now added.

k_poll_event and k_poll_signal are intended to be instantiated from
application memory and have not been changed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-10 11:44:56 -07:00
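A sketch of the newly added K_WORK_DEFINE() convenience macro in use (the
handler and the ISR are illustrative):

static void blink_handler(struct k_work *work)
{
        /* runs later, in the system workqueue thread */
}

K_WORK_DEFINE(blink_work, blink_handler);

void button_isr(void *arg)
{
        k_work_submit(&blink_work);   /* defer the real work out of the ISR */
}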
Paul Sokolovsky 16bb3ec7ec kernel: queue, fifo: Add peek_head/peek_tail accessors
As explained in the docstrings, a use case behind these operations is
when other container objects are put in a fifo. A typical
processing iteration may take just some data from the container at
the head of the fifo, with the container still being kept in the fifo;
only once it becomes empty is it removed. Similarly when
adding more data: the first step may be to try to add more data to the
container at the tail of the fifo, and only if that one is full is another
container added to the fifo.

The specific use case these operations are added for is network
subsystem processing, where net_bufs and net_pkts are added
to fifos.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-06-28 16:07:55 +03:00
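A sketch of the consume-from-head pattern described above (the item layout is
illustrative; fifo initialization and filling are assumed to happen elsewhere):

struct item {
        void *fifo_reserved;   /* first word is reserved for the kernel */
        int remaining;
};

static struct k_fifo data_fifo;

static void consume_some(void)
{
        struct item *it = k_fifo_peek_head(&data_fifo);

        if (it != NULL && --it->remaining == 0) {
                /* container exhausted: only now actually dequeue it */
                (void)k_fifo_get(&data_fifo, K_NO_WAIT);
        }
}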
Anas Nashif 397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Andrew Boie dc5d935d12 kernel: introduce stack definition macros
The existing __stack decorator is not flexible enough for upcoming
thread stack memory protection scenarios. Wrap the entire thing in
a declaration macro abstraction instead, which can be implemented
on a per-arch or per-SOC basis.

Issue: ZEP-2185
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-09 18:53:28 -04:00
Andrew Boie 41c68ece83 kernel: publish offsets to thread stack info
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andy Ross 73cb9586ce k_mem_pool: Complete rework
This patch amounts to a mostly complete rewrite of the k_mem_pool
allocator, which had been the source of historical complaints vs. the
one easily available in newlib.  The basic design of the allocator is
unchanged (it's still a 4-way buddy allocator), but the implementation
has made different choices throughout.  Major changes:

Space efficiency: The old implementation required ~2.66 bytes per
"smallest block" in overhead, plus 16 bytes per log4 "level" of the
allocation tree, plus a global tracking struct of 32 bytes and a very
surprising 12 byte overhead (in struct k_mem_block) per active
allocation on top of the returned data pointer.  This new allocator
uses a simple bit array as the only per-block storage and places the
free list into the freed blocks themselves, requiring only ~1.33 bits
per smallest block, 12 bytes per level, 32 bytes globally and only 4
bytes of per-allocation bookkeeping.  And it puts more of the generated
tree into BSS, slightly reducing binary sizes for non-trivial pool
sizes (even as the code size itself has increased a tiny bit).

IRQ safe: atomic operations on the store have been cut down to be at
most "4 bit sets and dlist operations" (i.e. a few dozen
instructions), reducing latency significantly and allowing us to lock
against interrupts cleanly from all APIs.  Allocations and frees can
be done from ISRs now without limitation (well, obviously you can't
sleep, so "timeout" must be K_NO_WAIT).

Deterministic performance: there is no more "defragmentation" step
that must be manually managed.  Block coalescing is done synchronously
at free time and takes constant time (strictly log4(num_levels)), as
the detection of four free "partner bits" is just a simple shift and
mask operation.

Cleaner behavior with odd sizes.  The old code assumed that the
specified maximum size would be a power of four multiple of the
minimum size, making use of non-standard buffer sizes problematic.
This implementation re-aligns the sub-blocks at each level and can
handle situations where alignment restrictions mean fewer than 4x will
be available.  If you want precise layout control, you can still
specify the sizes rigorously.  It just doesn't break if you don't.

More portable: the original implementation made use of GNU assembler
macros embedded inline within C __asm__ statements.  Not all
toolchains are actually backed by a GNU assembler even when they
support the GNU assembly syntax.  This is pure C, albeit with some
hairy macros to expand the compile-time-computed values.

Related changes that had to be rolled into this patch for bisectability:

* The new allocator has a firm minimum block size of 8 bytes (to store
  the dlist_node_t).  It will "work" with smaller requested min_size
  values, but obviously makes no firm promises about layout or how
  many will be available.  Unfortunately many of the tests were
  written with very small 4-byte minimum sizes and to assume exactly
  how many they could allocate.  Bump the sizes to match the allocator
  minimum.

* The mbox and pipes API made use of the internals of k_mem_block and
  had to be ported to the new scheme.  Blocks no longer store a
  backpointer to the pool that allocated them (it's an integer ID in a
  bitfield), so if you want to "nullify" them you have to use the
  data pointer.

* test_mbox_api had a bug where it was prematurely freeing k_mem_blocks
  that it sent through the mailbox.  This worked in the old allocator
  because the memory wouldn't be touched when freed, but now we stuff
  list pointers in there and the bug was exposed.

* Remove test_mpool_options: the options (related to defragmentation
  behavior) tested no longer exist.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2017-05-13 14:39:41 -04:00
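A sketch of the allocator from the API side, including the new ability to
allocate from an ISR (the pool geometry and requested size are illustrative):

/* 4-way buddy pool: block sizes from 64 up to 1024 bytes, 2 max-size blocks */
K_MEM_POOL_DEFINE(my_pool, 64, 1024, 2, 4);

static void alloc_and_free(void)
{
        struct k_mem_block block;

        /* From an ISR the timeout must be K_NO_WAIT, since sleeping is
         * not allowed there.
         */
        if (k_mem_pool_alloc(&my_pool, &block, 200, K_NO_WAIT) == 0) {
                /* ... use block.data (at least 200 bytes) ... */
                k_mem_pool_free(&block);
        }
}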
Andrew Boie d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
Paul Sokolovsky 3f50707672 kernel: queue, fifo: Add cancel_wait operation.
Currently, a queue/fifo getter chooses how long to wait for an
element. But there are scenarios where the putter knows better;
there should be a way to expire the getter's timeout to make it run
again. The k_queue_cancel_wait() and k_fifo_cancel_wait() functions
do just that. They cause the corresponding *_get() functions to return
a NULL value, as if the timeout had expired on the getter's side (even
with K_FOREVER).

This can be used to signal out-of-band conditions from putter to
getter, e.g. end of processing, error, configuration change, etc.
The specific event would be communicated to the getter by other means
(e.g. using existing shared context structures).

Without this call, achieving the same effect would require e.g.
calling k_fifo_put() with a pointer to a special sentinel memory
structure - such a structure would need to be allocated somewhere
and somehow, and the getter would need to distinguish it from a normal
data item. Having the cancel_wait() functions offers an elegant
alternative. From this perspective, these calls can be seen as
equivalent to e.g. k_fifo_put(fifo, NULL), except that such a
call won't work in practice.

Change-Id: I47b7f690dc325a80943082bcf5345c41649e7024
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2017-05-10 09:40:33 -04:00
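A sketch of the putter/getter interaction described above (the fifo and the
processing around it are illustrative):

K_FIFO_DEFINE(rx_fifo);

static void getter_thread(void)
{
        void *item = k_fifo_get(&rx_fifo, K_FOREVER);

        if (item == NULL) {
                /* the putter cancelled our wait: end of stream, error, ... */
                return;
        }
        /* ... process item ... */
}

static void putter_shutdown(void)
{
        k_fifo_cancel_wait(&rx_fifo);   /* wakes the getter with NULL */
}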
David B. Kinder fc5f2b3832 doc: spelling check doxygen comments include/
fix misspellings found in doxygen comments used for API docs

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-05-02 22:21:37 -04:00
Ramesh Thomas 89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure the
number of ticks in a second. The new implementation builds on
top of that feature and provides an option to set the scheduling
granularity to milliseconds or microseconds. This allows most of
the current implementation to be reused. Due to this re-use and
co-existence with the tick-based kernel, the names of variables
may contain the word "tick". However, in the tickless kernel
implementation, it represents the currently configured time unit,
which would be milliseconds or microseconds. The APIs that take
time as a parameter are not impacted and continue to pass time in
milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers. Also,
they need to handle timer events differently. These changes
have been done in the HPET timer driver. In the future, other timers
that support the tickless kernel should implement these APIs as well.
These APIs re-program the timer, and update and announce
elapsed time.

5. Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using the
following command:
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Andrew Boie 73abd32a7d kernel: expose struct k_thread implementation
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control over where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.

On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.

Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-26 16:29:06 +00:00
Andrew Boie cdb94d6425 kernel: add k_panic() and k_oops() APIs
Unlike assertions, these APIs are active at all times. The kernel will
treat these errors in the same way as fatal CPU exceptions. Ultimately,
the policy of what to do with these errors is implemented in
_SysFatalErrorHandler.

If the architecture supports it, a real CPU exception can be triggered
which will provide a complete register dump and PC value when the
problem occurs. This will provide more helpful information than a fake
exception stack frame (_default_esf) passed to the arch-specific exception
handling code.

Issue: ZEP-843
Change-Id: I8f136905c05bb84772e1c5ed53b8e920d24eb6fd
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-22 10:31:49 -04:00
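A sketch of where these differ from assertions (the failure condition is
illustrative):

static void apply_config(int err)
{
        if (err != 0) {
                /* Unlike __ASSERT(), this fires in all builds; the error is
                 * routed to _SysFatalErrorHandler like a fatal CPU exception.
                 */
                k_oops();
        }
}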
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependancies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala 789081673f Introduce new sized integer typedefs
This is a start to move away from the C99 {u}int{8,16,32,64}_t types to
Zephyr defined u{8,16,32,64}_t and s{8,16,32,64}_t.  This allows Zephyr
to define the sized types in a consistent manor across all the
architectures we support and not conflict with what various compilers
and libc might do with regards to the C99 types.

We introduce <zephyr/types.h> as part of this and have it include
<stdint.h> for now until we transition all the code away from the C99
types.

We go with u{8,16,32,64}_t and s{8,16,32,64}_t as there are some
existing variables defined u8 & u16 as well as to be consistent with
Zephyr naming conventions.

Jira: ZEP-2051

Change-Id: I451fed0623b029d65866622e478225dfab2c0ca8
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-20 16:07:08 +00:00
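In practice the change is a direct substitution of spellings, e.g. (variable
names are illustrative):

#include <zephyr/types.h>

u32_t timestamp;   /* instead of uint32_t */
s16_t offset;      /* instead of int16_t */
u8_t  flags;       /* instead of uint8_t */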
Anas Nashif 306e15e0a1 kernel: remove legacy kernel support
Change-Id: Iac1e21677d74f81a93cd29d64cce261676ae78a6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:37 +00:00
David B. Kinder 8b986d7697 spell: fix comment typos: /include
Change-Id: I20d315ef5f8a2da5cfe28b194126907adda9e13c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-04-19 00:41:25 +00:00
Kumar Gala ddece1ccd4 kernel: include inttypes.h to get access to PRI defines in most spots
We need to move to using the PRI* defines in order to use newlib as the
default libc, as different arches define various base types like
{u}int32_t differently.  To deal with that in a consistent manner we need
access to the defines in most spots for print{f,k} or logging functions.

Change-Id: Ic1fbef75cbaee211803d9aaf506056e5e31e73f3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-17 11:09:31 -05:00
Anas Nashif 6ad0420b26 kernel: remove left-over code from object monitoring
This code is non-functional and is a leftover from an old version of
the kernel; it does not work and is covered by other, newer features
in the kernel, for example object tracing.

Jira: ZEP-2013
Change-Id: Id12ad09e2d06186b53cd2f0dd030ac6d37d1229f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-11 03:14:25 +00:00
Anas Nashif 5bb0169d02 kernel: remove unused _THREAD_TIMEOUT_INIT and _THREAD_ERRNO_INIT
_THREAD_TIMEOUT_INIT() has been replaced by _nano_timeout_thread_init(),
so it can be removed.

_THREAD_ERRNO_INIT() is defined, but never used. Ben suspects that this
is a bug, and that there should be some code that calls it.

Jira: ZEP-1326
Change-Id: I476c316b80e9f34d1ed61971229ed9afafc80d8a
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-04 15:25:45 +00:00
Luiz Augusto von Dentz 0dc4dd46d4 lifo: Make use of k_queue as implementation
Once all users of k_lifo migrate to k_queue this should no longer be
needed.

Change-Id: Ib8af40c57bf8feba7b06d6d891cfa57b44faad42
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:53 +00:00
Luiz Augusto von Dentz e5ed88f328 fifo: Make use of k_queue as implementation
This makes k_fifo functions rely on k_queue and port k_poll to use
k_queue directly.

Once all users of k_fifo migrate to k_queue this should no longer be
needed.

Change-Id: Icf16d580f88d11b2cb89e1abd23ae314f43dbd20
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:52 +00:00
Luiz Augusto von Dentz a7ddb87501 kernel: Add k_queue API
This unifies the k_fifo and k_lifo APIs, thus making it more flexible
regarding where the data elements are inserted.

Change-Id: Icd6e2f62fc8b374c8273bb763409e9e22c40f9f8
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-27 21:20:50 +00:00
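A sketch of the unified API (the item layout is illustrative; as with fifos,
the first word of a queued item is reserved for the kernel):

struct item {
        void *reserved;   /* used internally by the queue */
        int payload;
};

static struct k_queue q;
static struct item it = { .payload = 42 };

static void queue_demo(void)
{
        k_queue_init(&q);
        k_queue_append(&q, &it);      /* fifo-style: insert at the tail */
        /* k_queue_prepend(&q, &it);     lifo-style: insert at the head */

        struct item *got = k_queue_get(&q, K_NO_WAIT);   /* returns &it */
        (void)got;
}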
Andrew Boie e08d07c97d kernel: add flexibility to k_cycle_get_32() definition
Some arches may want to define this as an inline function, or
define it in core arch code instead of timer driver code.
Unfortunately, this means we need to remove it from the footprint
tests, but this is not typically a large function.

Issue: ZEP-1546
Change-Id: Ic0d7a33507da855995838f4703d872cd613a2ca2
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-02-16 19:27:59 +00:00
Mazen NEIFER 967cb2ef8a Fixed compilation error caused by bad initialization of unnamed union field.
The old syntax is not accepted by some compilers, including XCC.

Change-Id: Id90849a2159652ec225dd2c50d2dc2ddc22a3e08
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
2017-02-13 08:04:27 -08:00
Mazen NEIFER dc391f566c Xtensa port: Added support for Xtensa architecture in zephyr include files.
Change-Id: I1ac677cd6da5222707fe31ead71dc354f7c94443
Signed-off-by: Mazen NEIFER <mazen@nestwave.com>
2017-02-13 08:04:27 -08:00
Anas Nashif 4fb12ae988 kernel: k_timer_stop: remove assert when called from an ISR
Change-Id: I596e0323a7aafc9d7f3834a8d1b655ad2540d4ef
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-02-04 19:25:11 +00:00
Benjamin Walsh a304f16773 kernel/poll: add k_poll_signal_init() runtime init
Change-Id: Id5a27f7d25e26a1a71ef87000d35a18777210c19
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:54:01 +00:00
Benjamin Walsh b017986347 kernel/poll: add missing poll_event runtime init
It was in the static initializers, but was missing from the object
runtime init functions.

Change-Id: I10d519760eabdbe640a19cc5cfa9241c1356b070
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:54:00 +00:00
Benjamin Walsh 969d4a7ff1 kernel/poll: add user tag to struct k_poll_event
This will allow users to install a way of finding out what the event and
the objects are used for without looking at the object itself, or to
tag a bunch of objects that belong together.

The runtime init function _does not_ take a tag so that there is no
runtime hit if not needed. The static initializer macro _does_ take the
tag, so that it does not have to be initialized at runtime if needed,
and thus avoids a runtime hit.

Change-Id: I89a36c6f969ff952f9d1673b1bb5136e407535c6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-03 13:53:59 +00:00
Benjamin Walsh acc68c1e59 kernel: add k_poll() API
k_poll() is similar to the POSIX poll() API in spirit in that it allows
a single thread to monitor multiple events without actively polling
them, but rather pending for one or more to become ready. Such events
can be a direct event, or kernel objects (currently only semaphores and
fifos).

When a kernel object being polled on is ready, it is not "given" to the
poller: the poller must then acquire it via the regular API for the
object (e.g. k_sem_take()). Only one thread can poll on a particular
object at one time. These restrictions mean that k_poll() is most
effective when a single thread monitors multiple events that are not
subject to contention. For example, being the sole reader on multiple
fifos, or the only thread being signalled by multiple semaphores, or a
combination of both.

Change-Id: I7035a9baf4aa016fb87afc5f5c0f5f8cb216480f
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:30:00 +00:00
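A sketch of the single-reader pattern described above, monitoring one
semaphore and one fifo (names and the post-poll handling are illustrative):

K_SEM_DEFINE(my_sem, 0, 1);
K_FIFO_DEFINE(my_fifo);

static void wait_for_either(void)
{
        struct k_poll_event events[2];

        k_poll_event_init(&events[0], K_POLL_TYPE_SEM_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_sem);
        k_poll_event_init(&events[1], K_POLL_TYPE_FIFO_DATA_AVAILABLE,
                          K_POLL_MODE_NOTIFY_ONLY, &my_fifo);

        k_poll(events, 2, K_FOREVER);

        /* A ready object is not "given": acquire it through its own API */
        if (events[0].state == K_POLL_STATE_SEM_AVAILABLE) {
                k_sem_take(&my_sem, K_NO_WAIT);
        }
        if (events[1].state == K_POLL_STATE_FIFO_DATA_AVAILABLE) {
                void *data = k_fifo_get(&my_fifo, K_NO_WAIT);
                (void)data;
        }
}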
Benjamin Walsh 39b80d8f29 kernel: add k_fifo_is_empty()
Allow peeking at the fifo to see if there is an element without
dequeuing it.

Change-Id: I99cbe4495c81f1d7b77ad6a37cef4ec8c24d48eb
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:58 +00:00
Benjamin Walsh ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh dfa7ce5c94 kernel: include kernel.h in kernel_structs.h in asm files
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.

Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00