Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
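A minimal usage sketch (thread and name are illustrative):

    #include <kernel.h>

    void name_self(void)
    {
        /* No effect unless CONFIG_THREAD_MONITOR is enabled; system
         * threads already receive a default name. */
        k_thread_name_set(k_current_get(), "worker");
    }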
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Any identifier starting with an underscore followed by an uppercase
letter or a second underscore is reserved according to C99.
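For illustration, the C99 rule (7.1.3) covers names such as:

    int _Foo;   /* reserved: underscore + uppercase letter */
    int __bar;  /* reserved: second underscore */
    int _baz;   /* reserved only at file scope, by a separate rule */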
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The following 2 improvements are contained in this patch:
- When converting from ms to ticks, instead of using hardware cycles
per tick, use hardware cycles per second. This ensures that the
multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
and cycles per sec, use ticks per sec. This too increases the
precision.
The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
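A toy sketch of that principle (numbers and helpers hypothetical,
not the actual Zephyr sources): a truncated cycles-per-tick constant
acts as an inaccurate divisor, while working from the exact
per-second rate keeps the rounding to a single final division.

    #include <stdint.h>

    /* 32768 hw cycles/sec and 1000 ticks/sec: one tick is really
     * 32.768 cycles, but the precomputed constant truncates to 32. */
    #define CYC_PER_SEC   32768
    #define TICKS_PER_SEC 1000
    #define CYC_PER_TICK  (CYC_PER_SEC / TICKS_PER_SEC)

    /* Truncated divisor: 1000 ms -> 32768 cycles -> 1024 ticks. */
    static int32_t ms_to_ticks_imprecise(int32_t ms)
    {
        return (int32_t)((int64_t)ms * CYC_PER_SEC / 1000 / CYC_PER_TICK);
    }

    /* Exact per-second rate: 1000 ms -> exactly 1000 ticks. */
    static int32_t ms_to_ticks_precise(int32_t ms)
    {
        return (int32_t)((int64_t)ms * TICKS_PER_SEC / 1000);
    }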
Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll() returned -EINTR.
But it did not set the event->state field (to anything other than
K_POLL_STATE_NOT_READY), so when waiting on multiple queues it was
not possible to tell which of them was cancelled.
This in particular broke detection of network socket EOF conditions
in the POSIX poll() implementation.
This is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is set for the cancelled queue
(the -EINTR return remains the same).
This change also elaborates the docstrings of the functions
mentioned to document this behavior.
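A sketch of the resulting caller-side pattern (event setup elided):

    static void poll_queues(struct k_poll_event *events, int nevents)
    {
        int rc = k_poll(events, nevents, K_FOREVER);

        if (rc == -EINTR) {
            for (int i = 0; i < nevents; i++) {
                if (events[i].state == K_POLL_STATE_CANCELLED) {
                    /* this wait was ended by k_queue_cancel_wait();
                     * e.g. treat it as socket EOF */
                }
            }
        }
    }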
Fixes: #9032
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This enables reserving a little space at the top of the stack to
store data local to a thread when CONFIG_USERSPACE is enabled. The
first customer of this is errno.
Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common code.
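A rough sketch of the reserved region (struct shape assumed, not
copied from the patch):

    /* Lives at the top of the user stack; errno is the first
     * customer. */
    struct _thread_userspace_local_data {
        int errno_var;
    };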
Fixes: #9067
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.
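A self-contained C99 illustration:

    #include <stdio.h>

    int main(void)
    {
        int s = -8;
        unsigned int u = 0xFFFFFFF8U;

        /* Implementation-defined (C99 6.5.7): right-shifting a
         * negative signed value. */
        printf("%d\n", s >> 1);

        /* Well-defined with an unsigned operand. */
        printf("%u\n", u >> 1);

        return 0;
    }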
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation when
TICKLESS_KERNEL is not available, but this macro does not return the
same type as the functions it stands in for, so its use may result
in a compilation error.
Another point to consider is that if one calls these functions
without the feature enabled, it is better to return a proper error
like ENOTSUP, explicitly saying that the operation is not supported.
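A minimal sketch of the improved fallback (stub shape assumed):

    #ifndef CONFIG_TICKLESS_KERNEL
    /* Match the real function's return type and report the missing
     * feature explicitly. */
    static inline int k_enable_sys_clock_always_on(void)
    {
        return -ENOTSUP;
    }
    #endif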
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The value of sys_clock_ticks_per_sec is obtained using
simple integer division with rounding toward zero. As a result,
using this variable in _ms_to_ticks() introduces some error.
This commit eliminates sys_clock_ticks_per_sec from the equation
used in _ms_to_ticks(), removing the introduced error.
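A worked example with hypothetical hardware shows the error:

    true rate: 32768 cycles/sec / 1000 cycles/tick = 32.768 ticks/sec
    truncated: sys_clock_ticks_per_sec = 32

    _ms_to_ticks(1500) via the truncated rate:
        1500 * 32 / 1000 = 48 ticks
    _ms_to_ticks(1500) via hardware cycles:
        1500 * 32768 / 1000 / 1000 = 49 ticks (correct)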
Also, this commit fixes #8895.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Because errno.h is defined in terms of a syscall, we can get into
trouble when one syscall/<FOO.h> ends up including another
syscall/<BAR.h>. Moving errno.h from kernel_includes.h to kernel.h
breaks the possible inclusion cycle on some ARM platforms (where
arm_mpu.h ends up including soc.h, which includes
kernel_includes.h, which would include errno.h).
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if the system clock rate is determined at runtime.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.
In user mode, we cannot simply read/write the thread struct.
We do not have a thread-local storage mechanism, so for now we
use the lowest address of the thread stack to store this value,
since it is guaranteed to be readable and writable by a user
thread.
The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.
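A sketch of the mechanism (names illustrative, not the actual
Zephyr identifiers):

    int *__errno_location(void);      /* kernel function (syscall) */
    #define errno (*__errno_location())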
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick
rates (like 100). As a result, on some SoCs time calculations were
not correct, producing strange effects (invalid sleep times,
incorrect k_uptime_get(), etc.).
This commit enables accurate, but costly (using 64-bit math),
tick <-> ms conversion if the selected tick interval is not exact
due to hardware limitations.
Also, this commit fixes tests in which the removed _ms_per_tick was
used.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values whose use results in an integer _ms_per_tick value.
Instead of using the list, simply check whether MSEC_PER_SEC can be
divided without remainder by sys_clock_ticks_per_sec.
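A sketch of the check (_NEED_PRECISE_TICK_MS_CONVERSION is the
macro referenced elsewhere in this series):

    #if (MSEC_PER_SEC % CONFIG_SYS_CLOCK_TICKS_PER_SEC) != 0
    #define _NEED_PRECISE_TICK_MS_CONVERSION
    #endif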
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Make these "choice" items instead of a single boolean that implies the
element unset.
Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. It is necessitated by internal padding
on some platforms (e.g. in MPU scenarios) and is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
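Assuming the macro is K_THREAD_STACK_LEN(), a usage sketch (sizes
illustrative):

    K_THREAD_STACK_ARRAY_DEFINE(tx_stacks, 4, 1024);

    struct tx_pool {
        /* stride between array elements, padding included */
        size_t stack_len;
    };

    static struct tx_pool pool = {
        .stack_len = K_THREAD_STACK_LEN(1024),
    };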
Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
This commit creates a new header file (kernel_includes.h) that
contains all header files to be included by kernel_init.h.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state". It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.
Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll. The disadvantage is the 4
bytes of thread space needed. Advantages:
+ Cleaner API, it's now internal to poll instead of being globally
visible.
+ The thread_state bit space is just one byte, and was almost full
already.
+ Smaller code to write/test a full word and not a bitfield
+ Words are atomic, so there is no need for one of the irq
lock/unlock pairs.
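A sketch of the replacement (fields illustrative):

    /* Lives on the stack of the thread calling k_poll(). */
    struct _poller {
        struct k_thread *thread;
        volatile int is_polling; /* whole word, written/tested atomically */
    };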
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
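Assumed shape of the per-thread record kept when
CONFIG_THREAD_MONITOR is enabled:

    struct __thread_entry {
        k_thread_entry_t pEntry;
        void *parameter1;
        void *parameter2;
        void *parameter3;
    };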
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work_init() was not initializing all fields in the k_work struct.
Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member. The `_reserved` member was never initialized.
With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.
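A sketch of the two initialization paths (handler illustrative),
which now produce identical structs:

    static void work_handler(struct k_work *work)
    {
        /* ... */
    }

    static struct k_work static_work = _K_WORK_INITIALIZER(work_handler);

    void setup(void)
    {
        struct k_work runtime_work;

        k_work_init(&runtime_work, work_handler);
    }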
This fixes the Valgrind issues found in #7478.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.
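For example (sizes illustrative):

    K_THREAD_STACK_DEFINE(my_stack, 1000);

    void show_size(void)
    {
        /* On an MPU-based arch this may print a rounded-up value,
         * e.g. 1024, rather than the 1000 requested. */
        printk("%u\n", (u32_t)K_THREAD_STACK_SIZEOF(my_stack));
    }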
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We want the struct to be packed to conserve space, but the
perms field needs to always be on a 4-byte boundary since
we do bitfield operations on it; arches like ARC require
that the sys_bitfield_* operations be aligned to a 4-byte
boundary.
Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.
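A sketch of the layout (members illustrative, attributes per the
description above):

    struct _k_object {
        char *name;
        u8_t perms[CONFIG_MAX_THREAD_BYTES];
        u8_t type;
        u8_t flags;
        u32_t data;
    } __packed __aligned(4);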
Fixes: #7776
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add requirement ID placeholders based on APIs. The requirements will
appear as a list in the doxygen documentation. The IDs will be
expanded with more details somewhere else, probably a requirement
catalog on GitHub or some other requirement management tool. This is
still TBD.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Very simple implementation of deadline scheduling. Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
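A usage sketch, assuming the new call is k_thread_deadline_set()
and the deadline is given in hardware cycles:

    void run_with_deadline(void)
    {
        /* delta from "now", in hw cycles (value illustrative) */
        k_thread_deadline_set(k_current_get(), 32768);
    }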
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one. Behavior as to thread
selection does not change. New features:
+ Unifies SMP and uniprocessing selection code (with the sole
exception of the "cache" trick not being possible in SMP).
+ The old static multi-queue implementation is gone and has been
replaced with a build-time choice of either a "dumb" list
implementation (faster and significantly smaller for apps with only
a few threads) or a balanced tree queue which scales well to
arbitrary numbers of threads and priority levels. This is
controlled via the CONFIG_SCHED_DUMB kconfig variable.
+ The balanced tree implementation is usable symmetrically for the
wait_q abstraction, fixing a scalability glitch Zephyr had when many
threads were waiting on a single object. This can be selected via
CONFIG_WAITQ_FAST.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
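A sketch of the opaque wrapper (shape assumed):

    typedef struct {
        sys_dlist_t waitq;
    } _wait_q_t;

    #define _WAIT_Q_INIT(wait_q) { SYS_DLIST_STATIC_INIT(&(wait_q)->waitq) }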
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
k_poll() is now accessible from user mode. A memory allocation takes
place from the caller's resource pool to copy the provided
poll_events array; this can be large enough that allocating it on
the stack is undesirable.
k_poll_signal is now a proper kernel object. Two APIs have been
added: one to reset the signaled state and one to check the current
signaled state and result value.
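A usage sketch, assuming the two new APIs are k_poll_signal_check()
and k_poll_signal_reset():

    static struct k_poll_signal sig;

    void drain_signal(void)
    {
        unsigned int signaled;
        int result;

        k_poll_signal_check(&sig, &signaled, &result);
        if (signaled) {
            /* consume result, then rearm for the next round */
            k_poll_signal_reset(&sig);
        }
    }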
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.
FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
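A caller-side sketch, assuming the queue variant is named
k_queue_alloc_append():

    void produce(struct k_queue *q, void *data)
    {
        /* the kernel never writes into *data; the list node comes
         * from the caller's resource pool */
        if (k_queue_alloc_append(q, data) == -ENOMEM) {
            /* no resource pool assigned, or pool exhausted */
        }
    }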
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:
include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
static inline void *_impl_k_object_alloc(enum k_objects otype)
                                                        ^~~~~
Simple fix is to ARG_UNUSED otype.
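The resulting stub (body assumed for the CONFIG_DYNAMIC_OBJECTS=n
case):

    static inline void *_impl_k_object_alloc(enum k_objects otype)
    {
        ARG_UNUSED(otype);

        return NULL;
    }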
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.
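A usage sketch (entry count illustrative):

    void make_stack(struct k_stack *stack)
    {
        /* buffer for 32 entries drawn from the caller's resource pool */
        int rc = k_stack_alloc_init(stack, 32);

        if (rc != 0) {
            /* allocation failed */
        }
    }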
Fixes #7285
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
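A usage sketch (message geometry illustrative):

    void make_msgq(struct k_msgq *q)
    {
        /* ring buffer for ten 8-byte messages, allocated from the
         * calling thread's resource pool */
        int rc = k_msgq_alloc_init(q, 8, 10);

        (void)rc;
    }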
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.
K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.
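A usage sketch, assuming the new API is k_pipe_alloc_init():

    void make_pipe(struct k_pipe *pipe)
    {
        /* 256-byte buffer drawn from the caller's resource pool */
        int rc = k_pipe_alloc_init(pipe, 256);

        (void)rc;
    }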
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.
Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.
A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.
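A usage sketch, assuming the allocator is k_object_alloc():

    void make_sem(void)
    {
        /* drawn from the caller's resource pool; freed automatically
         * once the last reference goes away */
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        if (sem != NULL) {
            k_sem_init(sem, 0, 1);
        }
    }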
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.
Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
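A usage sketch (pool geometry illustrative):

    K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

    void adopt(struct k_thread *child)
    {
        /* allocations made on child's behalf now come from app_pool */
        k_thread_resource_pool_assign(child, &app_pool);
    }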
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Forthcoming patches will dual-purpose an object's permission
bitfield as reference tracking for kernel objects, used to
handle automatic freeing of resources.
We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.
However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.
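A sketch, assuming the new call is k_object_release():

    void finished(struct k_sem *sem)
    {
        /* drops only the caller's own permission on sem; other
         * threads' access is unaffected */
        k_object_release(sem);
    }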
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.
Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
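A usage sketch, assuming the new API is k_mem_pool_malloc():

    K_MEM_POOL_DEFINE(net_pool, 64, 4096, 2, 4);

    void *net_alloc(size_t n)
    {
        /* like k_malloc(), but drawing from net_pool rather than
         * the kernel heap */
        return k_mem_pool_malloc(&net_pool, n);
    }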
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Deprecated the k_call_stacks_analyze() API as it only dumps
(prints) the statically defined main, idle, work and ISR stacks.
Use the k_thread_foreach() API instead, which is a generic API
to iterate over threads.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Add k_thread_foreach API to iterate over all the threads in
the system.
This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.
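A usage sketch (callback body illustrative):

    static void dump_thread(const struct k_thread *t, void *user_data)
    {
        ARG_UNUSED(user_data);
        printk("thread %p prio %d\n", t, t->base.prio);
    }

    void dump_all(void)
    {
        k_thread_foreach(dump_thread, NULL);
    }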
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
This was wrong in two ways, one subtle and one awful.
The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it). So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.
And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock. Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do. The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention. That this didn't break more things is amazing to me.
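The broken pattern, reconstructed for illustration:

    static volatile int global_count;   /* shared across CPUs: the bug */

    void broken_recursive_lock(void)
    {
        if (global_count != 0) {
            /* WRONG on SMP: the non-zero count may belong to a
             * *different* CPU, yet we "succeed" here */
            global_count++;
            return;
        }
        /* ... actually acquire the underlying lock ... */
        global_count = 1;
    }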
The rework is actually simpler than the original, thankfully. Though
there are some further subtleties:
* The lock state implied by irq_lock() allows the lock to be
implicitly released on context switch (i.e. you can _Swap() with the
lock held at a recursion level higher than 1, which needs to allow
other processes to run). So return paths into threads from _Swap()
and interrupt/exception exit need to check and restore the global
lock state, spinning as needed.
* The idle loop design specifies a k_cpu_idle() function that is on
common architectures expected to enable interrupts (for obvious
reasons), but there is no place to put non-arch code to wire it into
the global lock accounting. So on SMP, even CPU0 needs to use the
"dumb" spinning idle loop.
Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The BUILD_ASSERT() macro makes use of __COUNTER__, which may not be
supported by some compilers (like xcc). So, multiple uses of
BUILD_ASSERT() in the same scope are not possible with such
compilers. Instead, the expressions passed to BUILD_ASSERT() can be
"&&"-ed together to achieve the same purpose.
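For example (struct names illustrative):

    struct foo { u32_t a, b; };
    struct bar { u32_t a, b, c, d; };

    /* one BUILD_ASSERT() with "&&"-ed expressions instead of two
     * asserts in the same scope, which would need __COUNTER__ */
    BUILD_ASSERT((sizeof(struct foo) == 8) &&
                 (sizeof(struct bar) == 16));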
Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.
Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
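A sketch of the handler-side pattern (API struct and operation
names illustrative):

    struct my_driver_api {
        int (*op)(struct device *dev);   /* optional operation */
    };

    int checked_op(struct device *dev)
    {
        const struct my_driver_api *api = dev->driver_api;

        if (api->op == NULL) {
            return -ENOTSUP;   /* don't jump through a NULL pointer */
        }

        return api->op(dev);
    }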
Fixes #6907.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>