Move tracing.h to debug/tracing.h and
create a shim for backward compatibility.
No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.
Related to #16539
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The first word is used as a pointer, meaning it is 64 bits on 64-bit
systems. To reserve it, it has to be either a pointer, a long, or an
intptr_t, not an int nor a u32_t.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Folks found the use of @rststar/@endrststar non-intuitive (wanted to use
@rststart). The "star" was there indicating the doxygen comment lines
had a leading asterisk that needed to be stripped, but since our
commenting convention is to use the leading asterisk on continuation
lines, the leading asterisk is always there. So, change the doxygen
alias to the more expected @rst/@endrst.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
A k_futex is a lightweight mutual exclusion primitive designed
to minimize kernel involvement. Uncontended operation relies
only on atomic access to shared memory. The k_futex structure
lives in application memory, and when using futexes, the majority
of the synchronization operations are performed in user mode. A
user-mode thread employs the futex wait system call only when
it is likely that the program has to block for a longer time
until the condition becomes true. When the condition becomes true,
the futex wake operation is used to wake up one or more threads
waiting on that futex.
This patch implements two futex operations: k_futex_wait and
k_futex_wake. For k_futex_wait, the comparison with the expected
value, and starting to sleep are performed atomically to prevent
lost wake-ups. If a different context changed the futex's value
after the calling user-mode thread decided to block based on the
old value, the comparison detects the change and the thread does
not start to sleep. For k_futex_wake, it will wake at most
num_waiters of the waiters that are sleeping on that futex. No
guarantees are made on which threads are woken; that means
scheduling priority is not taken into consideration.
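A minimal usage sketch follows; the exact signatures and the meaning
of the wake count are assumptions based on the description above, and
the futex value is assumed to be an atomic_t member named val:
struct k_futex flag;    /* lives in application memory */

void wait_for_flag(void)
{
    while (atomic_get(&flag.val) == 0) {
        /* compare against 0 and sleep atomically: a wake landing
         * after the user-mode check above cannot be lost
         */
        k_futex_wait(&flag, 0, K_FOREVER);
    }
}

void set_flag(void)
{
    atomic_set(&flag.val, 1);
    k_futex_wake(&flag, 1);    /* wake at most one waiter */
}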
Fixes: #14493.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>
In z_sys_mem_pool_block_alloc() the size of the first level block
allocation is rounded up to the next 4-byte boundary. This means one
or more of the trailing blocks could overlap the free block bitmap.
Let's consider this code from kernel.h:
#define K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) \
char __aligned(align) _mpool_buf_##name[_ALIGN4(maxsz * nmax) \
+ _MPOOL_BITS_SIZE(maxsz, minsz, nmax)]; \
The static pool allocation rounds up the product of maxsz and nmax,
not the size of individual blocks. If we have, say, maxsz = 10 and
nmax = 20,
the result of _ALIGN4(10 * 20) is 200. That's the offset at which the
free block bitmap will be located.
However, because z_sys_mem_pool_block_alloc() does this:
lsizes[0] = _ALIGN4(p->max_sz);
individual level 0 blocks will have a size of 12, not 10. That means
the 17th block will extend up to offset 204, 18th block up to 216, 19th
block to 228, and 20th block to 240. So 4 out of the 20 blocks are
overflowing the static pool area and 3 of them are even located
completely outside of it.
In this example, we have only 20 blocks that can't be split so there is
no extra free block bitmap allocation beyond the bitmap embedded in the
sys_mem_pool_lvl structure. This means that memory corruption will
happen in whatever data is located alongside the _mpool_buf_##name
array. But even with, say, 40 blocks, or larger blocks, the extra bitmap
size would be small compared to the extent of the overflow, and it would
get corrupted too of course.
And the data corruption will happen even without allocating any memory
since z_sys_mem_pool_base_init() stores free_list pointer nodes into
those blocks, which in turn may get corrupted if that other data is
later modified instead.
Fixing this issue is simple: rounding on the static pool allocation is
"misparenthesized". Let's turn
_ALIGN4(maxsz * nmax)
into
_ALIGN4(maxsz) * nmax
But that's not sufficient.
In z_sys_mem_pool_base_init() we have:
size_t buflen = p->n_max * p->max_sz, sz = p->max_sz;
u32_t *bits = (u32_t *)((u8_t *)p->buf + buflen);
Considering the same parameters as above, here we're locating the extra
free block bitmap at offset `buflen` which is 20 * 10 = 200, again
within the reach of the last 4 memory blocks. If the number of blocks
gets past
the size of the embedded bitmap, it will overlap memory blocks.
Also, the block_ptr() call used here to initialize the free block linked
list uses unrounded p->max_sz, meaning that it is initially not locating
dlist nodes within the same block boundaries as what is expected from
z_sys_mem_pool_block_alloc(). This opens the possibility for allocated
adjacent blocks to overwrite dlist nodes, leading to random crashes in
the future.
So a complete fix must round up p->max_sz here too.
Given that runtime usage of max_sz should always be rounded up, it is
then preferable to round it up once at compile time instead and avoid
further mistakes of that sort. The existing _ALIGN4() usages on
p->max_sz at run time are then redundant.
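As an illustration with the numbers above (the _ALIGN4() definition
shown here is an assumption):
#define _ALIGN4(n) (((n) + 3) & ~3)

char bad[_ALIGN4(10 * 20)];   /* 200 bytes, yet 20 level-0 blocks of
                               * _ALIGN4(10) == 12 bytes need 240
                               */
char good[_ALIGN4(10) * 20];  /* 240 bytes: the bitmap now lands
                               * past the last block
                               */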
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
There is no point allowing smaller alignments. And on 64-bit systems the
minimum becomes 8 rather than 4, so let's adjust things automatically.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Found a few annoying typos and figured I had better run the script
and fix anything it can find; here are the results...
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The k_stack data type cannot be u32_t on a 64-bit system as it is
often used to store pointers. Let's define a dedicated type for stack
data values, namely stack_data_t, which can be adjusted accordingly.
For now it is defined to uintptr_t which is the integer type large
enough to hold a pointer, meaning it is equivalent to u32_t on 32-bit
systems and u64_t on 64-bit systems.
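A sketch of the definition; the push/pop prototypes are shown only to
illustrate how the type is used:
typedef uintptr_t stack_data_t;

void k_stack_push(struct k_stack *stack, stack_data_t data);
int k_stack_pop(struct k_stack *stack, stack_data_t *data, s32_t timeout);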
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We introduce k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).
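A minimal usage sketch, assuming a zero return on success:
/* this thread no longer needs its FP context preserved */
if (k_float_disable(k_current_get()) != 0) {
    /* not supported in this configuration */
}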
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Enable generation of doxygen documentation for kernel APIs that are
behind Kconfig options and add a note about the option needed to enable
the APIs.
Enable both CONFIG_SCHED_CPU_MASK and CONFIG_SCHED_DEADLINE in doxygen
config file.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Doxygen comments can include doxygen-specific markup tags. If other
markup tags are used (e.g., restructuredText) we need to indicate that
in the doxygen comments (via @rststar/@endrststar tags).
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This convenience macro wraps Z_DECL_ALIGN() and __in_section() to
simplify static definitions of structure instances gathered in dedicated
sections. Most of the time those go together, and the section name is
already closely related to the struct type, so abstracting things behind
a simpler interface reduces probability of mistakes and makes the code
clearer. A few input section names have been adjusted accordingly.
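A sketch of the pattern; the macro and section naming here are
assumptions:
#define Z_STRUCT_SECTION_ITERABLE(struct_type, name) \
    Z_DECL_ALIGN(struct struct_type) name \
    __in_section(_##struct_type, static, name) __used

/* e.g. a timer instance gathered into its dedicated section: */
Z_STRUCT_SECTION_ITERABLE(k_timer, my_timer);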
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The alignment fix on struct device definitions should be done to all
such linker list tricks. Let's abstract the declaration plus alignment
with a macro and apply it to all concerned cases.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The fifo/lifo API is implemented on top of the queue API with macros
that blindly force a cast to struct k_queue. Providing a reference to
the _queue member from the k_fifo structure is much cleaner as it
lets the compiler perform pointer type checking. Generated code is
identical.
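For instance, the put path changes along these lines (sketch):
/* before: a blind cast defeats type checking */
#define k_fifo_put(fifo, data) \
    k_queue_append((struct k_queue *)(fifo), data)

/* after: the compiler verifies fifo really is a struct k_fifo * */
#define k_fifo_put(fifo, data) \
    k_queue_append(&(fifo)->_queue, data)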
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Architectures that lack implementations of synchronous traps (via
Z_ARCH_EXCEPT()) end up using a z_except_reason() implementation that
doesn't actually trap at all. It just invokes
z_NanoFatalErrorHandler() in the current thread context.
That has two problems:
First, it was just blindly assuming that the error handling invoked
would abort the current thread, swap away, and never return. But that
can be application code in z_SysFatalErrorHandler that we can't
control.
Second, it was too broad with this assumption and stuffed a
CODE_UNREACHABLE hint in for the compiler. But in fact
z_except_reason() may be invoked in interrupt context (for example the
stackprot check) where it may NOT swap away and WILL return
synchronously from the call. This doesn't seem to have caused a
miscompilation in production code, but it made a total voodoo hash out
of my debugging around this macro for an hour or so until I figured
out why my logging was being optimized out.
Do the abort unconditionally instead of relying on the app, and remove
the incorrect compiler hint.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add k_usleep() API, analogous to k_sleep(), except that the argument
is in microseconds rather than milliseconds.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
k_poll_signal_raise() returns an error code to indicate that the raise
was too late to notify an expiring poll. Make clear that this does not
mean that the signal was lost: a subsequent poll will find it and expire
immediately.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The struct _caller_saved is not used. Most architectures
automatically put the registers onto the stack; in other
architectures the exception code does it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.
As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.
The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269.
Fixes: #14766
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Rename reserved function names in drivers/ subdirectory. Update
function macros concatenating function names with '##'. As
there is a conflict between the existing gpio_sch_manage_callback()
and _gpio_sch_manage_callback() names, leave the latter unmodified.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
This is used to have each arch canonically state how much
room in the stack object is reserved for non-thread use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is a trivial change to satisfy C++, which requires that designated
initializers appear in the same order as the members they initialize.
Fixes: #14540
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
There was code detecting a user error where racing insertions of
k_delayed_work items into different queues would be caught and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).
This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more. As it happens, that was needless. The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion. So go back to a global lock to
preserve the original behavior.
Fixes #14104
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in
kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
You can't cancel what hasn't been submitted. Clarification added
following minor bike shed on GitHub. Fixes #14105
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Nothing in the code actually returns -EINPROGRESS, and in the case of
k_work_init() I don't see how that can even be done in a reliable way.
Don't claim we do what we don't. Fixes #14109.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In some circumstances (e.g., a tickless kernel), k_timer_remaining_get()
would not account for time passed that didn't involve clock interrupts.
This adds a simple fix for that, and adds a test case. In addition, the
return value of k_timer_remaining_get() is clamped at 0 in the case of
overdue timers and the API description is adjusted to reflect this.
Fixes: #13353
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
One spinlock per pipe object. Also removed some vestigial locking
around _ready_thread(). That call is internally synchronized now.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Straightforward port. Each struct k_queue object gets a spinlock to
control obvious data ownership.
Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock. But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled. The spinlock API detects that as a recursive lock when
asserts are enabled.
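The fixed error path follows this pattern; a simplified sketch that
omits the thread-wakeup logic:
struct alloc_node {
    sys_sfnode_t node;
    void *data;
};

static s32_t queue_insert(struct k_queue *queue, void *prev,
                          void *data, bool alloc)
{
    k_spinlock_key_t key = k_spin_lock(&queue->lock);

    if (alloc) {
        struct alloc_node *anode = z_thread_malloc(sizeof(*anode));

        if (anode == NULL) {
            /* the preexisting bug: this unlock was missing,
             * so the error path kept interrupts locked
             */
            k_spin_unlock(&queue->lock, key);
            return -ENOMEM;
        }
        anode->data = data;
        data = anode;
    }

    sys_sflist_insert(&queue->data_q, prev, data);
    k_spin_unlock(&queue->lock, key);
    return 0;
}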
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock. Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.
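A sketch of the split; the internal helper name is an assumption:
/* internal version: caller already holds the work_q lock */
static int work_cancel(struct k_delayed_work *work);

/* API entry point: wraps the unlocked version with the lock */
int k_delayed_work_cancel(struct k_delayed_work *work)
{
    k_spinlock_key_t key = k_spin_lock(&work->work_q->lock);
    int ret = work_cancel(work);

    k_spin_unlock(&work->work_q->lock, key);
    return ret;
}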
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.
To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
While k_uptime_get() and k_uptime_get32() return time in
milliseconds, they don't need to have millisecond resolution.
Resolution with default Zephyr settings is 10ms.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
This adds a simple implementation of SMP CPU affinity to Zephyr. The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.
Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway). Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.
The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
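A usage sketch; the call names follow the API described above, and
the thread must not be runnable while its mask changes:
void pin_to_cpu0(k_tid_t tid)
{
    k_thread_suspend(tid);             /* ensure it is not runnable */
    k_thread_cpu_mask_clear(tid);      /* turn every CPU bit off */
    k_thread_cpu_mask_enable(tid, 0);  /* allow CPU 0 only */
    k_thread_resume(tid);
}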
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Added cpu_idle APIs to a doxygen group; otherwise they were missing
from the project documentation.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state. This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.
Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.
Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
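The internal check then reduces to the dnode linked state; a sketch
with an assumed helper name:
static inline bool z_is_inactive_timeout(struct _timeout *t)
{
    return !sys_dnode_is_linked(&t->node);
}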
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Although sys_dnode_t and sys_dlist_t are aliases, their roles are
different and they appear in different positions in dlist API calls.
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
A zero-length array is a GNU extension that works as a header for a
variable length object. The portable solution for this is using a
flexible array member, but that can be used only at the end of a
struct declaration, and this violates MISRA-C rule 18.8.
The easiest way to get rid of this is to make the macro expand to
nothing, but then we would have a trailing semicolon that is not
allowed in C99. So the macro was changed to automatically add the
semicolon when needed. This may break code indentation in some
editors, but it is a fair price to pay for portability and
compliance.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This API was using a variable number of arguments, which is not
allowed according to the MISRA-C guidelines (rule 17.1). Hence, turn
this API into a macro and use the util macro FOR_EACH_FIXED_ARG
to get the same functionality.
There is one deviation from the old function: the last argument
shouldn't be NULL.
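A hedged sketch of the shape of the change, assuming the converted
API was the thread access-grant call:
/* before: variadic function with a NULL-terminated list */
void k_thread_access_grant(struct k_thread *thread, ...);

/* after: macro expanding one grant call per argument */
#define k_thread_access_grant(thread, ...) \
    FOR_EACH_FIXED_ARG(k_object_access_grant, thread, __VA_ARGS__)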
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Fix misspellings in documentation (.rst, Kconfig help text, and .h
doxygen API comments), missed during regular reviews.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
C90 introduced the function prototype, which allows argument types
to be checked against parameter types, though it is not necessary to
specify names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.
MISRA-C rule 8.2
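For example:
int k_sem_take(struct k_sem *, s32_t);             /* violates rule 8.2 */
int k_sem_take(struct k_sem *sem, s32_t timeout);  /* compliant */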
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This commit exposes k_mem_partition_attr_t outside User Mode, so
we can use struct k_mem_partition for defining memory partitions
outside the scope of user space (for example, to describe thread
stack guards or non-cacheable MPU regions). A requirement is that
the Zephyr build supports Memory protection. To signify this, a
new hidden, all-architecture Kconfig symbol is defined (MPU). In
the wake of exposing k_mem_partition_attr_t, the commit exposes
the MPU architecture-specific access permission attribute macros
outside the User space context (for all ARCHs), so they can be
used in a more generic way.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
MEM_PARTITION_ENTRY is problematic, as it assumes that
struct k_mem_partition contains a k_mem_partition_attr_t
field, which is only true if Memory Protection is supported.
Additionally, it works with k_mem_partition_attr_t being a
single element object (scalar or single element structure).
This commit removes the macro function and updates macro
K_MEM_PARTITION_DEFINE() (MEM_PARTITION_ENTRY has only been
used in that macro function definition).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There's no current need for this and it makes work items
declared with K_WORK_DEFINE() inaccessible to user mode.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add an API to peek into a message queue and read the first message
without removing the message from the queue.
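A usage sketch; the message type is hypothetical and a zero return is
assumed on success:
void peek_head(struct k_msgq *q)
{
    struct sensor_msg msg;    /* hypothetical message type */

    if (k_msgq_peek(q, &msg) == 0) {
        /* msg holds a copy of the head; the queue is unchanged */
    }
}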
Signed-off-by: Sathish Kuttan <sathish.k.kuttan@intel.com>
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_poll_signal was being used by both a struct and a function. Besides
being extremely error prone, this is also a MISRA-C violation.
Change the function to contain a verb, since it performs an action,
and the struct will be a noun. This pattern must be formalized and
followed across the project.
MISRA-C rules 5.7 and 5.9
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
struct k_thread already has a pointer type k_tid_t; there is no need
for the additional tcs definition.
Fewer symbols/names make the code cleaner and more readable.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch fixes a few issues in queue.c. This patch also changes
the return type of k_queue_alloc_append and k_queue_alloc_prepend
from int to s32_t.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
This commit introduces a k_sleep() return value, which provides
information about the actual sleep time. If the returned value is
nonzero, the thread slept for a shorter time than requested, which is
only possible if the thread has been woken up by a k_wakeup() call.
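Usage sketch:
void nap(void)
{
    s32_t ret = k_sleep(1000);    /* request a 1000 ms sleep */

    if (ret != 0) {
        /* slept shorter than requested: woken by k_wakeup() */
    }
}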
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
- Can choose the C++ standard (C++98/11/14/17/2a)
- Can link with the standard C++ library (libstdc++)
- Add support for C++ exceptions
- Add support for C++ RTTI
- Add C++ options to subsys/cpp/Kconfig
- Implement new and delete using k_malloc and k_free
  if CONFIG_HEAP_MEM_POOL_SIZE is defined
Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
This patch removes the typecast (void*). This can be better
handled by typecasting to the actual typedef. This fixes MISRA
rule 11.6 for alert.
Part of GH-10042.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
I was pretty careful, but these snuck in. Most of them are due to
overbroad string replacements in comments. The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version. The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed. Advantages:
* Properly spinlocked and SMP-aware. The earlier timer implementation
relied on only CPU 0 doing timeout work, and on an irq_lock() being
taken before entry (something that was violated in a few spots).
Now any CPU can wake up for an event (or all of them) and everything
works correctly.
* The *_thread_timeout() API is now expressible as a clean wrapping
(just one liners) around the lower-level interface based on function
pointer callbacks. As a result the timeout objects no longer need
to store backpointers to the thread and wait_q and have shrunk by
33%.
* MUCH smaller, to the tune of hundreds of lines of code removed.
* Future proof, in that all operations on the queue are now fronted by
just two entry points (_add_timeout() and z_clock_announce()) which
can easily be augmented with fancier data structures.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).
Move it to where it belongs. Also have it return ticks instead of ms
to conform to the scheme used in the rest of the timeout API. And
rename it to a more standard Zephyr name.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The existing timeout API wants to store a wait_q on which the thread
is waiting, but it only uses that value in one spot (and there only as
a boolean flag indicating "this thread is waiting on a wait_q").
As it happens threads can already store their own backpointers to a
wait_q (needed for the SCALABLE scheduler backend), so we should use
that instead.
This patch doesn't actually perform that unification yet. It
reorganizes things such that the pended_on field is always set at the
point of timeout interaction, and adds a bunch of asserts to make 100%
sure the logic is correct. The next patch will modify the API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This flag is an indication to the timer driver that the OS doesn't
care about rollover conditions of the tick count while idling, so the
system doesn't need to wake up once per counter flip[1]. Obviously in
that circumstance values returned from k_uptime_get_32() are going to
be wrong, so the implementation had an assert to check for misuse.
But no one understood that from the docs, so the only places these
APIs were used in practice were as "guards" around code that needed
to call k_uptime_get_32(), even though that's 100% wrong per the docs!
Clarify the docs. Remove the incorrect guards. Change the flag to
initialize to true so that uptime isn't broken-by-default in tickless
mode. Also move the implementations of the functions out of the
header, as there's no good reason for these to need to be inlined.
[1] Which can be significant. A 100MHz ARM using the 24 bit SysTick
counter rolls over at about 6 Hz, and if it had to come out of
idle at that rate it would be a significant power issue that would
swamp the gains from tickless. Obviously systems with slow
counters like nRF or 64 bit ones like RISC-V or x86's TSC aren't
as affected.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The kernel.h file had a bunch of internal APIs for timeout/clock
handling mixed in. Move these to sys_clock.h, which it always
included (in a weird location, so move THAT to kernel_includes.h with
everything else).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
k_queue has a k_queue_append API which does not check whether the
element's address already exists. This creates a problem if the same
element address is appended to the queue: it forms a circular list,
causing unintended behaviour for the application using the queue. The
proposed API k_queue_find_and_append takes care of checking whether
the element already exists before appending. This API is complementary
to k_queue_remove, which checks whether the queue element is present
before removing it.
Signed-off-by: Dhananjay Gundapu Jayakrishnan <dhananjay.jayakrishnan@proglove.de>
Macro _OBJECT_TRACING_NEXT_PTR expands to a member or to nothing.
Macro _OBJECT_TRACING_NEXT_PTR is used in a number of places, like:
struct k_stack {
    .. omitted ..
    _OBJECT_TRACING_NEXT_PTR(k_stack);
    u8_t flags;
};
When the macro expands to nothing, a lonesome semi would remain. This is
illegal in C99, but permitted in GCC with GNU extensions.
Rather than expand to empty, we now expand to a zero-length array.
This means we can retain the trailing semis across structs wherein the
macro is used.
Note that zero-length array (foo[0]) != flexible array member (foo[]):
* zero-length array: Is GNU+Clang extension. Anywhere in struct.
* flexible array member: Is C99. Only in end of struct.
Thus we have really only traded off one portability issue for
another, more acceptable one, at least.
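A sketch of the expansion; the member naming is assumed:
#ifdef CONFIG_OBJECT_TRACING
#define _OBJECT_TRACING_NEXT_PTR(type) struct type *__next
#else
#define _OBJECT_TRACING_NEXT_PTR(type) u8_t __dummy_next[0]
#endif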
Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
Change APIs that essentially return a boolean expression - 0 for
false and 1 for true - to return a bool.
MISRA-C rule 14.4
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Some minor style fixes and rewording of the documentation
for ARM MPU region types.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Added k_thread_name_set() and enabled thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.
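Usage sketch (signature assumed):
k_thread_name_set(k_current_get(), "worker");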
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Any word starting with an underscore followed by an uppercase letter
or a second underscore is a reserved word according to C99.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The following 2 improvements are contained in this patch:
- When converting from ms to ticks, instead of using hardware cycles
per tick, use hardware cycles per second. This ensures that the
multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
and cycles per sec, use ticks per sec. This too increases the
precision.
The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
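An illustration of the tick -> ms direction with assumed numbers, a
32768 Hz clock and 100 ticks per second:
#define CYC_PER_SEC   32768U
#define TICKS_PER_SEC   100U

/* dividing early truncates 327.68 cycles/tick down to 327 */
u32_t ticks_to_ms_imprecise(u32_t ticks)
{
    u32_t cyc_per_tick = CYC_PER_SEC / TICKS_PER_SEC;    /* 327 */

    return (u64_t)ticks * cyc_per_tick * 1000U / CYC_PER_SEC;
    /* 100 ticks -> 997 ms */
}

/* using ticks per second directly is exact here */
u32_t ticks_to_ms_precise(u32_t ticks)
{
    return (u64_t)ticks * 1000U / TICKS_PER_SEC;
    /* 100 ticks -> 1000 ms */
}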
Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll returned -EINTR. But
it did not set the event->state field (to anything else but
K_POLL_STATE_NOT_READY), so in case of waiting on multiple queues,
it was not possible to differentiate which of them was cancelled.
This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.
This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is now set for a cancelled queue
(the -EINTR return remains the same).
This change also elaborates the docstrings of the functions
mentioned, to document this behavior.
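Callers can now tell which queue was cancelled, along these lines
(sketch):
void handle_poll(struct k_poll_event *events, int nevents)
{
    int ret = k_poll(events, nevents, K_FOREVER);

    if (ret == -EINTR) {
        for (int i = 0; i < nevents; i++) {
            if (events[i].state == K_POLL_STATE_CANCELLED) {
                /* this queue's wait was cancelled, e.g. EOF */
            }
        }
    }
}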
Fixes: #9032
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This enables reserving a little space on the top of the stack to
store data local to the thread when CONFIG_USERSPACE is enabled. The
first customer of this is errno.
Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common way.
Fixes: #9067
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation when
TICKLESS_KERNEL is not available, but this macro does not return the
same thing the functions return, so its use may result in a
compilation error.
Another point to consider is that if one is using this function
without it being enabled, it is better to return a proper error like
ENOTSUP, explicitly saying that this is not supported.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The value of sys_clock_ticks_per_sec is obtained using
simple integer division with rounding toward zero. As a result,
using this variable in _ms_to_ticks() introduces some error.
This commit eliminates sys_clock_ticks_per_sec from the equation
used in _ms_to_ticks(), removing the introduced error.
Also, this commit fixes #8895.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Because errno.h is defined in terms of a syscall, we can get into
trouble when one syscall/<FOO.h> ends up including another
syscall/<BAR.h>. Moving errno.h from kernel_includes.h to kernel.h
breaks the possible inclusion issue on some ARM platforms (where
arm_mpu.h ends up including soc.h, which ends up including
kernel_includes.h, which would include errno.h).
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if system clock rate is determined at runtime.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.
In user mode, we cannot simply read/write the thread struct.
We do not have thread-local storage mechanism, so for now
use the lowest address of the thread stack to store this
value, since this is guaranteed to be read/writable by
a user thread.
The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.
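A sketch of the mechanism; the kernel helper name is an assumption:
extern int *z_errno(void);    /* returns a per-thread location */
#define errno (*z_errno())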
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick rates
(like 100). As a result, on some SoCs time calculations were not
correct, producing strange effects (invalid sleep times, incorrect
k_uptime_get(), etc.).
This commit enables accurate, but costly (using 64-bit math), tick <-> ms
conversion if the selected tick interval is not exact due to hardware
limitations.
Also, this commit fixes tests in which the removed _ms_per_tick was used.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values whose usage results in an integer _ms_per_tick value.
Instead of using the list, simply check whether MSEC_PER_SEC can be
divided without remainder by sys_clock_ticks_per_sec.
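A sketch of the check; the macro name follows the related
precision-conversion commits:
#if (MSEC_PER_SEC % CONFIG_SYS_CLOCK_TICKS_PER_SEC) != 0
#define _NEED_PRECISE_TICK_MS_CONVERSION
#endif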
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.
Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
Make these "choice" items instead of a single boolean that implies the
element unset.
Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. This is necessitated because of some
internal padding (e.g. for MPU scenarios). This is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
This commit creates a new header file (kernel_includes.h) that
contains all header files to be included by kernel.h.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state". It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.
Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll. The disadvantage is the 4
bytes of thread space needed. Advantages:
+ Cleaner API, it's now internal to poll instead of being globally
visible.
+ The thread_state bit space is just one byte, and was almost full
already.
+ Smaller code to write/test a full word and not a bitfield
+ Words are atomic, so no need for one of the irq lock/unlock pairs.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.
This is problematic:
- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.
Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work_init() was not initializing all fields in the k_work struct.
Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member. The `_reserved` member was never initialized.
With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.
This fixes the Valgrind issues found in #7478.
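The initializer and init function now look roughly like this; the
field names follow the description above:
#define _K_WORK_INITIALIZER(work_handler) \
    { \
    ._reserved = NULL, \
    .handler = work_handler, \
    .flags = { 0 } \
    }

void k_work_init(struct k_work *work, k_work_handler_t handler)
{
    *work = (struct k_work)_K_WORK_INITIALIZER(handler);
}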
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We want the struct to be packed to conserve space, but the
perms field needs to always be on a 4-byte boundary since
we do bitfield operations on it; arches like ARC require
that the sys_bitfield_* operations be aligned to a 4 byte
boundary.
Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.
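The resulting shape, with the field list partly assumed:
struct _k_object {
    char *name;
    u8_t perms[CONFIG_MAX_THREAD_BYTES];
    u8_t type;
    u8_t flags;
    u32_t data;
} __packed __aligned(4);    /* array elements stay 4-byte aligned */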
Fixes: #7776
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add requirement ID placeholders based on APIs. The requirements will
appear as a list in doxygen documentation. The IDs will be expanded with
more details somewhere else, probably a requirement catalog on GH or
some other requirement management tool. This is still TBD.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Very simple implementation of deadline scheduling. Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
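Usage sketch; the deadline argument is a cycle-count delta from now
and the value is illustrative:
/* prefer this thread among equal-priority peers for ~10k cycles */
k_thread_deadline_set(thread, 10000);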
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one. Behavior as to thread
selection does not change. New features:
+ Unifies SMP and uniprocessing selection code (with the sole
exception of the "cache" trick not being possible in SMP).
+ The old static multi-queue implementation is gone and has been
replaced with a build-time choice of either a "dumb" list
implementation (faster and significantly smaller for apps with only
a few threads) or a balanced tree queue which scales well to
arbitrary numbers of threads and priority levels. This is
controlled via the CONFIG_SCHED_DUMB kconfig variable.
+ The balanced tree implementation is usable symmetrically for the
wait_q abstraction, fixing a scalability glitch Zephyr had when many
threads were waiting on a single object. This can be selected via
CONFIG_WAITQ_FAST.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs. Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages. Now replacement of wait_q with a different
data structure is much cleaner.
Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro. While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array; this can be large enough to make allocating it on the stack
not preferable.
k_poll_signal are now proper kernel objects. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.
FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:
include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
static inline void *_impl_k_object_alloc(enum k_objects otype)
^~~~~
Simple fix is to ARG_UNUSED otype.
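The fix is presumably along these lines:
static inline void *_impl_k_object_alloc(enum k_objects otype)
{
    ARG_UNUSED(otype);

    return NULL;
}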
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.
Fixes #7285
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.
K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.
Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.
A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.
Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
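Assignment sketch; the pool name and parameters are hypothetical:
K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

void assign_pool(k_tid_t child_tid)
{
    /* kernel allocations on behalf of child_tid now draw
     * from app_pool instead of the system heap
     */
    k_thread_resource_pool_assign(child_tid, &app_pool);
}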
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Forthcoming patches will dual-purpose an object's permission
bitfield as also reference tracking for kernel objects, used to
handle automatic freeing of resources.
We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.
However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.
Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
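Usage sketch; the pool parameters are hypothetical:
K_MEM_POOL_DEFINE(app_pool, 64, 4096, 4, 4);

void example(void)
{
    void *mem = k_mem_pool_malloc(&app_pool, 512);

    if (mem != NULL) {
        /* use mem, then release it */
        k_free(mem);    /* k_free() handles pool-backed memory too */
    }
}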
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Deprecated k_call_stacks_analyze() API as it is only
dumping (printing) the statically defined main, idle,
work and ISR stacks.
Use k_thread_foreach() API which is a generic API
to iterate over threads.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
Add k_thread_foreach API to iterate over all the threads in
the system.
This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.
Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
This was wrong in two ways, one subtle and one awful.
The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it). So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.
And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock. Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do. The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention. That this didn't break more things is amazing to me.
The rework is actually simpler than the original, thankfully. Though
there are some further subtleties:
* The lock state implied by irq_lock() allows the lock to be
implicitly released on context switch (i.e. you can _Swap() with the
lock held at a recursion level higher than 1, which needs to allow
other processes to run). So return paths into threads from _Swap()
and interrupt/exception exit need to check and restore the global
lock state, spinning as needed.
* The idle loop design specifies a k_cpu_idle() function that is on
common architectures expected to enable interrupts (for obvious
reasons), but there is no place to put non-arch code to wire it into
the global lock accounting. So on SMP, even CPU0 needs to use the
"dumb" spinning idle loop.
Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
BUILD_ASSERT() macro makes use of __COUNTER__ which may not be
supported in some compilers (like xcc). So, multiple uses of
BUILD_ASSERT() in the same scope are not possible for such compilers.
Instead, the expressions given to BUILD_ASSERT can be "&&"ed to achieve
the same purpose.
Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.
Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
Fixes#6907.
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.
Currently, k_object_alloc() and k_object_free() are supervisor-only.
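Allocation sketch:
void make_sem(void)
{
    struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

    if (sem != NULL) {
        k_sem_init(sem, 0, 1);
    }
}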
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Ensure this value during static initialization (with build assertions),
and dynamic initializations through system calls.
If the initial count is larger than the limit, it's possible for the
count to wrap around, causing locking issues.
Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.
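A sketch of the expanded definition, with initializer details elided:
#define K_SEM_DEFINE(name, initial_count, count_limit) \
    struct k_sem name = \
        _K_SEM_INITIALIZER(name, initial_count, count_limit); \
    BUILD_ASSERT(((count_limit) != 0) && \
                 ((initial_count) <= (count_limit)))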
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
The only difference between this call and k_thread_abort() (beyond
some minor performance deltas) is that "cancel" will act as a noop in
cases where the thread has begun execution and will return an error.
"Abort" always succeeds, of course. That is inherently racy when used
as a "stop the thread" API: there's no way in general (or at all in
SMP situations) to know that you're calling this function "early
enough" to catch the thread before it starts.
Effectively, all k_thread_cancel() gives you that k_thread_abort()
doesn't is an indication about whether or not a thread has started.
There are many other ways to get that information that don't require
dangerous kernel APIs.
Deprecate this function. Zephyr's own code never used it except for
its own unit test.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().
And some places were just calling _reschedule_threads(), which did all
this already.
Unify all this madness. The old _reschedule_threads() function has
been split into two variants: _reschedule_yield() and
_reschedule_noyield(). The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor. I'm not at all convinced it
should exist...
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This patch lets a C++ application use more of Zephyr by adding guards
and changing some constructs to the C++11 equivalent.
Changes include:
- Adding guards
- Switching to static_assert
- Switching to a template for ARRAY_SIZE as g++ doesn't have the
builtin.
- Re-ordering designated initialisers to match the struct field order
as G++ only supports simple designated initialisers.
Signed-off-by: Michael Hope <mlhx@google.com>
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.
However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.
The sys_mem_pool implementation has the following differences:
* The alloc/free APIs work directly with pointers, no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.
* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there aren't
implications for system responsiveness with this kind of concurrency
control.
* sys_mem_pools do not support the notion of timeouts for requesting
memory.
* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.
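The alloc/free shape, per the list above (sketch; my_pool is a
hypothetical pool):
void example(void)
{
    void *mem = sys_mem_pool_alloc(&my_pool, 512);  /* pool at alloc */

    if (mem != NULL) {
        sys_mem_pool_free(mem);  /* pointer only, no pool argument */
    }
}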
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
For the POSIX layer implementation of message queues, we need to fetch
the basic attributes of a message queue. Currently this routine is not
present in Zephyr, so add this routine to the message queue code.
Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
The k_mem_partition structs need to be placed in the kernel memory.
This patch ensures that these structs are placed correctly.
Also when a struct k_mem_domain is declared it is advised to add
__kernel.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
During system initialization, the global static variable (local to
mem_domain.c) is initialized with the maximum number of partitions per
domain. This variable is of u8_t type.
Assertions throughout the code will check ranges and test for overflow
by relying on implicit type conversion.
Use a u8_t instead of u32_t to avoid doubts. Also, reorder the
k_mem_partition struct to remove the alignment hole created by reducing
sizeof(num_partitions).
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>