The alignment fix on struct device definitions should be done to all
such linker list tricks. Let's abstract the declaration plus alignment
with a macro and apply it to all concerned cases.
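A minimal sketch of the idea (macro name and usage illustrative, not
taken verbatim from this patch):

    /* Declare an object aligned to its type's natural alignment so that
     * linker-gathered instances form a properly strided array. */
    #define Z_DECL_ALIGN(type) __aligned(__alignof(type)) type

    /* e.g. a linker-list entry declared through the macro: */
    Z_DECL_ALIGN(struct device) __device_entry_example;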
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The fifo/lifo API is implemented on top of the queue API with macros
that blindly force a cast to struct k_queue. Providing a reference to
the _queue member from the k_fifo structure is much cleaner as it lets
the compiler perform pointer type checking. Generated code is identical.
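For illustration, one of the macros changes roughly like this
(simplified sketch, not the exact diff):

    /* before: blind cast, no pointer type checking
     *   #define k_fifo_put(fifo, data) \
     *           k_queue_append((struct k_queue *)(fifo), data)
     */

    /* after: reference the embedded _queue member so the compiler checks
     * that a struct k_fifo pointer was actually passed */
    #define k_fifo_put(fifo, data) \
            k_queue_append(&(fifo)->_queue, data)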
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Architectures that lack implementations of synchronous traps (via
Z_ARCH_EXCEPT()) end up using a z_except_reason() implementation that
doesn't actually trap at all. It just invokes
z_NanoFatalErrorHandler() in the current thread context.
That has two problems:
First, it was just blindly assuming that the error handling invoked
would abort the current thread, swap away, and never return. But that
error handling can be application code in z_SysFatalErrorHandler() that
we can't control.
Second, it was too broad with this assumption and stuffed a
CODE_UNREACHABLE hint in for the compiler. But in fact
z_except_reason() may be invoked in interrupt context (for example the
stackprot check) where it may NOT swap away and WILL return
synchronously from the call. This doesn't seem to have caused a
miscompilation in production code, but it made a total voodoo hash out
of my debugging around this macro for an hour or so until I figured
out why my logging was being optimized out.
Do the abort unconditionally instead of relying on the app, and remove
the incorrect compiler hint.
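A hedged sketch of the resulting fallback macro (the ESF argument is
elided here; the real body may differ):

    /* Report the fatal error in the current context, then unconditionally
     * abort the current thread rather than trusting the application's
     * fatal handler to do it.  No CODE_UNREACHABLE hint, because in
     * interrupt context this path returns to the caller. */
    #define z_except_reason(reason) do {                     \
            z_NanoFatalErrorHandler(reason, /* esf */ NULL); \
            k_thread_abort(k_current_get());                 \
        } while (false)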
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Add k_usleep() API, analogous to k_sleep(), except that the argument
is in microseconds rather than milliseconds.
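Illustrative usage (same calling convention as k_sleep(), only the
unit differs):

    k_sleep(5);      /* sleep for 5 milliseconds */
    k_usleep(500);   /* sleep for 500 microseconds */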
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
k_poll_signal_raise() returns an error code to indicate that the raise
was too late to notify an expiring poll. Make clear that this does not
mean that the signal was lost: a subsequent poll will find it and expire
immediately.
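A small usage sketch of the documented behavior (variable names are
examples only):

    if (k_poll_signal_raise(&sig, result) != 0) {
            /* An expiring poller was not notified, but the signal is
             * still set: the next k_poll() on it returns immediately. */
    }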
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
The struct _caller_saved is not used. Most architectures automatically
push the registers onto the stack; on other architectures the
exception code does it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.
As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.
The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269
Fixes: #14766
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Rename reserved function names in drivers/ subdirectory. Update
function macros concatenating function names with '##'. As
there is a conflict between the existing gpio_sch_manage_callback()
and _gpio_sch_manage_callback() names, leave the latter unmodified.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
This is used to have each arch canonically state how much
room in the stack object is reserved for non-thread use.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is a trivial change to satisfy C++, which requires that designated
initializers appear in the same order as the members they initialize.
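For example (illustrative types, not taken from the patch):

    struct point { int x; int y; };

    struct point a = { .x = 1, .y = 2 };  /* accepted by both C and C++ */
    struct point b = { .y = 2, .x = 1 };  /* C allows this; C++ rejects it */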
Fixes: #14540
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The code had a user error check where racing insertions of
k_delayed_work items into different queues would be detected and
flagged as an error (honestly I don't see much value there -- Zephyr
doesn't as a general rule protect against errors like this, and
work_q's are inherently kernel things that don't require
userspace-style checking).
This got broken with spinlockification, where each work_q object got
its own lock, so the single lock wouldn't protect against the other
insert function any more. As it happens, that was needless. The core
synchronization on a work_q is in the internal k_queue object anyway
-- the lock in this file was only ever used for (very fast,
noncontending) delayed work insertion. So go back to a global lock to
preserve the original behavior.
Fixes #14104
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done on both global and static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
You can't cancel what hasn't been submitted. Clarification added
following minor bikeshedding on GitHub. Fixes #14105
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Nothing in the code actually returns -EINPROGRESS, and in the case of
k_work_init() I don't see how that can even be done in a reliable way.
Don't claim we do what we don't. Fixes#14109.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In some circumstances (e.g., a tickless kernel), k_timer_remaining_get()
would not account for time passed that didn't involve clock interrupts.
This adds a simple fix for that, and adds a test case. In addition, the
return value of k_timer_remaining_get() is clamped at 0 in the case of
overdue timers and the API description is adjusted to reflect this.
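Illustrative usage of the clamped return value (names are examples
only):

    u32_t left = k_timer_remaining_get(&my_timer);

    if (left == 0U) {
            /* timer already expired or is overdue; the value is never
             * negative or underflowed */
    }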
Fixes: #13353
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
One spinlock per pipe object. Also removed some vestigial locking
around _ready_thread(). That call is internally synchronized now.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Straightforward port. Each struct k_queue object gets a spinlock to
control obvious data ownership.
Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock. But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled. The spinlock API detects that as a recursive lock when
asserts are enabled.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock. Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.
To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
While k_uptime_get() and k_uptime_get32() return time in
milliseconds, they don't need to have millisecond resolution.
Resolution with default Zephyr settings is 10ms.
Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
This adds a simple implementation of SMP CPU affinity to Zephyr. The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.
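A hedged usage sketch, assuming the CPU-mask function names below; the
thread must not be runnable while its mask is changed:

    k_thread_cpu_mask_clear(&worker_thread);      /* disallow all CPUs */
    k_thread_cpu_mask_enable(&worker_thread, 1);  /* allow only CPU 1 */
    k_thread_start(&worker_thread);               /* runs pinned to CPU 1 */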
Because the implementation picked requires enumerating runnable
threads in priority order looking for one that matches the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway). Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.
The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Added cpu_idle APIs to a doxygen group; otherwise they were missing
from the project documentation.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state. This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.
Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.
Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.
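A minimal sketch of what that internal check can look like (function
and member names illustrative):

    static inline bool z_is_inactive_timeout(struct _timeout *t)
    {
            /* inactive exactly when the dnode is not linked in the queue */
            return !sys_dnode_is_linked(&t->node);
    }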
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Although sys_dnode_t and sys_dlist_t are aliases, their roles are
different and they appear in different positions in dlist API calls.
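A short illustration of the two roles (the element struct is
hypothetical):

    sys_dlist_t list;                             /* list head */
    struct item { sys_dnode_t node; int v; } it;  /* element embeds a node */

    sys_dlist_init(&list);
    sys_dlist_append(&list, &it.node);            /* head first, node second */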
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
A zero-length array is a GNU extension that works as a header for a
variable-length object. The portable solution for this is a flexible
array member, but that can be used only at the end of a struct
declaration, and this violates MISRA-C rule 18.8.
The easiest way to get rid of this is to make the macro expand to
nothing, but then we would have a trailing semicolon, which is not
allowed in C99. So the macro was changed to automatically add the
semicolon when needed.
This may break code indentation in some editors but it is a fair price
to pay to have portability and compliance.
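The technique in generic, hedged form (names hypothetical, not the
macro touched by this commit):

    #ifdef CONFIG_EXAMPLE_TRACING
    #define EXAMPLE_NEXT_PTR(type) struct type *__next;  /* ';' included */
    #else
    #define EXAMPLE_NEXT_PTR(type)                       /* expands to nothing */
    #endif

    struct k_example {
            EXAMPLE_NEXT_PTR(k_example)  /* note: no ';' at the use site */
            int data;
    };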
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This API was using a variable number of arguments, which is not
allowed according to the MISRA-C guidelines (Rule 17.1). Hence this
API is turned into a macro that uses the util macro FOR_EACH_FIXED_ARG
to get the same functionality.
There is one deviation from the old function: the last argument
should not be NULL.
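A hedged sketch of the pattern; the API name k_thread_access_grant and
the exact FOR_EACH_FIXED_ARG signature are assumptions for illustration:

    #define k_thread_access_grant(thread, ...) \
            FOR_EACH_FIXED_ARG(k_object_access_grant, thread, __VA_ARGS__)

    /* old call: k_thread_access_grant(&t, &sem, &mutex, NULL);  NULL-terminated */
    /* new call: k_thread_access_grant(&t, &sem, &mutex);        no NULL needed  */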
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Fix misspellings in documentation (.rst, Kconfig help text, and .h
doxygen API comments), missed during regular reviews.
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
C90 introduced function prototypes, which allow argument types to be
checked against parameter types, though it is not necessary to specify
names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.
MISRA-C rule 8.2
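For example (hypothetical prototype):

    /* non-compliant: parameter types only */
    int read_sample(struct device *, u32_t *);

    /* compliant: parameters are named, documenting the interface */
    int read_sample(struct device *dev, u32_t *value);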
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This commit exposes k_mem_partition_attr_t outside User Mode, so
we can use struct k_mem_partition for defining memory partitions
outside the scope of user space (for example, to describe thread
stack guards or non-cacheable MPU regions). A requirement is that
the Zephyr build supports Memory protection. To signify this, a
new hidden, all-architecture Kconfig symbol is defined (MPU). In
the wake of exposing k_mem_partition_attr_t, the commit exposes
the MPU architecture-specific access permission attribute macros
outside the User space context (for all ARCHs), so they can be
used in a more generic way.
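A hedged example of the kind of use this enables (the attribute macro
shown is ARM-specific and only illustrative):

    u8_t __aligned(32) dma_buffer[1024];

    /* a kernel-owned MPU region described with struct k_mem_partition */
    struct k_mem_partition dma_part = {
            .start = (u32_t)dma_buffer,
            .size = sizeof(dma_buffer),
            .attr = K_MEM_PARTITION_P_RW_U_NA,  /* privileged RW, no user access */
    };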
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
MEM_PARTITION_ENTRY is problematic, as it assumes that
struct k_mem_partition contains a k_mem_partition_attr_t
field, which is only true if Memory Protection is supported.
Additionally, it assumes that k_mem_partition_attr_t is a
single-element object (a scalar or a single-element structure).
This commit removes the macro function and updates macro
K_MEM_PARTITION_DEFINE() (MEM_PARTITION_ENTRY has only been
used in that macro function definition).
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This allows for workqueues to be started in user mode.
No additional kernel objects or system calls are defined
other than starting the workqueue in user mode; for
permission purposes the embedded queue and thread objects
are sufficient.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
There's no current need for this and it makes work items
declared with K_WORK_DEFINE() inaccessible to user mode.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_work and k_work_q are not kernel objects, nor will they
be. k_work_q contains some kernel objects which are tracked
independently.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>