Setting CONFIG_NUM_PREEMPT_PRIORITIES to 128 causes the idle thread to be
assigned priority 128, which overflows the int8_t priority field. As a
result, the idle thread ends up with the highest priority (-128) instead
of the lowest, and threads fail to wake up from k_sleep().
Restrict the range of CONFIG_NUM_PREEMPT_PRIORITIES to 0 to 127 to ensure
the idle thread always has the lowest priority.
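A minimal host-side sketch of the wrap-around (the kernel stores thread
priorities in an int8_t):
```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* 128 does not fit in int8_t; on typical implementations the
         * conversion wraps to -128, i.e. the numerically highest
         * priority instead of the lowest. */
        int8_t idle_prio = (int8_t)128;

        printf("idle priority: %d\n", idle_prio); /* prints -128 */
        return 0;
}
```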
Signed-off-by: Jonas Spinner <jonas.spinner@burkert.com>
With spinlock debugging enabled, LLEXTs need additional symbols
exported by the kernel.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
The essential thread check and panic happens at the top of
k_thread_abort(). This is arguably a performance bug: the system is
going to blow up anyway no matter where we put the test, so we
shouldn't add instructions to the path taken by systems that DON'T
blow up.
But really it's more of a testability/robustness glitch: if you have a
fatal error handler that wants to catch this panic (say, a test using
ztest_set_fault_valid()), then the current code will panic and
early-exit BEFORE THE THREAD IS DEAD. And so it won't actually die,
and will continue on causing mayhem when presumably the handler code
expected it to have been aborted.
It's sort of an unanswerable question as to what the "right" behavior
is here (the system is, after all, supposed to have panicked!). But
this seems preferable for definable practical reasons.
Kill the thread, then panic. Unless it's _current, in which case
panic as late as possible for maximum coverage of the abort path.
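A hedged sketch of the resulting flow (illustrative only;
z_thread_abort() stands in for the internal abort machinery):
```c
void k_thread_abort(k_tid_t thread)
{
        bool essential = (thread->base.user_options & K_ESSENTIAL) != 0U;

        /* For thread == _current the real code instead defers the
         * panic as deep into the abort path as possible, because the
         * call below does not return in that case. */
        z_thread_abort(thread);         /* kill the thread first... */

        if (essential) {
                /* ...then panic: a fatal error handler that catches
                 * this (e.g. via ztest_set_fault_valid()) now
                 * observes a thread that is actually dead. */
                k_panic();
        }
}
```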
Fixes: #84460
Signed-off-by: Andy Ross <andyross@google.com>
K_KERNEL_STACK_RESERVED can be 0, which can trigger a warning with
-Wtype-limits. Only perform the check if ARCH_KERNEL_STACK_RESERVED
is set. Also remove the unnecessary definitions in arch.h where it is
manually set to 0; it defaults to 0 anyway.
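A hedged sketch of the pattern (the helper name is illustrative, not
the exact kernel diff):
```c
#include <zephyr/kernel.h>

static void check_stack(size_t stack_size)
{
#ifdef ARCH_KERNEL_STACK_RESERVED
        /* Compiled only when the arch actually reserves stack space.
         * With K_KERNEL_STACK_RESERVED expanding to 0, the comparison
         * below would be always true and -Wtype-limits would warn. */
        __ASSERT_NO_MSG(stack_size >= K_KERNEL_STACK_RESERVED);
#else
        ARG_UNUSED(stack_size);
#endif
}
```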
Signed-off-by: Ryan McClelland <ryanmcclelland@meta.com>
The routine k_thread_suspend() has a fast path for non-SMP when
suspending the current thread. When SMP is enabled, the compiler is
expected to drop the entire fast-path check because the whole
expression always evaluates to false. However, the compiler has been
observed to drop the whole fast-path check only when the
"!IS_ENABLED(CONFIG_SMP)" condition appears at the beginning of it.
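A hedged sketch of the shape this gives the check:
```c
void k_thread_suspend(k_tid_t thread)
{
        /* The first term folds to false at compile time on SMP
         * builds; placing it first is what reliably gets the whole
         * branch dropped. */
        if (!IS_ENABLED(CONFIG_SMP) && (thread == _current) &&
            !arch_is_in_isr()) {
                /* non-SMP fast path: suspend the current thread */
        }

        /* ... slow path ... */
}
```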
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
z_reschedule() already checks whether it is called from the context of
an ISR, so there is no need to duplicate that check in k_wakeup().
Furthermore, if the target thread is not sleeping, there is no need
to reschedule and we can do a fast return.
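A hedged sketch of the streamlined flow (z_is_thread_sleeping() is
assumed here as the sleeping-state test):
```c
void k_wakeup(k_tid_t thread)
{
        k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);

        if (!z_is_thread_sleeping(thread)) {
                /* fast return: nothing to wake */
                k_spin_unlock(&_sched_spinlock, key);
                return;
        }

        z_abort_thread_timeout(thread);
        ready_thread(thread);

        /* z_reschedule() performs its own in-ISR check */
        z_reschedule(&_sched_spinlock, key);
}
```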
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
1. Fixes a performance issue in k_msgq_put() to allow for a fast return
path when handling the poll event has no effect.
2. Allows for a fast return path in k_msgq_purge() when no threads were
awakened.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates the queue code to allow for a fast return path in a few
routines when the operation did not wake or signal another thread.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When doing a condition variable broadcast, a full reschedule
is only needed if at least one thread was awakened.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Propagates the return value from z_handle_obj_poll_events()
within the message queue, pipes, queue and semaphore objects.
This allows the kernel object code to determine whether it
needs to perform a full reschedule, or if it can perform a
more optimized exit strategy.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Changes the return type of z_handle_obj_poll_events() so that it
returns true if there were polling events to handle (false
otherwise).
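Together with the commits above, callers can follow a pattern along
these lines (hedged sketch from inside the locked section, message
queue shown as an example):
```c
        if (z_handle_obj_poll_events(&msgq->poll_events,
                                     K_POLL_STATE_MSGQ_DATA_AVAILABLE)) {
                /* a poll event was handled: full reschedule */
                z_reschedule(&msgq->lock, key);
        } else {
                /* nothing to do: fast return */
                k_spin_unlock(&msgq->lock, key);
        }
```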
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Fix a void* to k_thread_entry_t conversion in _is_valid_prio() that
is silent in GCC but flagged by some other toolchains.
Signed-off-by: Björn Bergman <bjorn.bergman@iar.com>
Adds a note about the timeout_lock to help future developers follow
the locking rules and prevent deadlocks involving the timeout and
scheduler spinlocks.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The check for an active timeout in z_is_thread_ready() was originally
added to cover the case of a sleeping thread. However, since there is
now a bit in the thread state that indicates if the thread is
sleeping, we can drop that superfluous check.
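A hedged sketch of the simplified readiness test:
```c
static inline bool z_is_thread_ready(struct k_thread *thread)
{
        /* the dedicated sleeping bit in thread_state now covers the
         * case the active-timeout check used to catch */
        return !z_is_thread_prevented_from_running(thread);
}
```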
Making this change necessitates moving k_wakeup()'s call to
z_abort_thread_timeout() so that it is within the locked
_sched_spinlock section to ensure that we do not end up with
a stray thread timeout in the timeout list.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Removes an unnecessary clearing of the current CPU's swap_ok field
in do_swap(), as that clearing is already done at the end of next_up(),
which was just called by z_swap_next_thread() a little earlier.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
SMP does not need to mark the current thread as queued in
k_yield() as that will naturally get done in do_swap().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When the PM subsystem is enabled, the idle thread locks the scheduler
while the system is suspended. If a meta-IRQ preempts the idle
thread in this state, the idle thread is tracked in `metairq_preempted`.
However, when returning from the preemption, the idle thread is not removed
from `metairq_preempted`, unlike all the other threads. As a result, the
scheduler keeps running the idle thread even if there are higher priority
threads ready to run.
This change treats the idle thread the same way as all other threads when
returning from a meta-IRQ preemption.
Fixes #64705
Signed-off-by: Kalle Kietäväinen <kalle.kietavainen@silabs.com>
The compiler complains that:
```
zephyr/kernel/include/kernel_internal.h:121:29:
error: 'reader' may be used uninitialized [-Werror=maybe-uninitialized]
121 | thread->swap_retval = value;
| ~~~~~~~~~~~~~~~~~~~~^~~~~~~
zephyr/kernel/pipe.c: In function 'copy_to_pending_readers':
zephyr/kernel/pipe.c:92:26: note: 'reader' was declared here
92 | struct k_thread *reader;
| ^~~~~~
```
The static analyzer fails to see through the `LOCK_SCHED_SPINLOCK`
construct that the `reader` pointer is always initialized.
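One minimal way to silence it, assuming that is the shape of the
applied fix:
```c
        struct k_thread *reader = NULL; /* pacify -Wmaybe-uninitialized */
```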
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Systems that enable this option don't have their stacks in coherent
memory. Given that our pipe_buf_spec is stored on the stack, and
readers may have their destination buffer on their stack too, it is
not worth the trouble of supporting the direct-to-readers copy there.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
We were waking up threads but failing to let them run if they are of
higher priority. Add missing calls to z_reschedule().
Also wake up all pending writers as we don't know how many there might
be. It is more efficient to wake them all when the ring buffer is full
before reading from it rather than waking them one by one whenever there is
more room in it.
Thanks to Peter Mitsis for noticing those issues.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
If there are pending readers, it is best to perform a single data copy
directly into their final destination buffer rather than doing one copy
into the ring buffer just to immediately copy the same data out of it.
Incidentally, this allows for supporting pipes with no ring buffer at all.
The pipe implementation being deprecated has a similar capability, so
it is better to have it here too.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Dispense with the call to sys_timepoint_expired() by leveraging
swap_retval to distinguish between notifications and timeouts when
z_pend_curr() returns.
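A hedged sketch of the idea (the wait queue name is illustrative):
```c
        /* z_pend_curr() returns the waker-provided swap_retval, or an
         * error code when the pend timed out, so there is no need to
         * consult the clock again */
        int rc = z_pend_curr(&pipe->lock, key, waitq, timeout);

        if (rc != 0) {
                /* woken by timeout expiry, not by a notification */
        }
```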
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Simplify the logic, avoid repeated conditionals, avoid superfluous
scheduler calls, make the code more efficient and easier to read.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Change:
commit cc6317d7ac
Author: Jukka Rissanen <jukka.rissanen@linux.intel.com>
Date: Fri Nov 1 14:03:32 2019 +0200
kernel: poll: Allow 0 event input
Allows `k_poll` to be used with 0 events, which is useful for allowing just
a sleep without having to create artificial events.
Allow the same for `k_work_submit_to_queue()` and `k_work_submit()`.
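For reference, the zero-event `k_poll` form this builds on (usage
sketch):
```c
        /* with no events to wait for, k_poll() is just a sleep */
        int rc = k_poll(NULL, 0, K_MSEC(100));
```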
Signed-off-by: David Brown <david.brown@linaro.org>
This commit adds new test cases for the pipe API rework.
* basic.c: Sanity check for pipe operations.
* concurrency.c: Test pipe operations with multiple threads.
* stress.c: Test pipe operations under stress conditions.
And moves the old pipe test cases to the deprecated folder.
Signed-off-by: Måns Ansgariusson <Mansgariusson@gmail.com>
This commit adds polling support to the newly rewritten k_pipe interface.
Changes include:
* Removed ifdef CONFIG_POLL from kernel/poll.c to let both implementations
coexist.
* Added the needed data structures to the new k_pipe struct.
* k_pipe_write(..) now notifies the poll subsystem that data is
available; see the usage sketch below.
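A hedged usage sketch (the event-type name is assumed from this
change):
```c
        struct k_poll_event ev = K_POLL_EVENT_INITIALIZER(
                K_POLL_TYPE_PIPE_DATA_AVAILABLE,
                K_POLL_MODE_NOTIFY_ONLY,
                &my_pipe);

        k_poll(&ev, 1, K_FOREVER);
        /* data can now be read from my_pipe */
```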
Signed-off-by: Måns Ansgariusson <Mansgariusson@gmail.com>
The `k_pipe_*` API has been reworked to provide a more consistent and
intuitive interface. The new API aims to provide a simple-to-use byte
stream interface that is more in line with the POSIX pipe API.
The previous API has been deprecated and will be removed in a future
release.
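A hedged usage sketch of the reworked byte-stream interface (buffer
sizes arbitrary; K_PIPE_DEFINE() assumed to keep its name, size,
alignment form):
```c
K_PIPE_DEFINE(my_pipe, 64, 4);

void producer(void)
{
        const uint8_t msg[] = "hello";

        /* returns the number of bytes written or a negative errno */
        (void)k_pipe_write(&my_pipe, msg, sizeof(msg), K_MSEC(10));
}

void consumer(void)
{
        uint8_t buf[16];
        int n = k_pipe_read(&my_pipe, buf, sizeof(buf), K_FOREVER);

        if (n > 0) {
                /* consume n bytes from buf */
        }
}
```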
Signed-off-by: Måns Ansgariusson <Mansgariusson@gmail.com>
This function is getting quite involved and it has also gained more
callers lately. It is not performance critical, so un-inline it to
save on binary size.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Repeated references to _current won't produce a different result, as
the executing thread instance is always the same. Use the const
attribute to let the compiler know it may reuse a previously obtained
value. This offsets the penalty for moving z_smp_current_get() out of
line and provides yet more binary size reduction.
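A hedged sketch of the mechanism:
```c
/* Within a given thread's execution, the current thread never
 * changes, so the accessor may be treated as "const": the compiler
 * is then free to reuse a previously fetched value instead of
 * calling it again. */
__attribute__((__const__)) struct k_thread *z_smp_current_get(void);

#define _current z_smp_current_get()
```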
This change is isolated in its own commit to ease bisecting in case some
unexpected misbehavior is eventually observed.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Define the generic _current directly and get rid of the generic
arch_current_get().
The SMP default implementation is now known as z_smp_current_get().
It is no longer inlined, which saves significant binary size (about
10% for some random test case I checked).
Introduce z_current_thread_set() and use it in place of
arch_current_thread_set() for updating the current thread pointer
given this is not necessarily an architecture specific operation.
The architecture specific optimization, when enabled, should only care
about its own things and not have to also update the generic
_current_cpu->current copy.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Mostly a revert of commit b1def7145f ("arch: deprecate `_current`").
This commit was part of PR #80716 whose initial purpose was about providing
an architecture specific optimization for _current. The actual deprecation
was sneaked in later on without proper discussion.
The Zephyr core always used _current before and that was fine. It is quite
prevalent as well and the alternative is proving rather verbose.
Furthermore, as a concept, the "current thread" is not something that is
necessarily architecture specific. Therefore the primary abstraction
should not carry the arch_ prefix.
Hence this revert.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Gives a hint to the compiler that the bail-out paths in both
k_thread_suspend() and k_thread_resume() are unlikely to be taken.
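A hedged sketch of the hint in context:
```c
        if (unlikely(z_is_thread_suspended(thread))) {
                /* already suspended: bail out on the cold path */
                k_spin_unlock(&_sched_spinlock, key);
                return;
        }
```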
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Even though calculating the priority queue index in the priority
multiq is quick, caching it allows us to extract an extra 2% in
terms of performance as measured by the thread_metric cooperative
benchmark.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Adds customized yield implementations based upon the selected
scheduler (dumb, multiq or scalable). Although each follows the
same broad outline, some of them allow for additional tweaking
to extract maximal performance. For example, the multiq variant
improves the performance of k_yield() by about 20%.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Dequeuing from a doubly linked list is similar to removing an item
except that it does not re-initialize the dequeued node.
This comes in handy when sorting a doubly linked list (where the
node gets removed and re-added). In that circumstance, re-initializing
the node is not required. Furthermore, the compiler does not always
'understand' this. Thus, when performance is critical, dequeuing
may be preferred to removing.
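A hedged sketch of the distinction (sys_dlist_dequeue()'s signature
assumed to mirror sys_dlist_remove()):
```c
#include <zephyr/sys/dlist.h>

static void move_to_tail(sys_dlist_t *list, sys_dnode_t *node)
{
        /* unlink without re-initializing: the node is re-appended
         * immediately, so the re-initialization performed by
         * sys_dlist_remove() would be wasted work */
        sys_dlist_dequeue(node);
        sys_dlist_append(list, node);
}
```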
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Minor cleanups include ...
1. Eliminating unnecessary if-defs and forward declarations
2. Co-locating routines of the same queue type
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This ensures that the system clock is correctly reprogrammed when the
first timeout is aborted, preventing an unexpected early wake-up from
the deadline programmed previously.
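A hedged sketch of the fix inside the timeout-abort path (helper names
illustrative):
```c
        /* If the aborted timeout was at the head of the list, the
         * hardware timer was programmed for it; reprogram it for the
         * next remaining timeout so we don't wake up early. */
        if (was_first) {
                sys_clock_set_timeout(next_timeout(), false);
        }
```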
Signed-off-by: Dong Wang <dong.d.wang@intel.com>