Commit graph

429 commits

Author SHA1 Message Date
Andy Ross
a04a30c6d1 kernel/sched: Move reschedule under lock in k_sched_unlock()
This function was a little clumsy, taking the scheduler lock,
releasing it, and then calling z_reschedule_unlocked() instead of the
normal locked variant of reschedule.  Don't take the lock twice.

Mostly this is a code size and hygiene win.  Obviously the sched lock
is not normally a performance path, but I happened to have picked this
API for my own microbenchmark in tests/benchmarks/swap and so noticed
the double-lock while staring at disassembly.

Signed-off-by: Andy Ross <andyross@google.com>
2026-03-10 17:24:10 +01:00
Andy Ross
2bbcece6ee kernel/sched: Refactor reschedule to permit better code generation
z_reschedule() is the basic kernel entry point for context switch,
wrapping z_swap(), and thence arch_switch().  It's currently defined
as a first class function for entry from other files in the kernel and
elsewhere (e.g. IPC library code).

But in practice it's actually a very thin wrapper without a lot of
logic of its own, and the context switch layers of some of the more
obnoxiously clever architectures are designed to interoperate with the
compiler's own spill/fill logic to avoid double saving.  And with a
small z_reschedule() there's not a lot to work with.

Make reschedule() an inlinable static, so the compiler has more
options.

Signed-off-by: Andy Ross <andyross@google.com>
2026-03-10 17:24:10 +01:00
Andy Ross
d535d17cbc kernel: Minor optimizations to z_swap()
Pick some low hanging fruit on non-SMP code paths:

+ The scheduler spinlock is always taken, but as we're already in an
  irqlocked state that's a noop.  But the optimizer can't tell, because
  arch_irq_lock() involves an asm block it can't see inside.  Elide
  the call when possible.

+ The z_swap_next_thread() function evaluates to just a single load of
  _kernel.ready_q.cache when !SMP, but wasn't being inlined because of
  function location.  Move that test up into do_swap() so it's always
  done correctly.

Signed-off-by: Andy Ross <andyross@google.com>
2026-03-10 17:24:10 +01:00
Mathieu Choplain
5a0f73f045 kernel: events: wake threads atomically using waitq post-walk callback
When CONFIG_WAITQ_SCALABLE=y, wake up all threads from a post-waitq-walk
callback which is invoked while the scheduler spinlock is still held. This
solves the race condition that was worked around via `no_wake_in_timeout`
flag in k_thread and `is_timeout` parameter of z_sched_wake_thread_locked()
which can now both be dropped.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-02-18 14:43:10 +00:00
Mathieu Choplain
73feef8b20 kernel: sched: add post-walk callback argument to z_sched_waitq_walk()
Modify z_sched_waitq_walk() to accept an optional callback invoked after
the walk while still holding the scheduler spinlock. This can be used to
perform post-walk operations "atomically". Update all callers to work with
this new function signature.

While at it, create dedicated (private) typedefs for the callbacks and
clean up/improve the routine and callbacks' documentation.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-02-18 14:43:10 +00:00
Mathieu Choplain
e725225489 kernel: sched: perform safe waitq walk inside z_sched_waitq_walk()
z_sched_waitq_walk() used _WAIT_Q_FOR_EACH, a wrapper around the
"unsafe" SYS_DLIST_FOR_EACH_CONTAINER which does not allow detaching
elements from the list during the walk. As a result, attempting to
detach threads from the wait queue as part of the callback provided
to z_sched_waitq_walk() would result in breakage.

Introduce new _WAIT_Q_FOR_EACH_SAFE macro as wrapper around the "safe"
SYS_DLIST_FOR_EACH_CONTAINER_SAFE which allows detaching nodes from
the list during the walk, and use it inside z_sched_waitq_walk().
While at it:
- add documentation on the _WAIT_Q_FOR_EACH macro, including a warning
  about detaching elements as part of the loop not being allowed
- add note to documentation of z_sched_waitq_walk() indicating that
  the callback can safely remove the thread from wait queue as this
  will no longer break the FOR_EACH loop
- add _WAIT_Q_FOR_EACH_SAFE to the list of ForEachMacros in .clang-format

NOTE: this new "safe removal inside callback" behavior is only available
when CONFIG_WAITQ_SCALABLE=n. When the option is 'y', red-black trees are
used instead of doubly-linked lists which prevent mutation of the list
while it is being walked. This limitation is explicitly documented.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-02-18 14:43:10 +00:00
Mathieu Choplain
eac6c7cb24 kernel: sched: don't acquire scheduler spinlock in z_sched_wake_thread()
Don't acquire the _sched_spinlock in z_sched_wake_thread(). This allows
calling the function from callbacks which already own the spinlock. The
function is renamed to z_sched_wake_thread_locked() to reflect this new
behavior, and all existing callers are updated to ensure they hold the
_sched_spinlock as is now required.

Signed-off-by: Mathieu Choplain <mathieu.choplain-ext@st.com>
2026-02-18 14:43:10 +00:00
Yong Cong Sin
e843931194 kernel: check if interrupt is disabled in k_can_yield
`k_yield()` can't be called when interrupts are disabled; update
`k_can_yield()` to reflect that.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2026-02-16 00:13:38 +00:00
Benjamin Cabé
a87520bd04 kernel: use proper essential type to initialize boolean variables
As per Zephyr coding guideline #59, "operands shall not be of an
inappropriate essential type". This makes sure boolean variables are
initialized with true/false, not 1/0.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2026-02-04 13:52:38 +01:00
Peter Mitsis
669a8d0704 kernel: O(1) search for threads among CPUs
Instead of performing a linear search to determine if a given
thread is running on another CPU, or if it is marked as being
preempted by a metaIRQ on any CPU, do this in O(1) time.

On SMP systems, Zephyr already tracks the CPU on which a thread
executes (or last executed). This information is leveraged to
do the search in O(1) time.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-08 17:34:14 -06:00
Thinh Le Cong
15cdf90bee include: zephyr: toolchain: suppress Go004 warning for inline functions
The IAR compiler may emit "Error[Go004]: Could not inline function"
when handling functions marked as always_inline or inline=forced,
especially in complex kernel code.
Signed-off-by: Thinh Le Cong <thinh.le.xr@bp.renesas.com>
2026-01-08 12:00:29 +00:00
Peter Mitsis
3a8c9797ca kernel: Re-instate metaIRQ z_is_thread_ready() check
Re-instate a z_is_thread_ready() check on the preempted metaIRQ
thread before selecting it as the preferred next thread to
schedule. This code exists because of a corner case where it is
possible for the thread that was recorded as being pre-empted
by a meta-IRQ thread to be marked as not 'ready to run' when
the meta-IRQ thread(s) complete.

Such a scenario may occur if an interrupt ...
  1. suspends the interrupted thread, then
  2. readies a meta-IRQ thread, then
  3. exits
The resulting reschedule can result in the suspended interrupted
thread being recorded as being interrupted by a meta-IRQ thread.
There may be other scenarios too.

Fixes #101296

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-22 22:33:18 +01:00
Peter Mitsis
5dd36854fd kernel: Update clearing metairq_preempted record
If the thread being aborted or suspended was preempted by a metaIRQ
thread then clear the metairq_preempted record. In the case of
aborting a thread, this prevents a re-used thread from being
mistaken for a preempted thread. Furthermore, it removes the need
to test the recorded thread for readiness in next_up().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Peter Mitsis
36d195b717 kernel: MetaIRQ on SMP fix
When a cooperative thread (temporary or otherwise) is preempted by a
metaIRQ thread on SMP, it is no longer re-inserted into the readyQ.
This prevents it from being scheduled by another CPU while the
preempting metaIRQ thread runs.

Fixes #95081

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Peter Mitsis
653731d2e1 kernel: Adjust metairq preemption tracking bounds
Adjust the bounds for tracking metairq preemption to include the
case where the number of metairq threads matches the number of
cooperative threads. This is needed as a thread that is schedule
locked through k_sched_lock() is documented to be treated as a
cooperative thread. This implies that if such a thread is preempted
by a metairq thread that execution control must return to that
thread after the metairq thread finishes its work.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Daniel Leung
169304813a cache: move arch_mem_coherent() into cache subsys
arch_mem_coherent() is cache-related, so it is better to move it
under the cache subsys. It is renamed to sys_cache_is_mem_coherent()
to reflect this change.

The only user of arch_mem_coherent() is Xtensa. However, it is
not an architecture feature. That's why it is moved to the cache
subsys.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-12-09 09:25:33 +01:00
Peter Mitsis
ce6c26a927 kernel: Simplify move_current_to_end_of_prio_q()
It is now more obvious that the move_current_to_end_of_prio_q() logic
is supposed to match that of k_yield() (without the schedule point).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
77ad7111e1 kernel: Rename move_thread_to_end_of_prio_q()
All instances of the internal routine move_thread_to_end_of_prio_q()
use the current thread. Renaming it to move_current_to_end_of_prio_q()
to reflect that.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
ffc6c8839b kernel: Rename z_move_thread_to_end_of_prio_q()
The routine z_move_thread_to_end_of_prio_q() has been renamed to
z_yield_testing_only() as it was only used for test code and
always operated on the current thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Nicolas Pitre
af7ae5d61f kernel: sched: plug assertion race in z_get_next_switch_handle()
Commit d4d51dc062 ("kernel:  Replace redundant switch_handle assignment
with assertion") introduced an assertion check that may be triggered
as follows by tests/kernel/smp_abort:

CPU0              CPU1              CPU2
----              ----              ----
* [thread A]      * [thread B]      * [thread C]
* irq_offload()   * irq_offload()   * irq_offload()
* k_thread_abort(thread B)
                  * k_thread_abort(thread C)
                                    * k_thread_abort(thread A)
* thread_halt_spin()
* z_is_thread_halting(_current) is false
* while (z_is_thread_halting(thread B));
                  * thread_halt_spin()
                  * z_is_thread_halting(_current) is true
                  * halt_thread(_current...);
                  * z_dummy_thread_init()
                    - dummy_thread->switch_handle = NULL;
                    - _current = dummy_thread;
                  * while (z_is_thread_halting(thread C));
* z_get_next_switch_handle()
* z_arm64_context_switch()
* [thread A is dead]
                                    * thread_halt_spin()
                                    * z_is_thread_halting(_current) is true
                                    * halt_thread(_current...);
                                    * z_dummy_thread_init()
                                      - dummy_thread->switch_handle = NULL;
                                      - _current = dummy_thread;
                                    * while(z_is_thread_halting(thread A));
                  * z_get_next_switch_handle()
                    - old_thread == dummy_thread
                    - __ASSERT(old_thread->switch_handle == NULL) OK
                  * z_arm64_context_switch()
                    - str x1, [x1, #___thread_t_switch_handle_OFFSET]
                  * [thread B is dead]
                  * %%% dummy_thread->switch_handle no longer NULL %%%
                                    * z_get_next_switch_handle()
                                      - old_thread == dummy_thread
                                      - __ASSERT(old_thread->
                                             switch_handle == NULL) FAIL

This needs at least 3 CPUs and the perfect timing for the race to work as
sometimes CPUs 1 and 2 may be close enough in their execution paths for
the assertion to pass. For example, QEMU is OK while FVP is not.
Also adding sufficient debug traces can make the issue go away.

This happens because the dummy thread is shared among concurrent CPUs.
It could be argued that a per-CPU dummy thread structure would be the
proper solution to this problem. However the purpose of a dummy thread
structure is to provide a dumping ground for the scheduler code to work
while the original thread structure might already be reused and
therefore can't be clobbered as demonstrated above. But the dummy
structure _can_ be clobbered to some extent and it is not worth the
additional memory footprint implied by per-CPU instances. We just have
to ignore some validity tests when the dummy thread is concerned.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
Nicolas Pitre
1c8f1c8647 kernel: sched: use clearly invalid value for halting thread switch_handle
When a thread halts and dummifies, set its switch_handle to (void *)1
instead of the thread pointer itself. This maintains the non-NULL value
required to prevent deadlock in k_thread_join() while making it obvious
that this value is not meant to be dereferenced or used.

The switch_handle should be an opaque architecture-specific value and
not be assumed to be a thread pointer in generic code. Using 1 makes
the intent clearer.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
TaiJu Wu
d4d51dc062 kernel: Replace redundant switch_handle assignment with assertion
The switch_handle for the outgoing thread is expected to be NULL
at the start of a context switch.
The previous code performed a redundant assignment to NULL.

This change replaces the assignment with an __ASSERT(). This makes the
code more robust by explicitly enforcing this precondition, helping to
catch potential scheduler bugs earlier.

Also, the switch_handle pointer is used to check a thread's state during a
context switch. For dummy threads, this pointer was left uninitialized,
potentially holding an unexpected value.

Set the handle to NULL during initialization to ensure these threads are
handled safely and predictably.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-25 15:59:29 +03:00
TaiJu Wu
91f1acbb85 kernel: Add more debug info and thread checking in run queue
1. There is debug info within k_sched_unlock, so we should add the
   same debug info to k_sched_lock.

2. A thread in the run queue should be a normal or metairq thread; we
   should check that it is not a dummy thread.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-24 13:26:15 -04:00
Fabio Baltieri
700a1a5a28 lib, kernel: use single evaluation min/max/clamp
Replace all in-function instances of MIN/MAX/CLAMP with the single
evaluation version min/max/clamp.

There are probably no race conditions in these files, but the single
evaluation versions save a couple of instructions each, so they should
save a few code bytes and potentially perform better; they should be
preferred in general.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2025-10-24 01:10:40 +03:00
TaiJu Wu
623d8fa540 kernel: cleanup thread state checks and unnecessary CONFIG check
This commit replaces negative thread state checks with a new,
more descriptive positive check.
The expression `!z_is_thread_prevented_from_running()`
is updated to `z_is_thread_ready()` where appropriate, making
the code's intent clearer.

It also removes a redundant `IS_ENABLED(CONFIG_SMP)` check, as the
code in question is already guarded by an #ifdef.

Finally, this patch adds the missing `#endif` directive.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-09-24 09:43:30 +02:00
TaiJu Wu
e069ce242c kernel: Consolidate thread state checking functions
This patch moves `is_aborting()` and `is_halting()`
from `kernel/sched.c` to `kernel/include/kthread.h`
and renames them to `z_is_thread_aborting()` and `z_is_thread_halting()`,
for consistency with other internal kernel APIs.

It replaces the previous inline function definitions in `sched.c`
with calls to the new header functions. Additionally, direct bitwise
checks like `(thread->base.thread_state & _THREAD_DEAD) != 0U`
are updated to use the new `z_is_thread_dead()` helper function.
This enhances code readability and maintainability.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-09-24 09:43:30 +02:00
Marcin Szkudlinski
91d17f6931 kernel: add k_thread_absolute_deadline_set call
k_thread_absolute_deadline_set is similar to the existing
k_thread_deadline_set. The difference is that k_thread_deadline_set
takes a deadline as a time delta from the current time, while
k_thread_absolute_deadline_set expects a timestamp
in the same units used by k_cycle_get_32().

This makes it possible to calculate deadlines for several threads and
set them in a deterministic way, using a common timestamp as
a "now" time base.

Signed-off-by: Marcin Szkudlinski <marcin.szkudlinski@intel.com>
2025-09-11 14:18:16 +01:00
Peter Mitsis
d397a91c62 kernel: Add arch_coprocessors_disable()
The intent of arch_coprocessors_disable() is to replace
arch_float_disable() in halt_thread(), as the FPU will not
always be the only coprocessor that needs to be disabled.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-07-20 12:25:17 -04:00
Anas Nashif
ce119f5d07 tracing: do not mark thread as switched_out in case of no reschedule
If z_get_next_switch_handle determines no reschedule is needed, do not
mark the thread as switched_out in set_current(), where the new and old
threads are the same.

See zephyrproject-rtos/zephyr#88596 for more details.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-07-08 18:34:11 -05:00
Daniel Leung
1984236c1d kernel: move z_sched_lock inside k_sched_lock
z_sched_lock() has exactly one user in tree, which is
k_sched_lock(). So combine them to make the code easier to
follow (no more jumping to another file).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-07-08 13:38:48 -05:00
Krzysztof Chruściński
66daaf6ba3 kernel: sched: sleep: Use value returned by z_add_timeout
The z_tick_sleep function needs to calculate the expected wakeup tick,
which requires reading the current system clock tick. That can be
costly since it is a register access. When the timeout is added, this
value is already calculated and returned by z_add_timeout; use that
instead to optimize sleep performance.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2025-04-15 19:09:33 +02:00
Krzysztof Chruściński
6d35969a55 kernel: sched: Optimize sleeping function
Accessing system timer registers can be costly and shall be avoided
if possible. When a thread is woken up in z_tick_sleep, it may be
because the timeout expired or because the thread was woken up before
the sleeping period passed.

Add a function to detect if a timeout was aborted (before it expired).
Use it in the sleep function and avoid reading system ticks if the
timeout was not aborted.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
2025-04-15 19:09:33 +02:00
Josh DeWitt
c05cfbf15e kernel/sched: Re-sort waitq on priority change
k_thread_priority_set() on a pended thread wasn't re-inserting into the
waitq, causing the incorrect thread to run based on priority. When using
the scalable waitq config, this can also break assumptions of the tree
and leave the owner of a waitq still being in the waitq tree, cycles in
the tree, or a crash.

Remove and re-add a thread to a waitq to ensure the waitq remains in
order and the tree's assumptions are not violated.

To illustrate the issue, consider 4 threads in decreasing priority
order: A, B, C, and D along with two mutexes, m0 and m1. This is
implemented in the new complex_inversion mutex_api test.
1. D locks m1
2. C locks m0
3. C pends on m1
4. B pends on m1
5. A pends on m0, boosts C's priority, now tree on m1 is not sorted
6. D unlocks m1, left-most thread on tree is B. When removing B from
   tree it cannot be found because it searches to the right of C due to
   C's boosted priority when the node is actually on the left. rb_remove
   silently fails.
7. B unlocks m1, left-most thread on tree is still B and it tries to
   unpend itself, resulting in a NULL pointer dereference on
   B->base.pended_on.

Signed-off-by: Josh DeWitt <josh.dewitt@garmin.com>
2025-03-24 07:58:36 +01:00
Peter Mitsis
701aab92e2 kernel: Add Z_IS_TIMEOUT_RELATIVE() macro
Introduces the Z_IS_TIMEOUT_RELATIVE() macro to help ensure that
checking for relative/absolute timeouts is consistent. Using this
macro also helps ensure that we get the correct behavior when using
32-bit timeouts (CONFIG_TIMEOUT_64BIT=n).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-03-17 02:21:02 +01:00
Andy Ross
8b4ed6655a kernel: Clamp k_sleep() return value on overflow
k_sleep() returns a 32 bit count of milliseconds, as that was its
historical API.  But it now accepts a potentially 64 bit tick count as
an argument, leading to situations where an early wakeup will produce
sleep times that aren't representable.  Clamp this instead of
truncating to an arbitrary value.

Naive code will likely do the right thing with the large return (just
sleeping an extra round), and sophisticated apps can detect INT_MAX to
enable more elaborate retry logic.

(Also fixes a somewhat unfortunate punctuation error in the docs that
implied that it returns zero on early wakeup!)

Fixes: #84669

Signed-off-by: Andy Ross <andyross@google.com>
2025-03-07 20:20:25 +01:00
Andy Ross
ddee1ab4cf kernel/sched: Correct locking in essential thread panic
Calling a (handled/ignored) panic with the scheduler lock held
produces spinlock errors in some circumstances, depending on whether
or not the swap gets reached before another context switch.  Release
the lock around the call, we don't touch any scheduler state on the
path to z_swap(), so this is safe.

Signed-off-by: Andy Ross <andyross@google.com>
2025-02-26 10:10:29 +00:00
Andy Ross
f6239c52ae kernel/sched: Panic after aborting essential thread, not before
The essential thread check and panic happens at the top of
k_thread_abort().  This is arguably a performance bug: the system is
going to blow up anyway no matter where we put the test, we shouldn't
add instructions to the path taken by systems that DON'T blow up.

But really it's more of a testability/robustness glitch: if you have a
fatal error handler that wants to catch this panic (say, a test using
ztest_set_fault_valid()), then the current code will panic and
early-exit BEFORE THE THREAD IS DEAD.  And so it won't actually die,
and will continue on causing mayhem when presumably the handler code
expected it to have been aborted.

It's sort of an unanswerable question as to what the "right" behavior
is here (the system is, after all, supposed to have panicked!).  But
this seems preferable for definable practical reasons.

Kill the thread, then panic.  Unless it's _current, in which case
panic as late as possible for maximum coverage of the abort path.

Fixes: #84460

Signed-off-by: Andy Ross <andyross@google.com>
2025-02-10 22:26:10 +01:00
Peter Mitsis
e55ac3ef65 kernel: Improve ordering in SMP k_thread_suspend()
The routine k_thread_suspend() has a fast path for non-SMP when
suspending the current thread. When SMP is enabled, it is expected
that the compiler drops the entire fast path check because the whole
expression would always evaluate to false. However, the compiler has
been observed to only drop the whole fast path check when the
"!IS_ENABLED(CONFIG_SMP)" condition appears at the beginning of the

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-02-07 02:23:45 +01:00
Peter Mitsis
c63b42d478 kernel: Fix k_wakeup() exit paths
z_reschedule() already has a check to determine if it is called from
the context of an ISR--no need to duplicate it in k_wakeup().
Furthermore, if the target thread is not sleeping, there is no need
to reschedule and we can do a fast return.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-02-03 19:51:20 +01:00
Peter Mitsis
995ad43851 kernel: Streamline z_is_thread_ready()
The check for an active timeout in z_is_thread_ready() was originally
added to cover the case of a sleeping thread. However, since there is
now a bit in the thread state that indicates if the thread is sleeping
we can drop that superfluous check.

Making this change necessitates moving k_wakeup()'s call to
z_abort_thread_timeout() so that it is within the locked
_sched_spinlock section to ensure that we do not end up with
a stray thread timeout in the timeout list.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-28 18:14:22 +01:00
Peter Mitsis
bfe0b74aad kernel: Do not mark thread as queued in k_yield()
SMP does not need to mark the current thread as queued in
k_yield() as that will naturally get done in do_swap().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-28 07:57:20 +01:00
Kalle Kietäväinen
a2eb78c03b kernel: sched: Fix meta-IRQ preemption tracking for the idle thread
When the PM subsystem is enabled, the idle thread locks the scheduler for
the duration the system is suspended. If a meta-IRQ preempts the idle
thread in this state, the idle thread is tracked in `metairq_preempted`.
However, when returning from the preemption, the idle thread is not removed
from `metairq_preempted`, unlike all the other threads. As a result, the
scheduler keeps running the idle thread even if there are higher priority
threads ready to run.

This change treats the idle thread the same way as all other threads when
returning from a meta-IRQ preemption.

Fixes #64705

Signed-off-by: Kalle Kietäväinen <kalle.kietavainen@silabs.com>
2025-01-27 13:26:20 +01:00
Tom Hughes
0ec126c6d1 kernel: Add missing marshalling header for k_reschedule
Without this header, compiling the kernel.poll test with
-Werror=unused-function fails.

Signed-off-by: Tom Hughes <tomhughes@chromium.org>
2025-01-13 20:24:16 +01:00
Nicolas Pitre
7a3124d866 kernel: move current thread pointer management to core code
Define the generic _current directly and get rid of the generic
arch_current_get().

The SMP default implementation is now known as z_smp_current_get().
It is no longer inlined which saves significant binary size (about 10%
for some random test case I checked).

Introduce z_current_thread_set() and use it in place of
arch_current_thread_set() for updating the current thread pointer
given this is not necessarily an architecture specific operation.
The architecture specific optimization, when enabled, should only care
about its own things and not have to also update the generic
_current_cpu->current copy.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-01-10 07:49:08 +01:00
Nicolas Pitre
46aa6717ff Revert "arch: deprecate _current"
Mostly a revert of commit b1def7145f ("arch: deprecate `_current`").

This commit was part of PR #80716 whose initial purpose was about providing
an architecture specific optimization for _current. The actual deprecation
was sneaked in later on without proper discussion.

The Zephyr core always used _current before and that was fine. It is quite
prevalent as well and the alternative is proving rather verbose.
Furthermore, as a concept, the "current thread" is not something that is
necessarily architecture specific. Therefore the primary abstraction
should not carry the arch_ prefix.

Hence this revert.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-01-10 07:49:08 +01:00
Peter Mitsis
bdb04dbfba kernel: Alter z_abort_thread_timeout() return type
No caller of the internal kernel routine z_abort_thread_timeout()
uses its return value anymore.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-09 04:04:36 +01:00
Peter Mitsis
f4f3b9378f kernel: Inline halt_thread() and z_thread_halt()
Inlining these routines helps to improve the
performance of k_thread_suspend().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-07 18:24:09 +01:00
Peter Mitsis
d774594547 kernel: thread suspend/resume bail paths are unlikely
Gives a hint to the compiler that the bail-out paths in both
k_thread_suspend() and k_thread_resume() are unlikely events.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-07 18:24:09 +01:00
Peter Mitsis
af14e120a5 kernel: Simplify clear_halting() on UP systems
There is no need for clear_halting() to do anything on UP systems.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-07 18:24:09 +01:00
Peter Mitsis
ea6adb6726 kernel: Add custom scheduler yield routines
Adds customized yield implementations based upon the selected
scheduler (dumb, multiq or scalable). Although each follows the
same broad outline, some of them allow for additional tweaking
to extract maximal performance. For example, the multiq variant
improves the performance of k_yield() by about 20%.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-01-07 18:24:09 +01:00