kernel: sched: avoid unnecessary lock in z_impl_k_yield

`z_impl_k_yield` unlocked `sched_spinlock` only to re-lock it
immediately, do a little more work, then unlock it again. This hurts
performance on SMP, where `sched_spinlock` is often highly contended
and cores can end up spinning for a long time waiting to retake the
lock in `z_swap_unlocked`.

Instead, pass the spinlock key directly to `z_swap` and avoid the
extra unlock+lock pair.
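
In outline the change looks like the sketch below. This is only an
illustration: `sched_lock`, `requeue_current()` and `swap_to_next()`
are made-up placeholders standing in for `sched_spinlock`, the
dequeue/queue/update_cache sequence, and `z_swap()` respectively.

    #include <spinlock.h>  /* <zephyr/spinlock.h> on newer trees */

    static struct k_spinlock sched_lock;

    /* Placeholders for the requeue logic and for the swap call. */
    void requeue_current(void);
    void swap_to_next(struct k_spinlock *l, k_spinlock_key_t key);

    /* Before: the critical section ends, then the swap path
     * immediately re-acquires the same lock.
     */
    void yield_before(void)
    {
        k_spinlock_key_t key = k_spin_lock(&sched_lock);

        requeue_current();
        k_spin_unlock(&sched_lock, key); /* lock dropped here...    */

        key = k_spin_lock(&sched_lock);  /* ...and retaken at once;
                                          * on SMP another core may
                                          * win it in between, so
                                          * this can spin            */
        swap_to_next(&sched_lock, key);  /* releases the lock       */
    }

    /* After: take the lock once and hand the key straight to the
     * swap routine, which releases it during the context switch.
     */
    void yield_after(void)
    {
        k_spinlock_key_t key = k_spin_lock(&sched_lock);

        requeue_current();
        swap_to_next(&sched_lock, key);  /* one lock/unlock pair    */
    }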

Signed-off-by: James Harris <james.harris@intel.com>
commit 6543e06914
Author:    James Harris <james.harris@intel.com>
Committer: Anas Nashif
Date:      2021-03-01 10:14:13 -08:00

@@ -1209,17 +1209,19 @@ void z_impl_k_yield(void)
 	__ASSERT(!arch_is_in_isr(), "");
 
 	if (!z_is_idle_thread_object(_current)) {
-		LOCKED(&sched_spinlock) {
-			if (!IS_ENABLED(CONFIG_SMP) ||
-			    z_is_thread_queued(_current)) {
-				dequeue_thread(&_kernel.ready_q.runq,
-					       _current);
-			}
-			queue_thread(&_kernel.ready_q.runq, _current);
-			update_cache(1);
-		}
+		k_spinlock_key_t key = k_spin_lock(&sched_spinlock);
+
+		if (!IS_ENABLED(CONFIG_SMP) ||
+		    z_is_thread_queued(_current)) {
+			dequeue_thread(&_kernel.ready_q.runq,
+				       _current);
+		}
+		queue_thread(&_kernel.ready_q.runq, _current);
+		update_cache(1);
+		z_swap(&sched_spinlock, key);
+	} else {
+		z_swap_unlocked();
 	}
-	z_swap_unlocked();
 }
 
 #ifdef CONFIG_USERSPACE
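
For reference, the retake the message describes happens because
`z_swap_unlocked()` is a thin wrapper that acquires the scheduler
lock itself before swapping. The sketch below is an illustrative
reconstruction of that wrapper, not the literal source in
kernel/include/kswap.h:

    /* Illustrative only: shows why calling z_swap_unlocked() right
     * after releasing sched_spinlock costs a full unlock/relock
     * round trip.
     */
    static inline void z_swap_unlocked(void)
    {
        /* Spins on SMP whenever another CPU holds the lock... */
        k_spinlock_key_t key = k_spin_lock(&sched_spinlock);

        /* ...then swaps; z_swap() releases the lock as part of
         * committing the context switch, which is why passing an
         * already-held key skips this acquire entirely.
         */
        (void) z_swap(&sched_spinlock, key);
    }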